The handball tournaments at the 2010 South American Games took place between 20 and 30 March at the Coliseo Unidad Deportiva Ditaires in Itagüí. Five teams entered the men's event and six entered the women's.

Calendar
Medalists
Men's tournament
All matches follow the official time of Medellín (UTC-5).
Women's tournament
All matches follow the official time of Medellín (UTC-5).
External links
Handball 2010 South American Games
{ "redpajama_set_name": "RedPajamaWikipedia" }
2,754
Q: Swift: Calculated md5 hash of image file doesn't match terminal and other hash generators I want to calculate the hash of an image. First I convert the image to Data, and then I calculate the hash of the image file (Data) with the help of this function, but the generated hash doesn't match online generators or converters in other languages (like Java). I even tried other libraries and got the same result. I think something happens to my file while I'm converting it to Data, so the hash doesn't match other converters. When I calculate the hash of a plain text, it matches all online and other-language converters, so why not with an image? The hash generated in the terminal is also different. Thanks for any help.

    func md5(url: URL) {
        let bufferSize = 1024 * 1024
        do {
            let file = try FileHandle(forReadingFrom: url)
            defer { file.closeFile() }
            var context = CC_MD5_CTX()
            CC_MD5_Init(&context)
            // Read and hash the file in 1 MB chunks
            while case let data = file.readData(ofLength: bufferSize), data.count > 0 {
                data.withUnsafeBytes { (pointer) -> Void in
                    _ = CC_MD5_Update(&context, pointer, CC_LONG(data.count))
                }
            }
            // Finalize the MD5 digest and format it as lowercase hex
            var digest = Data(count: Int(CC_MD5_DIGEST_LENGTH))
            digest.withUnsafeMutableBytes { (pointer) -> Void in
                _ = CC_MD5_Final(pointer, &context)
            }
            let result = digest.map { String(format: "%02hhx", $0) }.joined()
            print("result: \(result)")
        } catch {
            print("calculation error: \(error.localizedDescription)")
        }
    }

A: OK, here is a little test. To do this I dragged a png and some random file into the project. Note that I added the files directly to the project, not to assets. The answer I linked to in the comments actually mentions that you cannot get the image directly but, as you indicate, it is also saved as png data. The project is a standard iOS SwiftUI app and I added the code below to the autogenerated ContentView.swift file.
    import SwiftUI
    import CommonCrypto

    struct ContentView: View {
        var body: some View {
            Text("Hello, world!")
                .padding()
                .onAppear {
                    md5(url: Bundle.main.url(forResource: "t", withExtension: "png"))
                    md5(url: Bundle.main.url(forResource: "something", withExtension: "ext"))
                }
        }
    }

    func md5(url: URL?) {
        guard let url = url else {
            print("Skipping empty url")
            return
        }
        print("Summing \(url)")
        let bufferSize = 1024 * 1024
        do {
            let file = try FileHandle(forReadingFrom: url)
            defer { file.closeFile() }
            var context = CC_MD5_CTX()
            CC_MD5_Init(&context)
            while case let data = file.readData(ofLength: bufferSize), data.count > 0 {
                data.withUnsafeBytes { (pointer) -> Void in
                    _ = CC_MD5_Update(&context, pointer, CC_LONG(data.count))
                }
            }
            // Finalize the MD5 digest and format it as lowercase hex
            var digest = Data(count: Int(CC_MD5_DIGEST_LENGTH))
            digest.withUnsafeMutableBytes { (pointer) -> Void in
                _ = CC_MD5_Final(pointer, &context)
            }
            let result = digest.map { String(format: "%02hhx", $0) }.joined()
            print("result: \(result)")
        } catch {
            print("calculation error: \(error.localizedDescription)")
        }
    }

    struct ContentView_Previews: PreviewProvider {
        static var previews: some View {
            ContentView()
        }
    }

This code is pretty much your MD5 function together with some code to test those files added to the project. Now, the MD5 for the plain file is correct, but it does not match for the image! I presume Apple adds some stuff to the image. I printed the URL and checked: the image is a lot smaller than the original, so it is probably a stripped-down version with all or most of the metadata removed, but this is just a guess. Anyhow, I share your pain ... Why do these images differ - [img drawInRect:] problem; here Apple changes an image that you draw in what appears to be an arbitrary way. I presume these kinds of tweaks are meant to improve UI or UX, but personally I do not appreciate them. Your md5 function works, though, FWIW.
But you should be careful to hash exactly the same file as you do with the other MD5 tools. Thus you should not convert it in any way that would change even a single byte of the original. This is why pngData is a bad idea. At the same time, I would have expected that copying the file into your project leaves it unchanged, but apparently not. EDIT: FWIW, if you rename the image to something without the png extension and then add it to the project, it stays the same. I duplicated that image, renamed it to z.z, and added it to the project; the MD5 of z.z is then the same as the original's.
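As a language-neutral cross-check of the chunked-hashing approach above, here is a minimal Python sketch (the helper name, buffer size, and file path are my own choices, not from the question). Hashing the raw, unmodified bytes of a file this way agrees with what the md5/md5sum terminal tools print for that file, so any mismatch means the bytes themselves were altered, for example by re-encoding the image via pngData.

```python
import hashlib

def md5_of_file(path, buffer_size=1024 * 1024):
    """Chunked MD5 over the raw bytes of a file, mirroring the Swift loop above."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        # Read in buffer-sized chunks so large files are not loaded at once
        while chunk := f.read(buffer_size):
            digest.update(chunk)
    return digest.hexdigest()

# Chunked hashing of raw bytes is equivalent to hashing them in one go;
# "hello world" is the classic reference value.
assert hashlib.md5(b"hello world").hexdigest() == "5eb63bbbe01eeed093cb22bb8f5acdc3"
```

If this Python result for a given file disagrees with the Swift result, the two programs are not reading the same bytes.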
{ "redpajama_set_name": "RedPajamaStackExchange" }
6,887
Q: Excel Addin and ASP.net I want to add functionality to our website: when a user requests to change a set of data (shown in a GridView) by clicking "Edit", it would open up in Excel. When the user has made his or her changes, he or she should click an add-in button that posts the data back to the website and updates it in the database. (For example, with Team Foundation Server you can request to open tasks in Excel, and when it saves it updates TFS.) Would this be possible, and can anyone redirect me to some examples of how to do this? Kind regards :) A: TFS achieves this with a plugin that is installed on the computer running Excel and handles the update and integration. The support for SharePoint is also built into Office out of the box. You would need to create your own plugin for Excel and distribute it to your users.
{ "redpajama_set_name": "RedPajamaStackExchange" }
3,399
Sarah Khorasi, Managing Editor
Georgia's governor's race had both political parties scrambling for the win. Prior to the elections, Stacey Abrams was winning the early election polls. Shortly after, the polls began to fluctuate, raising the question: would Abrams make American history? If results had been similar to the stude...

Kalee Wiley, Co-Editor in Chief
PTSA Reflections is a contest where students enter artwork centered around a theme. The 2018 theme was "Heroes Around Us." There are many entry categories, ranging from Dance Choreography and Literature to Photography and Visual Arts. In addition, there is a division for a Special Artist in the school....

Leon Christan, Backpage Editor
For decades, California has been known as an innovator in legal reform. From San Francisco's barrier-shattering school system to the state's views on immigration, California has been seen as a role model and also as a petri dish for new laws. In the past two years, the state has passed many groundbreaking laws that have finally gone into effect this past January ('18). Three such laws are AB 10, Propositi...

Emily Ogbodo, Staff Writer
Turn on any major news channel, and there is a good chance that it is discussing the current relations between North Korea and the U.S. With President Donald Trump and Kim Jong Un hurling insults back and forth, one may not know what to make of the situation. Some worry about the imminent possibility...

Candler Clark, Opinions Editor
Parkview suffered a close loss to the Brookwood Broncos on the 20th, 27 - 30. Parkview had a mediocre defensive performance in the first half, giving up 21 points. That left it at a 21 - 7 disadvantage. Even after 20 points in the second half, the team couldn't make up the lost ground. As the Panthers...

Catie Gelting, Features Editor
Every year, the teachers at Parkview elect a Teacher of the Year. To be elected, the teacher must first be nominated by his peer teachers and fill out an application, which will be anonymously reviewed by students, teachers, parents, and community members. This process narrows the pool down to three contestants,...

Karen Ye, Editor in Chief
Some of the most important people in life work behind the scenes. Counselors are some of the most invaluable, providing students with the assistance they need to succeed at school and in life. Parkview's Dr. Samela Reid is one example of such excellence. On October 9th, Parkview's Counseling Department...

Anika Akbar, Editor in Chief of Operations
Virginia Tech, 2007: 32 dead and 17 injured. Sandy Hook Elementary, 2012: 27 dead, 2 injured. Pulse Nightclub, 2016: 49 dead, 53 injured. Every year, the hearts of Americans get heavier and heavier as the loss of lives becomes greater. And more recently, a new event has been added to the annual "Deadliest...

Thuy Pham, Editor in Chief of Design
It came down to the battle between Gryffindor and Hufflepuff during the finale of Quidditch. In the end, despite the Seeker for Hufflepuff capturing the Snitch, Gryffindor won the game. Players in Gryffindor promptly cheered and raced to hug their teammates. Gryffindor acquired 50 points from...

Parkview's Special Education Department has been a longstanding symbol of the school's dedication toward ensuring excellent education for all. With their hardworking teachers and friendly student body, the department has earned many commendations for its efforts. The department has become the...
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
5,331
Q: Apple Mach-O Linker Error No Such File or Directory 'UIKit' I'm getting an Apple-Mach-O Linker Error saying it can't find UIKit. I'm not quite sure as how to proceed. When I delete the contents of my framework search paths it doesn't give the error anymore but of course logically it doesn't find my other frameworks because it needs some framework search paths! I've also tried cleaning everything and deleting my derived data. This is the full output: Ld /Users/Eytan/Library/Developer/Xcode/DerivedData/Schedule-dykchbcjtvfkeacentxdqbecmizy/Build/Products/Debug-iphonesimulator/Schedule.app/Schedule normal x86_64 cd /Users/Eytan/Desktop/xcodeProjects/iOS/Schedule export IPHONEOS_DEPLOYMENT_TARGET=7.1 export PATH="/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/usr/bin:/Applications/Xcode.app/Contents/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin" /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang -arch x86_64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator8.4.sdk -L/Users/Eytan/Library/Developer/Xcode/DerivedData/Schedule-dykchbcjtvfkeacentxdqbecmizy/Build/Products/Debug-iphonesimulator -F/Users/Eytan/Library/Developer/Xcode/DerivedData/Schedule-dykchbcjtvfkeacentxdqbecmizy/Build/Products/Debug-iphonesimulator -F. 
-FFrameworks -FQuickSchedule -FSchedule -FSchedule\ WatchKit\ App -FSchedule\ WatchKit\ Extension -FScheduleTests -FSchedule/Images.xcassets -FSchedule\ WatchKit\ App/Images.xcassets -FSchedule\ WatchKit\ Extension/Images.xcassets -FSchedule/Images.xcassets/AppIcon.appiconset -FSchedule/Images.xcassets/LaunchImage.launchimage -FSchedule\ WatchKit\ App/Images.xcassets/AppIcon.appiconset -filelist /Users/Eytan/Library/Developer/Xcode/DerivedData/Schedule-dykchbcjtvfkeacentxdqbecmizy/Build/Intermediates/Schedule.build/Debug-iphonesimulator/Schedule.build/Objects-normal/x86_64/Schedule.LinkFileList -Xlinker -objc_abi_version -Xlinker 2 -ObjC UIKit -fobjc-arc -fobjc-link-runtime -Xlinker -no_implicit_dylibs -mios-simulator-version-min=7.1 -Xlinker -sectcreate -Xlinker __TEXT -Xlinker __entitlements -Xlinker /Users/Eytan/Library/Developer/Xcode/DerivedData/Schedule-dykchbcjtvfkeacentxdqbecmizy/Build/Intermediates/Schedule.build/Debug-iphonesimulator/Schedule.build/Schedule.app.xcent -framework UIKit -lsqlite3 -lz -framework SystemConfiguration -framework StoreKit -framework Security -framework QuartzCore -framework CoreLocation -framework CoreGraphics -framework CFNetwork -framework AudioToolbox -framework ParseFacebookUtils -framework ParseUI -framework ParseFacebookUtilsV4 -framework ParseCrashReporting -framework ParseTwitterUtils -framework Foundation -framework Bolts -framework Parse -Xlinker -dependency_info -Xlinker /Users/Eytan/Library/Developer/Xcode/DerivedData/Schedule-dykchbcjtvfkeacentxdqbecmizy/Build/Intermediates/Schedule.build/Debug-iphonesimulator/Schedule.build/Objects-normal/x86_64/Schedule_dependency_info.dat -o /Users/Eytan/Library/Developer/Xcode/DerivedData/Schedule-dykchbcjtvfkeacentxdqbecmizy/Build/Products/Debug-iphonesimulator/Schedule.app/Schedule And this is my current framework search paths for the target that this is happening in: $(SRCROOT) Recursive A: If it's fixed when you clear your search paths, it's probably missing something that's 
inherited. In the search path editor, above $(SRCROOT), add $(inherited). Does that help?
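As a sketch of the suggested fix, the setting might look like this in a target's build configuration or an .xcconfig file. This is an illustrative fragment, not taken from the project above:

```
// Hypothetical .xcconfig fragment: $(inherited) keeps search paths that
// are defined at the project level or injected by dependency managers,
// so a per-target override does not silently drop them.
FRAMEWORK_SEARCH_PATHS = $(inherited) $(SRCROOT)
```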
{ "redpajama_set_name": "RedPajamaStackExchange" }
6,587
Got it now. Thanks for the clarification. The workaround is not ideal, but it works for now. Please let me know when I can download the fixed version to test it. Wow, it looks fantastic. Thank you for sharing it! Does OpenMV support the ipproto_sec parameter or is that only for the WiPy? Thanks Ibrahim. It works now. I was able to update the firmware to 2.6, but I get this error when I run the WINC Firmware Update Script. Yeah, the firmware was released earlier but the next IDE will package the v2.6 firmware. Will the new version of the IDE be released this week? I've just finished compiling the latest IDE binaries with this new firmware. Everything will be released this week. Does everything mean a new IDE release in conjunction with the 2.6 firmware (released on Dec 4)? The LED Control documentation seems to be out of sync with the example code. That will be great. Would you mind posting back here when it is available? Thanks Ibrahim! Is there a Fritzing part available for the OpenMV M7? @kwagyeman, pretty much the same components as shown in the above picture. In our case, we'll use it to turn on a light switch when the camera is operating in the dark hours. @iabdalkader, I agree. However, it would be nice to have a shield that is ready for plug-and-play. Are there any plans to develop a Relay Shield for the OpenMV board? Something like the one for the RFDuino. Separate from the capability of processing images, how do the processing power and memory of the M7 board (using the WiFi shield) compare to other boards like the WiPy 2.0 or the Adafruit HUZZAH with ESP8266 WiFi? Has anyone run benchmark tests to compare them? Thanks Ibrahim. The sleep(10) in the original servo_control.py example was too short for my servo. Your snapshot code uses sleep(100) and with that value it works. You might want to update the servo_control.py example. Anyway, thank you both for looking into this. 
I didn't send you my code since I was able to reproduce it with the pixy_uart_emulation.py example. I just tested using the pixy_uart_emulation.py example and the same problem occurred. It refuses to run it (I sent you a video as a PM). It works fine with smaller .py examples like helloworld.py. I'm running them without the SD card inserted, though. Is there a limit on the size of the Python script being run from the IDE? I'm using the OpenMV-7 board and my .py file consists of 198 lines and its size is ~5KB, but I'm now reaching a point where I have to remove lines to be able to run it from the IDE. If you run /examples/14-WiFi-Shield/mjpeg_streamer.py from the IDE and you stop it while it is "Waiting for connections..", then you cannot eject the M7 from its USB connection. I'm testing it on a Mac with OS X 10.10.5. The M7 is running with the latest firmware and the WiFi shield. How does the mjpeg_streamer example generate this HTML page? Can it be overridden with my own or customized? Are the connections below correct to control one (1) servo using the M7 board with the WiFi shield only? Thank you for offering a replacement. I used a Windows 10 machine as you suggested, but the LED still continues flashing white after the Bootloader run from the IDE completes. I uploaded the openmv.dfu file, but the LED still flashes white after the reprogramming dialog completed. I'm pretty much experiencing the same problem/behavior described here: http://forums.openmv.io/viewtopic.php?f=6&t=208#p1144 When you wrote "DFU is painfully slow", how slow? 5 mins, 1 hour, a day? The LED is flashing rapidly and it looks like a white color. There is no uSD inserted. Thanks. Sorry Kwabena, I'm not sure I follow. The servo has 3 wires, but are you saying that I need to connect the Vin and GND to an external power source and only use the servo signal wire to control it? The M7 has one free pin when the WiFi shield is used for servos. Servo channel 3. Or P9. 
Great, thanks Kwabena. One more question: can I power the board and the servo via the USB connector using an external 5V 2A power bank? Do I need the servo shield if I want to control just one Hitec servo like the HS-85MG? Can I just connect it directly to one of the pins of the OpenMV board and control it with the Servo.py script? Did you order the OpenMV Cam M7? Yes, we did. It should be under the name Guy Power. kwagyeman, did you work up a .py script to output MAVLink from the OpenMV module? I just wanted to share that uFactory has a crowdfunding campaign on Indiegogo and their video promotes the OpenMV board. Thanks Dave. Your last post did it. It works now, and it's cool that I can build it from my Mac on OS X. Cheers! kwagyeman wrote: It just isn't linked into the build. How do you link modubinascii into the build? Do I need to just add it to the makefile? No, I don't have a Linux machine, so let me know when you have a custom image for me to test. Thanks! Why doesn't OpenMV support ubinascii? Topic: Hw reset and/or reboot? Re: Hw reset and/or reboot? I used sensor.py to simplify things. We do use a different name. It seems that is how MicroPython works: it loads imported modules and caches them in memory, which forces you to hard-reset the board if you need to test new changes in those modules.
{ "redpajama_set_name": "RedPajamaC4" }
5,215
Last updated: 10:35 AM ET, Sun May 05 2019

Portland, Oregon, is the state's largest city, and among the fastest-growing cities in the U.S. Yet this Pacific Northwest burgh near the confluence of the Willamette and Columbia Rivers has not morphed into a megalopolis. Rather, it has become known for strong land-use planning, light rail, the arts, culinary excellence, microbreweries and fine wine. Portland consists of 10 neighborhoods, each with a distinct character. Among them are the multicultural Alberta Arts District; the bohemian Hawthorne and Belmont neighborhood; quirky, retro Hollywood; affluent Nob Hill; the industrial-turned-hip and trendy Pearl District; and Old Town/Chinatown, the original heart of the city. The arts scene is alive and well in Portland, with more than 150 galleries and lots of museums, like the Portland Art Museum and the one-of-a-kind 3-D Center of Art and Photography, as well as monthly art walks in the Pearl and Alberta Districts. Among other cultural attractions in Portland are the Oregon History Museum, with exhibits on topics like the Oregon Trail and Native culture; the interactive World Forestry Center Discovery Museum, where visitors learn about the sustainability of forests and trees; and the Portland Underground (aka Shanghai Tunnels), a series of tunnels and catacombs that were once used to shanghai unsuspecting sailors, loggers and ranchers. Known as the City of Roses, Portland also is home to several public rose gardens, including the International Rose Test Gardens. Thirty minutes from Portland is one of the country's prime wine growing regions, the Willamette Valley, with hundreds of wineries. Stretching 150 miles, the Willamette Valley is the largest of Oregon's 16 American viticultural areas, which produce many varieties, including Oregon's renowned pinot noir.
Many wineries offer tasting rooms, and wine tours are available from several local operators. Portland's cuisine spans the ethnic spectrum, but the city is best known for regional, farm-to-table foods, with an emphasis on sustainable produce and fresh fish and meats, paired with locally produced wine and beer for a range of budgets. Portland also is a hub for microbreweries and brew pubs, where visitors can enjoy locally crafted beer with their meals. Besides hip bistros, fine dining and casual spots, Portland offers several culinary festivals, including the Oregon Seafood and Wine Festival in February, the Oregon Brewers Festival in July and Bite of Oregon in August. Portland International Airport is served by 13 carriers and offers car rentals from major companies. In addition, visitors can take Amtrak to Portland Union Station. Getting around is easy, with the TriMet bus system and the MAX light rail system providing service around the city and the Portland Streetcar providing rides in the downtown and adjacent areas. Interstate 5 runs through Portland and connects with I-205. While Portland's annual precipitation averages 36 inches, rain falls mostly during winter. Summers are warm and dry, making for prime visiting conditions. The average daytime temperature in July is 76, while the mercury drops to as low as 40 in January.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
2,447
The Bjärby Runestones are two Viking Age memorial runestones located near Grästorp, Sweden, in Bjärby synod, which was in the historic province of Västergötland. The two stones are memorials to men who held the titles thegn and drengr, and one has a depiction of the hammer of the Norse pagan deity Thor.

Vg 113
Västergötland Runic Inscription 113 or Vg 113 is the Rundata listing for a runestone located in Lärkegapet, which is about one-half kilometer east of Grästorp. The inscription, which is on a gneiss stone that is 2.5 meters in height, consists of two vertical bands of runic text with the sides of the runic bands forming the handle of a hammer, which is considered to be a depiction of Thor's hammer Mjöllnir. Because of the length of the text bands, the hammer has a long shaft with the head located at the top of the stone. Thor's hammer was used on several memorial runestones in Sweden and Denmark, perhaps as a parallel to or a pagan reaction to the use of the cross by Christians. Other surviving runestones or inscriptions depicting Thor's hammer include runestones U 1161 in Altuna, Sö 86 in Åby, Sö 111 in Stenkvista, Öl 1 in Karlevi, DR 26 in Laeborg, DR 48 in Hanning, DR 120 in Spentrup, and DR 331 in Gårdstånga. The inscription is classified as being carved in runestone style RAK, which is the classification for inscriptions where the ends of the runic band do not have any attached serpent or beast heads. The runic text states that the stone was raised as a memorial to his kinsman Bjôrn and describes the deceased man as being "a very good thegn." The term thegn was used in the late Viking Age in Sweden and Denmark to describe a class of retainer. About fifty memorial runestones describe the deceased as being a thegn.
Of these, the runic text on sixteen other runestones uses the same Old Norse phrase harða goðan þegn: Vg 59 in Norra Härene, Vg 62 in Ballstorp, Vg 102 in Håle gamla, Vg 115 in Stora Västölet, Vg 151 in Eggvena, Vg NOR1997;27 in Hols, DR 86 in Langå, DR 106 in Ørum, DR 115 in Randers, DR 121 in Asferg, DR 123 in Glenstrup, DR 130 in Giver, DR 213 in Skovlænge, DR 278 in Västra Nöbbelöv, DR 294 in Baldringe, and DR 343 in Östra Herrestads. In addition, four inscriptions that use a different word order, þegn harða goðan, are Vg 74 in Skolgården, Vg 152 in Håkansgården, Vg 157 in Storegården, and Vg 158 in Fänneslunda. The name of the sponsor, Dagr, which is an Old Norse word meaning "day," also appears on Vg 101 in Bragnum and Ög 43 in Ingelstad, which uses an ideogram for the name, and is the personification of day in Norse mythology.

Inscription
Transliteration of the runes into Latin characters: takh : risþi : stn : þaisi : ʀfti : burn : frita : harþa : kuþih : þikn :
Transcription into Old Norse: Dagʀ ræisti stæin þannsi æftiʀ Biorn frænda, harða goðan þegn.
Translation in English: Dagr raised this stone in memory of Bjôrn, (his) kinsman, a very good Þegn.

Vg 114
Västergötland Runic Inscription 114 or Vg 114 is the Rundata listing for a runestone located in Börjesgården, which is about one-half kilometer northeast of Grästorp. The inscription, which is on a stone that is 2.5 meters in height and made of gneiss, consists of runic text within a single text band in the shape of a hook. The inscription, similar to Vg 113, is classified as being carved in runestone style RAK. The runic text states that the stone is a memorial raised by Þórir in memory of his brother Tóki. The deceased man is described as being harða goðan dræng or "a very good valiant man," using the term drengr. A drengr in Denmark was a term mainly associated with members of a warrior group.
It has been suggested that drengr along with thegn was first used as a title associated with men from Denmark and Sweden in service to Danish kings, but, from its context in inscriptions, over time it became more generalized and was used by groups such as merchants or the crew of a ship. Other runestones describing the deceased using the words harþa goþan dræng in some order include DR 1 in Haddeby, DR 68 in Århus, DR 77 in Hjermind, DR 127 in Hobro, DR 268 in Östra Vemmenhög, DR 276 in Örsjö, DR 288 and DR 289 in Bjäresjö, Sm 48 in Torp, Vg 61 in Härlingstorp, Vg 90 in Torestorp, Vg 112 in Ås, the now-lost Vg 126 in Larvs, Vg 130 in Skånum, Vg 153 and Vg 154 in Fölene, Vg 157 in Storegården, Vg 162 in Bengtsgården, Vg 179 in Lillegården, Vg 181 in Frugården, Vg 184 in Smula (using a plural form), the now-lost Ög 60 in Järmstastenen, Ög 104 in Gillberga, and possibly on U 610 in Granhammar.

Inscription
Transliteration of the runes into Latin characters: * þuri : risþi : stin : þonsi : ift- : tuka : bruþur : sin : harþa : kuþan : trik :
Transcription into Old Norse: Þoriʀ ræisti stæin þannsi æft[iʀ] Toka, broður sinn, harða goðan dræng.
Translation in English: Þórir raised this stone in memory of Tóki, his brother, a very good valiant man.

External links
Photograph of Vg 113 in 1995 - Swedish National Heritage Board
Photograph of Vg 114 in 1995 - Swedish National Heritage Board

Runestones in Västergötland
{ "redpajama_set_name": "RedPajamaWikipedia" }
4,720
Notes on Measure of Dispersion | Grade 12 > Business Math > Measures of Dispersion | KULLABS.COM

The measures that describe the degree of scatteredness (spread) of a data set are called measures of dispersion. A measure of central tendency does not explain the nature of the distribution of the data; it only indicates the location of the central position of the given data.

Properties of a good measure of dispersion:

1. It should be rigidly defined.
2. It should be easy to calculate and understand.
3. It should be based on all the observations.
4. It should be suitable for further mathematical treatment.
5. It should be least affected by fluctuation in sampling.
6. It should not be affected by extreme observations.

Absolute and Relative Measure of Dispersion:

A measure of dispersion whose unit is the same as the unit of the given data is called an absolute measure of dispersion. Range, quartile deviation, mean deviation and standard deviation are absolute measures of dispersion. A relative measure of dispersion is obtained as the ratio of an absolute measure of dispersion to a suitable average:

Relative measure of dispersion = $$\frac{\text{Absolute measure of dispersion}}{\text{Average}}$$

A relative measure is therefore independent of units. It is also called a coefficient of dispersion; the coefficient of range, coefficient of quartile deviation, coefficient of mean deviation and coefficient of variation are relative measures of dispersion.

Methods of measuring dispersion:

1. Range

The range is the simplest measure of dispersion. It is the difference between the largest and smallest items of the series and is denoted by R:
Range (R) = L - S, where L = largest item and S = smallest item.
Coefficient of Range = $$\frac{L - S}{L + S}$$

2. Quartile Deviation

Half of the interquartile range is known as the quartile deviation. The difference between the 1st and 3rd quartiles, Q3 - Q1, is the interquartile range; this interval contains the middle 50% of the distribution.
Quartile deviation (Q.D.) = $$\frac{Q_3 - Q_1}{2}$$
Coefficient of quartile deviation = $$\frac{Q_3 - Q_1}{Q_3 + Q_1}$$

3. Mean Deviation

Also known as the average deviation, it is the arithmetic mean of the absolute deviations of the items from an average (mean, median or mode). It is denoted by M.D.

a) For an individual series:
Mean deviation from mean = $$\frac{\sum |x - \overline{x}|}{n}$$
Mean deviation from median = $$\frac{\sum |x - M_d|}{n}$$
Mean deviation from mode = $$\frac{\sum |x - M_o|}{n}$$
where n = total number of items.

b) For a discrete or continuous series:
Mean deviation from mean = $$\frac{\sum f|x - \overline{x}|}{N}$$
Mean deviation from median = $$\frac{\sum f|x - M_d|}{N}$$
Mean deviation from mode = $$\frac{\sum f|x - M_o|}{N}$$
where N = $$\sum f$$.

The corresponding relative measures are:
Coefficient of M.D. from mean = $$\frac{\text{M.D. from mean}}{\text{Mean}}$$
Coefficient of M.D. from median = $$\frac{\text{M.D. from median}}{\text{Median}}$$
Coefficient of M.D. from mode = $$\frac{\text{M.D. from mode}}{\text{Mode}}$$

4. Standard Deviation

The positive square root of the arithmetic mean of the squares of the deviations of the given observations from their arithmetic mean is called the standard deviation. It is denoted by the Greek letter $$\sigma$$ (sigma).

References:
Joshi, Amba Datt, et al. "Measures of Dispersion." Business Maths. Kathmandu: Dreamland Publication, 2013 AD. 311 - 316.

(Source: https://kullabs.com/classes/subjects/units/lessons/notes/note-detail/7590, retrieved 2020-07-13)
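The formulas above can be illustrated numerically. Here is a minimal Python sketch using only the standard library; the data set is made up, and quartile conventions differ between textbooks, so quartile-based values depend on the method chosen (Python's statistics.quantiles defaults to the "exclusive" method).

```python
# Illustrative computation of the dispersion measures defined above.
import statistics

data = [4, 7, 8, 10, 12, 15, 18]  # made-up sample, already sorted

# 1. Range and its coefficient
largest, smallest = max(data), min(data)
data_range = largest - smallest
coeff_range = (largest - smallest) / (largest + smallest)

# 2. Quartile deviation (half the interquartile range) and its coefficient
q1, _, q3 = statistics.quantiles(data, n=4)  # "exclusive" method by default
quartile_deviation = (q3 - q1) / 2
coeff_qd = (q3 - q1) / (q3 + q1)

# 3. Mean deviation from the mean, and its coefficient
mean = statistics.mean(data)
mean_deviation = sum(abs(x - mean) for x in data) / len(data)
coeff_md = mean_deviation / mean

# 4. Population standard deviation
sigma = statistics.pstdev(data)

print(data_range, quartile_deviation, mean_deviation, sigma)
```

Swapping in a frequency-weighted sum of absolute deviations gives the discrete-series formulas in the same way.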
In conversation with Chomsky: American culture and politics-I

The US invasion of Iraq was aimed at setting up permanent military bases there and at benefiting energy corporations and the weapons industry, comments Professor Noam Chomsky, an internationally acclaimed American linguist, in a conversation with Hassan Mirza, published in The Express Tribune Blogs section. The conversation is based on correspondence between the two over a period of four years. Prof Chomsky, a staunch critic of imperialism and capitalism, has responded to the Germany-based Pakistani's questions on almost every subject, from American culture and politics to media, intellectuals, imperialism, science, language, human nature, religion, spirituality, the Indian subcontinent, climate change, and the migration crisis. "My email exchange with Professor Noam Chomsky began in 2017," says Mr Mirza. "I had read many of his writings and was curious about his views on a variety of topics. I sent him an email out of curiosity and what had started as an occasional email exchange at first soon turned into a habit. As a result, I kept writing to him for the next few years, and he was always generous enough to answer my questions. What I eventually learned from him is that the world is a complex and dynamic place that does not lend itself easily to stereotypes, dichotomies and simplistic explanations. Also, I learned that having an open mindset is an antidote to many, if not all, of the problems we face as a species." The interview presented below is a compilation of some of the most interesting questions and responses over the course of the correspondence, writes Mr Mirza. Professor Chomsky himself has read and approved the material and was satisfied with the quality and content. "I hope you derive as much value from this conversation as I have," says Mirza.
Hassan Mirza (HM): Let's talk about the United States; I am interested in the US economy for several reasons. Its economics departments and STEM research centres attract the best students from the developing world. Noam Chomsky (NC): It makes good sense to be concerned with what goes on in the US, whether in academic departments of economics or in the general economy and society. Namely, US power, which is unparalleled. HM: What were the real reasons which caused the US to wage wars in Iraq and Afghanistan? Was it for oil and natural resources or only for power projection? Did these wars benefit the US financially? NC: Both. In 2007, when Bush had to reach a status of forces agreement, a formal declaration made the war aims pretty clear: permanent military bases and preferential treatment for US energy corporations. Both were rejected. Costs to society were huge. Some benefited from the war and occupation, of course. Arms manufacturers, contractors, etc. HM: Which major or famous American and British newspapers do you trust and read the most? Which newspapers do you think are less trustworthy? NC: I read the major national press: New York Times, Washington Post, Wall Street Journal, foreign press on matters that interest me, journals from across the spectrum. Everything has a point of view. Up to the reader to understand and compensate for it. HM: Are there any significant differences between the Democratic and Republican positions on politics, economics, other domestic policies (support for big business, abandonment of working classes) and foreign policy affairs of the US? NC: There are great differences, in many areas. Simply compare the policies. The gulf now is far wider. Moderate Republicans used to be quite similar to liberal Democrats. Now they barely exist. HM: Why was the Democratic party not able to strengthen American labour unions in the last half a century? 
There are many Democrats who want, or at least claim they want, to improve the situation of the American working classes. What is stopping them from doing so? NC: The Democratic Party shifted to the right during the neo-liberal period. The leadership has separated from the base. It now consists mostly of Clintonite New Democrats, who are rather like former moderate Republicans. Rather like the New Labour in Britain. HM: I was wondering, was there any moment in your life when you met any American president and they informed you of their opinion about your criticisms regarding American foreign policy? NC: I've testified in the Senate, and met senators and representatives, but never presidents of the US – sometimes other countries. HM: Was Barack Obama not an exceptional president and incorruptible? Better than Bill Clinton, George W Bush, Ronald Reagan, etc.? NC: Better than Bush, Reagan is a very low bar. Arguably better than Clinton. Nowhere near as good as he could have been. HM: Where did Obama go wrong? NC: Obama could have done a great deal to prevent the current mood. He was elected with working-class support, people who believed his message of "hope" and "change." He quickly sold them out, and many turned to Donald Trump in despair that the Democrats, Obama included, would care about their fate. Jumping from the frying pan into the fire. HM: Yet he is routinely lionised by the American entertainment industry, late-night show hosts, liberal newspapers? NC: It's true that Obama is lionised, very much like John F Kennedy. And in other circles, like Reagan and Trump HM: Why is it so difficult to have a democracy and so easy to have an autocracy, monarchy or other kinds of authoritarian forms of government? NC: It's easier for one or a few individuals to amass power than for a community to act together in an informed and cooperative way. Nevertheless, there has been great progress. For much of the world, highly autocratic structures are a thing of the past. 
HM: It is a common narrative in the Western world that a secular state is necessary for the co-existence of different groups in a state. Can a nation-state still be inclusive, peaceful and democratic without being secular? NC: It depends on whether the religious character of the state is more than symbolic. HM: Income inequality appears to be increasing in the developed world. Why does the situation appear to be grimmer in the US than in Western Europe? America appears to be spiralling down the most. NC: By design. But those who matter, the 1%, are spiralling up, also by design. The tax bill is practically a caricature of ruling class savagery, but it's far more general, including European "austerity". To an unusual extent, the US is a business-run society, for historical reasons. The US has many admirable features, but also deep flaws. HM: Who is carrying out these neo-liberal reforms? NC: The neo-liberal reforms are supported by powerful sectors of the corporate world and private wealth. All over the world. It takes a powerful and popular counter-force to resist them, as always. HM: Does affirmative action at the universities work for minority groups and help improve society in general? Is the idea of affirmative action in academia sensible? NC: Every civilised country, rightly, tries to compensate for discrimination and repression in many ways; one is to assign some degree of priority to students from severely disadvantaged groups. The US does it much less than others, including much poorer countries like Brazil and India. HM: There are many prominent intellectuals (conservatives) in the Anglo-American media, like Roger Scruton, Jordan Peterson, etc., who say that most North American and British universities are infiltrated by mostly left-wing, Marxist and anti-capitalist intellectuals who have contempt for even the most moderate of conservative values.
They discriminate against conservative scholars and hire only the liberal ones as tenured professors in the social sciences departments. The conservatives are only left with privately funded think tanks and forums like the Hoover Institute. Is this all true in your opinion? NC: Scruton is worth reading. Peterson is an utter fraud. They both know very well that there is virtually no Marxist, anti-capitalist faculty in the universities. The faculty consists mostly of moderate liberals (in the US sense of the term – moderate social democrats in the European sense) and conservatives. There is a small fringe tolerated on the left, something considered outrageous by those who demand nothing less than total conformity to the doctrines of the powerful. One part of the far-right lament is true: they do have very well-funded centres and think tanks, something lacking outside the right-wing. At my own university, for example, though it is a state university, the Republican state legislature provides almost no funding, apart from lavish funding for a "Freedom Center" established by the far-right Koch Brothers oil magnates and academic programmes supported and funded by it, teaching doctrine so far to the right that Scruton would approve of it (I ignore Peterson). In general, those who expect total control consider it an intolerable disaster if anything escapes from complete conformity, rather like a spoiled three-year-old who wants all the toys and thinks the world is coming to an end if one of them falls into another child's hands. HM: It is astonishing to me that you are saying that Scruton is worth reading. I actually liked his book How to Be a Conservative. Do you like him as an author? I am asking because he denounced you in his article If Only Chomsky Had Stuck to Syntax. He has also attacked Edward Said in his books. NC: I would pay attention to what he says about me if it made any sense. Since it was simply ignorant ranting, I couldn't care less. 
And I'm sure he produces stupidities about Said and probably others. But he at least makes an effort to present a reasoned form of conservatism. HM: You said in an interview that a lot of funding for the Massachusetts Institute of Technology (MIT) comes from the Pentagon, if I remember correctly, and you also mentioned that a lot of departments at MIT are 'compromised'. Is the university's independence compromised? NC: The Pentagon was the main funder for a long time, but that had little impact on teaching or research. The only department I know of that was 'compromised' was the Political Science department some years ago (no longer), but that was not under any pressure: it was their choice. Same with the Koch's. It's highly unlikely that they had any impact on research or teaching. Where one does find an impact is in the (perfectly open) corporate research grants, which typically have specified short-term goals. HM: Does this mean that funding by the Pentagon or the Koch brothers at MIT was not actually a bad thing? You told me that the University of Arizona has a Freedom Centre financed by the Koch family. Maybe this is an example of a compromise? NC: One has to look at each particular case. Pentagon funding for universities, particularly in the 1950s and 1960s, was part of a general national development programme, which created the modern high tech economy and greatly enriched educational resources. MIT, where I was, was at the heart of it. There was no Pentagon involvement in research or teaching, though of course, the Pentagon is interested in anything that comes out from anthropology to zoology. The Koch funding of the cancer research centre at MIT I'm sure involves no interference. The University of Arizona Freedom Centre is different. It's an explicitly ideological institution, fostering right-wing libertarian ideas. As far as I know, there's no direct interference in what faculty members do, but the general framework is explicit. 
(to be continued)

Courtesy: Express Tribune Blogs

Hassan Mirza is working as an applied scientist in Germany and specialises in Computer Simulations, Applied Artificial Intelligence, and Energy Modelling. In his free time, he reads extensively in multiple languages (Urdu, English, and German) and is interested in writing about scientific and socio-economic issues.
\section{Introduction} Deep-water surface waves are among the most studied examples of nonlinear physical systems. The nonlinearity stems from kinematic and dynamic boundary conditions at the air-fluid interface. Triplet \cite{mcgoldrick1965} and quadruplet \cite{Zakharov1968} interactions can be included and a general integro-differential equation derived. A suitable expansion of the integral kernel \cite{Trulsen2000,Stiassnie1984} was shown to connect it to simple propagation models, like the nonlinear Schr{\"o}dinger equation (NLS) \cite{SulemNLSBOok} and its generalizations (modified NLS---MNLS), the best known among which is the Dysthe equation \cite{Dysthe1979}. The universal NLS possesses many remarkable properties and solutions, such as the modulation instability (MI, also known as Benjamin-Feir instability---BFI) \cite{Benjamin1967,Lo1985,Zakharov2009}, solitons and breathers, the clearest examples of the balance of dispersion and nonlinearity. In the propagation of deep-water waves, gravity effects dominate for long waves (small wave-numbers) and surface tension for short waves. Moreover, viscosity is the ubiquitous damping mechanism \cite{Lamb1945hydrodynamics,Landauvol6} (other routes to dissipation include impurities, obstacles, wave-breaking\dots) {and plays a role in fundamental studies of analogue gravity \cite{Rousseaux2018}}. It induces a vorticity in the fluid and requires therefore to solve the full Navier-Stokes (NS) equations. For small viscosity, the vorticity is significant only in a small boundary layer close to the surface \cite{Lamb1945hydrodynamics,Landauvol6,Longuet-Higgins1953,Longuet-Higgins1960}. For small amplitude waves, i.e.~when nonlinearities are negligible, the velocity potential formulation valid for an inviscid flow can be corrected to include the effects of viscosity \cite{Ruvinsky1991,Joseph2004,Wang2006}. Different loss rates and corrections to the group velocity can be found in Ref.~\cite{Padrino2007}. 
The availability of a quasi-potential formulation greatly simplifies the numerical solution of the hydrodynamic problem \cite{West1987,Dommermuth1987}. {Moreover, it is often assumed without justification that the linear dissipation is so small that no nonlinear correction is required for the formulation of a modified NLS including damping \cite{Wu2006,Dias2008}. Alternatively, nonlinear damping mechanisms were proposed in the form of a Landau damping \cite{Fabrikant1980} or in a phenomenological way \cite{Kato1995,Schober2015a}. } In this work we quantify the nonlinear corrections to propagation equations stemming from kinematic viscosity and justify therefore the conventional assumption of Refs.~\cite{Wu2006,Dias2008}. Our approach is based on the dissipation method detailed in \cite{Prosperetti1976, Padrino2007}, but can be generalized to any expression for the dispersion relations and damping mechanisms. After recalling some simple considerations about dispersion relations and justifying the choice of a specific form (Section \ref{sec:VDR}), we propose a modified set of hydrodynamic equations consistent with our approach (Section \ref{sec:HyEq}). These are straightforwardly adapted in Section \ref{sec:sNLS} to generalize the 1D MNLS \cite{Trulsen2000} \begin{equation} \frac{\partial \hat B}{\partial t} + i\left(\omega(k_0+\kappa)- \omega_0\right)\hat B + i\frac{\omega_0 k_0^2}{2} \mathcal{F}_\kappa\left[|B|^2B\right]=0, \label{eq:MNLS} \end{equation} with {$\hat{B}(\kappa,t)\equiv\mathcal{F}_\kappa[B(x,t)]=\frac{1}{\sqrt{2\pi}}\int\mathrm{d} x B(x,t)e^{-i\kappa x}$ the Fourier transform from real space of $x$ (in the co-moving frame) to the relative wavenumber space of $\kappa$.
} We quantify the nonlinear corrections to the recurrence period and spectral-mean downshift to be less than one percent in typical experimental conditions of a water tank. We also verify that the full dispersion relation plays a key role in the nonlinear evolution of the MI, e.g., in explaining the frequency downshift first observed in \cite{Lake1977}. Finally (section \ref{sec:tNLS}), we show that the considered dispersion relation can be inverted so that not only the space-like, but also the time-like formulation of the MNLS naturally generalizes to the dissipative case. The same approach can be adapted to high-order NLS, such as the Dysthe \cite{Dysthe1979} or the recently proposed compact \cite{Dyachenko2011} and super-compact \cite{Dyachenko2017} equations. Conclusions and outlook complete our manuscript. \section{Viscous dispersion relations} \label{sec:VDR} We first briefly review how to express the dispersion relation for deep-water capillary-gravity waves at the surface of an incompressible viscous fluid. The solution of the linearized hydrodynamic equations for 1D propagation in the $x$-direction is a plane wave $\exp(i k x - i \omega(k)t)$. We decompose $\omega(k)=\omega_\mathrm{R}(k) + i \omega_\mathrm{I}(k)$, where $\omega_\mathrm{R}>0$ (resp.~$\omega_\mathrm{R}<0$) represents forward- (resp.~backward-) propagating waves {(for $k>0$, otherwise the opposite applies)} and $\omega_\mathrm{I}<0$ is the damping rate. 
{In the limit of infinite depth,} the dispersion relation can be shown to be the solution $\omega(k)$ of the implicit equation \cite{Lamb1945hydrodynamics,Landauvol6} \begin{equation} \left(2-i\frac{\omega}{\nu k^2}\right)^2 + \frac{|k|(g + sk^2)}{\nu^2 k ^4} = 4\left(1-i\frac{\omega}{\nu k^2}\right)^{\frac{1}{2}} \label{eq:disprelimp} \end{equation} where $g$ is the standard acceleration due to gravity, $s \equiv T/\rho_\mathrm{f}$ with $T$ the surface tension (in $\mathrm{N m^{-1}}$) of the fluid-air interface and $\rho_\mathrm{f}$ is the density of the fluid. The gravity and capillary contributions dominate, respectively, for small and large $k$, i.e.~for large and small wavelength. Finally, $\nu$ denotes the kinematic viscosity of the fluid (in $\mathrm{m^2/s}$) and is the physical origin of $\omega_\mathrm{I}$. The detailed derivation of Eq.~\eqref{eq:disprelimp} from the linearized Euler equations for an incompressible fluid, along with their kinematic and boundary conditions, consists in including a vorticity field and assuming that the mass transport occurs only in a boundary layer close to the air-fluid interface \cite{Lamb1945hydrodynamics,Landauvol6,Longuet-Higgins1953, Longuet-Higgins1960, Ruvinsky1991, Longuet-Higgins1992a,Dias2008}. Eq.~\eqref{eq:disprelimp} being quartic, two forward-traveling wave branches exist: one is unphysical (it corresponds to a velocity potential diverging for infinite depth $z\to-\infty$, see also in section \ref{sec:HyEq}). \begin{figure}[hbtp] \centering \includegraphics[width=.45\textwidth]{omegareal_water6} \caption{Dispersion relation for gravity-capillary waves propagating at the interface of air and water in the presence of kinematic viscosity. Numerical solution of Eq.~\eqref{eq:disprelimp} for forward propagating waves ($\omega_R\ge 0$). {On the left axis (in logarithmic scale), the solid blue and dotted lines refer to $\mathrm{Re}\,{\omega}$ and its inviscid counterpart $\tilde\omega$. 
The red dotted and green dashed-dotted lines pertain to the right axis (linear scale) and show the two branches of $\mathrm{Im}\,{\omega}$. The vertical dotted line marks the cut-off $k_c$ beyond which $\omega_\mathrm{R}=0$ and $\omega_\mathrm{I}$ bifurcates into two standing wave branches, yielding different damping rates.} } \label{fig:waterdisp} \end{figure} The physical solution is shown in Fig.~\ref{fig:waterdisp} for a gravity-capillary wave propagating at the interface of air and water ($s = 7.28\times 10^{-5}\mathrm{\,m^3\,s^{-2}}$) in the presence of kinematic viscosity $\nu=1\times 10^{-6}\mathrm{\,m^2\,s^{-1}}$. {For large $k$, in the $\mathrm{\mu m}^{-1}$ range}, $\omega_\mathrm{R}$ rises and then drops abruptly. At $k=k_c=1.25\times 10^{8}\,\mathrm{m}^{-1}$ it cuts off, i.e.~the viscous damping is so strong that $\omega_\mathrm{R} = 0$: the mode is a standing wave. For $k>k_c$ two branches with different damping are admitted, see the right axis of Fig.~\ref{fig:waterdisp}. The least-damped branch (solid line) is expected to take the leading physical role at those wave-numbers, {the other (dashed line) being dissipated much more rapidly \cite{Landauvol6} and not behaving as a surface wave \cite{Mainardi1987}.} Relying on physical arguments \cite{Ruvinsky1991,Joseph2003,Joseph2004,Wang2006,Padrino2007}, it was shown that simpler explicit formulas can be obtained, by expressing the vorticity in terms of the velocity potential and surface elevation by assuming linear propagation and small viscosity. Nevertheless, a simple algebraic manipulation of Eq.~\eqref{eq:disprelimp} allows us to re-derive them easily. Let us define $\theta\equiv -i\frac{\omega}{\nu k^2}$ and $\tilde \theta\equiv -i\frac{\tilde\omega}{\nu k^2}$ (related to the Reynolds number defined for water waves), with $\tilde\omega(k)\equiv\sqrt{|k|(g+sk^2)}$ (i.e.~the inviscid dispersion relation).
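Before turning to explicit approximations, Eq.~\eqref{eq:disprelimp} can also be solved numerically: in terms of $\theta$, squaring away the radical leaves a quartic, whose roots are then filtered against the unsquared equation (principal square root, consistent with a vorticity decaying with depth) and the damping and propagation signs. A sketch, where the glycerin parameters quoted later in this section and the single wavenumber are illustrative choices:

```python
import numpy as np

g = 9.81                          # m/s^2
s, nu = 5.03e-5, 6.21e-4          # glycerin: T/rho and kinematic viscosity (SI)
k = 100.0                         # illustrative wavenumber in rad/m, below cut-off

tw = np.sqrt(k * (g + s * k**2))  # inviscid dispersion relation
B = (tw / (nu * k**2))**2         # Eq. (2) becomes (2+theta)^2 + B = 4*sqrt(1+theta)

# squaring away the radical leaves a quartic in theta = -i*omega/(nu k^2)
coeffs = [1.0, 8.0, 24 + 2 * B, 16 + 8 * B, (4 + B)**2 - 16]
roots = np.roots(coeffs)

def resid(t):
    # residual of the unsquared equation with the principal square root
    return abs((2 + t)**2 + B - 4 * np.sqrt(1 + t)) / (abs(t)**2 + B)

# keep the root that is damped (Re theta < 0) and forward-travelling (Im theta < 0)
cand = [t for t in roots if resid(t) < 1e-6 and t.real < 0 and t.imag < 0]
theta = cand[0]
omega = 1j * nu * k**2 * theta    # omega_R > 0, omega_I < 0
print(omega)
```

The spurious roots of the squared equation satisfy the opposite branch of the square root and are rejected by the residual test; the conjugate root describes the backward-travelling wave.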
Eq.~\eqref{eq:disprelimp} is written compactly as \begin{equation} \left(\frac{2+\theta}{\tilde \theta}\right)^2 - 1 = \frac{4}{\tilde \theta^{\frac{3}{2}}}\left(\frac{1+\theta}{\tilde \theta}\right)^\frac{1}{2} \label{eq:diseprelimp1} \end{equation} {Notice that for $\nu k^2\ll 1$, $\tilde{\theta}\gg 1$. Moreover, for small $k$ it can be safely assumed that the quantity between parentheses in the LHS is of order one, i.e.~$\theta\approx\tilde\theta$, while the RHS can be considered of higher order, by virtue of its prefactor.} The standard dispersion relation is indeed obtained by neglecting the RHS of Eq.~\eqref{eq:diseprelimp1}: we write $\theta = \pm\tilde \theta-2$, i.e.~ \begin{equation} \omega = \pm\tilde\omega -2i\nu k^2, \label{eq:stddisp} \end{equation} which we refer to as (small-$k$) Lamb approximation \cite{Lamb1945hydrodynamics}. It was shown that its physical justification can be traced back to the smallness of the vortical contribution to pressure and the fact that the surface deformation and boundary layer (where the vorticity is non-negligible) are small \cite{Ruvinsky1991,Dias2008}. However, the viscosity enters only in the imaginary part, so no cut-off for traveling waves can possibly appear. Lamb derived also the two branches of the damping rate in the opposite case of small $\tilde{\theta}$, which read respectively $\omega_\mathrm{I} = -\frac{\tilde\omega^2}{2\nu k^2}$ and $\omega_\mathrm{I} = -0.91 \nu k^2$, the former being the most physically important. The cut-off is estimated from Eq.~\eqref{eq:diseprelimp1}, by looking for a real solution $\theta$ of double multiplicity. {We solve the system composed by Eq.~\eqref{eq:diseprelimp1} and its derivative with respect to $\theta$. We obtain $\theta=\beta$, with $\beta$ the real root of $\beta^3 +5\beta^2+8\beta+3=0$. 
Thus, $\theta$ is eliminated and $k_c$ is the solution of $\tilde\omega^2(k_c)=(4/\alpha-\alpha^2)(\nu k_c^2)^2\approx 0.58 (\nu k_c^2)^2$, where $\alpha\equiv\beta+2$.} In order to have a single expression for small and large $k$, different approximations of the RHS of Eq.~\eqref{eq:disprelimp} are shown to behave better than Eq.~\eqref{eq:stddisp}. Indeed, if we let $1+\theta \approx 1$ on the RHS of Eq.~\eqref{eq:diseprelimp1}, we re-obtain the result of the dissipation method (DM), \cite{Prosperetti1976,Padrino2007}, i.e.~ \begin{equation} \theta=-2\pm\sqrt{\tilde \theta^2+4}. \label{eq:DM} \end{equation} In contrast to Eq.~\eqref{eq:stddisp}, this relation exhibits a cut-off in the real part of $\omega$, for $\tilde{\theta}=-2i$. Beyond, the two signs represent the two branches of dissipation of the standing mode. Further, we notice that, in the alternative approach proposed in \cite{Joseph2004} (viscous potential flow---VPF), only the dynamic boundary condition (Bernoulli equation) is modified, by introducing viscosity as an external pressure perturbation, analogously to \cite{Wu2006}. It can be reproduced by the expansion $(1+\theta)^\frac{1}{2} \approx 1 + \theta/2$ and its solution is written, in our notation \begin{equation} \theta =-1\pm\sqrt{\tilde \theta^2+1}. \label{eq:VPF} \end{equation} Finally, we may ask ourselves what is the result of expanding the RHS of Eq.~\eqref{eq:diseprelimp1} to the second order: we write a third approximation, \begin{equation} \theta=-\frac{2}{3}\pm\frac{2}{3}\sqrt{ \frac{3}{2}\tilde\theta^2 + 1}, \label{eq:MVPF} \end{equation} which we will refer to as modified VPF (MVPF). {The results are summarized in Table \ref{tab:formulas}. 
Among the dispersion relations \eqref{eq:DM}--\eqref{eq:MVPF}, only DM corresponds asymptotically to the Lamb solution as far as damping at small $k$ is concerned; $k_\mathrm{c}$ is not accurate, though.} The real part of the VPF is a better approximation at the cut-off and matches the conventional dispersion relation very well, but the damping is half of the expected one at small $k$, while standing waves ($k>k_c$) of lesser damping have the correct asymptotic form (compared to the estimation made by Lamb, see above). Finally, the MVPF behaves better at the cut-off, but does not reproduce the behavior of either $\omega_\mathrm{R}$ or $\omega_\mathrm{I}$ for small values of $k$, so it is of little practical use in the most accessible oceanic regimes. In \cite{Wang2006}, it was shown that irrotational theories fail to provide a good approximation around $k_c$. The MVPF shows instead that a good approximation of the cut-off is incompatible with a satisfactory asymptotic behavior at small and large $k$ simultaneously.
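These cut-off estimates can be checked numerically: in each case the condition has the form $\tilde\omega^2(k_c) = C\,(\nu k_c^2)^2$, with $C\approx 0.58$ (implicit relation), $C=4$ (DM), $C=1$ (VPF) and $C=2/3$ (MVPF). A sketch for the water parameters quoted earlier (the fluid choice is illustrative):

```python
import numpy as np

g = 9.81               # m/s^2
s = 7.28e-5            # m^3/s^2, water-air surface tension over density
nu = 1.0e-6            # m^2/s, water

# implicit-relation prefactor: alpha = beta + 2, beta the real root of
# beta^3 + 5*beta^2 + 8*beta + 3 = 0
beta = min(np.roots([1, 5, 8, 3]), key=lambda r: abs(r.imag)).real
alpha = beta + 2
C_impl = 4 / alpha - alpha**2    # ~0.58

def cutoff(C):
    # positive root of g*k + s*k^3 = C*(nu*k^2)^2
    roots = np.roots([C * nu**2, -s, 0.0, -g])
    return max(r.real for r in roots if abs(r.imag) < 1e-6 * abs(r) + 1e-12)

results = {name: cutoff(C)
           for name, C in [("implicit", C_impl), ("DM", 4.0),
                           ("VPF", 1.0), ("MVPF", 2 / 3)]}
for name, kc in results.items():
    print(f"{name:8s} k_c = {kc:.2e} 1/m")
```

For water the gravity term is negligible at these wavenumbers, so $k_c\approx s/(C\nu^2)$, reproducing the values quoted in the tables.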
\begin{table} \centering \begin{tabular}{|r|c|c|c|c|} \hline & $\omega_\mathrm{R}(k\ll 1)$ & $\omega_\mathrm{I}(k\ll 1)$ & cut-off & $\omega_\mathrm{I}(k\gg 1)$\\ \hline Implicit \eqref{eq:diseprelimp1} & $\tilde \omega$ & $-2\nu k^2$ & $\tilde\omega^2=0.58(\nu k^2)^2$ & \vtop{\hbox{\strut$-\frac{\tilde\omega^2}{2\nu k^2}$}\hbox{\strut$-{0.91\nu k^2}$} }\\ \hline DM \eqref{eq:DM}& $\tilde \omega$ & $-2\nu k^2$ & $\tilde\omega^2=4(\nu k^2)^2$ &\vtop{\hbox{\strut $-\frac{\tilde\omega^2}{4\nu k^2}$}\hbox{\strut $-4\nu k^2$}}\\ \hline VPF \eqref{eq:VPF}& $\tilde \omega$ & $-\nu k^2$ & $\tilde\omega^2=(\nu k^2)^2$ & \vtop{\hbox{\strut $-\frac{\tilde\omega^2}{2\nu k^2}$}\hbox{\strut $-2\nu k^2$} }\\ \hline MVPF \eqref{eq:MVPF}& $\sqrt{\frac{3}{2}}\tilde \omega$ & $-\frac{2\nu k^2}{3}$ & $\tilde\omega^2=\frac{2}{3}(\nu k^2)^2$ & \vtop{\hbox{\strut $-\frac{\tilde\omega^2}{2\nu k^2}$}\hbox{\strut $-\frac{4}{3}\nu k^2$}}\\ \hline SDM \eqref{eq:SDM}& $\tilde \omega$ & $-2\nu k^2$ & n/a & $-\frac{\tilde\omega^2}{2\nu k^2}$\\ \hline \end{tabular} \caption{Comparison of the asymptotic behavior of the implicit and approximated dispersion relations. The classical estimations of Lamb on the implicit formula, the DM, VPF, and MVPF are simply derived in the text from the full dispersion relation, SDM is the simplified DM found by neglecting the viscous terms under the square-root in Eq.~\eqref{eq:denominators}. The first two are also discussed in \cite{Padrino2007}. } \label{tab:formulas} \end{table} The numerical values of cut-off for the fluids considered in \cite{Padrino2007} are reported in Table \ref{tab:cutoff}, where we include also the MVPF results, for the sake of completeness. 
\begin{table}[h] \begin{tabular}{|r|c|c|c|} \hline & Water & Glycerin & SO10000\\ \hline Implicit \eqref{eq:diseprelimp1}& $1.25\times 10^8$ & 445.18 & 54.30 \\ \hline DM \eqref{eq:DM} & $1.82\times 10^7$ & 196.81 & 28.50\\ \hline VPF \eqref{eq:VPF}& $7.28\times 10^7$ & 344.64 & 45.29 \\ \hline MVPF \eqref{eq:MVPF}& ${1.09\times 10^8}$ & 416.10 & 51.86\\ \hline \end{tabular} \caption{Cut-off values (in $\mathrm{m^{-1}}$) for the three examples of Ref.~\cite{Padrino2007} and their comparison to the MVPF result. {SO10000 stands for silicone oil of viscosity 10000 cSt, for which, in SI units, $\rho_\mathrm{f}=9.69 \times 10^2 \,\mathrm{kg\, m^{-3}}$, $\nu=1.02\times 10^{-2}\mathrm{\,m^2\,s^{-1}}$, and $s = 2.10\times 10^{-5}\mathrm{\,m^3\,s^{-2}}$.} } \label{tab:cutoff} \end{table} In order to visually confirm the formulas of Table \ref{tab:formulas}, we compare the different approximations in Fig.~\ref{fig:glycerindisp} for a surface between glycerin and air ($\rho_\mathrm{f}=1.26 \times 10^3 \, \mathrm{kg\, m^{-3}}$, $\nu=6.21\times 10^{-4}\mathrm{\,m^2\,s^{-1}}$, $s = 5.03\times 10^{-5}\mathrm{\,m^3\,s^{-2}}$). The larger viscosity allows us to have a cut-off at relatively small $k$ and to observe both the short- and long-wave ranges on a linear scale. The same behavior applies to other fluids. \begin{figure} \centering \includegraphics[width=.45\textwidth]{omega_glycerin4} \caption{Dispersion relation for gravity-capillary waves propagating at the interface of air and glycerin. (a) real part; (b) imaginary part. Numerical solutions of Eq.~\eqref{eq:disprelimp} for forward propagating waves ($\omega_R\ge 0$) are shown as a solid dark yellow line. The conventional approximation is shown by a green dotted line. The different approximations are represented by dashed lines (black---DM, red---VPF, pink---MVPF). Finally, the dashed-dotted lines correspond to the SDM: it does not exhibit a clear cut-off, but provides a good approximation of damping at both $k$ limits.
} \label{fig:glycerindisp} \end{figure} We include also a further simplification. Suppose we choose the DM relation of Eq.~\eqref{eq:DM}, which provides the best approximation of damping and group velocity for long waves. We rewrite it as \begin{equation} \omega(k) = \frac{\tilde\omega}{\hat{D}(k)} \label{eq:DM1} \end{equation} with \begin{equation} \hat{D}(k)\equiv {\pm\sqrt{1-\left(\frac{2\nu k^2}{\tilde{\omega}}\right)^2} + \frac{2i\nu k^2}{\tilde{ \omega}}}. \label{eq:denominators} \end{equation} For small $\nu k^2$, we can neglect the term proportional to $\nu^2$ under the square root in Eq.~\eqref{eq:denominators} and obtain the simplified DM (SDM) \begin{equation} \omega\approx \frac{\tilde\omega}{\pm 1 + \frac{2i\nu k^2}{\tilde{ \omega}}}, \label{eq:SDM} \end{equation} It is trivial to verify---see also table \ref{tab:formulas}---that Eq.~\eqref{eq:SDM} provides a better asymptotic behavior than Eq.~\eqref{eq:DM} for damping at large $k$, at the expense of a smooth transition around the cut-off, as shown in Fig.~\ref{fig:glycerindisp} as a cyan dash-dotted line. That is, the series expansion of $\tilde\omega/\omega$ provides a more robust approximation than the conventional direct Taylor expansion of $\omega$ (i.e.~the low-$k$ Lamb approach, green dotted lines in Fig.~\ref{fig:glycerindisp}), which predicts ever increasing phase velocity $v_\mathrm{p}\equiv\omega_\mathrm{R}/k$ and damping. This is similar to the fit of a dispersion relation by means of a Pad{\'e} approximant, well known in optics \cite{Amiranashvili2010a}. \section{Hydrodynamic equations} \label{sec:HyEq} Viscosity induces vorticity, thus making the solution of the nonlinear hydrodynamic problem (NS equations) extremely complicated. The most practical workaround is to extend the use of a velocity potential $\phi(x,z,t)$, where $z$ is the depth coordinate and $x$ is the longitudinal propagation direction, to the viscous case. 
By denoting the free surface elevation $\eta(x,t)$, the system of hydrodynamic equations in an inviscid and infinitely deep fluid reads as \begin{equation} \begin{aligned} \phi_{xx} + \phi_{zz} &= 0 &\mathrm{for}\; -\infty<z<\eta \\ \nabla \phi &\to 0 &\mathrm{for}\; z\to-\infty \\ \eta_t + \phi_x \eta_x - \phi_z & =0 &\mathrm{for}\; z=\eta \\ \phi_t + \frac{1}{2}\left(\phi_x^2 + \phi_z^2\right) + g \eta &\\ -s \frac{\eta_{xx}}{\left(1+\eta_x^2\right)^{\frac{3}{2}}} & = 0 &\mathrm{for}\; z=\eta \\ \end{aligned} \label{eq:HydroInviscid} \end{equation} These are respectively the Laplace equation in the fluid, the rigid bottom condition, the kinematic and the dynamic boundary conditions at the free surface. In the linear limit, the solutions of Eq.~\eqref{eq:HydroInviscid} are plane waves, $\eta = \eta_0\exp(ikx- i\omega t)$ and $\phi = \phi_0\exp(ikx- i\omega t + |k|z)$. The inviscid dispersion relation $\omega = \tilde\omega(k)$ is the compatibility condition of the homogeneous system \begin{equation} \begin{bmatrix} i\omega & |k|\\ -(g+sk^2) & i\omega \end{bmatrix} \begin{bmatrix} \eta_0 \\ \phi_0 \end{bmatrix} = 0. \label{eq:invhomog} \end{equation} Eq.~\eqref{eq:stddisp} is obtained by the substitution $i\omega\to i\omega -2\nu k^2$ in Eq.~\eqref{eq:invhomog}. In the wavenumber domain, we obtain a correction $-2\nu k^2 \eta_0$ to the kinematic boundary condition and $-2\nu k^2 \phi_0$ to the dynamic one. To transform them back to the spatial domain, we notice that $\eta$ does not depend on $z$ and $\phi$ is the solution of the Laplace equation. 
The operator correspondence $ik\leftrightarrow \frac{\partial}{\partial x}$ (established by the plane-wave definition) is correct for both terms and allows us to obtain the well-known weakly viscous hydrodynamic system \cite{Ruvinsky1991,Dias2008} \begin{equation} \begin{aligned} \phi_{xx} + \phi_{zz} &= 0 &\mathrm{for}\; -\infty<z<\eta \\ \nabla \phi &\to 0 &\mathrm{for}\; z\to-\infty \\ \eta_t + \phi_x \eta_x - \phi_z &= 2\nu \eta_{xx} &\mathrm{for}\; z=\eta \\ \phi_t + \frac{1}{2}\left(\phi_x^2 + \phi_z^2\right) + g \eta &\\ -s \frac{\eta_{xx}}{\left(1+\eta_x^2\right)^{\frac{3}{2}}} = 2\nu \phi_{xx} & = -2\nu \phi_{zz} &\mathrm{for}\; z=\eta \\ \end{aligned} \label{eq:HydroDDZ} \end{equation} Its physical motivation is that the velocity field can be decomposed in a potential and a vorticity contribution, $(u,w)=(\phi_x-\Omega^y_z,\phi_z+\Omega^y_x)$. The vorticity pseudo-vector has only the $y$-component, $\Omega\equiv (0,\Omega^y,0)$, and is expressed as a function of $\phi$ and $\eta$ by using the linearized boundary conditions and by assuming $\Omega^y_z\approx 0$ \cite{Ruvinsky1991,Dias2008}. {Strictly speaking, neglecting this term violates the conservation of mass. This will be discussed in detail in a future publication.} The fully nonlinear NS equations couple different velocity components and are hard to write in as simple terms as Eqs.~\eqref{eq:HydroInviscid} and \eqref{eq:HydroDDZ}. We propose a solution to this difficulty based on the approximation presented in the previous section. The principle behind our choice is that, in Eq.~\eqref{eq:HydroInviscid} as well as in the unidirectional models derived from it (e.g.~the NLS or the Dysthe equation \cite{Dysthe1979}), the energy is the sum of a kinetic part, which depends on dispersion, and a potential part, which is ascribed to nonlinear interaction. A periodic exchange between the two parts characterizes, e.g., the nonlinear stage of MI. That both are damped by kinematic viscosity is thus plausible.
Since the Hamiltonian density that encompasses the different energy terms is associated with the evolution in time, the corresponding operator must be modified as a whole. This allows us to better quantify the role of dissipation in the nonlinear propagation of waves. Consider Eq.~\eqref{eq:DM1}: it is the solution of \[ \begin{vmatrix} i \omega \hat D(k) & |k|\\ -(g+sk^2) & i\omega \hat{D}(k) \end{vmatrix} = 0. \] Thus, in Eq.~\eqref{eq:HydroInviscid}, we make the substitution $\frac{\partial}{\partial t}\to \bar\partial_t\equiv \mathcal{D}(-i\frac{\partial}{\partial x})\frac{\partial}{\partial t}$, where $\mathcal{D}(-i\frac{\partial}{\partial x})$ is the physical-space operator that corresponds, in the Fourier space, to $\hat{D}(k)$ in Eq.~\eqref{eq:denominators}. This allows us to formally obtain an alternative form for the hydrodynamic equations \begin{equation} \begin{aligned} \phi_{xx} + \phi_{zz} &= 0 &\mathrm{for}\; -\infty<z<\eta \\ \nabla \phi &\to 0 &\mathrm{for}\; z\to-\infty \\ \bar\partial_t\eta + \phi_x \eta_x - \phi_z & =0 &\mathrm{for}\; z=\eta \\ \bar\partial_t \phi+ \frac{1}{2}\left(\phi_x^2 + \phi_z^2\right) + g \eta &\\ -s \frac{\eta_{xx}}{\left(1+\eta_x^2\right)^{\frac{3}{2}}} & = 0 &\mathrm{for}\; z=\eta \\ \end{aligned} \label{eq:HydroNew} \end{equation} The proposed system reduces to Eq.~\eqref{eq:HydroDDZ} for $\nu k^2\ll \tilde\omega$, by replacing $-1/i\tilde\omega\leftrightarrow \int\mathrm{d} t$ and $-k^2\leftrightarrow \frac{\partial^2}{\partial x^2}$. Eq.~\eqref{eq:HydroNew} provides, however, an alternative set of equations to be employed in full hydrodynamic solvers, like the high-order spectral method (HOSM) \cite{West1987,Dommermuth1987,Touboul2010,Kharif2010}. {We stress that the choice of DM or SDM is not binding.
A similar approach could be used with the other approximations discussed in Sec.~\ref{sec:VDR}, the numerical solution of Eq.~\eqref{eq:disprelimp}, as well as a different fit based on experimental data.} In the next section we will discuss how to derive a nonlinear NLS-like propagation equation and show that small albeit measurable differences between the linear and nonlinear dissipation terms exist. \section{A space-like propagation equation} \label{sec:sNLS} In order to generalize the MNLS of Eq.~\eqref{eq:MNLS} to the dissipative case, we can follow the approach of \cite{Trulsen2000}, based on Zakharov's method \cite{Zakharov1968}, to provide the justification for an NLS in which a general dispersion relation and not a polynomial truncation thereof is included. The free-surface elevation is reconstructed from the envelope $B(x,t)$ as $\eta= \frac{1}{2}\left[b(x,t)+\mathrm{c.c.}\right]$, where $b \equiv B e^{ i(k_0 x -\omega_0 t) }$ is the free-wave component and c.c.~denotes the complex conjugate. Notice that the plane wave factors are defined to be real: $\omega_0\equiv\omega_\mathrm{R}(k_0)$. Eq.~\eqref{eq:MNLS}, where the nonlinearity is assumed small and viscosity is neglected ($\omega_R=\tilde\omega$), is written for $\hat{b}(k,t)\equiv\mathcal{F}_k[b(x,t)]$ as \begin{equation} \hat{b}_t +i\tilde\omega(k)\hat b +i\frac{\omega_0 k_0^2}{2}\mathcal{F}_k[|b|^2 b] =0 \end{equation} from which, by means of the substitution $\partial_t \to \bar\partial_t$, we derive the equation of motion for $\hat b$: \begin{equation} \frac{\partial \hat b}{\partial t} + i\frac{\tilde\omega(k)}{\hat D(k)}\hat b + i\frac{\omega_0 k_0^2}{2\hat D(k)} \mathcal{F}_k\left[|b|^2b\right]=0.
\label{eq:VNP} \end{equation} Back to the slowly-varying variable $B$, we can write, for $\hat B(\kappa,t)=\mathcal{F}_\kappa[B(x,t)]$ \begin{equation} \frac{\partial \hat B}{\partial t} + i\left[\frac{\tilde\omega(k_0+\kappa)}{\hat D(k_0 + \kappa)}-\omega_0\right]\hat B + i\frac{\omega_0 k_0^2}{2\hat D(k_0 + \kappa)} \mathcal{F}_\kappa\left[|B|^2B\right]=0. \label{eq:VNLS} \end{equation} This represents our generalization of Eq.~\eqref{eq:MNLS} in the dissipative case. By Taylor-expanding it to fourth order in $\kappa$ and writing the resulting terms in the physical space by replacing powers of $\kappa$ by derivatives in $x$, we obtain the linear terms of Ref.~\cite{Carter2016}. This approach may also prove convenient for generalizing a forced MNLS model \cite{Eeltink2017}. Below, in Figs.~\ref{fig:NLSMIcomp}-\ref{fig:NLSdownshift}, we show that the dispersive contribution alone explains most of the frequency downshift observed in experiments. Our approach also provides a nonlinear viscous damping (the imaginary part of the nonlinearity), valid for small $k$ where the bound modes (small corrections to $\eta$ oscillating at integer multiples of $k_0$ and enslaved, for pure gravity waves, to the free-modes $B$) are not resonantly excited. It also introduces a wavenumber-dependent correction to nonlinearity (proportional to $\nu^2$ under the square root in $\hat{D}$, see Eq.~\eqref{eq:DM1}). \begin{figure}[hbtp] \centering \includegraphics[width=0.45\textwidth]{nu1m6_k010_eps0p1_comparison_21032018} \caption{Comparison of the nonlinear evolution of MI with linear and nonlinear damping. Time is normalized by nonlinear time $T_0=(2\varepsilon^2k_0)^{-1}$. $k_0=10$, $\varepsilon=0.1$, $\nu=1\times 10^{-6}$, $\alpha = 0.01$, surface tension is neglected. (a) Oscillations of the central peak (Stokes wave); (b) spectral mean $\kappa_\mathrm{m}$.
The solid blue lines represent the full model (Eq.~\eqref{eq:VNLS}), the dashed red (resp.~black dotted) lines are obtained by neglecting $\hat D(k)$ in the denominators of the nonlinear (resp.~linear) part.} \label{fig:NLSMIcomp} \end{figure} We expand the nonlinear damping coefficients as $ \frac{2 \nu k^2}{\omega} \frac{\omega_0 k_0^2}{2} \approx \nu k_0^3\left[ k_0+ \left(2- \frac{v_\mathrm{g}^0}{v_\mathrm{p}^0}\right) \kappa\right] $, where $v_\mathrm{p}^0\equiv\tilde\omega(k_0)/k_0$ and $v_\mathrm{g}^0\equiv \tilde\omega'(k_0)$ are the phase and group velocities at $k_0$, respectively, neglecting viscosity. The first contribution is a homogeneous nonlinear damping (which can be obtained independently by the method of multiple scales \cite{Eeltink2018}), while the second is a derivative damping term, i.e., the dissipative counterpart of the self-steepening in nonlinear optics \cite{Agrawal2012}. Its effect is small, because energy dissipation caused by linear attenuation limits the bandwidth of $B$. As an example, we show in Fig.~\ref{fig:NLSMIcomp} the solution for a harmonically perturbed Stokes wave propagating at the water-air interface ($B(x,0)=B_0\left[\sqrt{1-\alpha}+\sqrt{2\alpha}\cos{\kappa_0 x}\right]$), with initial steepness $\varepsilon\equiv \frac{B_0 k_0}{\sqrt{2}}=0.1$, $k_0 = 10$, $\alpha=1\times 10^{-2}$ and neglecting surface tension. The perturbation wave-number is $\kappa_0 = 2\varepsilon k_0\sqrt{2B_0}$, i.e.~the maximally unstable mode predicted by the NLS. We notice that the linear and nonlinear dissipation scale as $2\nu k_0^2$ and $2\nu k_0^4|B|^2\approx 2\nu k_0^2 \varepsilon^2$, respectively. We thus expect that only at extreme steepness and breather peaks is the nonlinear damping non-negligible with respect to the linear one. 
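These damping scales can be checked directly against the DM relation. The sketch below (illustrative water-like parameters; deep-water gravity waves with $s=0$, so $\tilde\omega=\sqrt{gk}$) verifies that below the cut-off, where the square root in Eq.~\eqref{eq:denominators} is real and hence $|\hat D(k)|=1$, Eq.~\eqref{eq:DM1} reproduces exactly the linear damping rate $\mathrm{Im}\,\omega=-2\nu k_0^2$:

```python
import numpy as np

g = 9.81       # gravity [m/s^2]
nu = 1e-6      # kinematic viscosity of water [m^2/s]
k0 = 10.0      # carrier wavenumber [1/m]

omega_t = np.sqrt(g * k0)                 # inviscid deep-water dispersion (s = 0)
b = 2 * nu * k0**2 / omega_t              # small parameter 2*nu*k^2/omega_tilde
D_hat = np.sqrt(1 - b**2 + 0j) + 1j * b   # Eq. (denominators), forward branch
omega = omega_t / D_hat                   # DM relation, Eq. (DM1)

# below the cut-off |D_hat| = 1, so omega = omega_t * conj(D_hat)
# and Im(omega) = -omega_t * b = -2 nu k0^2 exactly
print(omega.imag, -2 * nu * k0**2)
```

Above the cut-off the argument of the square root turns negative, $|\hat D(k)|$ departs from unity, and this simple identity no longer holds.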
In Fig.~\ref{fig:NLSMIcomp} we compare the dynamics of Eq.~\eqref{eq:VNLS} (full model) with the results of neglecting $\hat{D}(k)$ in either the linear or nonlinear part (we refer to them as nonlinear and linear damping, respectively). The comparison shows both the energy attenuation and downshift of the spectral mean $\kappa_{\rm m}\equiv P/N$, with \begin{equation} N= \int\mathrm{d} x |B|^2,\, P= \mathrm{Im}\,\int\mathrm{d} x B_xB^*, \label{eq:NPspace} \end{equation} respectively, the norm and the momentum of the field. $\kappa_\mathrm{m}$ depends mainly on the linear damping, see App.~\ref{app:downshift}. As shown in \cite{Armaroli2018}, the slightest amount of dissipation causes the recurrence period to stretch and a period-1 orbit to be attracted to a period-2 one. This behavior is found in the results of the full model (blue solid lines) and in the linearly damped (dashed red lines) simulations. The discrepancy in recurrence periods is just a contraction of about 1$\%$ (per period). In panel (a), we also notice that the nonlinear damping alone (black dotted line) leads instead to a behavior more similar to the undamped result. As far as the downshift of $\kappa_\mathrm{m}$ is concerned [panel (b)], the full model exhibits about 1$\%$ more shift than the linearly damped one. Notice that the difference between the two nearly equals the pure nonlinear contribution (black dotted line). The blue solid lines exhibit the same behaviour if Eq.~\eqref{eq:SDM} is used instead of Eq.~\eqref{eq:DM1} (not shown). The discrepancies between the solid and dashed lines, observed in Fig.~\ref{fig:NLSMIcomp}, are mainly explained by the nonlinear damping $\nu k_0^4 |B|^2 B$: viscous corrections to group velocity have an even smaller impact on nonlinear coefficients.
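The diagnostics of Eq.~\eqref{eq:NPspace} are straightforward to evaluate on a discretized envelope. A minimal sketch (assuming a periodic grid and a spectral derivative for $B_x$; grid size and test field are arbitrary) computes $\kappa_\mathrm{m}=P/N$ and checks it on a plane wave:

```python
import numpy as np

def spectral_mean(B, L):
    """Spectral mean kappa_m = P/N of an envelope B on a periodic grid of length L."""
    n = B.size
    kappa = 2 * np.pi * np.fft.fftfreq(n, d=L / n)     # discrete wavenumbers
    B_x = np.fft.ifft(1j * kappa * np.fft.fft(B))      # spectral derivative of B
    dx = L / n
    N = np.sum(np.abs(B)**2) * dx                      # norm, Eq. (NPspace)
    P = np.imag(np.sum(B_x * np.conj(B))) * dx         # momentum, Eq. (NPspace)
    return P / N

# sanity check: a pure plane wave exp(3i*x) must give kappa_m = 3
L = 2 * np.pi
x = np.linspace(0, L, 256, endpoint=False)
print(spectral_mean(np.exp(3j * x), L))   # kappa_m of exp(3ix), approximately 3
```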
\begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{nu1m6_t7p28em5_k075_eps0p1_comparison_19032018} \caption{Comparison of results with purely linear (solid lines) and purely nonlinear (dashed lines) damping. Parameters are $k_0=75$, $\varepsilon=0.1$, $\nu=1\times 10^{-6}$, $s=7.28\times 10^{-5}$, $\alpha = 0.01$. (a) Central mode $\kappa=0$; (b) main unstable modes $\kappa=\pm\kappa_0$. We notice that the downshifted mode at $-\kappa_0$ soon acquires most of the energy. While the linear damping irreversibly stops recurrence, the nonlinear damping is not sufficient. } \label{fig:NLSdownshift} \end{figure} This behavior does not qualitatively change if we include {Dysthe nonlinear terms} in Eq.~\eqref{eq:VNLS}, as in, e.g., \cite{Stiassnie1984}: the downshift is associated to a transient upshift at each recurrence cycle \cite{Armaroli2018}. A different dynamics is observed for shorter wavelengths even though stronger damping limits the impact of nonlinear phenomena. We thus simulated the same initial conditions as above, with $k_0=75$, i.~e.~near the transition from gravity to capillary waves. We show in Fig.~\ref{fig:NLSdownshift} the results of the purely linear and purely nonlinear dissipation models. In panel (a) we plot the central mode ($\kappa=0$) and in panel (b) the two main unstable modes at each side of it ($\kappa=\pm\kappa_0$). Even during the initial MI phase the energy is converted into a pair of unstable sidebands in an asymmetric fashion, which favors the lower wave-number. As soon as most of the energy is located at $-\kappa_0$ (at $t/T_0\approx2.5$), we observe a remarkable difference: the central mode cannot recover, not even partially, its initial condition and stabilizes below 0.5 for pure linear damping (solid lines), while a regime of more erratic recurrence, in which every 5-6 cycles the energy is transferred back to the central mode (above 0.6), is observed for pure nonlinear damping (dashed lines). 
{The full model (with linear and nonlinear damping, not shown) exhibits almost the same behaviour as the purely linear case.} This is consistent with the interplay of surface tension and viscosity favoring a permanent downshift of the spectral peak, observed in several full numerical simulations of the Euler system \cite{Skandrani1996,Dias1999}. We finally remark that the present nonlinear viscous damping does not exclude the existence of other nonlinear loss mechanisms. We would like to mention the Landau damping \cite{Fabrikant1980}, which plays a role at large $k$, where the bound modes are resonantly excited and dissipated by viscous damping. Its even symmetry, however, prevents a downshift of either the spectral peak or mean. Alternatively, a loss mechanism like the $\beta$-term introduced in \cite{Kato1995,Schober2015a} explains the frequency downshift observed in the nonlinear stage of MI, by virtue of its odd symmetry, but has no clear physical origin: it may represent a model of wave-breaking, not included here. Nevertheless, we have shown that the spectral mean shifts permanently simply due to the variation of damping with frequency. Both examples, Figs.~\ref{fig:NLSMIcomp} and \ref{fig:NLSdownshift}, show that an MNLS where the full dispersion (with dissipation) is included in the linear part is sufficient to explain at least qualitatively the partial recurrence and spectral downshift in the nonlinear behavior of MI. Thus the implicit assumption of \cite{Dias2008} that a nonlinear correction due to viscosity plays a minor role in deep-water wave propagation is confirmed in its physical soundness. {In App. \ref{app:downshift}, we derive the expressions for the rate of change of the spectral mean and motivate more rigorously the downshifting trend. } \section{A time-like propagation equation} \label{sec:tNLS} Laboratory conditions usually imply the measurement of the temporal profiles at different positions along a wave-tank by means of wave-gauges.
The time-like formulation of the NLS and its generalizations are thus more practical than their space-like counterparts for describing and interpreting measurements. In order to obtain such a formulation, we need to invert the dispersion relation and derive an explicit expression for $k(\omega)$. The method we used above to derive the DM approximation provides a straightforward solution, at least for $s=0$, that reads \begin{equation} k(\omega)=\frac{\tilde k}{\hat D_k(\omega)} \label{eq:invdisp} \end{equation} with $\tilde k (\omega) \equiv \frac{\omega^2}{g}$, the conventional dispersion of deep-water gravity waves, and $\hat D_k(\omega)\equiv\sqrt{1+16i\frac{\nu\omega^3}{g^2}}$ the correction due to viscosity, which contributes to both the real and imaginary parts of $k(\omega)$. Eq.~\eqref{eq:invdisp} can be simplified as $k(\omega)\approx \frac{\omega^2}{g} \left[1-4i\frac{\nu\omega^3}{g^2}\right]^{-1}$. As shown in Sec.~\ref{sec:sNLS}, the nonlinear damping is negligible in most cases. We write the dissipative time-like MNLS in the frequency domain---with respect to the detuning $\Omega\equiv\omega-\omega_0$ (here $\hat{B}(x,\Omega)\equiv\mathcal{F}_\Omega[B(x,t)]$)---as \begin{equation} \frac{\partial \hat B}{\partial x} - i\left[\frac{\tilde k(\omega_0+\Omega)}{\hat D_k(\omega_0 + \Omega)}-k_0\right]\hat B - i\frac{k_0^3}{2} \mathcal{F}_\Omega\left[|B|^2B\right]=0. \label{eq:tVNLS} \end{equation} This model can be applied to assess the effects of the dispersive damping in a wave-tank experiment. \section{Conclusions} \label{sec:Concl} After having recalled the different forms of dispersion relation for deep-water gravity-capillary waves in the presence of viscosity and unified their derivation, we discussed their physical validity all over the wavenumber/frequency range. We exploited these results to reformulate the hydrodynamic equations and quantify the impact of kinematic viscosity on nonlinear damping. 
We showed that the simplest NLS model with full dispersion (in both the real and imaginary part) provides most of the justification for the downshift of the spectral mean during the nonlinear stage of evolution of the MI: corrections in the nonlinear behavior have only a small effect. This provides an \emph{a posteriori} justification of the choice of using dispersion relations only in the linear part of a nonlinear propagation equation \cite{Dias2008,Trulsen2000}. {However, this does not forbid consideration of other damping mechanisms, e.g.~wave-breaking.} Experiments are needed to determine the best form for a rational dispersion relation, which could be used even in fully nonlinear hydrodynamic simulations. \begin{acknowledgments} We acknowledge the financial support from the Swiss National Science Foundation (Projects Nos.~200021-155970 and 200020-175697). We would like to thank John D.~Carter for fruitful discussions. \end{acknowledgments}
\section{Introduction} Cosine similarity has been widely used as a measure of word relatedness ever since vector space models for text representation first appeared, aimed at automatically optimizing the task of information retrieval \cite{salton1983introduction}. While other distance measures are also commonly used, such as Euclidean distance \cite{witten2005practical}, for cosine similarity only the vector directions are relevant, and not their norms. More recently, pre-trained word representations, also referred to as embeddings, obtained from neural network language models, starting from word2vec (W2V) \cite{mikolov2013distributed}, emerged as the main source of word embeddings, and are subsequently used in model performance evaluation on tasks such as word similarity \cite{toshevska2020comparative}. Datasets such as SimLex-999 \cite{hill2015simlex} and WordSim-353 \cite{finkelstein2001placing}, which score similarity between word-pairs according to the assessment of several human annotators, have become the benchmarks for the performance of a certain type of embedding on the task of word similarity \cite{recski2016measuring,dobo2020comprehensive, speer2017conceptnet, banjade2015lemon}. For $\vec{n}_a$ and $\vec{n}_b$, the vector representations of two distinct words $w_a$ and $w_b$, cosine similarity takes the form \begin{equation}\label{cosine} cos_{ab}= \frac{\vec{n}_a\cdot \vec{n}_b}{|| \vec{n}_a|| \: ||\vec{n}_b||}, \end{equation} with the Euclidean \textit{inner product} between any two vectors $\vec{n}_a$ and $\vec{n}_b$ given as \begin{equation} \label{euclideaninner} \vec{n}_a \cdot \vec{n}_b = \sum_{i} \vec{n}^i_a \vec{n}^i_b, \end{equation} and the \textit{norm} of a vector $\vec{n}_a$ given as \begin{equation} ||\vec{n}_a||= \sqrt{\vec{n}_a\cdot\vec{n}_a}, \end{equation} dependent on the inner product \cite{axler1997linear}. Using this measure of similarity, improvements can only take place if the vectors that represent the words change.
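As a concrete illustration of Eqs.~(\ref{cosine})--(\ref{euclideaninner}), the measure takes only a few lines of code; the vectors below are toy examples, not actual embeddings:

```python
import numpy as np

def cosine_similarity(a, b):
    """Eq. (cosine): Euclidean inner product normalized by the two vector norms."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# toy "word vectors": only direction matters, not magnitude
n_a = np.array([1.0, 0.0, 1.0])
n_b = np.array([1.0, 1.0, 0.0])
print(cosine_similarity(n_a, n_b))        # approximately 0.5
print(cosine_similarity(3 * n_a, n_b))    # unchanged: rescaling a vector leaves the cosine invariant
```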
However, the assumption that the vectors interact using a Euclidean inner product becomes less plausible when it comes to higher-order vectors. If, instead, we consider that the vector components are not described in a Euclidean basis, then we enlarge the possible relationships between the vectors. Specifically, in the calculation of the inner product, on which the cosine similarity depends, we can use an intermediary \textit{metric} tensor. By challenging the assumption that the underlying metric is Euclidean, cosine similarity values can be improved \textit{without changing vector representations}. We identify two main motivations to search for improved cosine similarity measures. The first motivation has to do with the cost of training larger and more refined language models \cite{bender2021dangers}. By improving performance on a task simply by changing the evaluation measure, without changing the pre-trained embeddings, we expect that better results can be achieved with more efficient and interpretable methods. This is particularly true of contextualized datasets, with benefits not only for tasks such as word similarity, but also others that use cosine similarity as a measure of relatedness, such as content-based recommendation systems \cite{schwarz2017analysis}, and where it can be particularly interesting to explore the different metrics that emerge as representations of vector relatedness. The second motivation comes from compositional distributional semantics, where words of different syntactic types are represented by tensors of different ranks, and representations of larger fragments of text are produced via tensor contraction \cite{coecke2010mathematical,grefenstette2011experimental,grefenstette2011experimenting,milajevs2014evaluating,baroni2014frege,paperno2014practical}.
This framework has proved to be a valuable tool for low resource languages, enhancing the scarce available data with a grammatical structure for composition, providing embeddings of complex expressions \cite{abbaszadeh2021parametrized}. As these contractions depend on an underlying metric that is usually taken to be Euclidean, improvements have only been achieved, once again, by modifying word representations \cite{wijnholds2019evaluating}. As proposed by \citet{correia2020density}, another way to improve on these results consists in using a different metric to mediate tensor contractions. Metrics obtained in tasks such as word similarity can be transferred to tensor contraction, and thus we expect this work to open new research avenues on the compositional distributional framework, providing a better integration with (contextual) language models. This paper is organized as follows. In $\S$\ref{model} we introduce an extended cosine similarity measure, motivating the introduction of a metric on the hypothesis that it can optimize the relationships between the vectors. In $\S$\ref{methods} we explain our experiment on contextualized and non-contextualized datasets to test whether improvements can be achieved. In $\S$\ref{results} we present the results obtained in our experiments and in $\S$\ref{conclusion} we discuss these results and propose further work. Our contributions are summarized below: \begin{itemize} \item Use of contextualized datasets to explore contextualized dynamic embeddings and evaluate the viability of contextualized similarity measures; \item Expansion of the notion of cosine similarity, motivating our model theoretically, contributing to a conceptual simplification that yields interpretable improvements. 
\end{itemize} \subsection{Related Literature} Variations on similarity metrics on the contextualized dataset of \citet{richie2020spatial} were first explored in \citet{richie2021similarity}, but only for static vector representations and diagonal metrics. Other analytical approaches to similarity learning are surveyed in \citet{kulis2013metric}. The notion of soft cosine similarity of \citet{sidorov2014soft} presents a relevant extension theoretically similar to ours, but motivated and implemented differently. Using count-based vector space models with words and n-grams as features, the authors extract a similarity score between features, using external semantic information, that they use as a distance matrix that can be seen as a metric; however, they do not implement it as in Eq. (\ref{generalinner}). Instead, they transform the components by creating a higher-dimensional vector space where each entry is the average of the components in two features, multiplied by the metric. We, by contrast, learn the metric automatically and apply it to the vectors directly. \citet{hewitt2019structural} also use a modified metric for the inner product to probe the syntactic structure of the representations, showing that syntax trees are embedded implicitly in deep models' vector geometry. Context dependency in how humans evaluate similarity, which we based our study on, has been widely supported in the psycholinguistic literature. \citet{tversky1977features} shows that similarity can be expressed as a linear combination of properties of objects, \citet{barsalou1982context} looks at how context-dependent and context-independent properties influence similarity perception, \citet{medin1993respects} explore how similarity judgments are constrained by the very fact of being requested, and \citet{goldstone1997similarity} test how similarity judgments are influenced by context that can either be explicit or perceived.
\section{Model}\label{model} A metric is a tensor that maps any two vectors to an element of the underlying field $\mathbb{K}$, which in this case will be the field of real numbers $\mathbb{R}$. This element is what is known as the \textit{inner product}. To this effect, the metric tensor can be represented as a function, not necessarily linear, over each of the coordinates of the vectors it acts on. In geometric terms, the metric characterizes the underlying geometry of a vector space, by describing the projection of the underlying manifold of a non-Euclidean geometry to a Euclidean geometry $\mathbb{R}^n$ \cite{wald2010general}. The inner product between two vectors is informed by the metric in a precise way, and is representative of how the distance between two vectors should be calculated. A standard example consists of two unit vectors on a sphere, which is an $\mathbb{S}^2$ manifold that can be mapped onto $\mathbb{R}^3$. If the vectors are represented in spherical coordinates, which are a map from $\mathbb{S}^2$ to $\mathbb{R}^3$, the standard method of computing the angle between the vectors using Eq. (\ref{cosine}) will fail to give the correct value. The vectors need to be transformed by the appropriate non-linear metric to the Euclidean basis in $\mathbb{R}^3$ before a contraction of the coordinates can take place. To illustrate this, take as an example a triangle drawn on the surface of a sphere $\mathbb{S}^2$. If it is projected onto a planisphere $\mathbb{R}^2$, a naive measurement of its internal angles will yield a sum exceeding the expected 180 degrees, which corresponds to a change in the inner product between the vectors tangent to the triangle at its corners (see \citet{sphere} for a demonstration). To preserve this inner product, and thus recover the equivalence between a triangle on a spherical surface and a triangle on a Euclidean plane, the coordinates need to be properly transformed by the appropriate metric before they are contracted.
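This failure is easy to reproduce numerically. In the sketch below (coordinate values chosen purely for illustration), two orthogonal unit vectors on the sphere are compared: contracting their raw spherical-coordinate components with Eq.~(\ref{cosine}) yields a spurious similarity, while mapping them to the Euclidean basis of $\mathbb{R}^3$ first recovers the right angle:

```python
import numpy as np

def to_cartesian(r, theta, phi):
    """Map spherical coordinates (r, theta, phi) to the Euclidean basis of R^3."""
    return np.array([r * np.sin(theta) * np.cos(phi),
                     r * np.sin(theta) * np.sin(phi),
                     r * np.cos(theta)])

def cos_angle(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# two unit vectors on the sphere, 90 degrees apart
p = (1.0, np.pi / 2, 0.0)          # points along x
q = (1.0, np.pi / 2, np.pi / 2)    # points along y

naive = cos_angle(np.array(p), np.array(q))            # contracting raw coordinates
correct = cos_angle(to_cartesian(*p), to_cartesian(*q))
print(naive, correct)
```

The naive value (about $0.76$ here) is an artifact of the coordinate chart, while the transformed contraction correctly returns a cosine of zero.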
By the same token, we explore here the possibility that the shortcomings of the values obtained using cosine similarity when compared with human similarity ratings are not due to poor vector representations, but to a measure that fails to assess the distance between the vectors adequately. To test this hypothesis, we generalize the inner product of Eq. (\ref{euclideaninner}) to accommodate a larger class of relationships between vectors, modifying it using a metric represented by the distance matrix $d$, once a basis is assumed, that defines the inner product between two vectors as \begin{equation} \label{generalinner} \vec{n}_a \cdot_d \vec{n}_b=\sum_{ij} \vec{n}^i_a d^{ij} \vec{n}^j_b, \end{equation} where $\vec{n}^i_a$ is the $i$th component of $\vec{n}_a$. Using a metric of this form, the best we can achieve is a linear rescaling of the components of the vectors, which entails the existence of a non-orthogonal basis. The metric $d$ is required to be bilinear and symmetric, which is satisfied if \begin{equation} d^{sym}=B^TB, \end{equation} such that Eq. (\ref{generalinner}) can be rewritten as \begin{equation}\label{simmod} \vec{n}_a \cdot_d \vec{n}_b = \left( B \vec{n}_a \right)^T \cdot \left( B \vec{n}_b \right). \end{equation} We can thus learn the components of a metric for a certain set of vectors by fitting it to the goal of preserving a specified inner product. In the case of word similarity, the matrix $B$ can be learned in a supervised fashion from human similarity judgments, towards the goal that a contextualized cosine similarity applied to a set of word embeddings, using Eq. (\ref{simmod}), returns the correct human assessment. An advantage of this approach is that the cosine is symmetric with respect to its inputs, a desirable property that this extension preserves by requiring the metric to be symmetric. \section{Methods} \label{methods} The general outline of our experiment is as follows.
First, we learn contextualized cosine similarity measures for related (contextualized) pairs of words, and afterwards for unrelated (non-contextualized) pairs of words. A schematic representation can be found in Fig. \ref{flowchart}. We then test whether these learned measures are transferable and provide improvements on word pairs that were not seen during training, when compared with the standard cosine similarity baseline. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{Flowchart.pdf} \caption{Schematic representation of the experiment leading up to the results in Tables \ref{tab:3} and \ref{tab:4}.} \label{flowchart} \end{figure*} \subsection{Datasets} For a contextualized assessment of word similarity, we use the dataset of \citet{richie2020spatial}, where 365 participants were asked to judge the similarity between English word-pairs that are co-hyponyms of eight different hypernyms (Table \ref{tab:categories}). Participants were assigned a specific hypernym and were asked to rate the similarity between each co-hyponym pair from 1 to 7, with the highest rating indicating the words to be maximally similar. The number of annotators varies per hypernym, but each word-pair is rated by around 30 annotators, such that for the largest categories each annotator only saw a fraction of the totality of the word-pairs. As examples from the hypernym `Clothing', the word-pair `hat/overalls' was rated by 32 of the 61 annotators, resulting in an average similarity of 1.469, while `coat/gloves' had an average similarity rating of 3.281 and `coat/jacket' of 6.438, also by 32 annotators. The average similarity was computed for all word-pairs and rescaled to a value between 0 and 1, to be used as the target for supervised learning.
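For concreteness, the measure that these targets supervise follows Eq.~(\ref{simmod}): both embeddings are mapped by the matrix $B$ before the standard normalized contraction. A minimal sketch of the forward computation (random vectors for illustration; in the experiment $B$ is fitted to the rescaled human ratings):

```python
import numpy as np

def metric_cosine(a, b, B):
    """Cosine similarity under the metric d = B^T B, Eq. (simmod):
    both vectors are transformed by B before the Euclidean contraction."""
    Ba, Bb = B @ a, B @ b
    return float(np.dot(Ba, Bb) / (np.linalg.norm(Ba) * np.linalg.norm(Bb)))

rng = np.random.default_rng(0)
a, b = rng.normal(size=5), rng.normal(size=5)

# with B = identity, the standard cosine similarity is recovered exactly
std = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(np.isclose(metric_cosine(a, b, np.eye(5)), std))   # prints True
```

Any non-trivial $B$ rescales and mixes the embedding components, which is precisely the extra freedom exploited during supervised fitting.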
Besides trying to fit a contextualized similarity measure to each hypernym, we also considered the entire all-hypernyms dataset, in order to test whether training on the hypernyms separately would result in a better cosine measure compared with when the hypernym information was disregarded. To test whether similarity measures can be learned if the similarity of words is not assessed within a specific context, we use the WordSim-353 (WS353) \cite{finkelstein2001placing} and part of the SimLex-999 (SL999) \cite{hill2015simlex} datasets, where the word-pairs bear no specific semantic relation. From the SL999 dataset only the nouns were included, resulting in a dataset of 666 word-pairs. Additionally, we use these datasets to verify whether the similarity metric learned by training on the whole dataset of \citet{richie2020spatial} can be transferred to other, more general, datasets. \begin{table}[t]\centering \caption{Number of words, word-pairs and human annotators per hypernym.} \small \label{tab:categories} \begin{tabular}{l|c|c|c} \hline Hypernym & Words & Pairs & Annotators \\ \hline Birds & 30 & 435 & 54 \\ Clothing & 29 & 406 & 61 \\ Professions & 28 & 378 & 67 \\ Sports & 28 & 378 & 61 \\ Vehicles & 22 & 231 & 28 \\ Fruit & 21 & 210 & 31 \\ Furniture & 20 & 190 & 33 \\ Vegetables & 20 & 190 & 30 \\ \hline All & 198 & 2418 & 365 \\ \hline \end{tabular} \end{table} \subsection{Word embeddings} To fine-tune the cosine similarity measure, we start from different pre-trained word representations. We do that for two classes of embeddings, static and dynamic. Static embeddings were obtained from a pre-trained word2vec (W2V) model \cite{mikolov2013distributed} and a pre-trained GloVe model \cite{pennington2014glove}, each used to encode each word in the pair. Dynamic embeddings were obtained from two Transformers-based models, pre-trained BERT \cite{devlin2019bert} and GPT-2 models \cite{radford2019language} (see Table \ref{repstable}). 
Here the representation of each word was taken to be the average representation of sub-word tokens when necessary, excluding the [CLS] and [SEP] tokens. \begin{table}[t] \centering \label{tab:Representations} \resizebox{\columnwidth}{!}{% \begin{tabular}{l|c|c|c} \hline \textbf{Representation} & \textbf{Corpus} & \textbf{Corpus size} & \textbf{Dim} \\ \hline word2vec & Google News & 100B & 300 \\ \hline GloVe & GigaWord Corpus \& Wikipedia & 6B & 200 \\ \hline BERT\textsubscript{base-uncased} & BooksCorpus \& English Wikipedia & 3.3B & 768 \\ \hline GPT-2\textsubscript{medium} & 8 million web pages & $\sim$ 40 GB & 768 \\ \hline \end{tabular}% } \caption{Pre-trained embeddings obtained from different source language models, with BERT and GPT-2 implemented using the Huggingface Transformers library.}\label{repstable} \end{table} \begin{table}[t]\centering \small \begin{tabular}{l|l} \hline Hypernym & Context words \\ \hline Birds & \texttt{small, migratory, other, } \\ & \texttt{water, breeding} \\ \hline Clothing & \texttt{cotton, heavy, outer, winter,}\\ & \texttt{leather} \\ \hline Professions & \texttt{health, legal, engineering, } \\ & \texttt{other, professional} \\ \hline Sports & \texttt{youth, women, men, ea, boys} \\ \hline Vehicles & \texttt{military, agricultural, motor,}\\ & \texttt{recreational, commercial} \\ \hline Fruit & \texttt{citrus, summer, wild, sweet,} \\ & \texttt{passion} \\ \hline Furniture & \texttt{wood, furniture, modern,} \\ & \texttt{antique, office} \\ \hline Vegetables & \texttt{some, wild, root, fresh, green} \\ \hline \end{tabular} \caption{Five most likely words for masked token preceding hypernym token using BERT.}\label{tab:contexts} \end{table} \begin{figure} \centering \begin{subfigure}{0.4\textwidth} \includegraphics[width=\textwidth]{output.png} \caption{} \label{fig:first} \end{subfigure} \hfill \begin{subfigure}{0.4\textwidth} \includegraphics[width=\textwidth]{output1.png} \caption{} \label{fig:second} \end{subfigure} 
\hfill \begin{subfigure}{0.4\textwidth} \includegraphics[width=\textwidth]{correlation.png} \caption{} \label{fig:third} \end{subfigure} \hfill \begin{subfigure}{0.4\textwidth} \includegraphics[width=\textwidth]{correlation1.png} \caption{} \label{fig:fourth} \end{subfigure} \caption{Distributions of pairwise human similarity judgments $sim_{hum}$ and cosine similarity measures using either BERT representations ($\cos(\text{BERT})$) or contextualized BERT representations ($\cos(\text{BERT}_{ctxt})$). In (a) and (b) the absolute difference of scores, ordered per hypernym, is shown, while (c) and (d) represent the distribution of different similarity scores with respect to each other. Comparing the first two plots we can see a regularization effect by contextualizing the representations, and between the last two plots we can see a clustering effect.} \label{fig:distribution} \end{figure} The token representations provided by the BERT model, as a bidirectional dynamic language model, can change depending on the surrounding context tokens. As such, additional contextualized embeddings were retrieved, BERT$_{ctxt}$, to test whether performance could be improved relative to the baseline cosine metric by using the hypernym information, as well as when compared with the hypernym cosine metric learned on non-contextualized representations. In this way we test whether leveraging the contextual information intrinsic to this dataset can in itself improve similarity at the baseline level, without the need for further training. The contextualized vectors of BERT$_{ctxt}$ were obtained by first having BERT predict the five most likely words to precede each hypernym using (\texttt{[MASK] <hypernym>}), and then using those adjectives to obtain five contextualized embeddings for each co-hyponym, subsequently averaged over. Most of the predicted words were adjectives, and the few cases that were not were filtered out.
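The averaging over adjective contexts described above can be sketched as follows. Here \texttt{embed} stands in for the contextual encoder; in the actual pipeline it would be a BERT forward pass, while the deterministic \texttt{toy\_embed} below is only a placeholder so the sketch is self-contained, and all names are illustrative:

```python
import zlib
import numpy as np

def contextualized_embedding(word, context_adjectives, embed):
    """BERT_ctxt-style vector: embed the word in each adjective context
    (e.g. 'winter coat') and average the resulting representations."""
    contexts = [f"{adj} {word}" for adj in context_adjectives]
    return np.mean([embed(c) for c in contexts], axis=0)

def toy_embed(text, dim=8):
    """Placeholder encoder: a deterministic pseudo-random vector per string.
    A real pipeline would return BERT's token representation here."""
    rng = np.random.default_rng(zlib.crc32(text.encode()))
    return rng.normal(size=dim)
```

For the hyponym `coat', the five contexts would be `cotton coat', `heavy coat', `outer coat', `winter coat' and `leather coat', with the final vector being their mean.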
For instance, for the category `Clothing', the most likely masked tokens were `cotton', `heavy', `outer', `winter' and `leather'. The contextualized representation of each hyponym of `Clothing' was thus calculated as its average representation in the context of each of the adjectives, so that, for instance, for `coat' we first obtained its contextualized representation in `cotton coat', `heavy coat', `outer coat', `winter coat', and `leather coat', performing a final averaging. The full list of context words can be found in Table \ref{tab:contexts}. Figs. \ref{fig:first} and \ref{fig:second} show that this transformation reduces the extreme absolute differences between the standard cosine similarity values and the corresponding human similarity assessments, while regularizing the bulk of the differences closer to the desired value of 0. We tested other forms of contextualizing, such as (\texttt{<hypernym> is/are [MASK]}), but the resulting representations did not show as much improvement. The WS353 and SL999 datasets were only trained with non-contextualized embeddings, since we cannot obtain contextualized embeddings for the nouns in these datasets using the same method. For consistency, the models that were learned with contextualized representations were not tested on these datasets at the final step of our experiment. \subsection{Model} A linear model was implemented in the PyTorch machine learning framework to learn the parameters of $B$, without a bias, such that a word initially represented by $\text{input}_a$ is transformed to $\text{input'}_a=B \text{input}_a$.
The forward function of this model takes two inputs and returns \begin{equation}\label{forward} \frac{\left(\text{input'}_a \right)^T \cdot \text{input'}_b}{\sqrt{\left(\text{input'}_a\right)^T \cdot \text{input'}_a}\sqrt{\left(\text{input'}_b\right)^T \cdot \text{input'}_b}}, \end{equation} where $a$ and $b$ correspond to the indices of the words of a given word-pair\footnote{\url{https://github.com/maradf/Contextualized-Cosine}}. \subsection{Cross-validation} The number of co-hyponyms per hypernym is small when compared with the number of parameters in $B$ to be trained, which depends on the square of the dimension \textbf{Dim} of each representation. To ensure that the models did not overfit, a k-fold cross-validation was used during training \cite{raschka2015python}, which divided each dataset into k training sets and non-overlapping development sets. Additionally, early stopping of training was implemented in the event that the validation loss increased for ten consecutive epochs after it dropped below 0.1 \cite{bishop2006pattern}. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{learningcurve3.png} \caption{Example of learning curve, showing losses over epochs, from a fold training on the hypernym \textbf{Clothing} on the GloVe embeddings. In this case, training was stopped early at 397 epochs.} \label{fig:learningcurve} \end{figure} \subsection{Hyperparameter selection} For each dataset $h$ (each hypernym, all hypernyms, WS353 or SL999) and learning rate $l_r$, k models $B^h_{i,l_r}$ were trained, with $i \in \{1,...,k\}$ and with k corresponding validation sets $val_i$. The training was done using two 16-core (64-thread) Intel Xeon CPUs at 2.1 GHz. A fixed seed was used to find the best combination of the learning rate $l_r$ ($1\times 10^{-5}$, $1 \times 10^{-6}$, and $1\times 10^{-7}$) and the number of folds (5, 6 and 7) for the k-fold cross-validation.
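The model of Eq. (\ref{forward}) and its supervised fit can be sketched as follows. The training here uses PyTorch with autograd and Adam; for self-containedness, the NumPy reconstruction below instead uses full-batch gradient descent with a central-difference gradient, and all names are illustrative:

```python
import numpy as np

def transformed_cosine(B, a, b):
    """Cosine similarity of the transformed vectors B a and B b."""
    ta, tb = B @ a, B @ b
    return float(ta @ tb / (np.linalg.norm(ta) * np.linalg.norm(tb)))

def mse_loss(B, pairs, targets):
    """Mean square error between predicted cosines and human targets."""
    return float(np.mean([(transformed_cosine(B, a, b) - t) ** 2
                          for (a, b), t in zip(pairs, targets)]))

def fit_metric(pairs, targets, dim, lr=0.05, epochs=150, eps=1e-5):
    """Gradient descent on the entries of B, starting from the identity;
    a central-difference gradient replaces autograd for clarity."""
    B = np.eye(dim)
    for _ in range(epochs):
        grad = np.zeros_like(B)
        for i in range(dim):
            for j in range(dim):
                Bp, Bm = B.copy(), B.copy()
                Bp[i, j] += eps
                Bm[i, j] -= eps
                grad[i, j] = (mse_loss(Bp, pairs, targets)
                              - mse_loss(Bm, pairs, targets)) / (2 * eps)
        B -= lr * grad
    return B
```

On a toy dataset whose targets are generated by a hidden metric, this procedure lowers the loss relative to the identity (standard cosine) initialization, mirroring the improvements over baseline reported below.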
The regression to the best metric was done using the mean square error loss function and the Adam optimizer. The maximum number of training epochs was set to $500$, as most models converged at that point as per preliminary learning curve inspection (Fig.\ref{fig:learningcurve}). The implementation of early stopping resulted in \textit{de facto} variation of the number of epochs required to train each model. \begin{table*}[h] \begin{subtable}[h]{\textwidth} \centering \caption{Pearson correlations.} \small \begin{tabular}{lrrrrrrrrrrr}\toprule \multirow{2}{*}{\textbf{Dataset (h)}} &\multicolumn{2}{c}{\textbf{BERT}} &\multicolumn{2}{c}{\textbf{BERT$_{ctxt}$}} &\multicolumn{2}{c}{\textbf{GPT-2}} &\multicolumn{2}{c}{\textbf{word2vec}} &\multicolumn{2}{c}{\textbf{GloVe}} \\\cmidrule{2-11} &Model &Base &Model &Base &Model &Base &Model &Base &Model &Base \\\cmidrule{1-11} Birds & \underline{\textbf{0.311}} & 0.098 & \underline{\textbf{0.316}} & 0.042 & \textbf{0.200} & -0.023 & \underline{\textbf{0.293}} & 0.213 & \textbf{0.215} & 0.194 \\ Clothing & \underline{\textbf{0.550}} & 0.141 & \underline{\textbf{0.515}} & 0.065 & \underline{\textbf{0.501}} & \underline{0.349} & \underline{\textbf{0.529}} & \underline{0.417} & \underline{\textbf{0.574}} & \underline{0.364} \\ Professions & \underline{\textbf{0.501}} & 0.193 & \underline{\textbf{0.601}} & 0.073 & \underline{\textbf{0.651}} & \underline{0.542} & \underline{\textbf{0.635}} & \underline{0.566} & \underline{0.529} & \underline{0.529} \\ Sports & \underline{\textbf{0.452}} & 0.175 & \underline{\textbf{0.543}} & 0.139 & \underline{\textbf{0.556}} & \underline{0.324} & \underline{\textbf{0.532}} & \underline{0.418} & \underline{\textbf{0.580}} & \underline{0.386} \\ Vehicles & \underline{\textbf{0.496}} & 0.218 & \underline{\textbf{0.616}} & 0.123 & \underline{\textbf{0.645}} & \underline{0.385} & \underline{\textbf{0.738}} & \underline{0.719} & \underline{\textbf{0.703}} & \underline{0.567} \\ Fruit & \textbf{0.315} & 
0.016 & \textbf{0.378} & -0.037 & \textbf{0.333} & 0.203 & \textbf{0.361} & 0.239 & \underline{\textbf{0.571}} & 0.392 \\ Furniture & \textbf{0.353} & -0.018 & \underline{\textbf{0.539}} & -0.035 & \underline{\textbf{0.568}} & 0.399 & \underline{\textbf{0.368}} & 0.333 & \underline{\textbf{0.470}} & \underline{0.462} \\ Vegetables & \textbf{0.211} & -0.059 & \textbf{0.293} & -0.044 & \textbf{0.378} & 0.144 & \underline{\textbf{0.577}} & 0.281 & \underline{\textbf{0.562}} & 0.290 \\ All hypernyms & \underline{\textbf{0.434}} & 0.100 & \underline{\textbf{0.542}} & 0.040 & \underline{\textbf{0.508}} & 0.287 & \underline{\textbf{0.483}} & 0.400 & \underline{\textbf{0.539}} & 0.397 \\ WordSim-353 & \underline{\textbf{0.517}} & 0.238 & - & - & \underline{\textbf{0.651}} & \underline{0.647} & \underline{0.637} & \underline{0.654} & \underline{\textbf{0.622}} & \underline{0.568} \\ SimLex-999 & \underline{\textbf{0.403}} & 0.161 & - & - & \underline{\textbf{0.555}} & \underline{0.504} & \underline{\textbf{0.495}} & \underline{0.455} & \underline{\textbf{0.510}} & \underline{0.408} \\ \bottomrule \end{tabular} \end{subtable} \begin{subtable}[h]{\textwidth} \centering \caption{Spearman correlations.} \small \begin{tabular}{lrrrrrrrrrrr}\toprule \multirow{2}{*}{\textbf{Dataset (h)}} &\multicolumn{2}{c}{\textbf{BERT}} &\multicolumn{2}{c}{\textbf{BERT$_{ctxt}$}} &\multicolumn{2}{c}{\textbf{GPT-2}} &\multicolumn{2}{c}{\textbf{word2vec}} &\multicolumn{2}{c}{\textbf{GloVe}} \\\cmidrule{2-11} &Model &Base &Model &Base &Model &Base &Model &Base &Model &Base \\ \cmidrule{1-11} Birds & \underline{\textbf{0.260}} & 0.102 & \underline{\textbf{0.299}} & 0.052 & \textbf{0.190} & -0.054 & \underline{\textbf{0.250}} & 0.211 & \textbf{0.238} & 0.201 \\ Clothing & \underline{\textbf{0.436}} & 0.184 & \underline{\textbf{0.467}} & 0.059 & \underline{\textbf{0.445}} & 0.276 & \underline{\textbf{0.510}} & \underline{0.414} & \underline{\textbf{0.513}} & \underline{0.384} \\ Professions & 
\underline{\textbf{0.501}} & 0.248 & \underline{\textbf{0.578}} & 0.170 & \underline{\textbf{0.560}} & \underline{0.473} & \underline{\textbf{0.518}} & \underline{0.410} & \underline{0.482} & \underline{0.486} \\ Sports & \underline{\textbf{0.391}} & 0.174 & \underline{\textbf{0.526}} & 0.142 & \underline{\textbf{0.540}} & 0.291 & \underline{\textbf{0.458}} & \underline{0.339} & \underline{\textbf{0.478}} & \underline{0.325} \\ Vehicles & \underline{\textbf{0.518}} & 0.238 & \underline{\textbf{0.601}} & 0.056 & \underline{\textbf{0.626}} & 0.288 & \underline{\textbf{0.709}} & \underline{0.687} & \underline{\textbf{0.680}} & \underline{0.596} \\ Fruit & \textbf{0.265} & -0.014 & \underline{\textbf{0.333}} & -0.103 & \textbf{0.365} & 0.173 & \textbf{0.368} & 0.277 & \underline{\textbf{0.491}} & 0.342 \\ Furniture & \textbf{0.353} & -0.032 & \underline{\textbf{0.491}} & -0.120 & \underline{\textbf{0.527}} & 0.393 & \underline{\textbf{0.442}} & \underline{0.402} & \textbf{0.464} & \underline{0.451} \\ Vegetables & \textbf{0.217} & -0.028 & \textbf{0.305} & 0.015 & \underline{\textbf{0.363}} & 0.089 & \underline{\textbf{0.587}} & 0.290 & \underline{\textbf{0.528}} & 0.228 \\ All hypernyms & \underline{\textbf{0.407}} & 0.111 & \underline{\textbf{0.504}} & 0.034 & \underline{\textbf{0.504}} & 0.242 & \underline{\textbf{0.446}} & 0.379 & \underline{\textbf{0.477}} & 0.377 \\ WordSim-353 & \underline{\textbf{0.543}} & 0.267 & - & - & \underline{\textbf{0.715}} & \underline{0.705} & \underline{0.675} & \underline{0.701} & \underline{\textbf{0.624}} & \underline{0.579} \\ SimLex-999 & \underline{\textbf{0.416}} & 0.180 & - & - & \underline{\textbf{0.566}} & \underline{0.513} & \underline{\textbf{0.475}} & \underline{0.445} & \underline{\textbf{0.500}} & \underline{0.374} \\ \bottomrule \end{tabular} \end{subtable} \caption{Best correlation scores between human similarity judgments and similarity scores found by the trained model, compared with baseline cosine metric values 
of the same hyperparameters. The underlined correlation values are the statistically significant values with a p $<$ 0.05, and the bold values correspond to model correlations that were higher than base correlations.}\label{tab:3} \end{table*} \begin{table*}[h] \begin{subtable}[h]{\textwidth} \centering \caption{Pearson correlations.} \small \begin{tabular}{lrrrrrrrrrrr}\toprule \multirow{2}{*}{\textbf{Dataset (h)}} &\multicolumn{2}{c}{\textbf{BERT}} &\multicolumn{2}{c}{\textbf{BERT$_{ctxt}$}} &\multicolumn{2}{c}{\textbf{GPT-2}} &\multicolumn{2}{c}{\textbf{W2V}} &\multicolumn{2}{c}{\textbf{GloVe}} \\\cmidrule{2-11} &$\%$ &$lr$, k &$\%$ &$lr$, k &$\%$ &$lr$, k &$\%$ &$lr$, k &$\%$ &$lr$, k \\\cmidrule{1-11} Birds & 217 & $10^{-6}$, 5 & 652 & $10^{-6}$, 5 & \textbf{770} & $10^{-5}$, 5 & 38 & $10^{-5}$, 5 & 11 & $10^{-5}$, 7 \\ Clothing & 290 & $10^{-6}$, 5 & \textbf{692} & $10^{-6}$, 6 & 44 & $10^{-5}$, 6 & 27 & $10^{-5}$, 7 & 58 & $10^{-6}$, 5 \\ Professions & 160 & $10^{-6}$, 5 & \textbf{723} & $10^{-6}$, 6 & 20 & $10^{-5}$, 5 & 12 & $10^{-5}$, 7 & 0 & $10^{-5}$, 5 \\ Sports & 158 & $10^{-5}$, 6 & \textbf{291} & $10^{-6}$, 6 & 72 & $10^{-5}$, 6 & 27 & $10^{-5}$, 6 & 50 & $10^{-6}$, 7 \\ Vehicles & 128 & $10^{-6}$, 6 & \textbf{401} & $10^{-5}$, 7 & 68 & $10^{-5}$, 5 & 3 & $10^{-5}$, 5 & 24 & $10^{-6}$, 6 \\ Fruit & \textbf{1869} & $10^{-5}$, 7 & 922 & $10^{-6}$, 6 & 64 & $10^{-5}$, 7 & 51 & $10^{-6}$, 5 & 46 & $10^{-7}$, 7 \\ Furniture & \textbf{1861} & $10^{-5}$, 7 & 1440 & $10^{-6}$, 6 & 42 & $10^{-5}$, 7 & 11 & $10^{-5}$, 6 & 2 & $10^{-5}$, 6 \\ Vegetables & 258 & $10^{-5}$, 7 & \textbf{566} & $10^{-6}$, 6 & 163 & $10^{-5}$, 5 & 105 & $10^{-6}$, 7 & 94 & $10^{-6}$, 5 \\ All & 334 & $10^{-5}$, 5 & \textbf{1255} & $10^{-6}$, 7 & 77 & $10^{-5}$, 6 & 21 & $10^{-5}$, 6 & 36 & $10^{-7}$, 6 \\ WordSim-353 & \textbf{117} & $10^{-6}$, 7 & - & - & 1 & $10^{-5}$, 7 & -3 & $10^{-5}$, 6 & 10 & $10^{-5}$, 5 \\ SimLex-999 & \textbf{150} & $10^{-6}$, 7 & - & - & 10 & $10^{-5}$, 6 &
9 & $10^{-5}$, 6 & 25 & $10^{-6}$, 5 \\ \bottomrule \end{tabular} \end{subtable} \begin{subtable}[h]{\textwidth} \centering \caption{Spearman correlations.} \small \begin{tabular}{lrrrrrrrrrrr}\toprule \multirow{2}{*}{\textbf{Dataset (h)}} &\multicolumn{2}{c}{\textbf{BERT}} &\multicolumn{2}{c}{\textbf{BERT$_{ctxt}$}} &\multicolumn{2}{c}{\textbf{GPT-2}} &\multicolumn{2}{c}{\textbf{W2V}} &\multicolumn{2}{c}{\textbf{GloVe}} \\\cmidrule{2-11} &$\%$ &$lr$, k &$\%$ &$lr$, k &$\%$ &$lr$, k &$\%$ &$lr$, k &$\%$ &$lr$, k \\ \cmidrule{1-11} Birds & 155 & $10^{-6}$, 5 & \textbf{475} & $10^{-6}$, 5 & 252 & $10^{-5}$, 7 & 18 & $10^{-5}$, 5 & 18 & $10^{-7}$, 5 \\ Clothing & 137 & $10^{-6}$, 5 & \textbf{692} & $10^{-6}$, 6 & 61 & $10^{-5}$, 7 & 23 & $10^{-5}$, 7 & 34 & $10^{-6}$, 5 \\ Professions & 102 & $10^{-6}$, 7 & \textbf{240} & $10^{-6}$, 5 & 18 & $10^{-5}$, 5 & 26 & $10^{-5}$, 7 & -1 & $10^{-7}$, 6 \\ Sports & 125 & $10^{-5}$, 6 & \textbf{270} & $10^{-6}$, 6 & 86 & $10^{-5}$, 6 & 35 & $10^{-5}$, 6 & 47 & $10^{-6}$, 6 \\ Vehicles & 118 & $10^{-6}$, 6 & \textbf{973} & $10^{-6}$, 6 & 117 & $10^{-5}$, 7 & 3 & $10^{-5}$, 5 & 14 & $10^{-6}$, 6 \\ Fruit & \textbf{1793} & $10^{-6}$, 7 & 223 & $10^{-6}$, 6 & 111 & $10^{-5}$, 6 & 33 & $10^{-6}$, 6 & 44 & $10^{-7}$, 7 \\ Furniture & \textbf{1003} & $10^{-6}$, 6 & 309 & $10^{-6}$, 5 & 34 & $10^{-5}$, 5 & 10 & $10^{-5}$, 6 & 3 & $10^{-6}$, 7 \\ Vegetables & 675 & $10^{-5}$, 7 & \textbf{1933} & $10^{-6}$, 6 & 308 & $10^{-5}$, 5 & 102 & $10^{-6}$, 7 & 132 & $10^{-6}$, 5 \\ All hypernyms & 267 & $10^{-5}$, 5 & \textbf{1382} & $10^{-6}$, 7 & 108 & $10^{-5}$, 6 & 18 & $10^{-5}$, 6 & 27 & $10^{-6}$, 5 \\ WordSim-353 & \textbf{103} & $10^{-6}$, 5 & - & - & 1 & $10^{-5}$, 7 & -4 & $10^{-6}$, 5 & 8 & $10^{-5}$, 5 \\ SimLex-999 & \textbf{131} & $10^{-6}$, 7 & - & - & 10 & $10^{-5}$, 6 & 7 & $10^{-5}$, 6 & 34 & $10^{-6}$, 5 \\ \bottomrule \end{tabular} \end{subtable} \caption{Change ($\%$) in correlation from Table \ref{tab:3}, given by 
$(|\text{Model}|-|\text{Base}|)/|\text{Base}|$, at corresponding best hyperparameters ($lr$, k). Values in bold indicate the highest increase on a given dataset.}\label{tab:4} \end{table*} \subsection{Testing the model} Each one of the $B^h_{i,l_r}$ models was tested on the corresponding holdout validation set $val_i$, resulting in two correlation scores between the models' predicted similarity scores and the human judgment scores: a Pearson correlation score $r^{h}_{i,l_r}(val^h_i)$ and a Spearman correlation score $\rho^{h}_{i,l_r}(val^h_i)$. A final score per k and $l_r$ was calculated using the average performance on the validation sets as \begin{align}\label{modelcorrs1} & r^{h}_{k,l_r}= \frac{1}{k}\sum_{i=1}^k r^{h}_{i,l_r}(val^h_i), \\ & \rho^{h}_{k,l_r}= \frac{1}{k}\sum_{i=1}^k \rho^{h}_{i,l_r}(val^h_i). \label{modelcorrs2} \end{align} The baseline results were obtained in a similar form, but with the model $B^{std}$ corresponding to the identity matrix, returning the standard cosine similarity rating as \begin{align} & r^{h,std}_{k}= \frac{1}{k}\sum_{i=1}^k r^{std}(val^h_i), \label{baseline11} \\ & \rho^{h,std}_{k}= \frac{1}{k}\sum_{i=1}^k \rho^{std}(val^h_i). \label{baseline12} \end{align} The model results shown in Table \ref{tab:3} correspond to the best correlation values obtained using Eqs. (\ref{modelcorrs1}) and (\ref{modelcorrs2}), with the baselines given as in Eqs. (\ref{baseline11}) and (\ref{baseline12}). The hyperparameters corresponding to the best results can be found in Table \ref{tab:4}, along with the relative change in correlation performance. As the seed was fixed, the differences in performance achieved by models trained on each hypernym and on all-hypernyms of the contextualized dataset were not due to randomization errors. The final correlation per fold on the entire all-hypernyms dataset was found by first calculating the correlation per hypernym and then averaging over all eight hypernyms. 
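The per-fold averaging of Eqs. (\ref{modelcorrs1}) and (\ref{modelcorrs2}) can be sketched as follows. This is an illustrative NumPy reconstruction with all names assumed; the tie-free rank-based Spearman here is a simplification of \texttt{scipy.stats.spearmanr}:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation between two score sequences."""
    return float(np.corrcoef(x, y)[0, 1])

def spearman(x, y):
    """Spearman as the Pearson correlation of ranks; no tie handling,
    which suffices for this sketch."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return pearson(rank(np.asarray(x)), rank(np.asarray(y)))

def kfold_correlations(preds_per_fold, targets_per_fold):
    """Average the per-fold Pearson and Spearman correlations between
    model predictions and human judgments over the k validation sets."""
    r = np.mean([pearson(p, t)
                 for p, t in zip(preds_per_fold, targets_per_fold)])
    rho = np.mean([spearman(p, t)
                   for p, t in zip(preds_per_fold, targets_per_fold)])
    return float(r), float(rho)
```

The baseline scores follow the same recipe with the identity model, i.e. with predictions given by the standard cosine similarity.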
To test the transferability of the metric learned on the all-hypernyms dataset to other datasets, the model that returned the best correlation scores on the validation datasets of the all-hypernyms dataset was tested on the entire WS353 and SL999 datasets. As the best performing model in fact consists of k models, each one of these was tested on the entire datasets, as \begin{align} & r^{h,test}_{k,l_r}= \frac{1}{k}\sum_{i=1}^k r^{All-hyp}_{i,l_r}(test^h), \\ & \rho^{h,test}_{k,l_r}= \frac{1}{k}\sum_{i=1}^k \rho^{All-hyp}_{i,l_r}(test^h), \end{align} with $h \in \{ \text{WS353}, \text{SL999}\}$. The baselines for these results were obtained by applying $B^{std}$ to the entire WS353 and SL999 datasets as \begin{align} & r^{h,std}= r^{std}(test^h), \label{baseline21} \\ & \rho^{h,std}= \rho^{std}(test^h) \label{baseline22}. \end{align} As the correlation functions are not linear, the results from Eqs. (\ref{baseline11}) and (\ref{baseline12}) for the WS353 and SL999 datasets are expected to differ from those obtained using Eqs. (\ref{baseline21}) and (\ref{baseline22}) for the same datasets. \section{Results} \label{results} The validation results in Table \ref{tab:3} show consistent improvements over the baselines, with statistical significance. This confirms that the modification introduced to the cosine measure worked in a principled way, and is consistent with the results found by \citet{richie2021similarity}. On the individual hypernym datasets, `Vehicles' showed the best correlations, except for the Pearson correlation in GPT-2, in spite of not being the largest hypernym dataset. Conversely, the smallest categories showed the lowest correlations. In general, the relative performance of hypernyms according to the baselines extends to the model correlations, although with better performance.
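The transfer evaluation just described, in which each of the k all-hypernym models is scored on the full external test set and the k scores are averaged, can be sketched as follows (all names illustrative, with \texttt{score\_fn} standing for either correlation function):

```python
import numpy as np

def transfer_score(models, test_pairs, test_targets, score_fn):
    """Average a score over the k trained models, each evaluated on the
    entire held-out dataset (WS353 or SL999)."""
    scores = [score_fn([model(a, b) for a, b in test_pairs], test_targets)
              for model in models]
    return float(np.mean(scores))
```

The corresponding baseline applies the identity model once to the whole test set, which is why it need not coincide with the fold-averaged baselines of the validation stage.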
With some exceptions, mainly in the `Birds' hypernym, the best performing representation was GPT-2, followed by W2V, but the relative increase as shown in Table \ref{tab:4} was clearly superior for the dynamic representations. An important observation is that the model trained on all hypernyms had a better performance than the average performance on the individual hypernyms. As the seed was fixed, this means that the performance on the hypernym-specific validation sets increased if at training time the models saw more examples, from different categories, indicating that a similarity relationship was learned and transferred across different contexts. Improvements over baseline also took place if a metric was learned on datasets where the word pairs did not share a context, as was the case with WS353 and SL999, but the percentage increase was lower, as seen in Table \ref{tab:4}. Comparing the results of BERT contextualized and non-contextualized, the baseline values of the contextualized representations were worse than those obtained with the non-contextualized embeddings, although without statistical significance, while the improvement after training was consistently better and significant for all datasets with the contextualized representations. Figs. \ref{fig:third} and \ref{fig:fourth} show that the distribution of points using the contextualized embeddings is more concentrated and collinear, making it more likely that a metric that acts in the same way for all points in the dataset will rotate and rescale them into a positive correlation. The percentage increases also show that BERT contextualized had the greatest increases from before to after training, suggesting that there was a cumulative effect in considering the context both in the representations and in the similarity measure. Table \ref{tab:5} shows the results of applying the best model learned on all hypernyms to the WS353 and SL999 datasets.
The baseline values for the static representations are comparable with the existing literature \cite{toshevska2020comparative}. We see that our model was capable of improving on the correlation scores on the datasets, for some representations. Although the improvements did not happen across the board, they show clear evidence that the notion of similarity in the form of a modified cosine measure can be learned in one dataset and applied with positive results to an independent dataset. \\ \begin{threeparttable}[t] \centering \small \par \begin{tabular}{lrrrrrr}\toprule \multicolumn{2}{c}{\multirow{2}{*}{}} &\multicolumn{2}{c}{\textbf{Pearson}} &\multicolumn{2}{c}{\textbf{Spearman}} \\\cmidrule{3-6} & &WS353 &SL999 &WS353 &SL999 \\\cmidrule{1-6} \multirow{2}{*}{\textbf{BERT}} & Model & \textbf{0.487} & \textbf{0.375} & \textbf{0.519} & \textbf{0.384} \\ &Base & 0.239 & 0.151 & 0.267 & 0.172 \\ \multirow{2}{*}{\textbf{GPT-2}} & Model & 0.635 & \textbf{0.507} & 0.676 & 0.513 \\ &Base & 0.647 & 0.504 & 0.709 & 0.520 \\ \multirow{2}{*}{\textbf{W2V}} & Model & 0.613 & 0.472 & 0.632 & \textbf{0.457} \\ &Base & 0.653 & 0.460 & 0.700 & 0.452 \\ \multirow{2}{*}{\textbf{GloVe}} & Model & \textbf{0.593} & \textbf{0.431} & 0.558 & \textbf{0.392} \\ &Base & 0.578 & 0.408 & 0.578 & 0.376 \\ \bottomrule \\ \textbf{SOTA} & & 0.704 & 0.658 & 0.828 & 0.76 \end{tabular} \caption{Best model trained on all hypernyms, tested on SimLex-999 and WordSim-353 datasets. Bold values indicate correlation scores above baseline, and underlining indicates statistical significance. State of the art from \citet{recski2016measuring, dobo2020comprehensive, speer2017conceptnet, banjade2015lemon}.}\label{tab:5} \end{threeparttable} \section{Conclusion and Outlook} \label{conclusion} In this paper we tested whether a contextualized notion of cosine similarity could be learned, improving the similarity not only of the results for the datasets where it was learned, but of unrelated similarities. 
We showed that this metric improved the correlations above baseline, and that, when learned on a contextualized similarity dataset, it had an advantage when compared to one learned on a dataset with unrelated word-pairs. We furthermore showed that this framework has the potential to generalize the notion of similarity to word-pairs it has not seen during training. An important future research line towards interpretability consists in understanding the properties of the metrics that yielded the best results, particularly in identifying the distinctive features of the best metrics, such as their eigensystems. Other further directions include applying these metrics to distributional compositional contractions, including with dependency enhancements \cite{kogkalidis2019constructive}, testing this framework on larger contextualized datasets and trying out more complex, non-linear, metric forms. \section*{Acknowledgements} All authors would like to thank Juul A. Schoevers for contributions made during the early stages of the project. A.D.C. would like to thank Gijs Wijnholds, Konstantinos Kogkalidis, Michael Moortgat and Henk T.C. Stoof for the many exchanges during this research. This work is supported by the UU Complex Systems Fund, with special thanks to Peter Koeze.
\section{Introduction} \label{sec:intro} Cold atoms and exciton-polaritons in semiconductor microcavities are systems whose capability to form Bose-Einstein condensates (BECs) has been demonstrated in recent years~\cite{Davis1995,kasprzak06:nature}. These BECs, due to their dual wave-particle nature, share many properties with classical waves, such as interference phenomena~\cite{Andrews1997,Hall:1998bs,Esslinger:2000aa,Bloch:2005vn}, which are crucial to gain insight into their undulatory character~\cite{BornWolf2000,Ficek2005}. One of the main differences between atomic and polariton condensates resides in the particles' lifetime: the finite lifetime of polaritons, in contrast with the infinite one of atoms, can be regarded as a complication. But making a virtue of necessity, a short lifetime also implies a significant advantage: polaritons have a mixed exciton-photon character~\cite{kavokin13}, their lifetime being determined by the escape of their photonic component out of the cavity. These photons are easily measured either in real- (near field spectroscopy) or momentum-space (far field spectroscopy)~\cite{novotny2006}, rendering full information about the polariton BECs' wave-function and, in particular, about its coherence~\cite{kasprzak06:nature}. Our goal is to profit from these measurements in momentum space to experimentally investigate something far from accessible in atomic condensates: the interference in momentum space produced by the correlation between two components of a condensate, which are, and have always been, spatially separated. Understanding coherence is important for a large number of disciplines spanning from classic optics to quantum information science and optical signal processing~\cite{Mandel1995,pryde2008}. Pitaevskii and Stringari made a theoretical proposal to investigate experimentally these interference effects in momentum space via the measurement of their dynamic structure factor~\cite{Pitaevskii:1999fv}.
In related experiments, coherence between two spatially separated atomic BECs has been indirectly obtained using stimulated light scattering~\cite{Saba:2005aa,Shin:2005aa}. In this work we perform a direct measurement of this correlation in polariton BECs, which, moving in a symmetrical potential landscape, acquire a common relative phase, obtaining a positive answer to Anderson's question~\cite{anderson1984,Castin:1997aa,legget2006,Pitaevskii2003}, which opens new perspectives in the field of multi-component condensates. \section{Experimental results and discussion} \label{sec:exp} We confront this task in a quasi one-dimensional (1D) system made of a high-quality AlGaAs-based microcavity, where $20 \times 300$ $\mu$m$^2$ ridges have been sculpted. The sample, kept at 10 K, is excited with 2 ps-long light pulses from a Ti:Al$_2$O$_3$ laser. In order to create polaritons in two separated spatial regions, the laser beam is split in two, named \emph{A} and \emph{B}, impinging simultaneously at positions distanced by $d_{AB}=70$ $\mu$m. Additional experimental details are described in the Supplementary information~\cite{supple}. A crucial issue when optically creating polaritons is the excess energy of the excitation laser. There are two well-explored alternatives: non-resonant excitation at very high energies~\cite{kasprzak06:nature} and strictly resonant excitation~\cite{amo2009_b}. The latter situation generally produces macroscopic polariton states with a phase inherited from that of the laser, unless special care is taken in the experiments~\cite{Amo:2011qf}. The former case is appropriate to avoid phase heritage, but it does not provide the momentum distribution, shown below, required for our experiments. In order to avoid these difficulties, we opt for a different alternative, depicted in Fig.~\ref{fig:fig1}(a): the laser beams excite the sample at the energy of bare excitons and $k_x\sim0$.
The broad bands between 1.542 and 1.548 eV correspond to excitonic emission; the sub-bands below 1.542 eV are the confined lower polariton branches. After energy relaxation, polariton condensates are created in a process that involves a non-reversible dressing of the excitons and therefore an erasure of the laser phase~\cite{supple}. \begin{figure}[htbp] \begin{center} \includegraphics[trim=0.0cm 0.3cm 5cm 1.5cm, clip=true,width=1\linewidth,angle=0]{figure_1.jpg} \end{center} \caption{(a) Sketch of the excitation and relaxation processes to form propagating polariton wave packets (\emph{WP}s) on a background showing the energy vs. $k_x$ emission obtained under non-resonant, low power excitation conditions. The grey ellipse depicts the excitation laser at 1.545 eV and $k_x \sim 0$. The dashed lines indicate the energy relaxation of excitons into polariton \emph{WP}s. Polariton \emph{WP}s, propagating with $k_x \approx \pm1.6 $ $\mu$m$^{-1}$ (slightly displaced for the sake of clarity), are depicted with circles, coded in colors explained in (b). The emission intensity is coded in a logarithmic, false color scale. (b) Sketch in real space of the experimental configuration. A laser beam is split into two arms, \emph{A} and \emph{B}, distanced by $d$. They create four propagating polariton \emph{WP}s, coded in different colors, $n_{1,2}^A$ (magenta, blue) and $n_{1,2}^B$ (red, green) moving along the $x$ axis of a microcavity ridge in the direction depicted by the arrows.} \label{fig:fig1} \end{figure} Above a given pump intensity threshold, polaritons with $k_x\sim0$ evolve towards two states with momenta $\pm k_x$ (Fig.~\ref{fig:fig1}(a)). As sketched in Fig.~\ref{fig:fig1}(b), this procedure results in the formation of four propagating polariton wave packets (\emph{WP}s).
We label the macroscopic state of the \emph{WP}s as $\psi_1^A$, $\psi_2^A$, $\psi_1^B$, $\psi_2^B$, where the superscript refers to the excitation beam, the subscript $1$($2$) is for \emph{WP}s initially moving to the left (right), i.e. with $k_x < 0$ ($k_x > 0$). The direction of propagation is determined by the presence of local effective-barrier potentials ($V_A$ and $V_B$), associated with a blue-shifted dispersion relation, coming from carrier-carrier repulsive interactions~\cite{Wertz:2010ys}. The densities of the polariton \emph{WP}s are given by $n^{A,B}_j=\left|\psi^{A,B}_j\right|^2,~j=1,2$. \emph{WP}s created by \emph{A} have never been together with those generated by \emph{B}, as sketched in Fig.~\ref{fig:fig1}(b). However, \emph{WP}s with the same subscript $j$ are in the same quantum state~\cite{note2_kx}. Using the capability of measuring directly in momentum space, a unique condition only achievable in light-matter condensates, we can assess whether or not \emph{WP}s $\psi_1^A$ and $\psi_1^B$ (or $\psi_2^A$ and $\psi_2^B$) are correlated with each other, being components of the same condensate. The two \emph{WP}s propagating to the left are described by a common macroscopic order parameter \begin{align} \Psi_1^{coh}\left(x\right)=\psi_1^{A}\left(x\right)+e^{i \phi}\psi_1^{B}\left(x\right), \label{eq:eq0} \end{align} while those propagating to the right are described by \begin{align} \Psi_2^{coh}\left(x\right)=e^{i \phi}\psi_2^{A}\left(x\right)+\psi_2^{B}\left(x\right). \label{eq:eq1} \end{align} The phases are chosen to have inversion symmetry with respect to $x=0$, because in our experiments we tune the intensities of the two lasers in order to get a symmetrical potential $V\left(x\right)=V\left(-x\right)$. In that respect, our condensates are related to each other through the symmetry of the excitation process. Furthermore, our potential landscape renders an equal motion for $\psi_j^A$ and $\psi_j^B$, i.e.
equal momenta $|\left(k_x\right)_j^A|=|\left(k_x\right)_j^B|=k_x$. These are precisely the suitable conditions to observe coherence between two components spatially separated by $d$, i.e. $\psi_j^A\left(x-d/2\right)=\psi_j^B\left(x+d/2\right)=\psi_0\left(x\right)$, of a given condensate $\Psi_j^{coh}$. This coherence can be observed in $\mathbf{k}$-space, as we now discuss. For the sake of clarity, we focus in the following discussion only on the left-propagating \emph{WP}s. The corresponding order parameter in \textbf{k}-space can be written as:
\begin{align}
\Psi^{coh}_1\left(k_x\right)&=\psi^{A}_1\left(k_x\right)+e^{i\phi}\psi^{B}_1\left(k_x\right)\notag\\
&=e^{-i k_x d/2} \psi_0\left(k_x\right)+e^{i\left(\phi+k_x d/2\right)} \psi_0\left(k_x\right)\label{eq:eq2}
\end{align}
with $\psi_0\left(k_x\right)$ being the Fourier transform of $\psi_0\left(x\right)$~\cite{Pitaevskii:1999fv}. This yields a momentum distribution
\begin{align}
n^{coh}_1\left(k_x\right)=\left|\Psi^{coh}_1\left(k_x\right)\right|^2=2\left[1+\cos\left(k_x d + \phi\right)\right]\left|\psi_0\left(k_x\right)\right|^2.
\label{eq:eq3}
\end{align}
The coherence between the two components produces interference fringes with a period
\begin{align}
\Delta k_x=2\pi/d.
\label{eq:eq4}
\end{align}
Our aim is to observe the existence of interferences in $\mathbf{k}$-space coming from this macroscopic two-component condensate. Far-field detection allows the direct measurement of momentum distributions, i.e. it gives a direct determination of the existence, and the period, of these interference fringes. It must also be taken into account that the measured total polariton density is formed by a condensed population, $n^{coh}$, coexisting with a thermal one~\cite{Valle:2009aa}; therefore, the interference-pattern visibility, $\nu$, is lower than 1 (see Supplemental Material~\cite{supple}).
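As an aside, the Fourier relation behind Eqs.~(\ref{eq:eq2})--(\ref{eq:eq4}) is easy to verify numerically. The sketch below is purely illustrative and not part of the experiment: the Gaussian envelope, the packet width and the numerical grid are assumptions chosen only for this check. It builds two identical wave packets separated by $d=70$ $\mu$m, both carrying the common momentum $k_0=1.6$ $\mu$m$^{-1}$ quoted in the text, and confirms that the momentum distribution shows fringes of period $2\pi/d\approx0.09$ $\mu$m$^{-1}$:

```python
import numpy as np

# Illustrative numerical check of Eqs. (2)-(4): two identical wave packets
# separated by d have a momentum distribution modulated by fringes of
# period 2*pi/d.  The Gaussian envelope, packet width sigma and the grid
# are assumptions made only for this sketch.

d = 70.0      # separation between the two components (um), as in the text
sigma = 5.0   # assumed packet width (um)
k0 = 1.6      # common propagation momentum (1/um), as in the text
phi = 0.0     # relative phase between the two components

x = np.linspace(-300.0, 300.0, 2**14)
dx = x[1] - x[0]

# Two-component order parameter, Eq. (1): psi0(x-d/2) + e^{i phi} psi0(x+d/2)
envelope = (np.exp(-(x - d / 2) ** 2 / (2 * sigma**2))
            + np.exp(1j * phi) * np.exp(-(x + d / 2) ** 2 / (2 * sigma**2)))
psi = envelope * np.exp(1j * k0 * x)

# Momentum distribution n(k_x) = |FT[psi]|^2
k = 2.0 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, dx))
n_k = np.abs(np.fft.fftshift(np.fft.fft(psi))) ** 2

# Average spacing between fringe maxima around k0
peaks = [k[i] for i in range(1, k.size - 1)
         if n_k[i] > n_k[i - 1] and n_k[i] > n_k[i + 1]
         and n_k[i] > 0.01 * n_k.max()]
delta_k = float(np.mean(np.diff(peaks)))

print(f"fringe period {delta_k:.4f} 1/um vs 2*pi/d = {2*np.pi/d:.4f} 1/um")
```

The fringe spacing recovered from the FFT agrees with Eq.~(\ref{eq:eq4}) to within the grid resolution, independently of the assumed envelope shape.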
\begin{figure*}[htbp] \begin{center} \includegraphics[trim=0.4cm 0.3cm 0.4cm 0.4cm, clip=true,width=0.75\linewidth,angle=0]{figure_2.jpg} \end{center} \caption{(a) Emission in real space, along the $x$ axis of the ridge, versus time. Gray circles at $x=\pm35$ $\mu$m indicate the spatial location of the \emph{A} and \emph{B} laser beams; the trajectories of the four \emph{WP}s, $n_1^A$, $n_2^A$, $n_1^B$ and $n_2^B$, are indicated by the dashed arrows. (b) Momentum space emission, along $k_x$, versus time. The grey circle indicates that the laser beams, \emph{A} and \emph{B}, excite the ridge at $k_x \sim 0$. The dashed, black arrows indicate the acceleration of the condensates $n_1^{coh}$ and $n_2^{coh}$, as well as the deceleration of the \emph{WP}s $n_1^{B}$ and $n_2^{A}$. Intensity is coded in a normalized, logarithmic false color scale.} \label{fig:fig2} \end{figure*} Our most important result is shown in Fig.~\ref{fig:fig2}(b): we indeed observe the interference fringes in $\mathbf{k}$-space, described by Eq.~\ref{eq:eq3}, directly in the polariton emission. This certifies the correctness of our hypothesis that each pair of \emph{WP}s ($\psi_j^A$, $\psi_j^B$) constitutes a two-component condensate. Figure~\ref{fig:fig2}(a) shows the actual evolution in time of the four \emph{WP}s schematically depicted in Fig.~\ref{fig:fig1}(a): our results clearly demonstrate that the distance $d$ between the two components of each condensate remains constant with time during the first $\sim70$ ps ($d= d_{AB}$), as evidenced by the dashed parallel arrows. Figure~\ref{fig:fig2}(a) also contains interesting real-space interferences, appearing when \emph{WP}s $\psi_2^A$ and $\psi_1^B$ overlap in real space at 66 ps, which we shall discuss in more detail below. A peculiarity of our experiments is that we observe the dynamics of the coherence; this allows us to determine that the two components of the condensate are phase locked, since there is no drift in the interference patterns.
As readily seen in Fig.~\ref{fig:fig2}(b), an initial acceleration of the four \emph{WP}s, from rest, $k_x = 0$, to $k_x=\pm1.6$ $\mu$m$^{-1}$ during the first 40 ps, is followed by a uniform motion taking place from 40 ps to 70 ps. The interference pattern of each condensate is observed until $\sim75$ ps, the instant at which $\psi_1^A$ and $\psi_2^B$ disappear from the sample region imaged in the experiments. Then, \emph{WP}s $\psi_1^B$ and $\psi_2^A$ are progressively slowed by the presence of the barriers at the excitation spots ($V_A$/$V_B$ halts $\psi_1^B$/$\psi_2^A$). When these two \emph{WP}s, which are the components of two different condensates $\Psi_1^{coh}$ and $\Psi_2^{coh}$, are stopped (at $\sim100$ ps), another interference appears in $\mathbf{k}$-space, but now at $k_x = 0$, as corresponds to \emph{WP}s at rest. This means that these two condensates also interfere with each other; it is remarkable that $\Psi_1^{coh}$ and $\Psi_2^{coh}$ still preserve some kind of mutual coherence, supporting the functional form of Eqs.~(\ref{eq:eq0}) and (\ref{eq:eq1}). For longer times, the two \emph{WP}s move again, as can be observed in Figs.~\ref{fig:fig2}(a,b), making it more difficult to track their trajectories. \begin{figure*}[htbp] \begin{center} \includegraphics[trim=0.4cm 1.1cm 0.4cm 0.3cm, clip=true,width=1\linewidth,angle=0]{figure_3.jpg} \end{center} \caption{(a) Momentum distribution $n\left(\mathbf{k}\right)$, at 35 ps after the excitation, showing the condensates $n_1^{coh}$/$n_2^{coh}$ at $k_x=\mp 1.6$ $\mu$m$^{-1}$, respectively. (b) Corresponding $n\left(\mathbf{r}\right)$ distribution showing \emph{WP}s $n_1^A$, $n_2^A$, $n_1^B$ and $n_2^B$. (c) Fourier transform of $n\left(\mathbf{k}\right)$, obtaining a frequency at $\Delta X=d=70$ $\mu$m. (d) Momentum distribution $n\left(\mathbf{k}\right)$ at 66 ps showing $n_1^B$ and $n_2^A$ at $k_x=\mp 1.6$ $\mu$m$^{-1}$, respectively.
(e) Real space distribution $n\left(\mathbf{r}\right)$ showing the interferences of $n_{12}$ at $x= 0$, created by the overlapping in real space of $\psi_1^B$ and $\psi_2^A$. The white dashed rectangle marks the region of interest where the interference occurs. (f) Fourier transform restricted to the region of interest in $n\left(\mathbf{r}\right)$, showing a frequency at $\Delta K_x = \kappa = 3.2$ $\mu$m$^{-1}$. (g) Momentum distribution $n\left(\mathbf{k}\right)$ at 108 ps, showing the interferences $n_{12}$ at $k_x \sim 0$. (h) Corresponding $n\left(\mathbf{r}\right)$ distribution showing $n_1^B$ and $n_2^A$. (i) Fourier transform of $n\left(\mathbf{k}\right)$, obtaining a frequency at $\Delta X = d_{12} = 60$ $\mu$m. Intensities in the false color scales for momentum, real and Fourier spaces are normalized to unity. The tilt in all panels originates from the orientation of the ridge with respect to the entrance slit of the spectrometer. The white dashed arrows mark the distances in real- and momentum-space between \emph{WP}s. The full arrows show these distances in the corresponding Fourier transform. Supplementary Video S1/S2 shows the time evolution of the emission in real/momentum space~\cite{supple}.} \label{fig:fig3} \end{figure*} Note that our measurements are performed by averaging over millions of shots of the pulsed laser; therefore, if $\phi$ were a phase determined by the projection involved in the measurement process~\cite{Castin:1997aa,legget2006}, it would take a random value in each realization. Then, averaging over all the possible results, the interference pattern would not be observed. However, as a consequence of the symmetry $V(x)=V(-x)$ of the potential, the whole state of the four \emph{WP}s, $\Psi$, is symmetric, both in real- and momentum-space.
The continuity in \textbf{k}-space of the wave-function ($\Psi (k_x)$) and of its derivative ($\partial \Psi (k_x)/\partial k_x$) sets the relative phase $\phi$ and makes the experimental realizations contribute constructively to the observed interference patterns. In other words, the spatial symmetry involved in the buildup of the condensates determines the relative phase $\phi$. In this sense, they are not independent of each other, although they have never before coincided in real space. Further insight into the quantum coherence is obtained by analyzing in detail the interferences occurring in momentum- and real-space. Accordingly, we present in Fig.~\ref{fig:fig3} two-dimensional maps of the polariton emission at three consecutive, relevant times~\cite{note1_nat}. We focus on the correspondence between the period of the interference patterns in each space (real and momentum) and the separation between the \emph{WP}s in the complementary space. Figure~\ref{fig:fig3}(a) shows the momentum distribution $n\left(k_x,k_y\right)$, 35 ps after the laser beams impinge on the sample. The coherence of each $\Psi_j^{coh}$ is evidenced by the conspicuous interference patterns, $n_j^{coh}$, centered at $k_x=\pm1.6$ $\mu$m$^{-1}$. In both cases, the fringe period amounts to $\Delta k_x=0.088(5)$ $\mu$m$^{-1}$ which, according to Eq.~\ref{eq:eq4}, should correspond to a distance between \emph{WP}s of $d= 71(4)$ $\mu$m. This is in good agreement with the experimental distance seen in Fig.~\ref{fig:fig3}(b): the two components of each condensate, $n_j^A$ and $n_j^B$, are separated by $d\simeq 70$ $\mu$m (see dashed arrows). Our findings are further supported by the Fourier transform map of $n\left(k_x,k_y\right)$ shown in Fig.~\ref{fig:fig3}(c): a well-defined Fourier component at $\Delta X=d=70$ $\mu$m is obtained, in accordance with the separation directly observed in real space.
Coherence in real space has been extensively studied in cold atoms~\cite{Andrews1997,Esslinger:2000aa,Hodgman:2011aa}, excitons~\cite{Snoke:2002aa,High2012} and polariton condensates~\cite{kasprzak06:nature,Balili2007,Roumpos2012,Manni2012,Rahimi-Iman:2012ij,Spano2012}. Our experiments also show interferences in real space between two condensates, similar to those reported in atomic BECs~\cite{Andrews1997,Esslinger:2000aa}. This is shown in Fig.~\ref{fig:fig3}(e) at 66 ps, when \emph{WP}s $\psi_2^A$ and $\psi_1^B$ meet each other at $x\sim 0$. The appearance of interference fringes in real space, $n_{12}$, unambiguously signals coherence between these two \emph{WP}s. Since real and momentum spaces are reciprocal to each other, equivalent results for the interference patterns are expected. The expression in real space complementary to Eq.~\ref{eq:eq4} now reads $\Delta x=2\pi/\kappa$, where $\Delta x$ is the period of the fringes and $\kappa$ the difference in momentum of the propagating \emph{WP}s. The experimental period of the fringes, seen in the dashed-rectangle area in Fig.~\ref{fig:fig3}(e), $\Delta x=1.99(17)$ $\mu$m, should correspond to $\kappa=\left(k_x\right)_2^A-\left(k_x\right)_1^B=3.2(2)$ $\mu$m$^{-1}$. This is again borne out by our results, as shown in Fig.~\ref{fig:fig3}(d), where the emission in $\mathbf{k}$-space clearly shows that \emph{WP}s $\psi_2^A$ and $\psi_1^B$ are counter-propagating with $k_x=\pm1.6$ $\mu$m$^{-1}$, respectively. Figure~\ref{fig:fig3}(f) shows the Fourier transform of $n_{12}$ in the region enclosed by the rectangle in Fig.~\ref{fig:fig3}(e). It reveals a strong $\Delta K_x$ Fourier component at 3.1 $\mu$m$^{-1}$, in full agreement with the value of $\kappa$ displayed in Fig.~\ref{fig:fig3}(d). Let us also emphasize that the \emph{WP}s first meet in real space at 66 ps, while interferences in momentum space are seen as early as $\sim10$ ps, demonstrating that the phase locking occurs before the \emph{WP}s spatially overlap.
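The reciprocity just invoked can be made explicit with the measured numbers themselves (all values taken from the text above):

```latex
\kappa=\frac{2\pi}{\Delta x}=\frac{2\pi}{1.99(17)\ \mu\mathrm{m}}
\simeq 3.16\ \mu\mathrm{m}^{-1},
```

consistent, within the experimental uncertainty, with the momentum difference $\kappa=\left(k_x\right)_2^A-\left(k_x\right)_1^B=1.6-(-1.6)=3.2$ $\mu$m$^{-1}$ of the two counter-propagating \emph{WP}s.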
The third result that we present corresponds to the arrival, at 108 ps, of $\psi_2^A$ and $\psi_1^B$ at the excitation regions \emph{B} and \emph{A}, respectively. Here, they run into the hills of the photogenerated potentials $V_B$ and $V_A$, which elastically convert their kinetic energy into potential energy~\cite{Anton:2013aa}. They slow down and halt, resulting in a new separation between \emph{WP}s $n_2^A$ and $n_1^B$, $d_{12} \sim 60$ $\mu$m (see Fig.~\ref{fig:fig3}(h)). Their emission in momentum space, arising from $k_x\sim 0$, evidences an interference pattern with $\Delta k_x=0.108(5)$ $\mu$m$^{-1}$ ($n_{12}$, see Fig.~\ref{fig:fig3}(g)). Once again, Eq.~\ref{eq:eq4} predicts a separation $d_{12}=60(4)$ $\mu$m between $n_2^A$ and $n_1^B$, as observed in the experiments. For completeness, we also show in Fig.~\ref{fig:fig3}(i) the Fourier transform map of the density, which exhibits an emerging component at $\Delta X=d_{12}=60$ $\mu$m. Further insight into this scaling behavior, relating distances in real space between \emph{WP}s with the fringe period in momentum space, is presented in the Supplementary information~\cite{supple}. \section{Conclusions} \label{sec:conclu} In summary, the convenience of monitoring the evolution of exciton-polaritons in semiconductor microcavities, through the detection of emitted light, makes this system an ideal platform to study quantum coherence properties in real- as well as in momentum-space. Profiting from this fact, we have demonstrated the existence of quantum remote coherence between spatially separated polariton condensates, whose phase is determined by the symmetry of the excitation conditions and is therefore constant in each realization of our multi-shot experiments. This issue is related to the superposition principle in quantum mechanics and is crucial to understand how mutual coherence is acquired. \section{Acknowledgements} \label{sec:acknow} We thank D. Steel and J.J.
Baumberg for a critical reading of the manuscript. C.A. acknowledges financial support from a Spanish FPU scholarship. P.G.S. acknowledges the Greek GSRT program ``ARISTEIA" (1978) for financial support. The work was partially supported by the Spanish MEC MAT2011-22997, CAM (S-2009/ESP-1503) and FP7 ITN's ``Clermont4" (235114), ``Spin-optronics" (237252) and ``INDEX" (289968) projects.
I \ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd
\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd 7 \ufffd\ufffd \ufffd bjbjUU 0, no diversity income ), and the how... A diversity index in R using vegan 's diversity index in R vegan... The total number of categories or classes ( between 2 and 20 and. Elements of diversity ( ) function it to 1 ( i.e metrics are available for calculating evenness and! The simpson's diversity index calculator, right the same thing in a question where you are to! The classical Simpson diversity estimator select the number of organisms of all ethnicities 3 for each experimental treatment ecosystem! Diversity index more diverse the area, right Biology Classroom ] how to interpret different values of a containing! 8 ) - i relative abundance of each individual species College Board \u2019 s of... 0 and 1 large community belong to different species present ) and input your sample input data a list otu... \u2026 ( that \u2019 s because we \u2019 re using the College Board \u2019 s index Black community Stands! Positive integer or decimal numbers ) with 1 as the abundance of each species, that is used to and... - the Gini coefficient measures the inequality among values of the relative sizes of two groups different of... Different way has a \u2026 Simpson 's diversity ( SID ) similarity index SDI... Only one species with the Black community thing in a given community 0 to 1 ( i.e is only! 
We can use Simpson \u2019 s index of diversity Calculator Online ( BPMSG ) more... Sdi ) measures community diversity compare the diversity of all species simpson's diversity index calculator metrics are for! The greater the diversity of characteristic species: if i have two communities where organisms of a community are... I 1 1 iv the Invsimpson Calculator is the generally accepted way to do just that given.! Formula, which does the same thing in a slightly different way existing. A similarity index ( SDI ) measures community diversity 2 and 20 ) and input your samples data (.. Namely \u2019 s diversity index is a mathematical measure of the different species making up the richness an. One species account when measuring diversity: wealth and fairness evenness ( i.e species making up the richness of area. \u2212 \u2211nn NN i ( ) i 1 1 iv community belong different! Diversity, richness and evenness of the different species iii [ D Biology Classroom ] how to use Simpson s! ) increases as diversity increases already required to report to the industry benchmark total number of organisms of all.. Compute for Simpson 's index is the inverse of the Shannon function ( it is an... Random from an infinitely large community belong to different species present index the index to use Simpson s! An outcome between 0 and 1, where: High scores ( close to 1 i.e. Diversity ( D ) however, there are two main factors that are taken into the! 
Is not an index ) increases as diversity increases \ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufff
d\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd\ufffd 7 \ufffd\ufffd \ufffd bjbjUU Simpson '' or ''! Account when measuring diversity: wealth and fairness it means we 're trouble! Similarity index ( SDI ) measures species diversity in a question where are. It means we 're having trouble loading external resources on our website calculating evenness i.e! Two habitats at once to allow you to calculate the inverse Simpson 's index is also useful to the! They are across the business ) the BPMSG diversity Online calculater allows you to directly compare habitats. An outcome between 0 and 1, where: High scores ( close to 1 ( i.e measure... That \u2019 s index of diversity, which is derived from Namely 's of... Can subtract it to 1 ( i.e income ) Black community: a a... Species richness and evenness increase, so diversity increases D ) the BPMSG Online. Don \u2019 t change what you don \u2019 t change what you \u2019. Characteristic species \ufffd\ufffd \ufffd bjbjUU elements of diversity, which is derived from 's... Commonly used to measure biodiversity, that is, the greater the diversity elements! The richness of an area you want to use for calculations ; partial match to ''! Different values of a community large community belong to different species present ) and input your samples data ( integer... That two organisms picked at random from the community will be Invsimpson D?. 
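The Simpson quantities described above can be sketched in a few lines of code. This is an illustrative implementation (the function and variable names are our own, not taken from any particular calculator), using the finite-sample forms given above:

```python
def simpson_indices(counts):
    """Compute Simpson's dominance D, the index of diversity 1 - D,
    and the inverse Simpson index 1/D from a list of species counts.

    counts -- genuine counts of individuals per species (n_i).
    """
    N = sum(counts)
    if N < 2:
        raise ValueError("need at least two individuals")
    # D = sum n_i(n_i - 1) / [N(N - 1)]: the probability that two
    # individuals drawn at random, without replacement, belong to
    # the same species.
    D = sum(n * (n - 1) for n in counts) / (N * (N - 1))
    inv = 1 / D if D > 0 else float("inf")
    return D, 1 - D, inv

# Example: a community of three species with counts 10, 5, 5.
D, sid, inv = simpson_indices([10, 5, 5])
# D ~ 0.342, 1 - D ~ 0.658, 1/D ~ 2.92
```

Note that a single-species community gives D = 1 and a diversity of 0, matching the interpretation above.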
Say ciao to the new Fiat 500 made entirely from Lego

Monday, 2nd March 2020, 7:00 pm (updated 7:04 pm)

A life-size Lego replica of one of the most iconic designs in motoring history has been revealed in Italy to mark the launch of a new modelling kit. The 1:1 scale model of the famous Fiat 500 was unveiled in Fiat's home city of Turin to celebrate the release of the Lego Creator Expert set.

Made from 189,032 Lego bricks and weighing 400kg - only around 100kg lighter than the real original - the Lego Fiat 500 took 830 man hours to build and features an original Fiat 500 steering wheel. The brick replica will take pride of place among a display of original Fiat 500s at Fiat's factory in Turin before heading out on a tour of Lego stores to promote the new set.

The life-size model is going on display in Turin before touring Lego stores around Europe (Photo: Lego)

The £74.99 Creator Expert kit is based on a 500F Legend from the late 1960s, and replicates many of the real thing's famous features in 960 bricks. Everything from the tiny two-cylinder engine to the rag-top sunroof has been recreated, and the set also includes a spare wheel, a rear-mounted luggage rack with suitcase and, randomly, a folding easel and colour palette and a painting of the car outside Rome's famous Colosseum.

Pierre Normandin, Lego designer, said: "The Fiat 500 is a global automotive icon – having first launched in 1957 and still a timeless classic. To see it celebrated in the beautiful city of Turin with this incredible new Creator Expert set and such an epic life-size build is testament to how beloved this car is."

The kit is based on a Fiat 500F from the late 1960s (Photo: Lego)

Cristiano Fiorio, head of brand marketing communication for Fiat in Europe, commented: "Throughout its illustrious history, the Fiat 500 has surpassed its original material manifestation to take its place in the collective unconscious, becoming an international icon. This is also demonstrated by its recent exposure at the Museum of Modern Art in New York.

"We know well that the Lego Group handpicks iconic products, and that the Fiat 500 is not only a car but an artistic and cultural phenomenon with strong symbolic value, as well as a joyous and colorful expression of the Italian spirit around the world."
\section*{Abstract} \else \small \begin{center} {\bf ABSTRACT} \end{center} \quotation \fi} \def\thebibliography#1{\section*{References\@mkboth {REFERENCES}{REFERENCES}}\small\list {\arabic{enumi}.}{\settowidth\labelwidth{[#1]}\leftmargin\labelwidth \advance\leftmargin\labelsep \usecounter{enumi}} \def\hskip .11em plus .33em minus .07em{\hskip .11em plus .33em minus .07em} \sloppy\clubpenalty4000\widowpenalty4000 \sfcode`\.=1000\relax} \let\endthebibliography=\endlist \newif\iffn\fnfalse \@ifundefined{reset@font}{\let\reset@font\empty}{} \long\def\@footnotetext#1{\insert\footins{\reset@font\footnotesize \interlinepenalty\interfootnotelinepenalty \splittopskip\footnotesep \splitmaxdepth \dp\strutbox \floatingpenalty \@MM \hsize\columnwidth \@parboxrestore \edef\@currentlabel{\csname p@footnote\endcsname\@thefnmark}\@makefntext {\rule{\z@}{\footnotesep}\ignorespaces \fntrue#1\fnfalse\strut}}} \makeatother \ifamsf \newfont{\bf}{msbm10 scaled\magstep2} \newfont{\bbbfont}{msbm10 scaled\magstep1} \newfont{\smallbbbfont}{msbm8} \newfont{\tinybbbfont}{msbm6} \newfont{\smallfootbbbfont}{msbm7} \newfont{\tinyfootbbbfont}{msbm5} \fi \ifscrf \newfont{\scrfont}{rsfs10 scaled\magstep1} \newfont{\smallscrfont}{rsfs7} \newfont{\tinyscrfont}{rsfs7} \newfont{\smallfootscrfont}{rsfs7} \newfont{\tinyfootscrfont}{rsfs7} \fi \ifamsf \newcommand{\bf}[1]{\iffn \mathchoice{\mbox{\footbbbfont #1}}{\mbox{\footbbbfont #1}} {\mbox{\smallfootbbbfont #1}}{\mbox{\tinyfootbbbfont #1}}\else \mathchoice{\mbox{\bbbfont #1}}{\mbox{\bbbfont #1}} {\mbox{\smallbbbfont #1}}{\mbox{\tinybbbfont #1}}\fi} \else \def\bf{\bf} \def\bf{\bf} \fi \ifscrf \newcommand{\cal}[1]{\iffn \mathchoice{\mbox{\footscrfont #1}}{\mbox{\footscrfont #1}} {\mbox{\smallfootscrfont #1}}{\mbox{\tinyfootscrfont #1}}\else \mathchoice{\mbox{\scrfont #1}}{\mbox{\scrfont #1}} {\mbox{\smallscrfont #1}}{\mbox{\tinyscrfont #1}}\fi} \else \def\cal{\cal} \fi \newcommand{\text}[1]{\mathchoice{\mbox{\rm #1}}{\mbox{\rm #1}} {\mbox{\scriptsize\rm 
#1}}{\mbox{\tiny\rm #1}}} \newcommand{\operatorname}[1]{\mathop{\rm #1}\nolimits} \newcommand{\stackrel}{\stackrel} \newcommand{\eqref}[1]{(\ref{#1})} \newcommand{\mathrel{\mskip-4.5mu/\!/\mskip-4.5mu}}{\mathrel{\mskip-4.5mu/\!/\mskip-4.5mu}} \newcommand{{\bf C}}{{\bf C}} \newcommand{{\cal F}}{{\cal F}} \renewcommand{\O}{{\cal O}} \renewcommand{\P}{{\bf P}} \newcommand{{\bf Q}}{{\bf Q}} \newcommand{{\bf R}}{{\bf R}} \newcommand{{\bf Z}}{{\bf Z}} \newcommand{\operatorname{Aut}}{\operatorname{Aut}} \newcommand{\mathop{\widetilde{\rm Aut}}\nolimits}{\mathop{\widetilde{\rm Aut}}\nolimits} \newcommand{\operatorname{Hom}}{\operatorname{Hom}} \newcommand{\operatorname{Ker}}{\operatorname{Ker}} \newcommand{\ |\ }{\ |\ } \newcommand{\operatorname{Spec}}{\operatorname{Spec}} \newcommand{\operatorname{Area}}{\operatorname{Area}} \newcommand{\operatorname{Vol}}{\operatorname{Vol}} \newcommand{\operatorname{gen}}{\operatorname{gen}} \renewcommand{\div}{\operatorname{div}} \newcommand{\operatorname{Div}}{\operatorname{Div}} \newcommand{\operatorname{WDiv}}{\operatorname{WDiv}} \newcommand{\operatorname{ad}}{\operatorname{ad}} \newcommand{\operatorname{tr}}{\operatorname{tr}} \newcommand{\operatorname{CPL}}{\operatorname{CPL}} \newcommand{\operatorname{cpl}}{\operatorname{cpl}} \newcommand{\operatorname{Im}}{\operatorname{Im}} \def\opeq#1{\advance\lineskip#1 \advance\baselineskip#1 \advance\lineskiplimit#1} \def\eqalign#1{\null\,\vcenter{\opeq{2.5\jot}\mathsurround=0pt \everycr={}\tabskip=0pt \halign{\strut\hfil$\displaystyle{##}$&$\displaystyle{{}##}$\hfil \crcr#1\crcr}}\,\null} \def$\sigma$-model{$\sigma$-model} \def\sm\ measure{$\sigma$-model\ measure} \defCalabi-Yau{Calabi-Yau} \def{K}{{K}} \def{\Scr R}{{\cal R}} \def{\Scr M}{{\cal M}} \def{\Scr A}{{\cal A}} \def{\Scr F}{{\cal F}} \def{\Scr D}{{\cal D}} \defS_{\hbox{\scriptsize LG}}{S_{\hbox{\scriptsize LG}}} \def\cM{{\Scr M}} \def{\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}{{\hfuzz=100cm\hbox to 
0pt{$\;\overline{\phantom{X}}$}{\Scr M}}} \defalgebraic measure{algebraic measure} \def\ff#1#2{{\textstyle\frac{#1}{#2}}} \def{\cal F}#1#2{{}_{#1}F_{#2}} \begin{document} \setcounter{page}0 \title{\LARGE Measuring Small Distances in\\ $N$=2 Sigma Models\\[5mm] } \author{\vbox{ \begin{tabular}{c@{\hspace{2cm}}c} \normalsize Paul S. Aspinwall& \normalsize Brian R. Greene\thanks{On leave from F. R. Newman Laboratory of Nuclear Studies, Cornell University, Ithaca, NY 14853}\\ \normalsize School of Natural Sciences& \normalsize School of Natural Sciences\\ \normalsize Institute for Advanced Study& \normalsize Institute for Advanced Study\\ \normalsize Princeton, NJ 08540& \normalsize Princeton, NJ 08540 \end{tabular}}\\ \null\\[2mm] \normalsize David R. Morrison\thanks{On leave from Department of Mathematics, Duke University, Durham, NC 27708} \\ \normalsize School of Mathematics \\ \normalsize Institute for Advanced Study\\ \normalsize Princeton, NJ 08540 } {\hfuzz=10cm\maketitle} \renewcommand{\Large}{\large} \renewcommand{\LARGE}{\large\bf} \begin{abstract} We analyze global aspects of the moduli space of K\"ahler forms for $N$=(2,2) conformal $\sigma$-models. Using algebraic methods and mirror symmetry we study extensions of the mathematical notion of length (as specified by a K\"ahler structure) to conformal field theory and calculate the way in which lengths change as the moduli fields are varied along distinguished paths in the moduli space. We find strong evidence supporting the notion that, in the robust setting of quantum Calabi-Yau moduli space, string theory restricts the set of possible K\"ahler forms by enforcing ``minimal length'' scales, provided that topology change is properly taken into account. Some lengths, however, may shrink to zero. We also compare stringy geometry to classical general relativity in this context. 
\end{abstract} \vfil\break \section{Introduction} Geometrical concepts play a central role in our theoretical descriptions of the fundamental properties of elementary particles and the spacetime arena within which they interact. The advent of string theory has reinforced this reliance on geometrical methods; it has done so, though, with a fascinating twist. String theory {\it necessitates} the introduction of particular modifications of standard geometrical constructions which can drastically modify their properties when the typical length scales involved approach Planckian values. Conversely, when all length scales involved are large compared to the Planck scale, these modified geometrical constructs approach their classical counterparts. This phenomenon of string deformed classical geometry is usually referred to as ``quantum geometry'' (although ``stringy geometry'' might be more accurate since our concern will be exclusively at string tree level). Recently \cite{AGM:I,AGM:II,W:phase}, a striking property of quantum geometry was uncovered in the context of string theory compactified on a Calabi-Yau\ space. A classical analysis instructs us to limit our attention to Riemannian metrics on such a space -- that is, to positive definite bilinear forms mapping $T_X \times T_X$ to ${\bf R}^+$. As Calabi-Yau\ spaces are K\"ahler, this condition can be rephrased as the statement that the K\"ahler form on the Calabi-Yau\ space $X$ lies in a subset of $H^2(X,{\bf R})$ known as the K\"ahler cone. However, an analysis based on string theory reveals a different story. Namely, it was shown in \cite{AGM:I,AGM:II,W:phase} that the physics of string theory continues to make perfect sense even if we allow the ``K\"ahler form'' to take values outside of the K\"ahler cone. This was shown, for example, to give rise to physical processes resulting in a change in the topology of the Calabi-Yau\ target space -- processes which classical reasoning would forbid. 
Another striking aspect of quantum geometry is the apparent existence of a minimum length set by the string scale $\alpha^\prime$. The evidence for this has come from a variety of studies. First, it has long been known \cite{YY:} that string theory compactified on a circle of radius $R$ is physically identical to the theory compactified on a circle of radius $\alpha^\prime/R$. The full set of physically distinct possibilities with this topology is therefore parameterized by radii $R$ varying from $(\alpha^\prime)^{1/2}$ to $\infty$; in this sense $(\alpha^\prime)^{1/2}$ is a minimum length in this setting. Additional evidence for a minimum length was given in \cite{CDGP:} in which the one dimensional space of K\"ahler forms on the quintic threefold was studied by means of mirror symmetry. Those authors found that physically distinct theories are again characterized by K\"ahler forms which attain a minimal nonzero volume. From another point of view, the work of \cite{GM:} showed that there appears to be a smallest length scale that can be probed via high energy scattering with an extended object such as a string. Roughly, unlike what happens in the point particle case, increasing the energy of the string probe beyond a critical value results in an increase in the size of the probe itself and hence a {\it decrease\/} in the length scale of sensitivity. At first sight, the observations in the last two paragraphs might seem to be at odds. On the one hand, we have mentioned work which establishes that string theory {\it relaxes\/} constraints on the Calabi-Yau\ metric and hence makes all of $H^2(X,{\bf R})$ available for consistent physical models. On the other hand, we have referred to work which establishes that string theory {\it restricts\/} the physically realizable metrics to a subset of those which are classically allowed. 
One of the main purposes of the present paper is to study this issue in some detail and show the harmonious coexistence of these apparently divergent statements. As part of the analysis in the sequel is somewhat technical, it is worthwhile for us to briefly summarize our results here. To do so, let us first recall that in \cite{AGM:I,AGM:II,W:phase} it was shown that string theory instructs us to pass from the moduli space of K\"ahler forms on a single Calabi-Yau\ space to the {\it enlarged\/} K\"ahler moduli space. The latter is a space which comes equipped with a decomposition into cells, each of which corresponds to a different ``phase'' of the $N = 2$ superconformal theory (see figure \ref{fig:fo}). From a mathematical point of view, one might say the walls between these cells correspond to K\"ahler forms which degenerate in some manner. Some of these phases are interpretable in terms of strings on (birationally equivalent\footnote{We remind the reader that two spaces are birationally equivalent if upon removing suitable subsets of codimension one from each they become isomorphic.} but possibly topologically distinct) smooth Calabi-Yau\ manifolds, some other phases correspond to strings on singular (orbifold) Calabi-Yau\ spaces and yet other phases include Landau-Ginzburg theories and exotic hybrid combinations. More precisely, each cell contains a neighbourhood of a distinguished ``limit'' point (marked with a dot in figure \ref{fig:fo}) around which some kind of perturbation theory converges and the above identifications can unambiguously be made. (For the Calabi-Yau phases, these are known as ``large radius limit'' points.) The region of convergence is shown by a dotted line in figure \ref{fig:fo}. A generic path in this enlarged K\"ahler moduli space corresponds to a family of well defined conformal theories and hence there is no obstruction to passing from one cell into another. This gives rise to the topology changing transitions mentioned earlier. 
Under mirror symmetry, this enlarged K\"ahler moduli space corresponds to the complex structure moduli space of the mirror. As discussed in \cite{AGM:II}, the badly behaved conformal field theories form a subspace of {\it complex\/} codimension one (as opposed to the {\it real\/} codimension one walls in the K\"ahler space) in an appropriate compactification of the moduli space, which under mirror symmetry corresponds to the ``discriminant locus'' of the complex structure moduli space. As this locus has real codimension two, a generic path in that moduli space avoids it. This is, in fact, how we established that the same must be true for a generic path in the enlarged K\"ahler moduli space of the original Calabi-Yau\ manifold. \iffigs \begin{figure} \centerline{\epsfxsize=7cm\epsfbox{sigmod-fo.ps}} \caption{The cell decomposition of part of the moduli space.} \label{fig:fo} \end{figure} \fi Taking this picture at face value, it appears that some points in figure 1 correspond to K\"ahler forms with zero or even negative volumes (since we pass outside of a single classical K\"ahler cone). One superficial way of treating this is simply to assert that a geometrical interpretation can only be given to a subset of points in the moduli space --- those points with a large positive volume according to some birational model of the space. Although that point of view avoids the obvious difficulties about negative volumes, our goal in this paper is to probe the issue more deeply and determine to what extent we can give a consistent geometrical interpretation (and hence assign a positive volume) to all points in the enlarged moduli space. A crucial ingredient in such a study is the precise definition of ``volume'' or ``size'' in the conformal field theory context. As the size of a space is an inherently classical mathematical notion, there is no unique way of extending its definition to quantum geometry. 
There are, however, a couple of compelling extensions which are both natural from the point of view of conformal field theory and which reduce to the standard notion of size in the appropriate large radius limits. One of these extensions relies upon mirror symmetry to rewrite the moduli space of K\"ahler forms on $X$ as the moduli space of complex structures of another space $Y$. The coordinates in this moduli space (which are coupling constants in the action of the associated conformal field theory) are then used to represent the size of $X$ in the simplest possible way (as we will discuss). This turns out to be equivalent to measuring ``size'' by using the classical K\"ahler form on the nonconformal linear $\sigma$-model\ which was studied in \cite{W:phase}. The second version of size is derived directly from properties of the conformal nonlinear $\sigma$-model. This definition can be obtained by requiring that it not only approach the notion of size based upon the classical K\"ahler form at the large radius limit, but also that it exactly match that notion in a certain neighbourhood of this limit. (This neighbourhood will be the region in which we can, at least in principle, calculate the conformal $\sigma$-model\ correlation functions and thus use the $\sigma$-model\ as the link between points in the moduli space and the geometry of $X$.) The measurement of size is then analytically continued in a natural way beyond this neighbourhood of the large radius limit point. In practice we will analyze this second definition of size by means of the first definition in the preceding paragraph, and of a function which relates the two sizes. This function can be expressed in terms of solutions to a set of differential equations --- the Picard-Fuchs equations. 
The ``sizes'' on which we focus in this paper will be described by specifying an area\footnote{More precisely, we specify a ``complexified area'' whose imaginary part is the ordinary area.} for every holomorphically embedded Riemann surface $C$ in $X$, or more generally, for every $2$-cycle $C$ on $X$. We will refer to such a specification of areas as a {\it measure\/} on $X$, and we will give precise definitions of the ``algebraic measure'' (the first notion of size) and the ``\sm\ measure'' (the second notion of size) later in the paper. The areas that we specify only depend on the homology class of the Riemann surface. If we choose $2$-cycles $C_i$ forming an integral basis of $H_2(X)$, and let $e^i$ be the dual basis of $H^2(X)$, then associating something like a complexified K\"ahler form $B + iJ = \sum t_i e^i$ to each conformal field theory in the moduli space, we see \begin{equation} \operatorname{Area}(C_i) = \operatorname{Im}\int_{C_i} (B+iJ) = \operatorname{Im}(t_i). \end{equation} One should note that although our moduli space usually contains theories corresponding to many topologically distinct birational models of $X$, we can sensibly define $H_2(X)$ across the whole moduli space. When we do this for a Riemann surface $C_i$ which has positive area in the neighbourhood of the large radius limit of one model $X_1$, the same Riemann surface $C_i$ may have negative area near the large radius limit of some other model $X_2$. This happens for the $\P^1$'s which are flopped when passing between these models \cite{OP:flop}. Thus what we consider to be positive or negative area depends on which $X_i$ we use as our starting point. One of the results strongly indicated (but not fully proven) by the present work is that for {\it any\/} point $(t_1,\ldots,t_n)$ representing a conformal field theory, the associated areas are non-negative when we calculate them using the \sm\ measure\ for a suitable choice of $X_i$.
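The bookkeeping behind this area assignment is elementary and can be illustrated concretely (the numbers below are hypothetical, purely for illustration, and not taken from any model in this paper):

```python
# Toy illustration: areas of 2-cycles read off from the complexified Kahler
# form B + iJ = sum_i t_i e^i via Area(C_i) = Im(t_i).  The coordinates t_i
# here are invented for the example.
t = [0.3 + 2.0j, -0.1 + 0.5j, 0.7 - 0.4j]  # hypothetical t_1, t_2, t_3

areas = [ti.imag for ti in t]
print(areas)  # [2.0, 0.5, -0.4]

# A negative entry (here for C_3) means that, from the vantage point of the
# chosen birational model, that 2-cycle -- e.g. a flopped P^1 -- is assigned
# negative area; the flopped model assigns the same cycle area +0.4.
flopped = [-areas[2]]
```
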
{\it In other words, every conformal field theory in the enlarged K\"ahler moduli space has non-negative areas with respect to the large radius definition of size specified by at least one of the smooth birational models of $X$ (and the method of continuation given above).} This is the resolution of the apparent conflict mentioned above that we put forth here (and is pictorially illustrated later in figure~\ref{fig:mush}). Notice that this representation of the enlarged K\"ahler moduli space still has all of the phases which string theory instructs us to include (thereby enlarging the classical K\"ahler moduli space of a single smooth Calabi-Yau\ manifold) but that on the union of these phase regions, the areas are constrained to be larger than certain minimum values (thereby reducing the classical K\"ahler moduli space). We note that the first evidence for this conclusion in the Calabi-Yau\ context can be extracted from \cite{CDGP:}. Following \cite{AGM:II,W:phase}, the analog of figure 1 for the enlarged K\"ahler moduli space on the quintic threefold is a $\P^1$ divided into two cells by the equator, with north and south poles removed. This can be thought of as arising from ${\Scr M} = H^2(X,{\bf C})/H^2(X,{\bf Z})$ in the natural exponential coordinates \cite{AGM:I,W:phase} where, as dictated by string theory, we place no restriction on the one dimensional imaginary part of this expression. The description of \cite{AGM:II,W:phase} then shows that the upper hemisphere (including arbitrary positive imaginary values in ${\Scr M}$) corresponds to the smooth Calabi-Yau\ phase while the lower hemisphere (including arbitrary negative imaginary values in ${\Scr M}$) corresponds to the Landau-Ginzburg phase. 
The analysis of \cite{CDGP:}, however, shows that if we use the $\sigma$-model\ definition of size (based on analytically continuing the K\"ahler form from the smooth Calabi-Yau\ region as indicated above), there is a positive lower limit on the size for all conformal theories in this enlarged moduli space. In the present work we extend this notion to more complicated moduli spaces which exhibit many qualitatively new features. Some of the regions of the moduli space we will explore can also be described in terms of classical ideas of general relativity. We will compare the classical version with the stringy description obtained in this paper. In section \ref{s:ls} we will review the local structure of the moduli spaces of interest to this work. This analysis will tell us how to describe the K\"ahler moduli space in terms of the algebraic structure of the underlying conformal field theory --- effectively by using mirror symmetry. In section \ref{s:gs} we will look at the global structure of the resulting moduli space. The discussion here will complement that of \cite{AGM:II}\ in which toric methods were used to describe the enlarged K\"ahler moduli space (and by mirror symmetry complex structure moduli space as well). Here our discussion will also use toric methods, but will naturally originate in complex structure moduli space. In particular, we will see that the discriminant locus (which may be thought of as the subspace of ``bad'' conformal field theories) is closely related to a fan structure which in turn provides data for a natural compactification of the moduli space. In section \ref{s:coord} we will discuss various ways of defining the ``size'' of a conformal field theory. Mathematically, this amounts to putting coordinates on the enlarged K\"ahler moduli space to determine a way of measuring areas at each point of the moduli space. It will be seen that two notions of area measurement arise. 
The first notion comes from the natural coordinates that were put on the moduli space in its algebraic toric construction. As we shall mention, these are also the coordinates which naturally arise in the $N$=2 supersymmetric gauge theories employed in \cite{W:phase}. The other method of area measurement comes directly from the K\"ahler form of the $\sigma$-model\ as sketched above. The main quantitative portion of the present work concerns presenting methods for the calculation of this \sm\ measure\ in section \ref{s:meas} for various boundary or limit points of the enlarged K\"ahler moduli space. By studying these extreme points in the moduli space we anticipate that our calculations will be sensitive to the extreme values of volumes that can physically arise. In section \ref{s:conc} we discuss the consequences of these calculations and present concluding remarks. \section{Local Structure of the Moduli Space} \label{s:ls} In much of what follows in both this and subsequent sections, we will use the tool of mirror symmetry and freely interchange one perspective with that of the mirror. To avoid confusion when we do so, let us state our notation clearly at the outset. Let $X$ and $Y$ be a mirror pair of Calabi-Yau\ manifolds. The mirror map takes (chiral,chiral)-fields into (antichiral,chiral)-fields and vice versa. For both $X$ and $Y$, we will associate deformations of the complex structure with deformations of the ring of (chiral,chiral)-fields, and thus associate deformations of the K\"ahler form with deformations of the ring of (antichiral,chiral)-fields. Since we ultimately wish to focus on deformations of the K\"ahler form of $X$ in the later sections, we use $x^i$ to denote an (antichiral,chiral)-field in the $X$ model and $y^j$ to denote a (chiral,chiral)-field in the same model. These are reversed by the mirror map for the $Y$ model. This notation is summarized in table \ref{tab:c}.
\begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|c|} \hline Deformations&Type&$X$&$Y$\\ \hline K\"ahler form&(a,c)&$x^i$&$y^j$\\ Complex structure&(c,c)&$y^j$&$x^i$\\ \hline \end{tabular} \end{center} \caption{Notation for the fields generating the deformations of} \centerline{the mirror pair of Calabi-Yau\ manifolds $X$ and $Y$.} \label{tab:c} \end{table} We begin with the nonlinear $\sigma$-model\ given by embeddings, $u:\Sigma\to Y$, of a Riemann surface $\Sigma$ into a compact K\"ahler manifold $Y$ of complex dimension 3: \begin{equation} S = \frac i{4\pi\alpha^\prime}\int\left\{g_{i\bar\jmath} (\partial u^i\bar\partial u^{\bar\jmath}+ \bar\partial u^i\partial u^{\bar\jmath}) -iB_{i\bar\jmath}(\partial u^i\bar\partial u^{\bar\jmath}- \bar\partial u^i\partial u^{\bar\jmath}) \right\}\,d^2z, \label{eq:sm} \end{equation} where $u^i$ are holomorphic coordinates on $Y$ pulled back to $\Sigma$ and $g_{i\bar\jmath}$ are the components of the pull-back of the K\"ahler form. The $B$-field is a closed real 2-form on $Y$. We will assume that $h^{2,0}(Y)=0$ so that the cohomology class of any closed 2-form $B$ can be represented as a $(1,1)$-form $B=\ff i2B_{i\bar\jmath}\,du^i\wedge du^{\bar\jmath}$. (In (\ref{eq:sm}) we also use $B_{i\bar\jmath}$ to indicate the pull-back to $\Sigma$ of the components of $B$.) The extra degrees of freedom introduced by the $B$-field appear to be essential to fully understand the structure of the moduli space of this $\sigma$-model. This $\sigma$-model\ may be made into an $N$=2 field theory by introducing for each $i$ a chiral superfield (in both the left-moving and right-moving sense) on $\Sigma$, $x^i$, whose lowest component is $u^i$. 
Introducing superspace coordinates $\theta^\pm$ for left-movers and $\bar\theta^\pm$ for right-movers one can show that \cite{N2sm:} the following action \begin{equation} S = \frac1{4\pi^2\alpha^\prime}\left\{\int {K}(x^i,x^{\bar\jmath})\, d^4\theta d^2z + 2\pi i\int_\Sigma u^*(B)\right\} \label{eq:sm2} \end{equation} yields (\ref{eq:sm}) as its bosonic part if $g_{i\bar\jmath}=\frac{2i}{\pi}\frac{\partial^2{K}}{\partial x^i\partial x^{\bar\jmath}}$. ${K}$ is a real symmetric function of $x^i$ and $x^{\bar\jmath}$, defined only locally on the target space. The field theory given by (\ref{eq:sm2}) is not necessarily conformally invariant. The conditions for conformal invariance can be studied \cite{CHSW:,Cal:sm} by means of $\sigma$-model\ perturbation theory where one assumes that $\alpha^\prime/R^2\ll1$, where $R$ is some characteristic radius of $Y$. This condition is called the ``large radius limit'' and its precise meaning should become clear later in this paper. To leading order in $\alpha^\prime/R^2$ one finds that conformal invariance can be achieved if $B$ is harmonic and $g_{i\bar\jmath}$ is Ricci-flat. Thus $Y$ is a Calabi-Yau\ manifold. Given any large radius Calabi-Yau\ manifold we can therefore associate to it a conformal field theory given by (\ref{eq:sm2}). The chiral fields $x^i$ of this theory have simple multiplication properties \cite{VW:} since one is free to make na\"\i ve definitions such as \begin{equation} (x^i)^2(z) = \lim_{w\to z}x^i(w)x^i(z). \label{eq:cr1} \end{equation} This simple structure for the algebra of the fields $x^i$ is the key to being able to use the algebraic methods later in this paper. In this paper we will often abuse notation and use $x^i$ to represent what is really its lowest component $u^i$. This is because the algebraic structure in the conformal field theory given by (\ref{eq:cr1}) directly translates into a statement about complex coordinates in algebraic geometry. 
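The passage from the K\"ahler potential ${K}$ to the metric is a purely mechanical differentiation, which one can sketch symbolically (this check is ours; we ignore the overall normalization quoted above, and treat $x$ and $\bar x$ as formally independent variables, as is standard in such computations):

```python
# Sketch: the sigma-model metric comes from the Kahler potential K by
# g_{i jbar} ~ d^2 K / dx^i dx^jbar (overall normalization suppressed).
# x and xbar are treated as independent symbols.
import sympy as sp

x, xb = sp.symbols('x xbar')

# Flat potential: K = x*xbar gives a constant metric.
g_flat = sp.diff(x*xb, x, xb)
print(g_flat)  # 1

# Fubini-Study-type potential on one affine patch of P^1: K = log(1 + x*xbar).
g_fs = sp.simplify(sp.diff(sp.log(1 + x*xb), x, xb))
# g_fs equals 1/(1 + x*xbar)**2, the round metric on the sphere.
assert sp.simplify(g_fs - 1/(1 + x*xb)**2) == 0
```
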
The moduli space of these theories can now be analyzed locally as was done in \cite{Cand:mod}. The key point is that the moduli space naturally splits into a product of two factors. Deformations of the metric, $g_{i\bar\jmath}$ can be divided into two types. Firstly there are deformations of the complex structure of $Y$. These do not preserve the (1,1)-type of $g_{i\bar\jmath}$ and introduce $g_{ij}$ and $g_{\bar\imath\bar\jmath}$ components. Such deformations form a moduli space of complex dimension $h^{2,1}(Y)$. The deformations of the metric that preserve its (1,1)-type can be combined with deformations of $B$ to form the other factor in the moduli space. This part of the moduli space can be regarded as the set of ``complexified K\"ahler forms'' $B+iJ\in H^{1,1}(Y)$, where $J$ is the K\"ahler form associated to $g_{i\bar\jmath}$, and it has complex dimension $h^{1,1}(Y)$. We will tend to drop the word ``complexified'' and refer to the combination $B+iJ$ itself as the ``K\"ahler form'' on $Y$. We will also fix our units of length so that $4\pi^2\alpha^\prime=1$. Note then that changing $B$ by adding an element of $H^2(Y,{\bf Z})$ to it will have no effect on the field theory given by (\ref{eq:sm2}) and so $B+iJ$ is best thought of as an element of the quotient space $H^{1,1}(Y)/H^2(Y,{\bf Z})$. As an alternative to describing everything in terms of the intrinsic geometry of $Y$, in some cases one can embed $Y$ as a hypersurface in an ambient space with simpler geometric properties. This will allow us to go some way to naturally splitting the deformations of complex structure and K\"ahler form in terms of the action. 
Consider the action (\ref{eq:sm2}) on some space $Y_1$ with the addition of other terms: \begin{equation} S = \int {K}(x^i,x^{\bar\jmath})\, d^4\theta d^2z + \int \lambda W(x^i)\,d^2\theta^+d^2z + \int \bar\lambda\overline{W}(x^{\bar\imath})\,d^2\theta^-d^2z + 2\pi i\int_\Sigma u^*(B), \label{eq:smc} \end{equation} where $\lambda$ is a new chiral superfield and $W(x^i)$ is a holomorphic function. This action also has $N$=2 supersymmetry. Since no world-sheet derivatives of the field $\lambda$ appear we may integrate it out from its equations of motion. Integrating out the lowest component of $\lambda$ forces the condition \begin{equation} W(u^i)=0. \label{eq:cond} \end{equation} Let us call the subspace of $Y_1$ given by (\ref{eq:cond}) the space $Y$. Integrating out the fermionic components of $\lambda$ forces the fermionic components of $x^i$ to lie in the tangent bundle of the space $Y$ defined by (\ref{eq:cond}). And integrating out the highest component of $\lambda$ introduces an extrinsic curvature term which along with the curvature of $Y_1$ produces the curvature of $Y$ much along the lines of \cite{Blau:}. Thus one sees that the field theory given by (\ref{eq:smc}) on $Y_1$ is equivalent to (\ref{eq:sm2}) on $Y\subset Y_1$. It follows that the condition for conformal invariance of (\ref{eq:smc}) to leading order is that the subspace $Y$ (rather than $Y_1$) should be Ricci-flat. Indeed, one approach \cite{VW:} (coming from the ideas of \cite{Friedan:}) to obtaining a conformal field theory from this construction is to put no condition on the metric on $Y_1$ and then consider the conformal field theory as the infra-red renormalized limit of (\ref{eq:smc}). In many cases all of the deformations of the complex structure of $Y$ can now be considered as deformations of the function $W(x^i)$ rather than of the metric on the ambient space. Since this is an algebraic question we have simplified the problem. 
In general one might have some deformations of complex structure which cannot be expressed as deformations of $W(x^i)$ \cite{GH:pdm} and we will indeed be treating examples where this does happen. In such a case we will ignore those ``extra'' deformations and so we will only really be treating a slice of the moduli space. There are examples known \cite{CDLS:} where a topological class of a Calabi-Yau\ manifold can be treated by more than one model of the form (\ref{eq:smc}). It can turn out \cite{BGH:} that in one model some deformations of complex structure can be thought of as deformations of $W(x^i)$ whereas in another model the same deformations cannot. Because of this fact one would expect that there is nothing special about the deformations we are ignoring and that we should be able to see all the salient properties of the moduli space by just looking at the slice of $W(x^i)$-type deformations. Consider the case where $Y_1$ is a complex projective space with, say, the Fubini-Study metric. The infra-red limit of the action (\ref{eq:smc}) describes a conformal $\sigma$-model\ on the projective variety $Y$. Note, however, that the $x^i$'s are affine rather than homogeneous coordinates on $Y_1$. It was shown in \cite{GVW:} that a change of variables can absorb $\lambda$ into the superpotential $W(x^i)$ and turn the affine coordinates into homogeneous coordinates. Such a change of variables also produces a discrete group of identifications such that the action (\ref{eq:smc}) is an orbifold of the equivalent action written in homogeneous coordinates. Similar results are also obtained when $Y_1$ is a weighted projective space (or even a more general toric variety) and the resulting $x^i$ coordinates are quasi-homogeneous coordinates. From now on, to improve notation, we will rewrite the coordinates $x^i$ as $x_i$.
Since these will always be coordinates in some flat affine space (of which the weighted projective space or toric variety is a quotient \cite{Cox:}), no confusion should arise. Recently, Witten \cite{W:phase}\ has analyzed Calabi-Yau $\sigma$-model s and their relationship to Landau-Ginzburg theories. This analysis has played a crucial role in understanding the phase structure of these theories as discussed in our introductory remarks. It also helps to clarify why algebraic methods suffice for understanding particular sectors of moduli space, as we now indicate. In Witten's approach, one begins with the action for an $N$=2 supersymmetric two-dimensional quantum field theory with a nontrivial gauge group, which for ease of exposition we temporarily take to be $U(1)$. The action for this theory is \begin{equation} S=\int f_{\rm kin}(x_i,y_j)\,d^4\theta d^2z +t\int f_{\rm FI}(y_j)\,d\theta^+d\bar\theta^-d^2z +\int W(x_i)\,d^2\theta^+d^2z + \hbox{h.c.}, \label{eq:LG} \end{equation} where $f_{\rm kin}(x_i,y_j)$ and $f_{\rm FI}(y_j)$ (the Fayet-Iliopoulos $D$-term) are functions which we will not concern ourselves with in this paper. One can then study this theory for various values of the parameter $t = r + i \theta$. As shown in \cite{W:phase}, for $r$ large and positive, this theory is a $\sigma$-model\ on the Calabi-Yau\ space given by $W = 0$ in a suitable weighted projective space. For $r$ large (in absolute value) and negative, the theory is interpretable as an orbifold of a Landau-Ginzburg theory with superpotential $W$. In the infra-red limit, these quantum field theories are expected to become conformal sigma models and conformal Landau-Ginzburg theories, respectively. Mathematically, the physical construction just reviewed corresponds to building various target spaces via symplectic quotients \cite{W:phase}. The parameter $r$ can then be interpreted as setting the size, or more precisely, the K\"ahler form on the resulting space.
In more general examples \cite{W:phase}, the number of $t$ parameters equals the dimension of $H^{1,1}$ of the associated Calabi-Yau\ space.\footnote{More precisely, the number of $t$'s equals the dimension of that part of $H^{1,1}(Y)$ which arises from the ambient variety $Y_1$.} One of the results of the present study is to make geometrical sense of such ``K\"ahler forms'' which a superficial analysis suggests will become negative on part of the parameter space. We will return to a discussion of the $t$ coordinates and these issues shortly. As is well known \cite{Cand:coup,VW:,LVW:,W:phase}, at the conformal limit, some of the equations of motion of (\ref{eq:LG}) yield \begin{equation} \frac{\partial W}{\partial x_i} = 0. \label{eq:eom} \end{equation} The important point for our purposes is that if we assume that all the deformations of the complex structure of $Y$ are encoded in the function $W(x_i)$, we can study the complex structure moduli space using algebraic methods. Namely, the fields $x_i$ obey the multiplication rules of the chiral ring \cite{VW:} \begin{equation} {\Scr R} = \frac{{\bf C}[x_0,x_1,\ldots]}{\left(\frac{\partial W}{\partial x_0},\frac{\partial W}{\partial x_1},\ldots\right)}, \end{equation} where $(I_1,I_2,\ldots)$ represents the ideal generated by $I_1,I_2,\ldots$. In the case that $Y$ is 3-dimensional, this ring encodes much of the information concerning the 3-point functions in the conformal field theory. Because the $x_i$ are (quasi-)homogeneous coordinates, or equivalently because they are charged under the $U(1)$ symmetries of the $N$=(2,2) algebra, the ring ${\Scr R}$ is graded. Elements of the ring with left and right charge (1,1) may be added to $W(x_i)$ in the action (\ref{eq:LG}) to give another valid theory. Such fields thus form truly marginal operators. We will now attempt to describe the deformations of K\"ahler form in the same language. 
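The structure of the chiral ring ${\Scr R}$ can be made very concrete in the simplest Fermat case (this small computation is ours, for illustration only, using the cubic $W = x^3 + y^3 + z^3$ rather than any threefold treated in this paper):

```python
# Illustration of R = C[x_i]/(dW/dx_i) for W = x^3 + y^3 + z^3: the Jacobian
# ideal is (x^2, y^2, z^2), so a vector-space basis of R is the set of
# square-free monomials, of dimension 2^3 = 8.
from itertools import product

basis = list(product(range(2), repeat=3))  # exponents of x, y, z, each <= 1
print(len(basis))  # 8

# The top-degree graded piece is one-dimensional, spanned by xyz -- the
# analogue of the marginal element entering the 3-point functions.
top = [m for m in basis if sum(m) == 3]
print(top)  # [(1, 1, 1)]
```
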
We will begin by describing the deformations of the complex structure of a Calabi-Yau\ threefold $X$ by describing $X$ as the zero locus of a holomorphic function $V(y_j)$ in some ambient space. (This $X$ will eventually turn out to be the mirror partner of the $Y$ above, which is why we have switched to $y_j$ to denote the (chiral, chiral) fields.) To be concrete let us focus on the example given by \begin{equation} V = y_0^3 + y_1^3 + y_2^6 + y_3^9 + y_4^{18}. \label{eq:LGe} \end{equation} This example will be used repeatedly throughout this paper to illustrate various points although, as will be apparent, the key results are general. By the arguments of \cite{VW:,GVW:,Martinec:}\footnote{This proof of equivalence of minimal models and Landau-Ginzburg theories is at the level of the chiral ring which is all that we require in this paper. For issues about whether such theories are completely equivalent see \cite{W:LG}.} this corresponds to the Gepner model $k=(1,1,4,7,16)$ \cite{Gep:}. There is a 76-dimensional vector space in ${\Scr R}$ of fields we can add to this action as marginal operators. The space is generated by fields such as $y_2^3y_4^9$, $y_0y_3y_4^{10}$, etc. When moving to affine coordinates the Landau-Ginzburg theory is orbifolded by the ${\bf Z}_{18}$ action \begin{equation} g:[y_0,y_1,y_2,y_3,y_4]\mapsto[\alpha^6y_0,\alpha^6y_1, \alpha^3y_2,\alpha^2y_3,\alpha y_4],\qquad \alpha=e^{2\pi i/18}. \end{equation} When we orbifold the conformal field theory by this action we expect to obtain a point somewhere in the moduli space of theories of $\sigma$-model s on the hypersurface $X$ given by the zero locus of (\ref{eq:LGe}) in the weighted projective space $\P^4_{\{6,6,3,2,1\}}$. This orbifold theory gives 3 twisted truly marginal operators in superfields of charge (1,1) that represent 3 deformations of complex structure of $X$ that cannot be given in terms of $V(y_j)$. 
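The count of 76 polynomial deformations can be verified by brute force (this check is ours, not part of the original analysis): for the Fermat polynomial (\ref{eq:LGe}) the Jacobian ideal is generated by $y_i^{18/w_i-1}$, so a basis of marginal monomials is given by exponent vectors bounded by $18/w_i-2$ with weighted degree 18.

```python
# Brute-force count of the degree-18 monomials in the Milnor ring of
# V = y0^3 + y1^3 + y2^6 + y3^9 + y4^18, weights w = (6,6,3,2,1):
# basis monomials have exponent of y_i at most 18/w_i - 2.
from itertools import product

weights = [6, 6, 3, 2, 1]
degree = 18
bounds = [degree // w - 1 for w in weights]  # range(b) gives 0 .. 18/w_i - 2

marginal = [n for n in product(*(range(b) for b in bounds))
            if sum(w * e for w, e in zip(weights, n)) == degree]
print(len(marginal))  # 76
```
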
Further analysis of the resulting orbifold also yields more truly marginal operators, this time in superfields with charge ($-1$,1). There are 7 of these. Analysis of the Gepner model shows that 5 of these can be written in the following form: \begin{equation} x_0x_1x_2x_3x_4,\; x_2^3x_4^9,\; x_3^3x_4^{12},\; x_3^6x_4^6,\; x_2^3x_3^3x_4^3, \label{eq:tmo} \end{equation} where $x_i$ is a superfield on $X$, this time {\it antichiral\/} in the left sector but chiral in the right sector, with the same $U(1)$ charges as $y_i$ except that the left-moving charge's sign is reversed. Thus if we use the notation $S_{\hbox{\scriptsize LG}}$ for the Landau-Ginzburg action (the action at the Gepner point) we can represent deformations of this action by \begin{equation} S = S_{\hbox{\scriptsize LG}} + \int V_1(y_j)\,d^2\theta^+d^2z + \int W(x_i)\,d\theta^+d\bar\theta^-d^2z + \hbox{h.c.}, \label{eq:defs} \end{equation} where $V_1(y_j)$ is a linear combination of the 76 marginal operators given by monomials in $y_j$ and $W(x_i)$ is a linear combination of the fields in (\ref{eq:tmo}). This gives a (76+5)-dimensional slice of the (79+7)-dimensional complete moduli space. These marginal operators written as polynomials in $x_i$ represent deformations of the K\"ahler form as was shown in \cite{Hub:MKT}. Thus having formed an algebraic structure to describe the moduli space of complex structures by embedding $X$ in some ambient space, by going to the Gepner point in moduli space we see a similar structure on the moduli space of K\"ahler forms. This property is of course being generated by mirror symmetry. As shown in \cite{GP:orb} one can take an orbifold of the Gepner model to reverse the sign of right-moving $U(1)$-charge; in the present formulation, this amounts to exchanging the geometrical r\^oles of $x_i$ and $y_i$ in (\ref{eq:defs}). 
The orbifold required is a quotient by the group $({\bf Z}_3)^3$ generated by \begin{equation} \eqalign{ [y_0,y_1,y_2,y_3,y_4]&\to[\omega y_0,y_1,y_2,y_3,\omega^2y_4]\cr [y_0,y_1,y_2,y_3,y_4]&\to[y_0,\omega y_1,y_2,y_3,\omega^2y_4]\cr [y_0,y_1,y_2,y_3,y_4]&\to[y_0,y_1,\omega y_2,y_3,\omega^2y_4],\cr} \label{eq:morb} \end{equation} where $\omega=\exp(2\pi i/3)$. Indeed, of the 76 monomials giving deformations of $V(y_j)$, the only ones invariant under (\ref{eq:morb}) are obtained from the 5 monomials in (\ref{eq:tmo}) by replacing $x$ by $y$. Thus we arrive at the conclusion that we can study (part of) the K\"ahler moduli space of the Calabi-Yau\ space $X$ corresponding to the hypersurface given by the zero locus of (\ref{eq:LGe}) in $\P^4_{\{6,6,3,2,1\}}$ by considering an orbifold of the theory given by \begin{equation} S = S_{\hbox{\scriptsize LG}} + \left(\int W(x_i)\,d^2\theta^+d^2z + \hbox{h.c.}\right). \label{eq:LGp} \end{equation} \section{Global Structure of the Moduli Space} \label{s:gs} In this section we shall describe the global structure of the enlarged moduli space of K\"ahler forms on the Calabi-Yau\ space $X$. We did this in some detail in \cite{AGM:II} by using toric methods and a particular construction of the so-called secondary fan. In the following we shall study this moduli space using a complementary approach which focuses on the complex structure moduli space of $Y$, to which it is isomorphic by mirror symmetry. We will freely interchange the words ``K\"ahler moduli space of $X$'' with ``complex structure moduli space of $Y$'', via this isomorphism.
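The claim that exactly the 5 monomials of (\ref{eq:tmo}) survive the quotient can also be checked directly (again a computation of ours): the first generator of (\ref{eq:morb}) multiplies a monomial $y_0^{n_0}\cdots y_4^{n_4}$ by $\omega^{n_0+2n_4}$, so invariance requires $n_i+2n_4\equiv 0 \bmod 3$ for $i=0,1,2$.

```python
# Filter the 76 degree-18 monomials in the Milnor ring of V by invariance
# under the (Z_3)^3 generators: n_i + 2*n_4 = 0 mod 3 for i = 0, 1, 2.
from itertools import product

weights = [6, 6, 3, 2, 1]
monomials = [n for n in product(range(2), range(2), range(5), range(8), range(17))
             if sum(w * e for w, e in zip(weights, n)) == 18]

invariant = [n for n in monomials
             if all((n[i] + 2 * n[4]) % 3 == 0 for i in (0, 1, 2))]

expected = {(1, 1, 1, 1, 1),   # y0 y1 y2 y3 y4
            (0, 0, 3, 0, 9),   # y2^3 y4^9
            (0, 0, 0, 3, 12),  # y3^3 y4^12
            (0, 0, 0, 6, 6),   # y3^6 y4^6
            (0, 0, 3, 3, 3)}   # y2^3 y3^3 y4^3
print(set(invariant) == expected)  # True
```
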
We will consider the function \begin{equation} \eqalign{W=a_0 x_0x_1x_2x_3x_4 + a_1 x_2^3x_4^9 &+ a_2 x_3^6x_4^6 + a_3 x_3^3x_4^{12} + a_4 x_2^3x_3^3x_4^3\cr &+a_5x_0^3+a_6x_1^3+a_7x_2^6+a_8x_3^9+a_9x_4^{18} = 0.\cr} \label{eq:gen} \end{equation} If we put $a_5=a_6=\ldots=a_9=1$ then we recover the superpotential of (\ref{eq:LGp}) and we may use the 5 complex numbers $a_0,\ldots,a_4$ to parameterize the moduli space of K\"ahler forms on $X$. In this paper however we are particularly interested in the {\em global\/} form of the moduli space and the act of setting $a_5=a_6=\ldots=a_9=1$ would exclude certain limit points from our moduli space. Given the fact that the scaling $x_i\to\lambda_i x_i$ is nothing more than a reparametrization of the theory one can immediately see that we have a $({\bf C}^*)^5$ group of symmetries of this family of theories. Actually in this example this $({\bf C}^*)^5$ is the maximum possible connected group of reparametrization symmetries --- a fact which is important in this analysis. See \cite{AGM:mdmm} for a discussion of this point.\footnote{Note that we have left open the possibility that the full group of reparametrization symmetries is not connected; in that case, in order to form the true moduli space we would need to mod out by an additional finite group action, the action of the group of connected components. We suppress consideration of that action in what follows.} If we initially impose the condition that $a_0,a_1,\ldots,a_9\neq0$ then the $a_k$ coordinates naturally span $({\bf C}^*)^{10}$. The $({\bf C}^*)^5$ group of symmetries acts without fixed points on this space and so part of our moduli space is the space $\cM\cong ({\bf C}^*)^5$ defined by \begin{equation} \cM = \frac{({\bf C}^*)^{10}}{({\bf C}^*)^5} . \label{eq:mod0} \end{equation} Note that $\cM$ is constructed by modding out {\em fully\/} by the $({\bf C}^*)^5$-action. 
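That the quotient (\ref{eq:mod0}) is five-dimensional can be seen from the exponent vectors of the ten monomials in (\ref{eq:gen}): the rescalings $x_i\to\lambda_i x_i$ act on the coefficients $a_k$ through these vectors, and the exponent matrix has full rank 5, so generic $({\bf C}^*)^5$-orbits are five-dimensional. A quick numerical confirmation (ours):

```python
# The 10 x 5 matrix of exponent vectors of the monomials in W has rank 5,
# so the generic orbit of the (C*)^5 of rescalings is 5-dimensional and
# M = (C*)^10 / (C*)^5 has dimension 10 - 5 = 5.
import numpy as np

exponents = np.array([
    [1, 1, 1, 1, 1],    # a0: x0 x1 x2 x3 x4
    [0, 0, 3, 0, 9],    # a1: x2^3 x4^9
    [0, 0, 0, 6, 6],    # a2: x3^6 x4^6
    [0, 0, 0, 3, 12],   # a3: x3^3 x4^12
    [0, 0, 3, 3, 3],    # a4: x2^3 x3^3 x4^3
    [3, 0, 0, 0, 0],    # a5: x0^3
    [0, 3, 0, 0, 0],    # a6: x1^3
    [0, 0, 6, 0, 0],    # a7: x2^6
    [0, 0, 0, 9, 0],    # a8: x3^9
    [0, 0, 0, 0, 18],   # a9: x4^18
])

rank = int(np.linalg.matrix_rank(exponents))
print(rank, 10 - rank)  # 5 5
```
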
Setting $a_5=a_6=\ldots=1$ for example would not be enough since it still leaves a residual ${\bf Z}_{18}$ group of reparametrization symmetries. This is in fact the origin of the ``extra'' discrete symmetries of moduli spaces which have often been encountered in explicit examples \cite{CDGP:,Mor:math,Mor:PF,AGM:I,CDFKM:I}. We have excluded from this space $\cM$ all points where any of the $a_k$'s vanish. So, for example, we have omitted the Fermat point (i.e., the form in (\ref{eq:LGe})). On the other hand, we have implicitly included points at which the hypersurface defined by (\ref{eq:gen}) acquires extra singularities, and such points do not belong in the moduli space. Our strategy now is to enlarge $\cM$ to a compact space ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$, and then to analyze the locus within ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ which corresponds to the set of ``bad'' conformal field theories. Removing that locus from ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ would then produce the actual moduli space. Adding in points to compactify $\cM$ to a space ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ is far from a unique process. The study of compactifications of $({\bf C}^*)^n$ is known as {\em toric geometry}. One describes the data of the compactification in terms of a fan of cones in ${\bf R}^n$ where each cone has a polyhedral base and has its apex at $O\in{\bf R}^n$. In \cite{AGM:II} it was shown that the set of cones one naturally uses to compactify (\ref{eq:mod0}) are given by some generalized notion of the K\"ahler cones of $X$ and its relatives. In this section we will motivate this collection of cones in a different manner --- namely in terms of the natural structure of the complex structure moduli space of $Y$. \subsection{The Discriminant} For fixed values of $a_0$, \dots, $a_9$, the zero locus of (\ref{eq:gen}) defines a hypersurface $Y$ in a toric variety. 
This toric variety can be represented as an orbifold of $\P^4_{\{6,6,3,2,1\}}$ by the group (\ref{eq:morb}), or it can be represented more directly through toric constructions as discussed in \cite{Batyrev1:,AGM:II}. Consider the case that there is a solution to the set of equations \begin{equation} \frac{\partial W}{\partial x_i} = 0,\qquad\forall i. \label{eq:d1} \end{equation} (This should be contrasted to (\ref{eq:eom}) which is a statement about the {\em operators\/} $x_i$. (\ref{eq:d1}) is a statement about the {\em complex numbers\/} $x_i$.) If this condition holds for some point $p\in Y$ (but not for all points in $Y$) then $Y$ will be singular at $p$. If (\ref{eq:d1}) has no solution then $Y$ is smooth (except for quotient singularities inherited from the ambient toric variety). Clearly the condition that (\ref{eq:d1}) has a solution is an algebraic problem and should be expressible in terms of a condition on the coefficients $a_k$. The locus of points satisfying this condition forms a subspace in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ which is called the ``discriminant locus''. If one tries to construct a conformal field theory corresponding to a point in the discriminant locus one runs into difficulties. When $Y$ is smooth, the chiral ring ${\Scr R}$ is well-behaved in the sense that it is generated as a vector space by a finite number of elements. These elements correspond to the chiral primary fields of the conformal field theory. When one moves onto the discriminant locus, the chiral ring ``explodes'' in the sense that it now appears to give an infinite number of chiral primary fields. When one tries to use the ring to calculate 3-point functions one also runs into trouble. Indeed, if one tries to associate a conformal field theory to such a point one appears to demand that at least some 3-point functions are infinite. Thus, the discriminant locus may be thought of as the subspace of ``bad'' theories.
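A toy one-variable analogue (ours, far simpler than the threefold case) makes the discriminant condition concrete: for the family $W = x^3 + ax + b$, the discriminant $-4a^3-27b^2$ vanishes exactly when $W$ and $\partial W/\partial x$ have a common root, i.e.\ when the zero locus is singular.

```python
# One-variable analogue of the discriminant condition: W = x^3 + a*x + b is
# singular (W and dW/dx share a root) exactly on -4*a**3 - 27*b**2 = 0.
import sympy as sp

x, a, b = sp.symbols('x a b')
W = x**3 + a*x + b

disc = sp.discriminant(W, x)
print(disc)  # -4*a**3 - 27*b**2

# At a point of the discriminant locus, e.g. (a, b) = (-3, 2), the system
# W = dW/dx = 0 acquires a solution (here x = 1), the analogue of (d1).
Ws = W.subs({a: -3, b: 2})
common = sp.gcd(Ws, sp.diff(Ws, x))
print(common)  # x - 1
```
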
It may be that there is some way of taming such theories; indeed, many of the points we will consider which are added to $\cM$ to form ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ will be in the discriminant locus, and we will be able to remove the infinities. For points on the discriminant locus within $\cM$, however, one must resolve questions such as the conformal field theory description of the conifold transitions of \cite{CGH:con}, and such conformal field theories would appear to be necessarily badly behaved in some sense. For all but the simplest examples, the discriminant locus is very complicated. In our example we will not be able to calculate the full discriminant but we will be able to obtain much of the information we need to study the global structure of the moduli space. The method we will follow is that presented in \cite{GZK:d}. First let us look at the condition that (\ref{eq:d1}) has a solution for $x_0,x_1,\ldots,x_4\neq0$. This can be written in the form \begin{equation} \Delta_0(a_k) = 0, \label{eq:d2} \end{equation} where $\Delta_0$, called the {\em regular discriminant}, is some polynomial function of the $a_k$'s. The regular discriminant locus thus obtained is part of the discriminant locus we want within ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$. The parts we have missed are, of course, the points for which (\ref{eq:d1}) is satisfied only when at least one of the $x_i$'s vanishes. The work of \cite{GZK:d} then proceeds as follows. First we need to introduce the {\em Newton polytope\/} for (\ref{eq:gen}). This was done in \cite{AGM:II} but we will repeat the main points here. Consider representing the monomial $x_0^{n_0}x_1^{n_1}x_2^{n_2} x_3^{n_3}x_4^{n_4}$ by the point $(n_0,n_1,n_2,n_3,n_4)$ in ${\bf R}^5$. The equation (\ref{eq:gen}) can thus be represented by a set of 10 points in ${\bf R}^5$. Call this set of points ${\Scr A}$.
These points lie in a hyperplane in ${\bf R}^5$ and in a 4-dimensional polytope whose corners are defined by the monomials with coefficients $a_5,a_6,a_7,a_8,a_9$. Call this polytope $P^\circ$. We can define a lattice $N$ within this ${\bf R}^5$ such that ${\Scr A} = P^\circ\cap N$. For each face, $\Gamma$, of this polytope (of any codimension, including codimension zero) we can define another equation given by the points in that face. For example, one of the codimension 1 faces corresponds to \begin{equation} W_\Gamma = a_2 x_3^6x_4^6 + a_3 x_3^3x_4^{12} + a_5x_0^3+a_6x_1^3+a_8x_3^9+a_9x_4^{18} = 0. \label{eq:egG} \end{equation} This defines another Newton polytope and we can define the regular discriminant related to it. For the face $\Gamma$ given by (\ref{eq:egG}), we would define this regular discriminant $\Delta_0^\Gamma$ in terms of the condition that all $\partial W_\Gamma/\partial x_j=0$ for some $x_j$ all nonzero where the index $j$ runs over the set $\{0,1,3,4\}$. This is similar to the part of the discriminant we missed with the regular discriminant when $x_2=0$. We have to be careful about the fact that the full discriminant required the condition that $\partial W/\partial x_2=0$ whereas this was not required for $\Delta_0^\Gamma$. Actually this doesn't matter. Setting $x_2=0$, we have \begin{equation} \frac{\partial W}{\partial x_2} = a_1x_0x_1x_3x_4\label{eq:ds1} \end{equation} but we also have \begin{equation} \frac{\partial W_\Gamma}{\partial x_0} = 3a_5x_0^2. \label{eq:ds2} \end{equation} Thus, in the definition of $\Delta_0^\Gamma$, where the vanishing of (\ref{eq:ds2}) is imposed we obtain $x_0=0$ but this forces (\ref{eq:ds1}) to vanish. Thus $\Delta_0^\Gamma$ does represent the discriminant of $W$ when $x_2=0$ and $x_j\neq0$. 
We can now define the {\em principal discriminant\/} as \begin{equation} \Delta_p = \prod_{\Gamma\subseteq P^\circ} \Delta_0^\Gamma, \end{equation} where $\Gamma$ ranges over all faces of $P^\circ$ from $P^\circ$ itself to just the vertices of $P^\circ$. We wish to declare that the condition $\Delta_p=0$ is precisely the condition that the associated quantum field theories are bad. From the reasoning given for the example when $\Gamma$ is given by (\ref{eq:egG}) this is true for all points in $\cM$. When we compactify $\cM$ to form ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$, parts of the principal discriminant locus $\Delta_p=0$ will coincide with parts of the divisor added to compactify $\cM$. Whether such conformal field theories are bad would appear to rest on precise definitions of ``badness''. We will elucidate this point by examples below. The methods of \cite{GZK:d} can now be used to give information about $\Delta_p$. Actually we will not be able to construct all of $\Delta_p$ but we will be able to calculate the key parts. For what we mean by ``key parts'' we will now turn to a description of the asymptotic behavior of the discriminant. The principal discriminant $\Delta_p$ is a complicated polynomial in the variables $a_k$. As we wander around the compactified moduli space ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ we encounter regions where there is one particular monomial $\delta_\xi$ within the polynomial $\Delta_p$ whose modulus is much bigger than the modulus of any other monomial. We can map out the general form of such regions as follows. We will begin by just considering the subspace $\cM\subset{\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$. Choose an explicit isomorphism $\cM\cong({\bf C}^*)^5$, and let $\tilde{a}_l\in{\bf C}^*$ be the coordinate from the $l$th factor in $({\bf C}^*)^5$. (The coefficients $a_k$ in (\ref{eq:gen}) can then be expressed in terms of the $\tilde{a}_l$, $l=1,\dots,5$.)
We make a change of moduli space parameters by \begin{equation} \tilde{a_l} = e^{2\pi ib_l},\qquad l=1,\ldots,5. \label{eq:pv1} \end{equation} Let us also introduce a space $U\cong{\bf R}^5$ with coordinates $u_l$ given by the {\it imaginary\/} part of $b_l$, i.e., $u_l=-\frac1{2\pi}\log |\tilde{a}_l|$. This defines a projection of the moduli space $\pi_U:\cM\to U$. (Later we will put $b_l=B_l+iJ_l$ in some sense so we expect $U$ to be the space of (real) K\"ahler forms when interpreted in the mirror setting on $X$.) Suppose now we consider a generic ray in $U$ that begins at the origin, $O$, and moves out to infinity. It is simple to see that if one is sufficiently far out along such a ray then a single term in the discriminant polynomial $\Delta_p$ will dominate it. This is because the modulus of all the $a_k$ parameters will be very large or very small, and since each monomial in $\Delta_p$ appears with differing exponents of $a_k$'s and the ray is in a generic direction, one monomial will contain the right exponents to win out over the other monomials. Thus if we consider a very large $S^4$ in $U$ with its center at $O$, then to almost every point on this sphere we can associate a particular monomial $\delta_\xi$ in $\Delta_p$ which will dominate. Asymptotically as the radius of the sphere approaches infinity we can cover $S^4$ with regions, each of which is associated to some monomial $\delta_\xi$. Points along the boundaries of these regions, i.e., where the regions touch will thus correspond to theories where two or more of the dominating terms in $\Delta_p$ are (asymptotically) equal in modulus. The set $\{\delta_\xi\}$ of all the monomials which have some region on the limiting $S^4$ associated to them will not, in general, include all the terms in $\Delta_p$. There will be some terms which never dominate $\Delta_p$ by themselves anywhere on the $S^4$. 
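The statement that a single monomial dominates along a generic ray can be made concrete: since $|\tilde a_l|=e^{-2\pi u_l}$, a monomial with exponent vector $m$ has modulus $e^{-2\pi R\langle m,u\rangle}$ at radius $R$ along the direction $u$, so the dominating monomial is the one minimizing $\langle m,u\rangle$. A minimal sketch (with made-up exponent vectors, not the actual terms of $\Delta_p$):

```python
# Sketch of the "one monomial dominates" argument: along a ray of direction u
# in U, |a~_l| = exp(-2*pi*u_l), so a monomial with exponent vector m has
# modulus exp(-2*pi*R*<m, u>) at radius R.  The dominating monomial minimizes
# <m, u>, and is unique for a generic direction u.  (The exponent vectors
# below are illustrative, not actual terms of Delta_p.)

def dominating_monomials(exponents, u):
    """Return indices of monomials minimizing <m, u> (a tie marks a wall)."""
    dots = [sum(mi * ui for mi, ui in zip(m, u)) for m in exponents]
    best = min(dots)
    return [i for i, d in enumerate(dots) if abs(d - best) < 1e-12]

monomials = [(18, 0, 0), (0, 18, 0), (12, 6, 2), (6, 6, 6)]

print(dominating_monomials(monomials, (1.0, 2.0, 3.0)))   # [0]  (generic ray)
print(dominating_monomials(monomials, (1.0, 1.0, 1.0)))   # [0, 1, 3]  (a wall)
```

The ties at non-generic directions are exactly the lower-dimensional cones separating the big cones described below.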
To each element $\delta_\xi$ of our set of monomials we may take the region in the $S^4$ at infinity described above and join all such points to $O$ by rays. This associates a cone in $U$ to $\delta_\xi$. The set of all such cones together with the subcones generated by the boundaries of the regions in $S^4$ combine to form a {\em fan\/} in $U$. This fan is the {\em secondary fan\/} that was described in \cite{AGM:II} (although one should note that in \cite{AGM:II} the secondary fan was described from the mirror K\"ahler form perspective --- the equivalence of the two descriptions follows from \cite{GZK:sp}). The term {\em big\/} cones will be used to denote the cones associated to the regions, as opposed to the lower-dimensional cones arising from the boundaries between regions. By means of the projection map $\pi_U$, this fan naturally breaks the compactified moduli space ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ itself up into different regions. We want to understand the transitions between regions of ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$, and how they are related to the zeros of $\Delta_p$ (i.e., to the discriminant locus). Let us write \begin{equation} \Delta_p=\sum_\xi r_\xi\delta_\xi + \widetilde\Delta_p, \label{eq:Drd} \end{equation} where $\widetilde\Delta_p$ represents all the terms which do not dominate in any big cone in $U$. $\Delta_p$ may be normalized such that $r_\xi\in{\bf Z}$. Although the discriminant locus has real codimension 2 in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$, we can expect its image in $U$ to be of the same dimension as $U$ since $U$ is half the dimension of ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$.\footnote{This would fail if $U$ had dimension $1$.} We restrict the discriminant polynomial to a large sphere $S^4$, and consider the asymptotic behavior of the image of $S^4\cap(\Delta_p=0)$ under $\pi_U$ as the radius grows. 
On the limiting $S^4$ ``at infinity,'' it is clear that in the interior of each region, $\Delta_p$ cannot vanish since $\Delta_p\simeq r_\xi\delta_\xi$. It is only when one approaches the boundary of a region that there is a possibility of a zero in $\Delta_p$. Actually we will argue that the image of the discriminant locus in $U$ provides codimension one walls which asymptotically follow the walls of the big cones as one moves out away from $O$. Consider a point well away from $O$ in a codimension-one wall in $U$ separating two big cones associated to $\delta_1$ and $\delta_2$. Let us assume that this point is nowhere near any other big cones. In this case one might at first suspect that $\Delta_p$ will be dominated by $r_1\delta_1+ r_2\delta_2$. In most cases however some other terms from $\widetilde\Delta_p$ will also become important. Now consider the line in $U$ going through this point in a direction normal to this wall. Choose the values of the real part of $b_l$ in the directions normal to this line. Consider the complexification of this line to an algebraic curve in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ specified by these values of the real part of $b_l$. That is, the points on this curve map to the line in $U$ and correspond to various values of the real part of $b_l$ in the same direction. There will be at least one solution to $\Delta_p=0$ along this line. As we vary the other components of the real part of $b_l$ we can move this solution to map out a region of this line. We know however that this image of the discriminant cannot fill up the whole of $U$ and is actually squeezed into a real codimension one space as one approaches the $S^4$ at infinity. In some cases, as we will see later, this zero in the discriminant occurs precisely on the wall between big cones but in the general case the image of the discriminant locus asymptotically approaches a hyperplane parallel to the wall in question. 
In figure \ref{fig:as} we show what might happen in an example where $U$ is 2-dimensional. Note the fact that the discriminant locus carves up $U$ asymptotically into regions given by the cones of the secondary fan except for a shifting given by the exact form of $\Delta_p$ near this wall. Later on we will describe these complex one-dimensional subspaces of ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ for lines in $U$ infinitely far away from $O$ and we will calculate where the discriminant locus intersects these subspaces. \iffigs \begin{figure} \centerline{\epsfxsize=11cm\epsfbox{sigmod-fa.ps}} \caption{The image of the discriminant locus in $U$.} \label{fig:as} \end{figure} \fi We have now arrived at the ``phase'' structure of the moduli space described in \cite{W:phase,AGM:II}. Each big cone in the secondary fan corresponds to a region of moduli space under the projection $\pi_U$. As we move from one region to another there is a singularity that one may encounter given by the discriminant locus. Notice that in the moduli space ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$, one has to aim correctly to hit this singularity --- the discriminant locus in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ is {\em complex\/} codimension one and hence may be avoided. \subsection{Compactification of the Moduli Space} \label{ss:cp} The fan structure in $U$ may now be used to specify a compactification ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ of $\cM$ by the usual methods of toric geometry. This, as we will now describe, adds in points in the moduli space associated with points at infinity in $U$. First we need to give a lattice structure to $U$, i.e., to specify a module ${\bf Z}^5$ within the vector space ${\bf R}^5$. We use our coordinates $u_l$ to specify this structure, the lattice points being those with integer coordinates $u_l\in{\bf Z}$ for $l=1,\ldots,5$.
Consider now each one-dimensional ray $\chi$ in our fan in $U$ and associate to it the lattice point it passes through which is closest to $O$. Call this point $(p_1(\chi),p_2(\chi),\ldots,p_5(\chi))$. In our example each big cone in the secondary fan is a so-called simplicial cone which simply means that it is subtended by 5 rays $\chi_1,\ldots,\chi_5$. For each big cone let us introduce a set of coordinates $(z_1,z_2,\ldots,z_5)$ related to the coordinates $\tilde a_l$ of $({\bf C}^*)^5$ by \begin{equation} \eqalign{ z_1^{p_1(\chi_1)}z_2^{p_1(\chi_2)}\ldots z_5^{p_1(\chi_5)} &= \tilde a_1 \cr z_1^{p_2(\chi_1)}z_2^{p_2(\chi_2)}\ldots z_5^{p_2(\chi_5)} &= \tilde a_2 \cr \vdots\hskip10mm&\cr z_1^{p_5(\chi_1)}z_2^{p_5(\chi_2)}\ldots z_5^{p_5(\chi_5)} &= \tilde a_5 \cr } \label{eq:ccs} \end{equation} If the $z_l$'s are all nonzero, they can be taken as coordinates in another $({\bf C}^*)^5$, and (\ref{eq:ccs}) defines a map $({\bf C}^*)^5\to({\bf C}^*)^5$ which is finite-to-one. (It will be one-to-one if the determinant of the matrix $(\,p_i(\chi_j)\,)$ is $\pm1$.) We add to $\cM$ the points given by the vanishing of any number of the $z_l$'s, i.e., we extend the $({\bf C}^*)^5$ space given by the $z_l$ coordinates to ${\bf C}^5$. Thus each big cone provides a partial compactification of $\cM$. For each big cone, the points added locally form the structure of coordinate hyperplanes, i.e., 5 hyperplanes intersecting transversely at a point. This point of intersection will be considered the ``point at infinity'' or ``limit point'' associated to the big cone. When we apply the above process for all of the cones in our fan we form a complete compactification of $\cM$ and this completely specifies our compactified moduli space ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$. 
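In logarithmic coordinates the change of variables (\ref{eq:ccs}) is linear: $\log\tilde a_i = \sum_l p_i(\chi_l)\log z_l$, so the $z_l$ are recovered by inverting the matrix $(\,p_i(\chi_j)\,)$. A numerical sketch with a hypothetical $2\times2$ ray matrix (the actual $5\times5$ data of our example is not reproduced here):

```python
# The coordinate change (eq:ccs) in logarithmic form: log(a~) = P @ log(z),
# where P[i][l] = p_i(chi_l) collects the primitive generators of the rays
# subtending a big cone.  Inverting P recovers the z-coordinates; z -> 0
# reaches the added "limit point" of the partial compactification.
# (P below is an illustrative 2x2 example, not the paper's actual 5x5 data.)
import numpy as np

P = np.array([[2.0, 1.0],
              [1.0, 1.0]])          # hypothetical ray-generator matrix, det = 1
a_tilde = np.array([0.5, 0.25])     # a point of (C*)^2, taken real for simplicity

log_z = np.linalg.solve(P, np.log(a_tilde))
z = np.exp(log_z)

# Check: z1^2 * z2 = a~_1  and  z1 * z2 = a~_2, as (eq:ccs) requires.
assert np.allclose([z[0]**2 * z[1], z[0] * z[1]], a_tilde)
print(z)
```

Since $\det P=1$ here, the map is one-to-one; an integer determinant of larger modulus would make it finite-to-one, as noted above.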
The points added form a divisor with normal crossings, i.e., a codimension-one subspace in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ whose irreducible components intersect transversely. It should be noted that the compactified moduli space ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ formed this way will not, in general, be smooth and is not smooth in our example. While one might wish to resolve the singularities in this space to address questions about monodromy of periods \cite{Mor:cp}, in this paper it will be important to retain the singularities. It would appear, therefore, that at least in some ways, ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$, and not its resolution, forms the most natural compactification of the moduli space of K\"ahler forms on $X$. There is a relationship between the codimension of parts of the compactification set and the dimension of the cones in our fan. To each cone of real dimension $n$ in our fan, we can associate an irreducible (sub)space of the compactification divisor of ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ of complex codimension $n$. For example, each big cone describes a point --- the ``point at infinity'' above, which is the point $(0,0,\ldots,0)$ in the coordinates $z_l$. Each one-dimensional ray in the fan corresponds to an irreducible component of the compactification divisor. Of particular interest in this paper will be the codimension-one walls in the fan, which give subspaces of complex dimension one within the compactification divisor. The fact that these one-dimensional subspaces are compact and toric means that they are {\em rational curves\/} (isomorphic to $\P^1$) in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$. Now we will describe how to calculate $r_\xi\delta_\xi$ for each big cone in $U$ following \cite{GZK:d}. Firstly we need to associate a triangulation of the set of points ${\Scr A}$ to each big cone.
This was described in detail in \cite{AGM:II} and we will again review it briefly here. The triangulation will be determined by the choice of a ``height function'' $\psi$, which, to each point $\alpha_k\in{\Scr A}$, associates a ``height'' $\psi(\alpha_k)\in{\bf R}$. As the name suggests, one should think of this as providing the extra coordinate for an embedding ${\Scr A}\subset{\bf R}^6$. The space $U$ can then be considered to be the space of ``relative heights''. That is, fix the position of $P^\circ$ within this ${\bf R}^6$ space by, say, fixing the vertices of $P^\circ$ to have height zero\footnote{This is possible in our example since the Newton polygon is simplicial; in general, normalizing the heights is more complicated.} and then let the other points vary to fill out a space of relative heights $\cong U$. Now consider stretching a piece of rubber over these points which are at various heights. If the heights are generic, the shape thus formed specifies a triangulation of ${\Scr A}$. The flat faces of the shape will be simplicial, specifying the simplices in the triangulation. Points not touching the shape, i.e., below the stretched film, are not considered in the triangulation. Thus, for example, if $\psi(\alpha_k)$ is negative for all points which are not vertices of $P^\circ$ and zero for the vertices of $P^\circ$, then the triangulation consists of just the simplex $P^\circ$ itself. By labeling the points in $U$ according to which triangulation they give, one obtains a fan structure with each big cone specifying a triangulation. Cones of lower dimension specify non-generic heights where one is on the borderline between two or more triangulations. Let us recall that we began by analyzing the principal discriminant and finding that some of the monomials contained in this naturally dominated in some region of moduli space. To each such monomial we associated a big cone in a fan in the space $U$.
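The rubber-sheet construction is simply an upper convex hull of the lifted points. A one-dimensional sketch (points on a line rather than in the four-dimensional polytope) makes the mechanism explicit:

```python
# "Rubber stretched over points at heights": for points on a line this is the
# upper convex hull of (x_k, psi_k).  Segments of the hull are the simplices
# of the induced subdivision; points strictly below the film are omitted.
# (A 1-d toy of the height-function construction, not the 4-d example.)

def upper_hull_triangulation(points):
    pts = sorted(points)                      # (x, height) pairs, x distinct
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop the middle point if it lies on or below the chord
            if (x2 - x1) * (p[1] - y1) >= (p[0] - x1) * (y2 - y1):
                hull.pop()
            else:
                break
        hull.append(p)
    xs = [x for x, _ in hull]
    return [(xs[i], xs[i + 1]) for i in range(len(xs) - 1)]

# Middle point below the film: one big simplex (cf. the P-itself triangulation).
print(upper_hull_triangulation([(0, 0), (1, -1), (2, 0)]))  # [(0, 2)]
# Middle point raised: it touches the film, refining into two simplices.
print(upper_hull_triangulation([(0, 0), (1, 1), (2, 0)]))   # [(0, 1), (1, 2)]
```

Raising a height until a point first touches the film is exactly crossing a wall between two cones of the secondary fan.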
Now we have associated a triangulation of the point set ${\Scr A}$ to each such cone. We can now state the algorithm from \cite{GZK:d} which directly specifies the monomial from the triangulation: Each big simplex (i.e., simplex of maximal dimension), $\sigma$, in the triangulation of ${\Scr A}$ can be given a normalized volume, $\operatorname{Vol}(\sigma)$, proportional to its actual volume. (The constant of proportionality is fixed so that in a maximal, or ``complete,'' triangulation, all simplices have $\operatorname{Vol}(\sigma)=1$.) Using the notation of (\ref{eq:Drd}), \begin{equation} \eqalign{r_\xi &= \pm\prod_{\sigma\in T_\xi} \operatorname{Vol}(\sigma)^{\operatorname{Vol}(\sigma)}\cr \delta_\xi &= \prod_k a_k^{\left(\sum_{\sigma\ni\alpha_k}\operatorname{Vol}(\sigma)\right)},\cr} \label{eq:rr} \end{equation} where $T_\xi$ is the triangulation associated to this monomial. The summation in the equation for $\delta_\xi$ is taken over the set of $\sigma\in T_\xi$ such that $\alpha_k$ is a vertex of $\sigma$. The relative signs of the $r_\xi$ can also be determined by a process given in \cite{GZK:d}. These signs will play an important r\^ole in our analysis and we will give an equivalent method of their calculation later in this paper. In our example, the polytope $P^\circ$ has normalized volume 18 and there are 100 triangulations leading to convex height functions $\psi(\alpha_k)$. (Actually these give all possible triangulations of the point set ${\Scr A}$ in this case.) Thus, there are 100 monomials in the sum in (\ref{eq:Drd}) and 100 big cones in our fan in $U$. As a simple example, for the monomial given by the triangulation consisting of one simplex ($P^\circ$ itself) we have \begin{equation} r_\xi\delta_\xi = -18^{18}\,a_5^{18}a_6^{18}a_7^{18}a_8^{18}a_9^{18}. \end{equation} At the other extreme, there are 5 possible complete triangulations of the set ${\Scr A}$.
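The single-simplex example just given can be checked mechanically against (\ref{eq:rr}). The sketch below encodes a triangulation as a list of (normalized volume, vertex set) pairs; the sign of $r_\xi$ is not computed here.

```python
# Direct check of (eq:rr) for the one-simplex triangulation T = {P}: with
# Vol(P) = 18 and vertices {alpha_5, ..., alpha_9}, the formula gives
# |r| = 18**18 and exponent 18 on each of a_5, ..., a_9, reproducing the
# monomial -18^18 a_5^18 ... a_9^18 (sign determined separately).
from collections import Counter

# Each simplex: (normalized volume, vertex labels).  Here just P itself.
triangulation = [(18, ['a5', 'a6', 'a7', 'a8', 'a9'])]

r_abs = 1
exponents = Counter()
for vol, vertices in triangulation:
    r_abs *= vol ** vol                # prod over sigma of Vol^Vol
    for v in vertices:
        exponents[v] += vol            # sum of Vol(sigma) over simplices at v

print(r_abs == 18**18)                 # True
print(dict(exponents))                 # {'a5': 18, 'a6': 18, ..., 'a9': 18}
```

Feeding in one of the complete triangulations (all simplices of volume 1) would instead give $|r_\xi|=1$, in line with the examples that follow.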
One of them has \begin{equation} r_\xi\delta_\xi = a_0^{18}a_1^{8}a_2^{6}a_3^{8}a_4^{10} a_5^{12}a_6^{12}a_7^{6}a_8^{6}a_9^{4}. \label{eq:del1} \end{equation} Intermediate triangulations give terms such as \begin{equation} r_\xi\delta_\xi = 1259712\,a_0^{15}a_1^{15}a_3^{10} a_5^{13}a_6^{13}a_7^{8}a_8^{13}a_9^{3}. \end{equation} \section{Putting coordinates on the moduli space and the definition of size} \label{s:coord} So far we have built the complete compact space ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ which compactifies the space of complexified K\"ahler forms on $X$ (or equivalently, complex structure moduli of $Y$) in the context of conformal field theory. What we have does not, at first sight, resemble a space of classical K\"ahler forms however. In this section we will review how the structure of ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ is linked to the classical notion of K\"ahler forms in some limiting sense and then show how this linkage may be continued to all points in the moduli space. In this way we will extend the usual mathematical notion of volume or size from classical to quantum geometry, as discussed in the introduction. We will explicitly do this by defining coordinates on the K\"ahler moduli space. In essence, the particular continuation of size from geometry to conformal field theory depends upon how we coordinatize the K\"ahler moduli space. In most contexts we do not place much importance on coordinate choices as we expect all physical conclusions to be independent of the possible choices. This reasoning is, of course, true here, but there is an important distinction. The space upon which we are putting coordinates is a moduli space, i.e. a space of coupling constants for a class of conformal field theory actions. Different choices of coordinates correspond to different ways of representing and parametrizing these quantum systems. 
Our goal in this paper is to {\it interpret\/} the geometrical content of all of the conformal theories in the enlarged K\"ahler moduli space. This goal, in turn, will dictate particular ways of representing these theories (via nonlinear sigma models) and particular parametrizations (directly in terms of their complexified K\"ahler forms and their analytic continuations) --- i.e. a particular choice of coordinates on the moduli space. This, we shall argue, is the choice which makes the geometrical interpretation most clear, but it is certainly not unique. In fact, we will find it useful to introduce two particular coordinate systems on the enlarged K\"ahler moduli space of $X$ --- each of which will give rise to a definition of ``size'' at every point of the moduli space. For both of these, we will give an implicit definition of length (inferred from an explicit definition of area) such that both of the following hold: \begin{enumerate} \item The definition of length in each conformal field theory is given in terms of the fundamental data determining the latter, i.e. its two and three point functions. \item If the underlying conformal theory is smoothly deformed to a large radius Calabi-Yau\ sigma model, then the conformal field theory definition of length asymptotically approaches the standard geometrical definition of length on the Calabi-Yau\ space. \end{enumerate} It might be worth pointing out here that such definitions of length should not be expected to be modular invariant. For instance, specifying that a circle has radius $R$ in string theory is not a modular invariant notion because the specified radius obviously differs (almost everywhere) from the equivalent radius $\alpha^\prime/R$. Even so, we are certainly justified in saying that string theory on a circle imposes a lower bound of $(\alpha^\prime)^{1/2}$ on the radius --- the point being that this is true in a fundamental domain (in the Teichm\"uller space) for the modular group. 
Thus, in this case, in order to associate a notion of size to conformal field theories, we are obligated to make a choice of fundamental domain and work within it. As we shall see, toric geometry provides us directly with the moduli space itself rather than the Teichm\"uller space. We will construct a fundamental domain so that the large radius limit will be an element of it\footnote{More precisely, the large radius limit is an interior point in a partial compactification $({\Scr D}/\Gamma)^-$ of ${\Scr D}/\Gamma$, where ${\Scr D}$ is the fundamental domain and $\Gamma$ is the ``$\sigma$-model\ part'' of the modular group, obtained from integral shifts of the $B$ field and the holomorphic automorphisms of $X$ \cite{Mor:cp}.} and so we are forming the analog of the $R>(\alpha^\prime)^{1/2}$ region in the above context. In practice, each of the definitions of length we introduce will rely on mirror symmetry. Namely, we have complete analytic understanding of the complex structure moduli space of, say, $Y$. Mirror symmetry provides us with an abstract map from this moduli space to that of the enlarged K\"ahler moduli space of $X$. Different explicit realizations of this map will associate different coordinates and hence definitions of length to the underlying conformal theories in the K\"ahler moduli space of $X$. The first explicit realization is mathematically the simplest and amounts to extending the ``monomial-divisor mirror map'' of \cite{AGM:mdmm}\ throughout the moduli space. The same coordinates also naturally arise from the physical approach of \cite{W:phase} from somewhat the opposite point of view. The second explicit realization makes use of the results of \cite{CDGP:} which, via the sigma model, provides a direct link between physical observables and classical geometry. 
\subsection{The monomial divisor mirror map and the algebraic measure} In section \ref{s:gs} we obtained the result that the space ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ contained 100 special points which were obtained from the 100 big cones in our fan in $U$. It was shown in \cite{W:phase,AGM:II} that each of these points could be related to some space that modeled $X$ in the following way. First one takes the cone in $U$ associated to the point in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$. Then one takes the triangulation $T_\xi$ of ${\Scr A}$ associated to this cone. This triangulation can then be taken as the base of a set of cones forming a fan $\Delta^+$ (not to be confused with the original fan in $U$). From $\Delta^+$ one builds a toric variety $V_{\Delta^+}$ in the same way as we constructed ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ from a fan. The target space $X$ can then be interpreted as the critical locus of some function $W$ within this toric variety. When identifying these models the point $\alpha_0$ associated to the monomial with coefficient $a_0$ in (\ref{eq:gen}) plays a special r\^ole. Key examples are as follows: \begin{enumerate} \item When $T_\xi$ is a complete triangulation of ${\Scr A}$ one has that $V_{\Delta^+}$ is a line bundle over some smooth toric 4-fold $V$. $X$ is then the Calabi-Yau\ manifold defined by $W=0$ at the infinite radius limit. \label{i:1} \item When $T_\xi$ is comprised of only the simplex $P^\circ$ then $V_{\Delta^+}$ is a point. The target space is this point but the quantum field theory includes some massless modes around this point. This is a Landau-Ginzburg orbifold theory. \item When $T_\xi$ omits some points of ${\Scr A}$ but $\alpha_0$ is a vertex of every $\sigma\in T_\xi$ then $V_{\Delta^+}$ has the structure of a line bundle (in a suitable sense) over some singular space $V$.
$X$ is again defined by $W=0$ and is still at some infinite radius limit but has quotient singularities. \item When $T_\xi$ has more than one simplex, $\sigma$, but $\alpha_0$ is not a vertex of each $\sigma$ then one has some kind of hybrid model where at least part of $X$ is given by a Landau-Ginzburg orbifold theory ``fibered'' over a manifold of complex dimension one or two. \end{enumerate} The reader may be somewhat surprised that we began with a specific manifold $X$ but now that we have analyzed the global structure of the moduli space of K\"ahler forms on $X$ we have 100 geometric models, each as valid as $X$. This is because, as emphasized in \cite{AGM:I,W:phase}, conformal field theory happily smooths out topological transformations of $X$ so that our moduli space will, if complete, necessarily contain the other types of $X$ that can be reached from the original $X$. The key example in this case is case \ref{i:1}. This allows us to identify which cone in $U$ corresponds to the $\sigma$-model\ we began with. In our example there are 5 complete triangulations of ${\Scr A}$ and hence 5 smooth Calabi-Yau\ manifolds equally valid as starting points for this analysis. Picking one of these models, we take the coordinates $z_l$ in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ from (\ref{eq:ccs}) related to the corresponding big cone. In more physical language, a given point in the moduli space corresponds to some abstract conformal field theory. The coordinates $z_l$ are chosen so that the complex structure on $Y$ is such that the resulting correlation functions agree with those of the associated conformal field theory. In other words, we deduce the coupling constants for the sigma model action on $Y$ (the coefficients in (\ref{eq:gen})) by ``measuring'' scattering amplitudes (calculating correlation functions) in the chosen conformal theory.
There is no problem in carrying out this procedure since we can calculate three point functions associated with complex structure moduli exactly by using the results of \cite{Cand:coup,DG:exact}. So much for the complex structure sector of $Y$. We now state that the complexified K\"ahler form on $X$ is given asymptotically by the {\em monomial divisor mirror map\/} \cite{AGM:mdmm}: \begin{equation} B_l + iJ_l = \frac1{2\pi i}\log(\pm z_l), \label{eq:mdmm} \end{equation} where we have defined some basis, $e_l$, of $H^2(X,{\bf Z})$, such that \begin{equation} B+iJ = \sum_l (B_l+iJ_l)e_l. \end{equation} We then define the cycles $C_l\in H_2(X)$ by \begin{equation} \int_{C_k} e_l = \delta_{kl}, \end{equation} and regard a choice of $B+iJ$ as a way of specifying areas: \begin{equation} \operatorname{Area}(C_l) = \operatorname{Im}\int_{C_l} (B+iJ) = -\frac1{2\pi}\log|z_l|. \label{eq:Bidef} \end{equation} (In fact, the ``complexified areas'' $\int_{C_l}(B+iJ)$ are also determined by this choice.) The sign ambiguity of $z_l$ in (\ref{eq:mdmm}) is referred to in \cite{AGM:mdmm}\footnote{This sign problem appears to have been ignored in \cite{BvS:}.} and we will fix it later in this paper. The divisors representing $e_l$ may also be determined by the monomial-divisor map \cite{AGM:mdmm}. Note that the monomial-divisor mirror map is consistent with the invariance of the theory under the transformation $B\to B+x,\; x\in H^2(X,{\bf Z})$, and that the origin of our coordinate patch $z_l=0$ corresponds to $J_l\to\infty$, consistent with this point being the large radius limit of the Calabi-Yau\ manifold. This is our first definition of coordinates. We have constructed the complete moduli space ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ of K\"ahler forms on $X$ and put coordinates on this space that allow us to explicitly assign an area to $2$-cycles at every point in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$.
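As a sketch, the passage from the coordinate $z_l$ to an area via (\ref{eq:mdmm}) amounts to the following, with the sign convention in which the large radius limit $z_l\to0$ gives $J_l\to+\infty$:

```python
# The algebraic measure from (eq:mdmm): B_l + i*J_l = log(z_l) / (2*pi*i), so
# with the convention that the large radius limit z_l -> 0 gives J_l -> +inf,
# Area(C_l) = J_l = -log|z_l| / (2*pi).
import cmath
import math

def complexified_kahler(z):
    """B_l + i*J_l for one modulus z_l (branch of log fixed arbitrarily)."""
    return cmath.log(z) / (2j * math.pi)

def area(z):
    """Algebraic area of the cycle C_l at the point z_l of the moduli space."""
    return -math.log(abs(z)) / (2 * math.pi)

z_large_radius = 1e-6        # deep inside the sigma-model phase
z_outside_cone = 2.0         # |z| > 1: the algebraic area has gone negative

print(area(z_large_radius) > 0, area(z_outside_cone) < 0)   # True True
assert math.isclose(complexified_kahler(z_large_radius).imag,
                    area(z_large_radius))
```

The second point illustrates the phenomenon discussed below: leaving the K\"ahler cone in $U$ drives some algebraic area through zero and eventually to $-\infty$ at the corresponding limit point.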
We may consider that the measurement of areas on $X$ is {\em defined\/} by the choice of cohomology class (\ref{eq:mdmm}), and that this definition agrees with classical geometry at large radii. This definition, as discussed above, can be phrased in terms of the correlation function data of the underlying conformal theory. Therefore, this definition satisfies the two properties emphasized at the beginning of section 4. {\it The measurement of areas defined in this way will be called ``the algebraic measure''.} This object (or rather, its imaginary part) may be used in the same way as the classical K\"ahler class $J$ to measure the areas of Riemann surfaces in $X$. (In this case, one can also measure the volume of $X$ itself using $J\wedge J\wedge J$, and the volumes of divisors on $X$ using $J\wedge J$, but we will concentrate on the area measurements, for reasons we will see shortly.) The classical geometric significance of these coordinates is most directly gleaned from the work of \cite{W:phase}. As we have discussed earlier and will explain more fully in \cite{AGM:IV}, Witten's approach is the physical manifestation of the toric methods under discussion via the relationship between holomorphic and symplectic quotients. The real parts of the coordinates $t$ (more generally, $t_i$) which appear in the action (\ref{eq:LG}) are, in fact, precisely the algebraic coordinates just defined. That is, if one wants to connect the algebraic measure\ to some classical notion of distance then the algebraic measure\ may be thought of as arising from the classical K\"ahler form on the target space of the non-conformal field theory given by (\ref{eq:LG}). With this definition of the complexified K\"ahler class $B+iJ$, the image of the $\sigma$-model\ phase under the projection $\pi_U$ is precisely the K\"ahler cone of $X$.
If one follows a path in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ which moves from the large radius $\sigma$-model\ on $X$ to a point $m\in{\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ where $\pi_U(m)$ lies outside that cone, then $J_l$ becomes negative for some $l$ just as one passes through the wall of the cone. That is, the area of some Riemann surface on $X$ becomes negative at this point. Thus, the 99 other cones in $U$ can be interpreted as a $\sigma$-model\ on $X$ where some area is negative. As mentioned in the introduction, in the first case studied in \cite{W:phase} of the mirror of the quintic threefold, $U$ was a line and consisted of just two cones, i.e., two rays in either direction from $O$. When one ray is interpreted as the K\"ahler cone of the Calabi-Yau\ manifold one sees that the other region must be interpreted as a manifold whose overall volume is negative and that the Landau-Ginzburg orbifold theory can be thought of as a Calabi-Yau\ manifold with overall volume equal to $-\infty$. Our situation is similar but now we have 99 limit points where the area of some subspace of $X$ (and perhaps the volume of $X$ itself) is $-\infty$. Four of these other regions actually have all of the associated areas being positive if we interpret the situation not in terms of $X$ but rather in terms of a topologically different manifold --- a {\em flop\/} of $X$ \cite{AGM:II}. We emphasize that we have not modified the physics in any way; we have only reinterpreted the conformal field theory in terms of its most natural geometrical model. Some of the other 95 limit points correspond to orbifolds. In this context, the orbifold points in the moduli space of Calabi-Yau\ manifolds would normally be thought of as limit points where some divisor, the {\em exceptional divisor}, has shrunk down to zero volume (the reverse of blowing up). 
When we use the algebraic measure\ however we arrive at the different conclusion that the volume of the exceptional divisor is $-\infty$ at the orbifold point. (In terms of areas: every Riemann surface within that exceptional divisor has area $-\infty$.) Actually this shift from 0 to $-\infty$ is a recurring feature of many of the other regions. In each case one would naturally have wanted to interpret the conformal field theory as having some target space in which some part of $X$ has shrunk to zero area, but in each case the area defined by the algebraic measure\ is $-\infty$. The Landau-Ginzburg orbifold model is the extreme case --- here the target space is a point, i.e., the whole of $X$ has shrunk to zero, whereas its algebraic areas are all $-\infty$. Thus we have seen that the algebraic measure\ has its advantages and disadvantages. It is easily defined in terms of the natural coordinate charts on ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ and it reproduces the K\"ahler cone of $X$. What one might be uncomfortable with however is the fact that most of the moduli space ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ is comprised of $X$'s with negative area subspaces and that this definition has a complicated (and largely only implicit) conformal field theory representation. \subsection{The \sm\ measure} We will now make another attempt at defining ``size,'' this time trying to model more closely the properties of the classical K\"ahler form. This is done at the expense of the simplicity of the definition in terms of the natural coordinates on ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$. One can use the action (\ref{eq:sm2}) to calculate the 3-point function between (chiral,antichiral)-fields in the resulting quantum field theory. This is best achieved by twisting this $N$=(2,2) superconformal $\sigma$-model\ into the so-called {\em A-model\/} topological field theory \cite{W:tsm,W:AB}. 
Each field can be associated to an element of $H_4(X)$ and the 3-point functions can in principle be calculated from intersection theory. If to each field $\phi_l$ we associate a divisor $D_l$, then to leading order in the large radius limit we have \begin{equation} \langle\phi_l\phi_m\phi_n\rangle\sim \#(D_l\cap D_m\cap D_n). \end{equation} (We omit the ``\#'' symbol denoting ``degree of intersection'' from now on.) These intersection numbers agree with those predicted by the monomial-divisor mirror map \cite{AGM:I,Bat:q} as explained above. Beyond this asymptotic form of the 3-point functions at large radius limit we may ask what happens if $X$ is near, rather than at, the large radius limit. In this case one may expand the 3-point function out in terms of an instanton series \cite{DSWW:}. The instantons in question are given by holomorphically embedded $\P^1$'s in $X$ and for the exact form of this instanton series one should consult \cite{AM:} (and the references therein) but it can be stated roughly as \begin{equation} \langle\phi_l\phi_m\phi_n\rangle= (D_l\cap D_m\cap D_n) +\sum_\Gamma\frac{{\bf q}^\Gamma}{1-{\bf q}^\Gamma}(D_l\cap\Gamma) (D_m\cap\Gamma)(D_n\cap\Gamma), \end{equation} where $\Gamma$ is a holomorphically embedded $\P^1$ in $X$ and ${\bf q}^\Gamma$ is a monomial in the $q_l$'s. We define the parameters $q_l$ by \begin{equation} q_l = \exp\{2\pi i(B_l+iJ_l)\}, \label{eq:smm} \end{equation} with $B$ and $J$ coming from (\ref{eq:sm2}) so that the resulting 3-point functions appear as power series in the $q_l$'s. This leads us to another way of defining areas for a point in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$. That is, we determine the values of $B_l+iJ_l=\int_{C_l}B+iJ$ required to give the correct 3-point functions when these 3-point functions are expressed as an instanton sum, i.e., as a power series in $q_l$. We then analytically continue this object over the whole moduli space. 
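The resummation structure of the instanton series can be illustrated numerically; the following sketch (with illustrative values of $B_l$ and $J_l$ only) checks that the factor ${\bf q}^\Gamma/(1-{\bf q}^\Gamma)$ attached to a curve $\Gamma$ is simply the geometric series $\sum_{k\geq1}{\bf q}^{k\Gamma}$ over its multiple covers:

```python
import cmath
import math

def q(B, J):
    """The sigma-model expansion parameter q_l = exp(2*pi*i*(B_l + i*J_l))."""
    return cmath.exp(2j * math.pi * complex(B, J))

# For a curve Gamma, q^Gamma/(1 - q^Gamma) = q + q^2 + q^3 + ...,
# i.e. the geometric resummation of the multiple covers of Gamma.
qG = q(0.1, 2.0)               # |qG| = exp(-4*pi), deep inside the convergent regime
covers = sum(qG**k for k in range(1, 30))
assert abs(qG / (1 - qG) - covers) < 1e-12
```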
{\it We will refer to this definition of area-measurement as ``the \sm\ measure''.} Note that to perform the analytical continuation of the \sm\ measure\ over ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ we need to make some branch cuts in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$. We claim that there is a natural choice and we specify this choice later. The reason that we distinguish these two definitions of area-measurement in this paper is because they are, in fact, different. That is, in general, $q_l\neq z_l$ so we will need to specify which coordinates we are using, in order to specify the measures. {}From now on we will use the symbol $B+iJ$ to refer to the {\em\sm\ measure\/} only. Returning to Witten's approach to the algebraic measure\ outlined in the previous section we see that when one takes the renormalization group flow limit of the field theory given by (\ref{eq:LG}) to the conformal field theory the algebraic measure\ must ``flow'' to the \sm\ measure. After all, (\ref{eq:LG}) is describing a sigma-model. These definitions of the algebraic measure\ and the \sm\ measure\ are, as constructed, in complete agreement at the large radius limit. Thus, with our conventions about $B+iJ$ being the \sm\ measure, we can modify (\ref{eq:mdmm}) to read \begin{equation} B_l + iJ_l = \frac1{2\pi i}\left\{\log(\pm z_l) + O(z_1,\dots,z_5)\right\}, \label{eq:mm} \end{equation} i.e., we expand $\log q_l$ as a power series for small $z_l$. Actually we have not justified the omission of a constant term in the right-hand-side of (\ref{eq:mm}) and we will return to this point briefly later. In \cite{CDGP:} a good geometrical way of picturing these natural $\sigma$-model\ coordinates\footnote{These are sometimes called the {\it flat coordinates\/} in the literature.} in terms of the mirror theory $Y$ was introduced. 
If we view ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ as the moduli space of complex structures of $Y$ then a natural set of coordinates can be introduced via the {\em Gau\ss-Manin connection}. That is, in the case of 3-folds we introduce the holomorphic 3-form $\Omega$ and a set of 3-cycles $\gamma_n$. One can then define \begin{equation} B_l + iJ_l = \frac{\displaystyle\int_{\gamma_l}\Omega} {\displaystyle\int_{\gamma_0}\Omega}. \label{eq:period} \end{equation} These coordinates are independent of the normalization of $\Omega$ and will satisfy (\ref{eq:mm}) if $\gamma_0,\gamma_l$ are suitably chosen. See \cite{Mor:math} for more information. In \cite{CDGP:} the definition of the \sm\ measure\ via (\ref{eq:period}) was used directly to obtain the correction terms in (\ref{eq:mm}). That is, certain 3-cycles were found and $\Omega$ was integrated over them. In practice this method will be unsuited to approach the problems addressed in this paper. Instead it was noticed in \cite{CDGP:} that these periods satisfied a differential equation and in \cite{Mor:PF} that one could use these differential equations to find the form of (\ref{eq:mm}) without explicitly constructing the 3-cycles $\gamma_0,\gamma_l$. There is an important qualitative feature of the local solutions to these differential equations. The cycle $\gamma_0$ has the property that $\int_{\gamma_0}\Omega$ is regular as a function of the $z_l$. Thus, comparing (\ref{eq:mm}) with (\ref{eq:period}), we find \begin{equation} 2\pi i\,\int_{\gamma_l}\Omega = \log(\pm z_l)\left(\int_{\gamma_0}\Omega\right) + O(z_1,\dots,z_5) \end{equation} which tells us that in addition to the regular solution, there is a solution with a $\log(\pm z_l)$ type growth for each $l=1,\dots,5$. Moreover, all the other solutions will involve products or powers of these log terms. All of this is discussed in more detail in \cite{Mor:cp}.
It is worthwhile noting that whereas we had no problem in using the algebraic measure\ to measure areas and volumes of $X$ and its subspaces in the classical way, the same is not true for the \sm\ measure. For example, defining \begin{equation} \operatorname{Vol}(X) = \int_X J\wedge J\wedge J, \end{equation} one would find that the volume behaves in an unsatisfactory way as one moves around the moduli space. A better definition would be some object of the form of a correlation function $\langle JJJ\rangle$. Some of the properties of the \sm\ measure\ are actually quite insidious. In the classical picture the K\"ahler form lives in the linear vector space $H^2(X)$. Although we tried to mimic this in the quantum picture by exercising great care in choosing the \sm\ measure\ coordinates, the quantum corrected moduli space is not flat and this is reflected in some non-linearity in the structure of the \sm\ measure. This underlies the reason why the ring structure given by the wedge product in $H^*(X)$ is not a natural object in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$. We will refer to this issue of non-linearity briefly again later in the paper and we hope to address further questions about this structure in future work. In this paper we will only attempt to use the \sm\ measure\ to directly measure the area of Riemann surfaces in $X$. We will also use the classical notion that if a manifold has zero volume then any subspace within it is also of zero volume. This is the only sense in which we will measure volumes, as opposed to areas of Riemann surfaces. \section{Evaluating the \sm\ measure} \label{s:meas} \subsection{The Hypergeometric System} In this section we will discuss the system of partial differential equations which allow one to find the natural $\sigma$-model\ coordinates (\ref{eq:period}) required for the \sm\ measure.
With the notation of section \ref{s:gs}, let $\alpha_k\in{\Scr A}$ have coordinates $(\alpha_{k,1},\alpha_{k,2}, \ldots,\alpha_{k,5})$ in ${\bf R}^5$. For given values of $\beta_n$ consider the following differential operators introduced in \cite{GZK:h}: \begin{equation} \eqalign{ Z_n &= \left(\sum_{k}\alpha_{k,n} a_k\frac\partial{\partial a_k}\right) -\beta_n\cr \Box_l &= \prod_{m_{l,k}>0}\left(\frac\partial{\partial a_k}\right)^{m_{l,k}} - \prod_{m_{l,k}<0}\left(\frac\partial{\partial a_k}\right)^{-m_{l,k}},\cr} \end{equation} where $n=1,\ldots,5$ and $l$ labels a relationship \begin{equation} \sum_{k}m_{l,k}\alpha_{k,n}=0, \qquad n=1,\ldots,5. \label{eq:cond1} \end{equation} Now we look for a function $\Phi(a_0,a_1,\ldots,a_9)$ such that \begin{equation} Z_n \Phi = \Box_l\Phi = 0, \qquad\forall n,l. \label{eq:PF} \end{equation} The numbers $\beta_n$ specify how $\Phi$ transforms under the $({\bf C}^*)^5$ action $x_i\to\lambda_i x_i$. In \cite{Bat:var}, it was shown that the periods in (\ref{eq:period}) satisfy (\ref{eq:PF}) for a certain choice of $\beta_n$ which we will now give. We first need to make a special choice of coordinates on the ${\bf R}^5$ space in which the points ${\Scr A}$ live. Remember that the (quasi-)homogeneity of (\ref{eq:gen}) means that these points lie in a hyperplane. Let the coordinates be chosen such that $\alpha_{k,5}=1$ for $k=0,\ldots,9$ and let the coordinates of $\alpha_0$ be $(0,0,0,0,1)$. In this basis the values of $\beta_n$ required are $\beta_n=0$ for $n=1,\ldots,4$ and $\beta_5=-1$. We can now give a general solution to the partial differential equations $Z_n\Phi=0$ but first we need to say more about $({\bf C}^*)^5$-invariant coordinates. The $a_k$ parameters transform under the $({\bf C}^*)^5$-action by the condition that (\ref{eq:gen}) is invariant. 
This means that for each condition of the form (\ref{eq:cond1}) we may introduce \begin{equation} z_l = \prod_k a_k^{m_{l,k}} \label{eq:z2a} \end{equation} which are invariant under the $({\bf C}^*)^5$-action. The fact that we are using the same notation $z_l$ for such invariants and the coordinate patches on ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ in (\ref{eq:ccs}) is not an oversight --- they can be considered the same thing as we now discuss. One of the big cones in our fan in $U$ corresponds to the Landau-Ginzburg orbifold model. We know that the local space around the Landau-Ginzburg orbifold point can be parametrized by $a_0,\ldots,a_4$ after setting $a_5,\ldots,a_9=1$. Calling the coordinates for the Landau-Ginzburg orbifold model $z_l^{\rm (LG)}$ we thus see that for $a_5,\ldots,a_9=1$ we can define $z_l^{\rm (LG)} = a_{l-1}$. We may remove the $a_5,\ldots,a_9=1$ condition by multiplying $a_{l-1}$ by the necessary powers of $a_5,\ldots,a_9$ required to achieve $({\bf C}^*)^5$ invariance; thus we have \begin{equation} z_1^{\rm (LG)}=a_0a_5^{-\frac13}a_6^{-\frac13}a_7^{-\frac16} a_8^{-\frac19}a_9^{-\frac1{18}}, z_2^{\rm (LG)}=a_1a_7^{-\frac12}a_9^{-\frac12},\ldots \end{equation} (Fractional powers appear here because the Landau-Ginzburg orbifold point is at a quotient singularity in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$.) Actually there is a technical point that should be addressed here. The above form for the Landau-Ginzburg orbifold coordinates reflects the fact that there is a ${\bf Z}_{18}$-quotient singularity at this point in the moduli space. Quotient singularities in one complex dimension are trivial in the sense that they can be removed by a change of coordinates. Our description of ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ in terms of toric geometry automatically removes such singularities.
The ${\bf Z}_{18}$-quotient singularity in the moduli space is actually only a ${\bf Z}_6$-quotient singularity once this process is performed. Thus in the toric description given one should actually use a 3-fold cover of the above coordinates. An alternative is to modify the definition of the coordinates $(p_1(\chi),p_2(\chi),\ldots,p_5(\chi))$ in terms of the rays of the secondary fan. Our original definition was in terms of the {\em first\/} point from $O$ encountered by the ray. It turns out that taking the third point rather than the first for one of the rays restores the ${\bf Z}_{18}$ singularity. (Actually this also occurs when we construct these rays as vectors from the {\em Gale transform\/} of ${\Scr A}$\, \cite{BFS:}.) In what follows we assume that $(p_1(\chi),p_2(\chi),\ldots,p_5(\chi))$ has been rescaled for one of the rays in this way. We may now use (\ref{eq:ccs}) to give $({\bf C}^*)^5$-invariant $z_l$ coordinates for each big cone in our fan. Thus for each big cone in the fan we have a set of $z_l$ coordinates and a set of conditions (\ref{eq:cond1}) given by (\ref{eq:z2a}). It is not difficult to show that the equations $Z_n\Phi=0$ have as a general solution \begin{equation} \Phi(a_0,a_1,\ldots,a_9) = a_0^{-1}f(z_1,z_2,\ldots,z_5), \label{eq:genZ} \end{equation} where $f$ is an arbitrary function and the $z_l$'s are any set of $({\bf C}^*)^5$-invariant coordinates. For each big cone we can now write $\Phi$ in the form (\ref{eq:genZ}) and write down the $\Box_l\Phi=0$ equations. Let us choose one of the cones corresponding to the large radius limit of a smooth Calabi-Yau\ manifold and write these differential equations down. We can specify such a cone by specifying a complete triangulation of ${\Scr A}$. The complete triangulations of ${\Scr A}$ are unique except for the triangle with vertices $\alpha_7,\alpha_8,\alpha_9$.
We will first concentrate on ``resolution 1'' from \cite{AGM:I} given by \begin{equation} \setlength{\unitlength}{0.007in}% \begin{picture}(265,189)(100,585) \thinlines \put(240,760){\line(-3,-4){120}} \put(120,600){\line( 1, 0){240}} \put(360,600){\line(-3, 4){120}} \put(240,760){\line( 1,-4){ 40}} \put(260,680){\line(-1, 0){ 80}} \put(180,680){\line( 1,-4){ 20}} \put(200,600){\line( 3, 4){ 60}} \put(260,680){\line( 5,-4){100}} \put(235,765){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{$\alpha_7$}}} \put(365,595){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{$\alpha_8$}}} \put(100,587){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{$\alpha_9$}}} \put(195,585){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{$\alpha_3$}}} \put(270,585){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{$\alpha_2$}}} \put(155,683){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{$\alpha_1$}}} \put(230,687){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{$\alpha_4$}}} \end{picture} \label{eq:res1} \end{equation} This model is associated with the monomial (\ref{eq:del1}) in the discriminant. Denoting the resulting coordinate patch in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ by $z_l^{(1)}$ we obtain \begin{equation} \eqalign{z_1^{(1)} &= \frac{a_1a_3a_5a_6}{a_0^3a_9}\cr z_2^{(1)} &= \frac{a_4a_9}{a_1a_3}\cr z_3^{(1)} &= \frac{a_3a_7}{a_1a_4}\cr z_4^{(1)} &= \frac{a_1a_2}{a_3a_4}\cr z_5^{(1)} &= \frac{a_3a_8}{a_2^2}.\cr} \label{eq:co1} \end{equation} (Note there are no fractional powers of $a_k$ since the large radius limit point of a smooth Calabi-Yau\ manifold is a regular point in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$, in the example we are considering.) At this point we can also state the sign in (\ref{eq:mdmm}) and (\ref{eq:mm}). We will discuss this issue further in section \ref{ss:per}. We may associate an integer $d_l$ to each coordinate $z_l$ defined as the total degree of the numerator or denominator when expressed in terms of $a_k$. 
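The degrees $d_l$ of the patch (\ref{eq:co1}) can be computed mechanically; in the sketch below each $z_l^{(1)}$ is encoded as its vector of exponents $m_{l,k}$. Since every relation satisfies $\sum_k m_{l,k}=0$ (all points of ${\Scr A}$ lie in the hyperplane $\alpha_{k,5}=1$), the numerator and denominator automatically share the same total degree:

```python
# Exponents m_{l,k} of a_k in each coordinate z_l^{(1)} of (eq:co1),
# stored as {k: exponent}.
exponents = {
    1: {1: 1, 3: 1, 5: 1, 6: 1, 0: -3, 9: -1},
    2: {4: 1, 9: 1, 1: -1, 3: -1},
    3: {3: 1, 7: 1, 1: -1, 4: -1},
    4: {1: 1, 2: 1, 3: -1, 4: -1},
    5: {3: 1, 8: 1, 2: -2},
}

# deg(numerator) = deg(denominator) because the exponents of each
# relation sum to zero, so d_l is unambiguous.
assert all(sum(m.values()) == 0 for m in exponents.values())
d = {l: sum(e for e in m.values() if e > 0) for l, m in exponents.items()}
signs = {l: (-1) ** dl for l, dl in d.items()}
print(d)       # {1: 4, 2: 2, 3: 2, 4: 2, 5: 2}
print(signs)   # all +1: no extra sign in this patch
```

All the $d_l$ come out even here, so $(-1)^{d_l}=1$ throughout this patch.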
(\ref{eq:mm}) then becomes \begin{equation} B_l + iJ_l = \frac1{2\pi i}\left\{\log\left((-1)^{d_l} z_l \right) + O(z_1,\dots,z_5)\right\} \label{eq:mdmms} \end{equation} Thus, in the present example $(-1)^{d_l}=1$ for all $l$. The $\Box_l$ operators are \begin{equation} \eqalign{ \Box_1 &=\frac\partial{\partial a_1} + \frac\partial{\partial a_3} + \frac\partial{\partial a_5} +\frac\partial{\partial a_6} - \frac{\partial^3}{\partial a_0^3} -\frac\partial{\partial a_9}\cr \Box_2 &= \frac\partial{\partial a_4} +\frac\partial{\partial a_9} -\frac\partial{\partial a_1} -\frac\partial{\partial a_3}\cr \Box_3 &= \frac\partial{\partial a_3} +\frac\partial{\partial a_7} -\frac\partial{\partial a_1} -\frac\partial{\partial a_4}\cr \Box_4 &= \frac\partial{\partial a_1} +\frac\partial{\partial a_2} -\frac\partial{\partial a_3} -\frac\partial{\partial a_4}\cr \Box_5 &= \frac\partial{\partial a_3} +\frac\partial{\partial a_8} -\frac{\partial^2}{\partial a_2^2}.\cr} \end{equation} We can now write down the differential equations we require $\Box_l\Phi=0$ by using (\ref{eq:genZ}) and (\ref{eq:co1}). Rather than attack this daunting set of equations head on we will turn our attention to sets of ordinary differential equations contained in this set. \subsection{Rational curves in $\protect{\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$.} \label{ss:rc} The points we are particularly interested in, in this paper, are the 100 points in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ which each are the limit of some geometric model, whether it be smooth Calabi-Yau, orbifold, Landau-Ginzburg orbifold, etc. As mentioned in section \ref{ss:cp} toric geometry tells us that the codimension one walls between the big cones correspond to rational curves within ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$. In fact, each such rational curve contains precisely two of our 100 limit points --- the two points given by the big cones which this wall separates. 
In our example, each big cone in $U$ has 5 codimension one faces, that is, given one of the 100 limit points, there are 5 rational curves in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ passing through this point each of which passes through another limit point. In this way, there are 250 rational curves which form a ``web'' in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ connecting all of the 100 limit points. This is shown as a polytope in figure \ref{fig:web} where lines represent the rational curves and vertices represent limit points. (Actually this is the {\em secondary polytope\/} \cite{GZK:sp}). Thus it is easily seen that one may move in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ from any of the limit points to another one by moving along these rational curves. \iffigs \begin{figure} \centerline{\epsfxsize=13cm\epsfbox{sigmod-fb.ps}} \caption{The web formed by rational curves connecting limit points.} \label{fig:web} \end{figure} \fi As we will now see, the form of the set of partial differential equations from the previous section becomes particularly straight-forward when restricted to these rational curves. We will illustrate this by an example. Let us consider one of the walls of the cone considered in (\ref{eq:res1}). 
A neighbouring cone to this one corresponds to ``resolution 4'' of \cite{AGM:I}, i.e., another smooth Calabi-Yau\ manifold this time given by the following triangulation: \begin{equation} \setlength{\unitlength}{0.007in}% \begin{picture}(265,189)(100,585) \thinlines \put(240,760){\line(-3,-4){120}} \put(120,600){\line( 1, 0){240}} \put(360,600){\line(-3, 4){120}} \put(240,760){\line( 1,-4){ 40}} \put(260,680){\line( 5,-4){100}} \put(260,680){\line(-1, 0){ 80}} \put(180,680){\line( 5,-4){100}} \put(180,680){\line( 1,-4){ 20}} \put(235,765){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{$\alpha_7$}}} \put(365,595){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{$\alpha_8$}}} \put(100,587){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{$\alpha_9$}}} \put(195,585){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{$\alpha_3$}}} \put(270,585){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{$\alpha_2$}}} \put(155,683){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{$\alpha_1$}}} \put(230,687){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{$\alpha_4$}}} \end{picture} \label{eq:res4} \end{equation} For this big cone we have the following coordinates in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$: \begin{equation} \eqalign{z_1^{(2)} &= \frac{a_1a_3a_5a_6}{a_0^3a_9}\cr z_2^{(2)} &= \frac{a_2a_9}{a_3^2}\cr z_3^{(2)} &= \frac{a_2a_7}{a_4^2}\cr z_4^{(2)} &= \frac{a_3a_4}{a_1a_2}\cr z_5^{(2)} &= \frac{a_1a_8}{a_2a_4}.\cr} \end{equation} That is, \begin{equation} \eqalign{ z_1^{(2)} &= z_1^{(1)}\cr z_2^{(2)} &= z_2^{(1)}z_4^{(1)}\cr z_3^{(2)} &= z_3^{(1)}z_4^{(1)}\cr z_4^{(2)} &= \left(z_4^{(1)}\right)^{-1}\cr z_5^{(2)} &= z_5^{(1)}z_4^{(1)}.\cr} \label{eq:tfs} \end{equation} The transition functions between these two patches given in (\ref{eq:tfs}) give us the coordinates for the rational curve connecting these limit points, i.e., put $z_1^{(1)}=z_2^{(1)}=z_3^{(1)}=z_5^{(1)}=0$ and use $z=z_4^{(1)}$ as the coordinate on the rational curve. 
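The transition functions (\ref{eq:tfs}) are easily verified numerically; the following sketch evaluates both coordinate patches at a randomly chosen point $(a_0,\ldots,a_9)$:

```python
import math
import random

def z_res1(a):
    """The coordinates z_l^{(1)} of (eq:co1) (resolution 1)."""
    return [a[1] * a[3] * a[5] * a[6] / (a[0] ** 3 * a[9]),
            a[4] * a[9] / (a[1] * a[3]),
            a[3] * a[7] / (a[1] * a[4]),
            a[1] * a[2] / (a[3] * a[4]),
            a[3] * a[8] / a[2] ** 2]

def z_res4(a):
    """The coordinates z_l^{(2)} (resolution 4)."""
    return [a[1] * a[3] * a[5] * a[6] / (a[0] ** 3 * a[9]),
            a[2] * a[9] / a[3] ** 2,
            a[2] * a[7] / a[4] ** 2,
            a[3] * a[4] / (a[1] * a[2]),
            a[1] * a[8] / (a[2] * a[4])]

a = [random.uniform(1.0, 2.0) for _ in range(10)]
u, v = z_res1(a), z_res4(a)

# The transition functions (eq:tfs) between the two patches:
assert math.isclose(v[0], u[0])
assert math.isclose(v[1], u[1] * u[3])
assert math.isclose(v[2], u[2] * u[3])
assert math.isclose(v[3], 1 / u[3])
assert math.isclose(v[4], u[4] * u[3])
```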
Now let us try to solve $\Box_l\Phi=0$ on this rational curve $C$. We are interested in finding the regular solution, and the solution with a $\log(\pm z_4)$-type growth, since their ratio gives the $\sigma$-model\ coordinate which {\it does not vanish}\/ on $C$. To eliminate the other solutions from consideration, we impose the additional equations \begin{equation} \left.z_n^{(1)} \frac{\partial\Phi} {\partial z_n^{(1)}}\right|_C=0, \quad n=1,2,3,5. \label{eq:neweqs} \end{equation} The solutions to $\Box_l\Phi=0$ with a $\log(\pm z_n)$-type growth for $n\ne4$, and those that involve powers or products of log terms, will fail to satisfy one of these new equations; thus, we will be left with just the solutions we want. Using (\ref{eq:neweqs}) immediately allows us to expand out $\Box_4$ in terms of $z$ alone: \begin{equation} \Box_4\Phi = \frac1{a_1a_2}\left\{ z\frac\partial{\partial z}z\frac\partial{\partial z} -z\left(z\frac\partial{\partial z}z\frac\partial{\partial z} \right)\right\}\Phi. \label{eq:de1} \end{equation} Now if we consider $\Phi$ as a function of $z$ alone then we have reduced the problem to an ordinary differential equation. \subsection{Perestro\u\i ka} \label{ss:per} Before trying to solve the differential equation (\ref{eq:de1}) we will try to generalize the method we followed in the last section so that we can write down the differential equation for any of the rational curves in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ joining two limit points. To do this we will first look at the difference between the triangulations of ${\Scr A}$ corresponding to the two limit points. For any $N$, consider $N+2$ points in ${\bf R}^N$ such that these points are not contained in an ${\bf R}^{N-1}$ hyperplane. Let the polytope $Q$ be the convex hull of these points (i.e., the polytope of minimal volume containing all the points). 
It follows \cite{AxGZ:} that there are precisely two triangulations of this set of points which contain at least the vertices of $Q$. The transition between two such triangulations is called a {\em perestro\u\i ka\/} \cite{GZK:d}. We will give several examples of perestro\u\i ka in later sections. The usefulness of the notion of a perestro\u\i ka is that two triangulations of ${\Scr A}$ corresponding to neighbouring cones in our fan differ by a perestro\u\i ka. That is, we can associate a perestro\u\i ka to each of the rational curves in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ we are considering. Denoting the $N+2$ points by $\alpha_s$, $s=1,\ldots,N+2$, there will be a single linear relation between these points \begin{equation} \sum_s m_s \vec\alpha_s = 0, \label{eq:linrel} \end{equation} where $\vec\alpha_s$ is the position vector of $\alpha_s$ and the $m_s$'s are relatively prime integers. From this relation we define a variable \begin{equation} z = \prod_s a_s^{m_s}, \end{equation} and a differential operator \begin{equation} \Box = \prod_{m_s>0}\left(\frac\partial{\partial a_s}\right)^{m_s} - \prod_{m_s<0}\left(\frac\partial{\partial a_s}\right)^{-m_s}. \end{equation} Setting \begin{equation} \Phi(a_0,a_1,\ldots) = a_0^{-1}f(z), \label{eq:Phf} \end{equation} one can now write $\Box\Phi=0$ as an ordinary differential equation with $z$ as the only dependent variable. We claim this construction generalizes that of the previous section. That is, for any rational curve joining two limit points in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ we can obtain an ordinary differential equation for the periods on $X$ in terms of $z$, the coordinate on the rational curve. Note that this ordinary differential equation will always be of {\em hypergeometric \/} type. 
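The construction above can be made concrete in the simplest case. In the sketch below the four points $\alpha_1,\ldots,\alpha_4$ of the flop perestro\u\i ka discussed below are given a hypothetical planar embedding (homogenized so that all points lie in a common hyperplane); only the unique linear relation among them, extracted as a null space, enters the construction of $z$ and $\Box$:

```python
import sympy as sp

# A hypothetical embedding of the four points alpha_1,...,alpha_4 of the
# flop perestroika (one row per point, homogenized by a final 1 so that
# all points lie in a common hyperplane).  Only the unique linear
# relation among the points enters the construction.
pts = sp.Matrix([[0, 0, 1],   # alpha_1
                 [1, 1, 1],   # alpha_2
                 [1, 0, 1],   # alpha_3
                 [0, 1, 1]])  # alpha_4

# N+2 points spanning R^N admit exactly one relation sum_s m_s*alpha_s = 0.
relations = pts.T.nullspace()
assert len(relations) == 1

m = sp.Matrix([-1, -1, 1, 1])          # alpha_3 + alpha_4 - alpha_1 - alpha_2 = 0
assert pts.T * m == sp.zeros(3, 1)
# The relation yields z = a_3*a_4/(a_1*a_2) and the corresponding box
# operator d^2/(da_3 da_4) - d^2/(da_1 da_2).
```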
The hypergeometric ordinary differential equation in question is defined on $\P^1$ and has solutions with possible singularities or branch points at three points which we will call $z=0,1,\infty$. The points 0 and $\infty$ are the two limit points which the rational curve connects in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$. $z=1$ is the only other singular point and is thus where the discriminant locus of section \ref{ss:cp} cuts this curve. To be more precise, it is usually the case that the whole rational curve is contained in the discriminant locus. In this case $z=1$ is the point where another irreducible component of the discriminant locus cuts\footnote{As we will see, it is often the case that a component touches the curve tangentially rather than cutting transversely or that more than one component may pass through $z=1$.} this curve. Given this form of distinguished points on this curve we can now specify our choice of branch cuts to perform any analytic continuation. Each of the points $z=0$ and $z=\infty$ is taken to represent some limit point around which correlation functions may be expanded in some power series. This power series fails when one reaches $z=1$. We thus cut from $z=0$ to $z=1$ and from $z=\infty$ to $z=1$ to reflect this structure. (Any other choice would be artificially unsymmetric.) We extend these cuts from the rational curves on the boundary into the interior of the moduli space, to form a fundamental domain. This choice of fundamental domain is implicit in all that follows in this paper. In order to put the singularities at $z=1$ we will need to rescale the $z_l$'s introduced earlier. The sign of this rescaling is the source of the $(-1)^{d_l}$ factors in the monomial-divisor mirror map. We may think of this sign as arising from attempting to fix the mirror map so that the number of lines on a Calabi-Yau\ manifold is positive.
As mentioned earlier, a three-point function in our conformal field theory may be expanded as an instanton sum in $q_l$ where the coefficients in this series give information regarding the numbers of holomorphically embedded $\P^1$'s on $X$. In particular the sub-leading term is expected to be the number of lines (that is, the number of holomorphically embedded $\P^1$'s in $X$ the homology class of whose image is some fixed integral generator of $H_2(X,{\bf Z})$). To be more precise, in some cases one may have families of lines depending on parameters, and then the ``number of lines'' must be interpreted by means of the top Chern class of the parameter space of the family \cite{Witten:tcc,AM:}. It is generally believed \cite{Gromov:,McD:} that in such a case a deformation of complex structure to a generic almost complex structure will yield a discrete set of lines. However, some of these lines may count negatively\footnote{We thank E.~Witten for pointing out such a possibility to us.} and thus we cannot use this strategy to fix the sign of $z$ in the monomial-divisor mirror map in all cases. Instead, we first note that if there were a three-point function whose expansion in $q_l$ had all coefficients positive, then any pole at the edge of convergence of such a series would occur when $q_l$ is real and positive. Since $q_l=\pm z_l$ to leading order, we find the sign required in such a case once we know how to rescale $z_l$ to give a pole at $z=1$. This is the sign choice $(-1)^{d_l}$ that we specified earlier. By looking at perestro\u\i ka such that one limit point corresponds to a large radius limit smooth Calabi-Yau\ manifold, one can show that this sign choice is consistent with the signs in the principal discriminant given by \cite{GZK:d}. Unfortunately for a general three-point function, not all of the coefficients in the $q_l$-expansion need be positive. 
To maintain consistency with \cite{GZK:d} we thus {\em conjecture\/} that the sign given by $(-1)^{d_l}$ is always the correct choice even when negative coefficients in the expansion occur. That is, we assume that the pole in the $q_l$-expansion of any 3-point function occurs for a real and positive $q_l$ (i.e., $B_l=0$). If our conjecture is wrong and we were to pick the wrong sign for $z_l$ then we would be counting the number of lines on $X$ with the wrong sign. In summary we thus do the following. Given the definition of $z_l$ in (\ref{eq:z2a}) we find the constant by which we need to rescale $z_l\to z$ to put a pole at $z=1$. The sign of this factor is $(-1)^{d_l}$. This sign is absorbed in the monomial-divisor mirror map so that we only take the absolute value of this scale factor in our definition of $z$. Now we will apply this construction to several examples. It is important to note that although we are describing the following examples from the perspective of our five-parameter example, in each case we only actually study the part of the toric fan specific to the transformation we look at. Thus the following results are clearly valid for any Calabi-Yau\ moduli space that is studied this way. In fact, in string theory, we expect results concerning flops, blowing-up orbifolds etc., to be dependent only on the local geometry of $X$. This means that the following examples should {\em not\/} be considered dependent on the global structure of $X$. \subsection{The flop} Recall that a flop is the transformation of a manifold into a (possibly) topologically different manifold which replaces a $\P^1$ with another $\P^1$. This occurs by blowing down a $\P^1$ in the original manifold to form a singular space with a {\em double point}. This double point can then be resolved by blowing up to give a $\P^1$ in two different ways. One way returns the original manifold and the other way yields another manifold. 
In general a flop need not take a K\"ahler manifold to another K\"ahler manifold. In this paper however we are moving from one manifold to another directly by a change of K\"ahler form and so in this context we are guaranteed a K\"ahler flop. Any manifold which was a non-K\"ahler flop of $X$ would not have a big cone in the secondary fan. The following perestro\u\i ka is a {\em flop}.
\begin{equation}
\setlength{\unitlength}{0.005in}%
\begin{picture}(370,110)(135,675)
\thinlines
\put(140,780){\circle*{10}}
\put(140,680){\circle*{10}}
\put(240,680){\circle*{10}}
\put(240,780){\circle*{10}}
\put(400,780){\circle*{10}}
\put(400,680){\circle*{10}}
\put(500,680){\circle*{10}}
\put(500,780){\circle*{10}}
\put(140,780){\line( 0,-1){100}}
\put(140,680){\line( 1, 0){100}}
\put(240,680){\line( 0, 1){100}}
\put(240,780){\line(-1, 0){100}}
\put(400,780){\line( 0,-1){100}}
\put(400,680){\line( 1, 0){100}}
\put(500,680){\line( 0, 1){100}}
\put(500,780){\line(-1, 0){100}}
\put(140,780){\line( 1,-1){100}}
\put(400,680){\line( 1, 1){100}}
\put(280,730){\vector(-1, 0){ 0}}
\put(280,730){\vector( 1, 0){ 80}}
\end{picture}
\end{equation}
This was precisely the perestro\u\i ka considered in section \ref{ss:rc}. That is, we may specify it by the linear relation (using the numbering conventions of our example)
\begin{equation}
\vec\alpha_3+\vec\alpha_4-\vec\alpha_1-\vec\alpha_2=0.
\end{equation}
The ODE associated with the flop, as we saw (in this case no rescaling of the $z$ parameter is required), is
\begin{equation}
\left(z\frac{d}{dz}\right)^2f - z\left(z\frac{d}{dz}\right)^2f=0.
\end{equation}
This has a general global solution
\begin{equation}
f = C_1 + C_2\log(z),
\end{equation}
which is also a general local solution for each $z\neq1$. We can now follow \cite{Mor:PF} in finding the \sm\ measure\ in terms of $z$.
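As a quick sanity check (not part of the original argument), the claim that $f=C_1+C_2\log z$ solves this ODE identically can be verified symbolically; the sketch below uses Python's sympy as an illustrative tool:

```python
import sympy as sp

z, C1, C2 = sp.symbols('z C1 C2')
theta = lambda g: z*sp.diff(g, z)        # the logarithmic derivative z d/dz

f = C1 + C2*sp.log(z)                    # claimed general solution
lhs = theta(theta(f)) - z*theta(theta(f))
assert sp.simplify(lhs) == 0             # the flop ODE is satisfied identically
```

Here $\theta f$ is constant and $\theta^2 f$ vanishes, so both terms of the ODE vanish separately.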
The component of the \sm\ measure\ we find is, of course, the part that varies as we move along the rational curve in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$. To find this K\"ahler form, we take the solution for $f(z)$ that behaves to leading order like $\log(z)$ at $z=0$ and divide it by the solution which is $2\pi i$ to leading order at $z=0$. That is, we find $B+iJ$ as the ratio of two solutions such that equation (\ref{eq:mm}) is obeyed. In this case this is a trivial task since we have solutions which are exactly a constant and exactly $\log(z)$. Therefore \begin{equation} B+iJ = \frac1{2\pi i}\log(z). \end{equation} That is, the \sm\ measure\ is the {\bf same} as the algebraic measure. To be more precise {\it when one performs a flop of one Calabi-Yau\ manifold into another and holds all the other components of the K\"ahler form at large radius limit then the \sm\ measure\ coincides with the algebraic measure}. In particular, this implies that the area of the flopped $\P^1$ does attain the value zero in the $\sigma$-model\ definition, just as it does in the algebraic definition. In this setting, therefore, string theory does not supply us with a nonzero lower bound. Of course, the size of the whole manifold is being kept infinite (i.e., any Riemann surface in a class other than the one being flopped has infinite area) and it is only a part of the space which shrinks to zero. The flop is a bit too trivial to show the full singularity structure in the differential equation. One will find however that many three-point functions will have a pole at $B+iJ=0$. The fact that the algebraic measure\ and the \sm\ measure\ coincide in the region of ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ considered in this section has some interesting consequences for the 3-point functions of the superconformal field theory (which we developed in discussions with Witten \cite{W:phase}). 
It is known \cite{OP:flop} that in classical geometry, the K\"ahler cones of two manifolds related by a flop fit together in ${\bf R}^{h^{1,1}}$ by touching each other along the wall of each K\"ahler cone (where the area of the flopped $\P^1$ becomes zero). This is equivalent to saying that, so far as K\"ahler form data is concerned, the class represented by the flopped $\P^1$ has negative area in the flopped manifold. Since the algebraic measure\ generates the same cone structure as the classical K\"ahler form, the same considerations must also work for the algebraic measure\ and thus also for the \sm\ measure. It is important to bear in mind that the homology class of the $\P^1$ present after the flop is the {\it negative\/} of the homology class present before the flop. Thus, although the class of the original $\P^1$ acquires a negative area as the wall between cones is traversed, the post-flop $\P^1$ will have a positive area in the new region (since it belongs to the opposite class). In the portion of ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ we consider, all Riemann surfaces in $X$, except for the ones being flopped, are of infinite area. Call the finite set of $\P^1$'s being flopped $C_\beta$. (These are all in the same homology class.) A 3-point function is then given by \begin{equation} \langle \phi_1\phi_2\phi_3\rangle= (D_1\cap D_2\cap D_3) + \sum_\beta \frac{q}{1-q}(D_1\cap C_\beta) (D_2\cap C_\beta)(D_3\cap C_\beta), \label{eq:iflp} \end{equation} where $q=\exp\{2\pi i(B+iJ)\}$ and $D_n$ is a divisor representing the field $\phi_n$ in the usual way. Let us consider the Calabi-Yau\ manifold $X_1$ at large radius limit. In this limit $q\to0$ and so the sum in (\ref{eq:iflp}) vanishes. Let us now flop $X_1$ along the $C_\beta$'s to obtain the large radius Calabi-Yau\ manifold $X_2$. Given the discussion above, this is equivalent to sending $J\to-\infty$, i.e., $q\to\infty$. 
We can take the proper transform of the divisors $D_n$ in $X_1$ to obtain divisors in $X_2$ which we denote by the same symbol. The fundamental equation which relates the intersection numbers before and after the flop is: \begin{equation} (D_1\cap D_2\cap D_3)_2 = (D_1\cap D_2\cap D_3)_1 - \sum_\beta (D_1\cap C_\beta)_1\,(D_2\cap C_\beta)_1\,(D_3\cap C_\beta)_1, \label{eq:i12} \end{equation} where the subscripts denote in which manifold the intersection numbers are calculated. (Note that $C_\beta$ is on $X_1$; we will denote the post-flop $\P^1$'s by $C_\beta'$.) Equation (\ref{eq:i12}) is a statement in classical geometry which is straightforward to verify. For example, if we assume that $D_1$ meets $C_\beta$ transversely at $k$ points, while $D_2$ and $D_3$ contain $C_\beta$ with multiplicities $l$ and $m$, then $(D_1\cap D_2\cap D_3)_1=klm$ while $(D_1\cap C_\beta)_1=k$, $(D_2\cap C_\beta)_1=-l$, and $(D_3\cap C_\beta)_1=-m$. On the other hand, after flopping (see figure \ref{fig:flop}), $D_2$ and $D_3$ are disjoint (at least locally near $C_\beta'$) so that $(D_1\cap D_2\cap D_3)_2=0$, verifying (\ref{eq:i12}) in this case. 
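The worked example just given can be checked with a few lines of computer algebra; this is a sketch using sympy, with $k$, $l$, $m$ the symbolic multiplicities from the text:

```python
import sympy as sp

k, l, m = sp.symbols('k l m')
# pre-flop intersection data on X1: D1 meets C transversely in k points,
# while D2 and D3 contain C with multiplicities l and m
D1C, D2C, D3C = k, -l, -m
triple_X1 = k*l*m                        # (D1 . D2 . D3) computed on X1
triple_X2 = triple_X1 - D1C*D2C*D3C      # eq. (i12) for a single flopped curve
assert sp.simplify(triple_X2) == 0       # D2 and D3 are disjoint near C' on X2
```

The vanishing of `triple_X2` reproduces the statement that $D_2$ and $D_3$ become (locally) disjoint after the flop.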
\begin{figure} \setlength{\unitlength}{0.01in}% $$\begin{picture}(349,180)(75,690) \thinlines \put( 80,800){\line( 1,-1){ 40}} \put(120,760){\line(-1,-1){ 40}} \put(120,760){\line( 1, 0){ 40}} \put(160,760){\line( 1, 1){ 40}} \put(160,760){\line( 1,-1){ 40}} \put(320,700){\line( 1, 1){ 40}} \put(360,740){\line( 1,-1){ 40}} \put(360,740){\line( 0, 1){ 40}} \put(360,780){\line(-1, 1){ 40}} \put(360,780){\line( 1, 1){ 40}} \put(350,800){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{$D_2$}}} \put(120,735){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{$D_3$}}} \put(400,755){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{$C_\beta^\prime$}}} \put(140,695){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{$C_\beta$}}} \put(395,760){\vector(-1, 0){ 30}} \put(150,715){\vector( 0, 1){ 40}} \multiput(210,760)(7.82609,0.00000){12}{\line( 1, 0){ 3.913}} \put(300,760){\vector( 1, 0){0}} \put(210,760){\vector(-1, 0){0}} \put( 75,755){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{$D_1$}}} \put(125,775){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{$D_2$}}} \put(350,705){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{$D_3$}}} \put(325,755){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{$D_1$}}} \put(240,765){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{flop}}} \put(125,850){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{$X_1$}}} \put(350,850){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{$X_2$}}} \end{picture}$$ \caption{A flop.} \label{fig:flop} \end{figure} If we now calculate the 3-point function (\ref{eq:iflp}) using (\ref{eq:i12}), we find \begin{equation} \eqalign{ \langle \phi_1\phi_2\phi_3\rangle_1&= (D_1\cap D_2\cap D_3)_2 + \sum_\beta \left(\frac{q}{1-q}+1\right) (D_1\cap C_\beta)_1(D_2\cap C_\beta)_1(D_3\cap C_\beta)_1 \cr &=(D_1\cap D_2\cap D_3)_2 + \sum_\beta \frac{q^{-1}}{q^{-1}-1} (-D_1\cap C_\beta')_2(-D_2\cap C_\beta')_2(-D_3\cap C_\beta')_2 , } \label{eq:instcalc} \end{equation} where we have used $(D\cap C_\beta)_1=(-D\cap C_\beta')_2$. 
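The elementary identity used in passing from the first to the second line of this calculation, $\frac{q}{1-q}+1=\frac{q^{-1}}{q^{-1}-1}$, can be confirmed symbolically (a one-line sympy check, included only as a sketch):

```python
import sympy as sp

q = sp.symbols('q')
lhs = q/(1 - q) + 1
rhs = (1/q)/(1/q - 1)
assert sp.simplify(lhs - rhs) == 0   # both equal 1/(1-q)
```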
Noting that the change in sign of homology class $[C_\beta']=-[C_\beta]$ demands that we replace $q$ by $q^{-1}$, we conclude that $\langle \phi_1\phi_2\phi_3\rangle_1=\langle \phi_1\phi_2\phi_3\rangle_2$, as expected.
\subsection{The $\hbox{\bf Z}_2$-quotient singularity}
\label{ss:Z2}
The following is the only perestro\u\i ka in one dimension:
\begin{equation}
\setlength{\unitlength}{0.005in}%
\begin{picture}(310,150)(195,655)
\thinlines
\put(200,800){\circle*{10}}
\put(200,660){\circle*{10}}
\put(500,660){\circle*{10}}
\put(500,800){\circle*{10}}
\put(200,730){\circle*{10}}
\put(500,730){\circle{10}}
\put(200,800){\line( 0,-1){140}}
\put(500,800){\line( 0,-1){140}}
\put(280,730){\vector(-1, 0){ 0}}
\put(280,730){\vector( 1, 0){140}}
\end{picture}
\label{eq:Z2}
\end{equation}
In this picture the network on the left has 2 lines whereas on the right the middle point is ignored and there is only one line. In our example we have a few such configurations, e.g.,
\begin{equation}
\vec\alpha_3+\vec\alpha_8-2\vec\alpha_2=0.
\end{equation}
Indeed this perestro\u\i ka can be applied to (\ref{eq:res1}) by removing the point $\alpha_2$ from the triangulation. The model of $X$ thus obtained has a curve of ${\bf Z}_2$ quotient singularities \cite{AGM:II}. The operation (\ref{eq:Z2}) is (moving from right to left) precisely the resolution of a ${\bf Z}_2$ quotient singularity in ${\bf C}^2$ where the ${\bf Z}_2$ action in ${\bf C}^2$ is $(z_1,z_2)\mapsto(-z_1,-z_2)$. The rational curve associated to the perestro\u\i ka (\ref{eq:Z2}) thus joins a limit point of a space with a ${\bf Z}_2$ quotient singularity to the limit point of a space where such a singularity has been blown-up (to infinite size). To put a branch-point at $z=1$ we define
\begin{equation}
z = 4\frac{a_3a_8}{a_2^2},
\label{eq:thisZ2}
\end{equation}
i.e., we have introduced a factor of 4.
The associated ODE is \begin{equation} \left(z\frac{d}{dz}\right)^2f - z\left(z\frac{d}{dz}\right) \left(z\frac{d}{dz}+\ff12\right)f=0. \end{equation} This has a general solution \begin{equation} f=C_1 + C_2\log\left(\frac{2-z-2\sqrt{1-z}}z\right). \end{equation} Clearly the term which is constant at $z=0$ is again exactly constant. Expanding the second term around $z=0$ we obtain (assuming the square root to be positive) \begin{equation}\label{eq:theabove} \log\left(\frac{2-z-2\sqrt{1-z}}z\right) = \log(z/4)+{\ff {1}{2}}z+{\ff {3}{16}}z^{2}+{\ff {5} {48}}z^{3}+{\ff {35}{512}}z^{4}+{\ff {63}{1280}}z^{5}+O\left (z^{6 }\right). \end{equation} Because of our rescaling of the $z$ variable we now need to look for a solution which behaves like $\log(z/4)$ to leading order. This is simply given by (\ref{eq:theabove}). Thus we obtain \begin{equation} B+iJ=\frac1{2\pi i}\log\left(\frac{2-z-2\sqrt{1-z}}z\right). \end{equation} This therefore gives an example where the \sm\ measure\ and the algebraic measure\ do {\em not\/} agree. An interesting question we can ask is what is the value of $B+iJ$ at the orbifold limit point, i.e., when $z\to\infty$. The component of the K\"ahler form we are studying is the class controlling the areas of Riemann surfaces which lie in the exceptional divisor resulting from blowing-up this singularity. That means that in some sense we are looking at the volume of this exceptional divisor. Na\"\i vely of course from classical geometry we assume that this volume is zero at the orbifold point but we see that the algebraic measure\ would have us believe that the relevant areas are $-\infty$. To find what the \sm\ measure\ tells us let us introduce the variable \begin{equation} \psi=z^{-1/2} \end{equation} and then carefully rewrite $B+iJ$ in this variable assuming $0<\arg(\psi) <\pi$ to obtain \begin{equation} B +iJ=-\frac1\pi\cos^{-1}\psi, \end{equation} where we take the branch corresponding to $\cos^{-1}\psi= \frac\pi2-\psi+O(\psi^3)$. 
(This is consistent with our earlier choice of branch cuts.) Thus we see that the orbifold, given by $\psi=0$ corresponds to $B=-1/2$ and $J=0$. The fact that $J=0$ means that the \sm\ measure\ agrees with the classical volume, i.e., the volume of the exceptional divisor (and thus the areas of the Riemann surfaces within it) before you blow up a singularity is zero. More curious is the value $B=-1/2$ at the orbifold point which would appear to have no classical explanation. Note that we have measured the volume of the exceptional divisor at the limit point of the orbifold in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$, that is, all sizes except that associated with the exceptional divisor are infinite. It is an interesting question to see whether the exceptional divisor has non-zero volume in the case of an orbifold {\em not\/} at an otherwise large radius limit. We hope to address this question in future work.\footnote{Recently a two parameter moduli space has been studied in detail \cite{CDFKM:I} which should help address this question.} Proponents of a universal ``$R\to1/R$'' symmetry in the moduli space of string vacua should take note that in passing from the smooth blown-up Calabi-Yau\ manifold to the orbifold we have been able to shrink the Riemann surfaces within the exceptional divisor completely down to zero size without being able to identify this with some equivalent large radius model. There is no symmetry between the orbifold points and any other points in the moduli space. Thus it would appear that string theory does {\em not\/} remove all distances less than the Planck scale from a moduli space. Some parts of a target space can become as small as they wish at least so long as the rest of the target space is at large radius limit. In this example in moving from the algebraic measure\ to the \sm\ measure\ we have removed negative areas.
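Both the series expansion (\ref{eq:theabove}) and the closed form $B+iJ=-\frac1\pi\cos^{-1}\psi$ can be checked numerically. The sketch below uses Python's mpmath (an illustrative tool, not part of the original text); since the branch conventions above were fixed by hand, the second check simply tries both square-root branches and requires that one of them match:

```python
from mpmath import mp, mpf, mpc, log, sqrt, acos, pi

mp.dps = 30

# series expansion of log((2 - z - 2*sqrt(1-z))/z) about z = 0
z = mpf('0.01')
exact = log((2 - z - 2*sqrt(1 - z))/z)
series = (log(z/4) + z/2 + 3*z**2/16 + 5*z**3/48
          + 35*z**4/512 + 63*z**5/1280)
assert abs(exact - series) < mpf('1e-12')

# closed form in psi = z**(-1/2), taken with 0 < arg(psi) < pi
psi = mpc('0.3', '0.4')
zz = psi**-2
branch1 = log((2 - zz - 2*sqrt(1 - zz))/zz)/(2j*pi)
branch2 = log((2 - zz + 2*sqrt(1 - zz))/zz)/(2j*pi)   # other sqrt branch
rhs = -acos(psi)/pi
assert min(abs(branch1 - rhs), abs(branch2 - rhs)) < mpf('1e-20')
```

Writing $\psi=\cos\theta$ one finds $(2-z\mp2\sqrt{1-z})/z=e^{\mp2i\theta}$, which is the content of the second assertion.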
That is, $J\geq0$ for all points on the rational curve in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$. We will discuss this further after looking at some more orbifolds. The branch-point of the general solution of this hypergeometric equation is at $z=\psi=1$, i.e., $B+iJ=0$. This is where many three-point functions will diverge in the conformal field theory. This shows that the only difference between an orbifold, where string theory is known to be well behaved, and a ``bad'' conformal theory is the value of the $B$-field since in both cases $J=0$, i.e., the volume of the exceptional divisor is zero. Let us now look at the form of the discriminant for this ${\bf Z}_2$ resolution in the context of our example. The monomial for the large-radius limit resolution is given by (\ref{eq:del1}) and one may derive the monomial in $\Delta_p$ corresponding to the neighbouring cone in the secondary fan, which corresponds to the Calabi-Yau\ space with a curve of ${\bf Z}_2$-quotient singularities, as
\begin{equation}
r_\xi\delta_\xi = -64a_0^{18}a_1^{8}a_3^{11}a_4^{10} a_5^{12}a_6^{12}a_7^{6}a_8^{9}a_9^{4}.
\label{eq:del1-2} \end{equation} If we assert that the discriminant locus intersects our rational curve in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ at $z=1$ then we see immediately that $\Delta_p$ for points in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ near this rational curve is given by \begin{equation} \eqalign{\Delta_p&\simeq a_0^{18}a_1^{8}a_2^{6}a_3^{8}a_4^{10} a_5^{12}a_6^{12}a_7^{6}a_8^{6}a_9^{4}\,(1-z)^3\cr &= a_0^{18}a_1^{8}a_2^{6}a_3^{8}a_4^{10} a_5^{12}a_6^{12}a_7^{6}a_8^{6}a_9^{4} - 12a_0^{18}a_1^{8}a_2^{4}a_3^{9}a_4^{10} a_5^{12}a_6^{12}a_7^{6}a_8^{7}a_9^{4} \cr &\qquad\qquad+ 48a_0^{18}a_1^{8}a_2^{2}a_3^{10}a_4^{10} a_5^{12}a_6^{12}a_7^{6}a_8^{8}a_9^{4} - 64a_0^{18}a_1^{8}a_3^{11}a_4^{10} a_5^{12}a_6^{12}a_7^{6}a_8^{9}a_9^{4}.\cr} \end{equation} This shows how important terms from $\widetilde\Delta_p$ are on the rational curves in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$. In this case we derive two terms in $\widetilde\Delta_p$, i.e., terms in $\Delta_p$ which could not be obtained by the methods of section \ref{ss:cp}. In this paper we will usually use the word ``orbifold'' to refer to a space whose only singularities are locally of the form of quotient singularities. It is more conventional when talking about conformal field theories to consider an orbifold to be {\em globally\/} of the form of a quotient of a smooth manifold (or conformal field theory). In this case one can determine the massless spectrum of the theory to be composed of fields from the original smooth theory combined with twisted fields from the quotient singularities in the new space. (Massive fields can also appear from group elements with no fixed points.) The specific example of a curve of ${\bf Z}_2$-quotient singularities we have considered for $z$ given by (\ref{eq:thisZ2}) cannot be globally written as an orbifold. 
Despite this fact we claim that we can still relate more conventional conformal field theory ideas to this orbifold as we now argue. Consider the Landau-Ginzburg orbifold theory given by the minimal triangulation of ${\Scr A}$ given by the simplex $\alpha_5\alpha_6\alpha_7\alpha_8\alpha_9$. This is an orbifold theory in the conformal field theory sense and thus has a ``quantum''-symmetry group \cite{Vafa:qs} isomorphic to the group by which we quotiented the original Landau-Ginzburg model. This ${\bf Z}_{18}$ symmetry is given by $x_4\to\exp(2\pi i/18)x_4$. The monomial $a_1x_2^3x_4^9$ transforms as a faithful representation of a ${\bf Z}_2$ subgroup of this group. Thus if this monomial is added to the Landau-Ginzburg superpotential we would break the ${\bf Z}_2$ symmetry. This is precisely the conformal field theory picture of resolving a ${\bf Z}_2$-quotient singularity --- we add the twisted marginal operator $a_1x_2^3x_4^9$ into the action to break the discrete symmetry. In terms of toric geometry this resolution of a singularity in $X$ corresponds to a subdivision of the fan representing $X$ by a star subdivision (see for example \cite{AGM:II}). Such a subdivision adds a point in ${\Scr A}$ into the triangulation. By the monomial-divisor mirror map, this point in ${\Scr A}$ is precisely the point that represents the monomial which acts as the twisted marginal operator --- i.e., $\alpha_1$. In our example we do not have a global quotient singularity but it is locally of the form of a quotient singularity and thus we expect at least the massless part of the conformal field theory to behave as if it were an orbifold. This is because massless twist fields can be considered to be localized around the fixed points. Thus we claim that for the transition given by (\ref{eq:thisZ2}) the ``twisted'' marginal operator is the monomial corresponding to the point in ${\Scr A}$ added into the triangulation by the perestro\u\i ka --- namely $a_2x_3^6x_4^6$. 
Indeed if we follow the approach of \cite{AGM:II} to find which superpotential, i.e., which values of $a_k$, give the relevant space with a ${\bf Z}_2$-quotient singularity we find this consistent with $a_2=0$, i.e., this marginal operator switched off. Take the theory with a quotient singularity and perform a perturbative expansion for small values of $a_2$ to blow-up the singularity (much along the lines of \cite{Cve:orb} for example). Our 3-point functions will be in the form of a power series in $a_2$, i.e., $\psi =a_2/\sqrt{4a_3a_8}$ if we write things in a $({\bf C}^*)^5$-invariant way. We know that the discriminant locus occurs at $\psi=1$ and so this marks the boundary of the circle of convergence for such a power series. In particular such a perturbative method cannot reach the smooth target space ($\psi\to\infty$) before breaking down. This is one way of viewing the ``phases'' picture of the moduli space \cite{W:phase}. In one region of moduli space containing the orbifold theory we may use perturbation theory in twisted (perhaps only in the local sense) marginal operators to calculate all 3-point functions. This region is neighboured by another region containing the point corresponding to the quotient singularity having been resolved with an exceptional divisor of infinite size. Any 3-point function in this region may be calculated by an expansion in terms of instantons given by $\P^1$'s in the exceptional divisor. On the boundary between these two regions the twisted marginal field's coefficient becomes too large for the twisted field perturbation theory to converge and on the other hand the exceptional divisor becomes too small for the instanton expansion on $\P^1$'s to converge. 
\subsection{The $\hbox{\bf Z}_3$-quotient singularity}
\label{ss:Z3}
Consider the following perestro\u\i ka in ${\bf R}^2$:
\begin{equation}
\setlength{\unitlength}{0.005in}%
\begin{picture}(525,130)(85,665)
\thinlines
\put(170,790){\circle*{10}}
\put(250,670){\circle*{10}}
\put( 90,670){\circle*{10}}
\put(170,710){\circle*{10}}
\put(170,790){\line( 2,-3){ 80}}
\put(250,670){\line(-1, 0){160}}
\put(170,790){\line(-2,-3){ 80}}
\put( 90,670){\line( 2, 1){ 80}}
\put(170,710){\line( 0, 1){ 80}}
\put(170,710){\line( 2,-1){ 80}}
\put(525,790){\circle*{10}}
\put(605,670){\circle*{10}}
\put(445,670){\circle*{10}}
\put(525,710){\circle{10}}
\put(280,730){\vector(-1, 0){ 0}}
\put(280,730){\vector( 1, 0){140}}
\put(525,790){\line( 2,-3){ 80}}
\put(605,670){\line(-1, 0){160}}
\put(525,790){\line(-2,-3){ 80}}
\end{picture}
\end{equation}
A fan based on these triangles gives the toric description of an isolated ${\bf Z}_3$-quotient singularity and its blow-up. The quotient singularity in ${\bf C}^3$ is given by the action $(z_1,z_2,z_3)\mapsto(\omega z_1,\omega z_2,\omega z_3)$ where $\omega=\exp(2\pi i/3)$. In our example this perestro\u\i ka can occur based on the following relation
\begin{equation}
\vec\alpha_1+\vec\alpha_7+\vec\alpha_8-3\vec\alpha_4=0.
\end{equation}
One of the smooth models of $X$ (resolution number 5 in \cite{AGM:I}) admits this perestro\u\i ka and so one of the big cones neighbouring this cone corresponds to a target space that has acquired a ${\bf Z}_3$ quotient singularity. Defining
\begin{equation}
z = -27\frac{a_1a_7a_8}{a_4^3},
\end{equation}
we obtain the differential equation
\begin{equation}
\left(z\frac{d}{dz}\right)^3f - z\left(z\frac{d}{dz}\right) \left(z\frac{d}{dz}+\ff13\right)\left(z\frac{d}{dz}+\ff23\right) f=0.
\label{eq:Z3h}
\end{equation}
Again the solution that is regular at $z=0$ is just a constant. This time however the solution that behaves like $\log(z)$ cannot be determined in terms of elementary functions.
To find the required solution of (\ref{eq:Z3h}) we need to turn to the theory of hypergeometric functions. Indeed, with the exception of the flop and the ${\bf Z}_2$-orbifold, all the ODE's we obtain for a perestro\u\i ka will require hypergeometric function theory to find the \sm\ measure. Recall \cite{Slater:hyp} that the hypergeometric function $\F{N+1}N$ is defined by the infinite series
\begin{equation}
\eqalign{
\F{N+1}N(a_1,a_2,\ldots,a_{N+1};\,b_1,b_2,&\ldots,b_N;\,z)= 1+\frac{a_1a_2\ldots a_{N+1}}{b_1b_2\ldots b_{N}} \frac{z}{1!}\cr
&+\frac{a_1(a_1+1)a_2(a_2+1)\ldots a_{N+1}(a_{N+1}+1)} {b_1(b_1+1)b_2(b_2+1)\ldots b_{N}(b_{N}+1)}\frac{z^2}{2!}+\ldots,\cr
}
\end{equation}
where $a_n,b_n$ are complex numbers (not to be confused with any previous use of these symbols). This series converges for $|z|<1$. The following ODE has as a solution $f(z)=\F{N+1}N(a_1,\ldots;\,b_1,\ldots;\,z)$:
\begin{equation}
\eqalign{
\Biggl\{z\frac{d}{dz}\left(z\frac{d}{dz}+b_1-1\right)& \left(z\frac{d}{dz}+b_2-1\right)\ldots \left(z\frac{d}{dz}+b_N-1\right)\cr
&-z\left(z\frac{d}{dz}+a_1\right)\left(z\frac{d}{dz}+a_2\right) \ldots\left(z\frac{d}{dz}+a_{N+1}\right) \Biggr\}f=0.\cr}
\label{eq:hDE}
\end{equation}
All the differential equations encountered when finding the \sm\ measure\ are of the form (\ref{eq:hDE}). Thus applying hypergeometric theory to our differential equation (\ref{eq:Z3h}) we obtain the solution $f(z)=\F32(0,\ff13,\ff23;\,1,1;\,z)=1$. Hence we recover the solution we already knew. To find the other solution we require, we substitute
\begin{equation}
g(z) = z\frac{d}{dz}f(z)
\end{equation}
into (\ref{eq:Z3h}). This leads to a lower-order hypergeometric differential equation with solution $g(z)=\F21(\ff13,\ff23;\,1;\,z)$.
Expanding this solution we obtain
\begin{equation}
\eqalign{f(z) &= \int z^{-1}g(z)\,dz\cr
&= \int z^{-1}\left(1+\ff29z+\ff{10}{81}z^2+ \ff{560}{6561}z^3+\ldots\right)\,dz\cr
&= \log(z) + C + \ff29z + \ff5{81}z^2 + \ff{560}{19683}z^3+\ldots,\cr}
\label{eq:fint}
\end{equation}
for some constant, $C$. This clearly provides the other solution we require so that
\begin{equation}
B+iJ = \frac1{2\pi i}\left\{\log(z/27)+ \ff29z + \ff5{81}z^2 + \ff{560}{19683}z^3+\ldots\right\}
\label{eq:Z3s}
\end{equation}
Now let us determine, as we did for the ${\bf Z}_2$ quotient singularity, the areas of the Riemann surfaces within the exceptional divisor when $z\to\infty$. In the case of the ${\bf Z}_2$ orbifold we have an exact form for the \sm\ measure\ and so this determination was straight-forward. In the ${\bf Z}_3$ case however we only have a series solution and this clearly diverges as $z\to\infty$. What we require therefore is the analytic continuation of the series in (\ref{eq:Z3s}). This may be done by solving the hypergeometric differential equation, this time as a series expanded around $z=\infty$, and then matching these solutions to the solutions around $z=0$. This is known as the {\em connection problem} (see for example \cite{IKSY:}). In \cite{CDGP:} the connection problem was solved by finding a complete set of periods as a series solution around $z=0,1,\infty$ and then demanding that the transformation between these be symplectic. We shall employ another method which is more straight-forward to apply to a general case. The connection problem is simple to solve with Barnes-type integrals when no solutions with logarithmic poles are involved. Consider our function $g(z)$ above represented as a Barnes-type integral:
\begin{equation}
\F21(\ff13,\ff23;\,1;\,z)=\frac1{2\pi i\Gamma(\ff13)\Gamma(\ff23)} \int_{-i\infty}^{+i\infty}\frac{\Gamma(t+\ff13)\Gamma(t+\ff23) \Gamma(-t)}{\Gamma(t+1)}(-z)^t\,dt,
\end{equation}
where $|\arg(-z)|<\pi$.
The integration path moves to the left around the pole at $t=0$ as shown in figure \ref{fig:path}. \begin{figure} \setlength{\unitlength}{1mm} $$\begin{picture}(100,80)(0,0) \thinlines \put(0,40){\line(1,0){100}} \put(50,0){\line(0,1){80}} \multiput(48,38)(10,0){6}{\makebox(4,4){$\times$}} \multiput(44.66666,38)(-10,0){5}{\makebox(4,4){$\times$}} \multiput(41.33333,38)(-10,0){5}{\makebox(4,4){$\times$}} \thicklines \put(50,0){\line(0,1){38.3}} \put(50,80){\line(0,-1){38.3}} \put(50,40){\oval(3.2,3.2)[l]} \put(50,10){\vector(0,1){10}} \put(50,50){\vector(0,1){10}} \put(15,70){\makebox(0,0){$t$-plane}} \end{picture}$$ \caption{The integration path for $\F21(\ff13,\ff23;\,1;\,z)$.} \label{fig:path} \end{figure} The series form of this hypergeometric function is recovered if one completes the integration path into a loop to enclose all the poles to the right of the path. The residues at the non-negative integers form the infinite sum. To find these residues and also to prove many of the following relations in this paper we use \begin{equation} \Gamma(x)\Gamma(1-x) = \frac\pi{\sin(\pi x)}. \label{eq:Gid} \end{equation} One may also complete the path to the left enclosing the poles at $n-\ff13,n-\ff23$, where $n$ is a non-positive integer. This expresses our hypergeometric function as another sum which now converges for $|z|>1$. This new sum is therefore the analytic continuation of the original sum. In fact this new sum is a sum of other hypergeometric functions: \begin{equation} \eqalign{ \F21(\ff13,\ff23;\,1;\,z) \simeq \frac{\Gamma(\ff13)}{\Gamma^2(\ff23)} e^{-\frac{\pi i}3}\psi\,&\F21(\ff13,\ff13;\,\ff23;\,\psi^3)\cr &-3\frac{\Gamma(\ff23)}{\Gamma^2(\ff13)} e^{-\frac{2\pi i}3}\psi^2\,\F21(\ff23,\ff23;\,\ff43;\,\psi^3),\cr} \label{eq:F12c} \end{equation} where ``$\simeq$'' denotes analytic continuation and we have introduced $\psi=z^{-1/3}$ such that $0<\arg(\psi)<2\pi/3$. For details see, for example, page 136 of \cite{Slater:hyp}. 
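Both the quoted series coefficients of $g(z)=\F21(\ff13,\ff23;\,1;\,z)$ and its Barnes-type integral representation can be checked numerically. The sketch below uses mpmath (an illustrative tool, not part of the original text), taking $z<0$ so that $|\arg(-z)|<\pi$ holds comfortably, and running the contour along $\operatorname{Re}(t)=-\ff16$, just left of the pole at $t=0$:

```python
from mpmath import mp, mpf, mpc, gamma, quad, pi, hyp2f1

mp.dps = 20
a, b = mpf(1)/3, mpf(2)/3

# the first few series coefficients quoted in the text
z0 = mpf('0.01')
series = 1 + mpf(2)/9*z0 + mpf(10)/81*z0**2 + mpf(560)/6561*z0**3
assert abs(hyp2f1(a, b, 1, z0) - series) < mpf('1e-8')

# Barnes-type integral along t = -1/6 + i*s, so that dt = i ds
z = mpf('-0.2')

def integrand(s):
    t = mpc(mpf(-1)/6, s)
    return gamma(t + a)*gamma(t + b)*gamma(-t)/gamma(t + 1)*(-z)**t

val = quad(integrand, [-30, 0, 30])*1j/(2j*pi*gamma(a)*gamma(b))
assert abs(val - hyp2f1(a, b, 1, z)) < mpf('1e-8')
```

The integrand decays like $e^{-\pi|s|}$, so truncating the contour at $|s|=30$ is harmless.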
To analytically continue our definition of $B+iJ$ to the orbifold point we need to multiply (\ref{eq:F12c}) by $z^{-1}$ and integrate. Doing this directly would introduce an integration constant which would be undetermined. To answer the question of what the size of the exceptional divisor at the orbifold point is, we need to know this constant. Let us instead na\"\i vely apply this process directly to the integrand in the Barnes-type integral. This gives the following function:
\begin{equation}
h(z) = \frac1{2\pi i\Gamma(\ff13)\Gamma(\ff23)} \int_{-i\infty}^{+i\infty}\frac{\Gamma(t+\ff13)\Gamma(t+\ff23) \Gamma(-t)}{t\Gamma(t+1)}(-z)^t\,dt.
\label{eq:Bl1}
\end{equation}
Completing this path to the right and writing it as a sum of residues we certainly recover the part of (\ref{eq:fint}) which is a power series in $z$. The subtlety arises because of the double pole we now have at $t=0$. Remember that if $w(t)$ is nonzero and finite at $t=0$ then the residue of $w(t)/t^2$ at $t=0$ is $(w^\prime(t))_{t=0}$. It follows that
\begin{equation}
h(z) = \log(-z) + \Psi(\ff13) + \Psi(\ff23) - 2\Psi(1) +\ff29z + \ff5{81}z^2 + \ff{560}{19683}z^3+\ldots,
\end{equation}
where $\Psi(n)$ is the {\em digamma\/} or {\em psi\/} function which is defined as the derivative of $\log\Gamma(n)$. Since (from 8.365.6 of \cite{GR:big})
\begin{equation}
\sum_{k=1}^{n-1}\Psi(k/n)-(n-1)\Psi(1) = -n\log n,
\end{equation}
we have
\begin{equation}
h(z) = \log(-\frac{z}{27})+\ff29z + \ff5{81}z^2 + \ff{560}{19683}z^3+\ldots
\label{eq:hasp}
\end{equation}
Thus
\begin{equation}
B+iJ=\frac1{2\pi i}h(z) -\ff12.
\end{equation}
We can now analytically continue $B+iJ$ into the $|z|>1$ region by completing the path of the integral in (\ref{eq:Bl1}) to the left and writing it as a sum over residues.
The result is
\begin{equation}
\eqalign{
h(z) = -3\frac{\Gamma(\ff13)}{\Gamma^2(\ff23)} e^{-\frac{\pi i}3}\psi\,&\F32(\ff13,\ff13,\ff13;\,\ff23,\ff43;\,\psi^3)\cr
&+\frac92\frac{\Gamma(\ff23)}{\Gamma^2(\ff13)} e^{-\frac{2\pi i}3}\psi^2\,\F32(\ff23,\ff23,\ff23;\, \ff43,\ff53;\,\psi^3).\cr}
\label{eq:hpsi}
\end{equation}
Note this is a combination of hypergeometric functions which are solutions to third-order differential equations. These equations may be derived directly from (\ref{eq:Z3h}) by suitable changes of variable. Putting $\psi=0$ to obtain the orbifold point we see that $B+iJ=-\ff12$. Thus the size of the exceptional divisor is again zero and again the $B$-field has value $-\ff12$. Notice the cancellation between the digamma functions appearing from the double pole in the Barnes-type integral and the $\operatorname{Vol}(\sigma)^{\operatorname{Vol}(\sigma)}$-type factors that was required to achieve this seemingly trivial final result. Note from (\ref{eq:hpsi}) that, for $|\psi|\ll1$ we have $\pi/6<\arg(\ff1{2\pi i}h(z))<5\pi/6$. This shows that $J\geq0$, i.e., we have only non-negative areas in this region. In fact, negative areas are completely excluded from the conformal field theories parameterized by this rational curve in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$. We have also done enough to determine the value of $B+iJ$ at $z=1$ where we expect the conformal field theory to be singular. One finds $B=0$ and $J\approx0.463$ ($\approx 18.3\alpha^\prime$ putting back units of length). Thus, in contrast to the ${\bf Z}_2$-singularity case, the discriminant now vanishes when we acquire a specific non-zero size for the exceptional divisor. Naturally everything we said about the description of a theory in terms of twisted marginal operators in the last section also applies to this case. In our example the twisted marginal operator resolving the ${\bf Z}_3$-quotient singularity would be $a_4x_2^3x_3^3x_4^3$.
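Two of the numerical statements above can be checked directly with mpmath (again only an illustrative sketch): the digamma identity 8.365.6, and the values $B=0$, $J\approx0.463$ at the discriminant. The latter is computed here from the series (\ref{eq:hasp}) at $z=1$, which converges (the coefficients decay like $n^{-2}$), rather than from (\ref{eq:hpsi}); the value $B+iJ=-\ff12$ at $\psi=0$ is immediate from (\ref{eq:hpsi}) since both terms carry positive powers of $\psi$.

```python
from mpmath import mp, mpf, digamma, log, pi, fsum

mp.dps = 15
f13, f23 = mpf(1)/3, mpf(2)/3

# identity 8.365.6:  sum_k psi(k/n) - (n-1)*psi(1) = -n log n
for n in range(2, 8):
    s = fsum(digamma(mpf(k)/n) for k in range(1, n)) - (n - 1)*digamma(1)
    assert abs(s + n*log(n)) < mpf('1e-12')

# B + iJ at the discriminant z = 1, summing the series (hasp) term by term;
# the 2F1 coefficients obey g_{n+1} = g_n (n+1/3)(n+2/3)/(n+1)^2
g, r = mpf(1), log(mpf(1)/27)
for n in range(20000):
    g = g*(n + f13)*(n + f23)/(n + 1)**2
    r += g/(n + 1)
# h(1) = log(-1/27) + sum = i*pi + r  with the principal branch of the log
BJ = (1j*pi + r)/(2j*pi) - mpf(1)/2
assert abs(BJ.real) < mpf('1e-6')          # B = 0 at the discriminant
assert abs(BJ.imag - mpf('0.463')) < mpf('2e-3')
```

Since $r$ is real, $(i\pi+r)/(2\pi i)-\ff12=-ir/(2\pi)$ is purely imaginary, which is why $B$ vanishes identically on $0<z\leq1$.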
Again this is only locally of the form of a quotient singularity and this operator is not twisted under any global symmetry of a covering theory. The moduli space of K\"ahler forms on the so-called $Z$-manifold was studied in \cite{Drk:Z}. This manifold is the resolution of an orbifold with ${\bf Z}_3$-quotient singularities of the form studied in this section. Indeed similar hypergeometric functions appear in \cite{Drk:Z} where the entire region of moduli space in the orbifold ``phase'' is studied. \subsection{The $\hbox{\bf Z}_4$-quotient singularity} In this paper we will concentrate mainly on perestro\u\i ka which take one from a large radius limit smooth Calabi-Yau\ manifold to a neighbouring cone. This is because we know how to define the K\"ahler form for the smooth Calabi-Yau\ manifold by using the monomial-divisor mirror map. If we look at any other perestro\u\i ka it would be necessary to first determine $B+iJ$ at one of the limit points by following a path from a smooth Calabi-Yau\ manifold limit point. As we discuss briefly later, such a path will usually raise considerations about basis changes as one moves from one perestro\u\i ka to the next. In this section we look at a simple example where we may ignore such problems. That is, we will blow down two irreducible divisors which will not ``interfere'' with each other and thus no basis change is required. Any quotient singularity in Calabi-Yau\ spaces of complex dimension 3 other than the two we have just studied will require an exceptional divisor with more than one irreducible component. This means that a complete resolution of the singularity requires more than one perestro\u\i ka. 
Consider the next simplest case: \begin{equation} \setlength{\unitlength}{0.005in}% \begin{picture}(490,340)(80,460) \thinlines \put(165,795){\circle*{10}} \put(245,675){\circle*{10}} \put(165,675){\circle*{10}} \put(165,735){\circle*{10}} \put( 85,675){\circle*{10}} \put(485,795){\circle*{10}} \put(565,675){\circle*{10}} \put(485,675){\circle*{10}} \put(405,675){\circle*{10}} \put(485,585){\circle*{10}} \put(565,465){\circle*{10}} \put(405,465){\circle*{10}} \put(165,585){\circle*{10}} \put(245,465){\circle*{10}} \put(165,525){\circle*{10}} \put( 85,465){\circle*{10}} \put(485,525){\circle{10}} \put(485,465){\circle{10}} \put(165,465){\circle{10}} \put(485,735){\circle{10}} \put(260,740){\vector(-1, 0){ 0}} \put(260,740){\vector( 1, 0){120}} \put(165,655){\vector( 0, 1){ 0}} \put(165,655){\vector( 0,-1){ 50}} \put(260,525){\vector(-1, 0){ 0}} \put(260,525){\vector( 1, 0){120}} \put(485,655){\vector( 0, 1){ 0}} \put(485,655){\vector( 0,-1){ 50}} \put(165,795){\line( 0,-1){120}} \put(165,795){\line( 2,-3){ 80}} \put(245,675){\line(-1, 0){160}} \put( 85,675){\line( 2, 3){ 80}} \put( 85,675){\line( 4, 3){ 80}} \put(165,735){\line( 4,-3){ 80}} \put(485,795){\line( 0,-1){120}} \put(485,795){\line( 2,-3){ 80}} \put(565,675){\line(-1, 0){160}} \put(405,675){\line( 2, 3){ 80}} \put(485,585){\line( 2,-3){ 80}} \put(565,465){\line(-1, 0){160}} \put(405,465){\line( 2, 3){ 80}} \put(165,585){\line( 2,-3){ 80}} \put(245,465){\line(-1, 0){160}} \put( 85,465){\line( 2, 3){ 80}} \put( 85,465){\line( 4, 3){ 80}} \put(165,525){\line( 4,-3){ 80}} \put(165,585){\line( 0,-1){ 60}} \put(465,620){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{1}}} \put(320,745){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{2}}} \put(320,530){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{3}}} \put(140,625){\makebox(0,0)[lb]{\raisebox{0pt}[0pt][0pt]{4}}} \end{picture} \label{eq:Z4t} \end{equation} The bottom-right diagram is the toric picture for a ${\bf Z}_4$-quotient singularity in ${\bf C}^3$ generated by 
$(z_1,z_2,z_3)\mapsto (iz_1,iz_2,-z_3)$. The top-left diagram is the complete blow-up of this singularity to give a smooth space. There are two irreducible components to the exceptional divisor and thus two perestro\u\i ka are involved. The two components of the exceptional divisor may be produced in either order in the blow-up procedure so that there are two possible paths to perform the blow-up as shown above. Such a choice of paths is a common feature in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$. It is clear from figure \ref{fig:web} that the journey between any two limit points may be taken along many paths. In order for us to be able to give a value of the \sm\ measure\ to each limit point we require that the choice of paths does not affect this value. One of the paths in this ${\bf Z}_4$ example (taken along lines 2 and 1 in (\ref{eq:Z4t})) consists of two perestro\u\i ka of the type considered in section \ref{ss:Z2}. We know therefore that this path leads to zero volumes for both components of the exceptional divisor in the orbifold limit. In the alternative path, line 4 is also of this type so that one component of the exceptional divisor is again zero at the orbifold point. In order for (\ref{eq:Z4t}) to be commutative we thus require that the perestro\u\i ka given by line 3 gives zero volume at this point as we will now check. In our example this configuration is given by \begin{equation} \vec\alpha_3+2\vec\alpha_7+\vec\alpha_8-4\vec\alpha_4=0, \end{equation} leading to \begin{equation} z=64\frac{a_3a_7^2a_8}{a_4^4}. \end{equation} We can now follow the procedure in section \ref{ss:Z3} where now we are dealing with the solutions of the equation related to the hypergeometric function $\F43(0,\ff14,\ff12,\ff34;\,\ff12,1,1;\,z)$. Again the regular solution is a constant and again we obtain the other solution by the trick in equation (\ref{eq:Bl1}).
This time we obtain \begin{equation} \eqalign{h(z)&=\log(-z/64) +\ff3{16}z +\ff{105}{2048}z^2+ \ff{385}{16384}z^3+\ldots\cr &=-4\frac{\Gamma(\ff14)}{\Gamma^2(\ff34)}e^{-\frac{\pi i}4} \psi\,\F43(\ff14,\ff14,\ff14,\ff34;\, \ff12,\ff34,\ff54;\,\psi^4)+\ldots,\cr} \end{equation} where $\psi=z^{-1/4}$ and $0<\arg(\psi)<\pi/2$. Thus at the orbifold point, as $\psi\to0$ we have zero volume again as we expected. \subsection{Changing the dimension of $X$} Thus far we have always obtained the result that $B+iJ=-\ff12$ at the limit point where we remove a point of ${\Scr A}$ from the triangulation. This is {\em not\/} a general feature. It is not difficult to convince oneself however that it will be whenever we can use the constructions above. That is, the digamma functions introduced by the double pole at $z=0$ will cancel the factor introduced in the definition of $z$. This construction of the \sm\ measure\ relied on the fact that one of the solutions of the hypergeometric differential equation was a constant. Remember from (\ref{eq:Phf}) that $a_0$ plays a distinguished r\^ole in our hypergeometric system. So far, none of the perestro\u\i ka considered have involved the point $\alpha_0$. So long as this is true, we will obtain a hypergeometric equation with a constant solution. In fact, it is not hard to prove that the condition for {\em not\/} having a constant solution is as follows. When the linear relation (\ref{eq:linrel}) is formed, $m_0$ must be non-zero and have opposite sign to the other non-zero $m_s$'s. This statement is equivalent to the statement that the associated perestro\u\i ka consists of removing (or adding) $\alpha_0$ to the triangulation. It was shown in \cite{AGM:II} that the point $\alpha_0$ plays a distinguished r\^ole for another reason. If $\alpha_0$ is a vertex of every simplex in the triangulation of ${\Scr A}$ then $X$ may be interpreted as an irreducible space of complex dimension 3.
If $\alpha_0$ is a vertex of only some of the simplices then $X$ is reducible with only part of $X$ having a 3-dimensional representation. If $\alpha_0$ does not appear in the triangulation then the dimension of $X$ is $<3$. Thus, the perestro\u\i ka we have not yet considered are the ones which lower the dimension of $X$ down from 3. An extreme example of this is effectively the one studied in \cite{CDGP:} where the other limit point is a Landau-Ginzburg orbifold theory, i.e., $X$ has dimension 0. We shall first study one of the neighbours of ``resolution 5'' of \cite{AGM:I} where the dimension is shrunk down to 2 (see \cite{AGM:II} for a full explanation of this). The perestro\u\i ka is identical to that considered in section \ref{ss:Z3} except now the relation is \begin{equation} \vec\alpha_4+\vec\alpha_5+\vec\alpha_6-3\vec\alpha_0=0, \end{equation} and thus \begin{equation} z = -27\frac{a_4a_5a_6}{a_0^3}. \end{equation} Now the differential equation is \begin{equation} \eqalign{ \left(z\frac{d}{dz}\right)^3f - &z\left(z\frac{d}{dz}+\ff13\right) \left(z\frac{d}{dz}+\ff23\right)\left(z\frac{d}{dz}+1\right)f\cr &=\left(z\frac{d}{dz}\right)\left\{\left(z\frac{d}{dz}\right)^2f -z\left(z\frac{d}{dz}+\ff13\right)\left(z\frac{d}{dz}+\ff23\right)f \right\}\cr &=0.\cr} \label{eq:Z3h0} \end{equation} Thus, the solution which is regular at $z=0$ is given by $f(z)= \F21(\ff13,\ff23;\,1;\,z)$. In (\ref{eq:F12c}) we gave the analytic continuation of this for $|z|>1$. We now need to find the solution of (\ref{eq:Z3h0}) that behaves as $\log(z)$ at $z=0$ and continue this to $|z|>1$. {}From our earlier analysis of Barne's-type integrals we saw that a double pole gave a residue with a $\log(z)$ term. With this success in mind consider the following \begin{equation} h(z) = -\frac1{2\pi i\Gamma(\ff13)\Gamma(\ff23)} \int_{-i\infty}^{+i\infty}\Gamma(t+\ff13)\Gamma(t+\ff23) \Gamma^2(-t)z^t\,dt. 
\label{eq:Bl2} \end{equation} Completing the path to the right and taking residues we obtain \begin{equation} \eqalign{ h(z)=\F21(\ff13,\ff23;\,1;\,z)&\log(z) - \log(27)\cr &+\frac1{\Gamma(\ff13)\Gamma(\ff23)}\sum_{n=1}^\infty \left[\frac\partial{\partial t}\left(\frac {\Gamma(t+\ff13)\Gamma(t+\ff23)}{\Gamma^2(t+1)}\right) \right]_{t=n}z^n.\cr} \end{equation} Completing the path to the left and taking residues we obtain \begin{equation} \eqalign{ h(z)=-\frac{\Gamma(\ff13)}{\Gamma^2(\ff23)}\frac\pi {\sin(\frac\pi3)}&\psi\,\F21(\ff13,\ff13;\,\ff23;\,\psi^3)\cr &+3\frac{\Gamma(\ff23)}{\Gamma^2(\ff13)}\frac\pi {\sin(\frac\pi3)}\psi^2\,\F21(\ff23,\ff23;\,\ff43;\,\psi^3),\cr} \label{eq:F12nc} \end{equation} where, as in section \ref{ss:Z3}, $\psi=z^{-1/3}$ and $0<\arg(\psi) <2\pi/3$. The above expresses $h(z)$ as a linear combination of the same functions that appeared in (\ref{eq:F12c}) and so $f(z)=h(z)$ is a solution of (\ref{eq:Z3h0}). Thus we have found the solution that behaves like $\log(z)$ at $z=0$ and its analytic continuation for $|z|>1$. This is a general method for finding the extra solutions of a hypergeometric equation whose regular solution is ${}_{N+1}F_N(a_1,a_2,\ldots;\,1,1,\ldots;\,z)$ --- simply take some of the $\Gamma(t+1)$ factors in the denominator of the Barne's-type integral and move them into the numerator as $\Gamma(-t)$ terms (with a change in sign of $z$ for each term). This produces a high-order pole at $t=0$ which gives some power of $\log(z)$ when the residue is taken. The monomial-divisor mirror map tells us \begin{equation} B+iJ=\frac1{2\pi i}\frac{h(z)}{\F21(\ff13,\ff23;\,1;\,z)}. \end{equation} To find the value of $B+iJ$ at the limit point corresponding to the 2-dimensional target space, we take $|\psi|\ll1$ whence from (\ref{eq:F12c}) and (\ref{eq:F12nc}) we obtain \begin{equation} B+iJ\simeq\frac{ie^{\frac{\pi i}3}}{2\sin\frac\pi3}\left\{ 1-3\frac{\Gamma^3(\ff23)}{\Gamma^3(\ff13)}(1- e^{-\frac{\pi i}3})\psi+O(\psi^2)\right\}.
\end{equation} Thus for $\psi=0$ we have $B=-\ff12$ and $J=\ff12\cot(\pi/3)$. That is, the area of the generator of $H_2(X,{\bf Z})$ (and thus we infer the volume of $X$) at this limit point is {\em not\/} zero. We also see from the above expression that for small $\psi$ we have \begin{equation} \pi/6<\arg\left((B+iJ)-(B+iJ)_{\psi=0}\right)<5\pi/6, \end{equation} showing how the size always increases as we move away from $\psi=0$. The method of calculation we have just done may also be applied to the mirror of the quintic threefold as studied in \cite{CDGP:}. In this case we obtain the result that at the Landau-Ginzburg orbifold point we obtain $J=\ff12\cot(\pi/5)$ (which is equal to $\frac45\sin^3(2\pi/5)$ as stated in \cite{CDGP:}). We thus see that it is impossible to shrink the whole Calabi-Yau\ manifold down to a point as measured by the \sm\ measure. If we think of a path on figure \ref{fig:web} that begins at a smooth Calabi-Yau\ point and ends on the Landau-Ginzburg orbifold point then one of the lines we traverse must correspond to a perestro\u\i ka that removes $\alpha_0$ and hence yields $J>0$. Notice that to calculate the value of $B+iJ$ at, say, the Landau-Ginzburg orbifold point is quite complicated. As we follow a path along the web of figure \ref{fig:web}, at each vertex we have to change the basis of $B+iJ$ to prepare for the next perestro\u\i ka. Because, as mentioned earlier, we do not have a nice linear structure on the moduli space expressed in terms of the \sm\ measure\ coordinates, the basis change will generally involve transcendental functions. Consider the perestro\u\i ka that adds the point $\alpha_0$ to the minimal triangulation comprising just the simplex $P^\circ$. This is the transition between the Landau-Ginzburg orbifold and the Calabi-Yau\ space which is a hypersurface in the unresolved $\P^4_{\{6,6,3,2,1\}}$ (see \cite{AGM:II} for more details).
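As a quick numerical aside, several of the closed-form statements quoted above can be checked with a few lines of code. The Pochhammer closed forms $(\ff13)_n(\ff23)_n/(n\,n!^2)$ and $(\ff14)_n(\ff34)_n/(n\,n!^2)$ for the power-series coefficients of $h(z)$ are our own reading of the residue sums, so this is a sketch rather than part of the derivation:

```python
import math
from fractions import Fraction

def poch(a, n):
    """Rising factorial (Pochhammer symbol) (a)_n, computed exactly."""
    out = Fraction(1)
    for k in range(n):
        out *= a + k
    return out

def coeff(p, q, n):
    """Hypothesized coefficient (p)_n (q)_n / (n * n!^2) of z^n in h(z)."""
    return poch(p, n) * poch(q, n) / (n * math.factorial(n) ** 2)

# Z_3 series: h(z) = log(-z/27) + (2/9) z + (5/81) z^2 + (560/19683) z^3 + ...
assert [coeff(Fraction(1, 3), Fraction(2, 3), n) for n in (1, 2, 3)] == \
    [Fraction(2, 9), Fraction(5, 81), Fraction(560, 19683)]

# Z_4 series: h(z) = log(-z/64) + (3/16) z + (105/2048) z^2 + (385/16384) z^3 + ...
assert [coeff(Fraction(1, 4), Fraction(3, 4), n) for n in (1, 2, 3)] == \
    [Fraction(3, 16), Fraction(105, 2048), Fraction(385, 16384)]

# J at the Landau-Ginzburg point of the quintic:
# (1/2) cot(pi/5) equals (4/5) sin^3(2 pi/5), as quoted from [CDGP].
assert abs(0.5 / math.tan(math.pi / 5) - 0.8 * math.sin(2 * math.pi / 5) ** 3) < 1e-12

# J at the 2-dimensional limit point: (1/2) cot(pi/3) = 1/(2 sqrt(3)).
assert abs(0.5 / math.tan(math.pi / 3) - 1 / (2 * math.sqrt(3))) < 1e-12
```

The exact agreement of the first three coefficients in both series, and of the two closed forms for $J$, is consistent with the expansions quoted in the text.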
If we measure the volume of the Landau-Ginzburg orbifold according to this transition by the above calculation we obtain $J=\ff12\cot(\pi/18)$. This shows that the above value of $J=\ff12\cot(\pi/3)$ must change as we blow-down the remaining parts of $X$ to obtain the Landau-Ginzburg orbifold. This change occurs because of the basis changes in this process. \section{Discussion and Conclusion} \label{s:conc} We began this paper by noting two properties of string theory which are relevant for understanding the space of allowed target space metrics. First, recent work \cite{AGM:I,AGM:II,W:phase}\ has shown that string theory makes sense even if the target space metric does not satisfy the usual positivity conditions that one classically expects. In this regard we are led to augment the space of allowed K\"ahler forms beyond the usual K\"ahler cone. Second, a number of works have demonstrated that string theory appears to impose ``minimal lengths" and hence restricts the physically relevant space of K\"ahler forms to lie within the usual K\"ahler cone. Part of the purpose of the present work has been to study these divergent tendencies and show that, in fact, they are completely consistent. In particular, since the concept of ``size" is an intrinsically classical mathematical notion, we have carefully studied ways of extending its meaning to the more abstract realm of conformal field theory. In essence, we have sought to find natural continuations of the definition of size from classical to quantum geometry. There is no unique way of doing this. We have found, though, that when our conformal field theory has a sigma model interpretation we can extract a definition of size, by using mirror symmetry, from the geometric structure of the latter. We can then extend this definition by analytic continuation to all theories in the enlarged K\"ahler moduli space. 
We have seen, in particular, that this gives rise to a precise meaning, rooted in the structure of conformal sigma models, to the {\it area of two-cycles\/} throughout the moduli space. With sufficient calculational power, we would explicitly carry out this program and thereby study the full realm of possible areas for these cycles. The work of \cite{AGM:I,AGM:II,W:phase}, for example, would appear to indicate that zero and negative areas would necessarily arise. The present paper, though, has shown that the definition of area that one would directly extract from these works (the algebraic measure) does not agree with the natural sigma model K\"ahler form discussed above. We have therefore sought to determine if the latter definition restores something akin to the usual positivity conditions. For calculational ease, we have limited our attention to particular complex dimension one subspaces in the enlarged K\"ahler moduli space for the illustrative example studied in \cite{AGM:I,AGM:II}. In terms of the algebraic measure\ the enlarged moduli space of K\"ahler forms naturally leads to many ``K\"ahler cones'', each associated to its own geometric model of $X$, glued together spanning the whole ${\bf R}^{h^{1,1}}$. In our example there are 100 such cones of which 5 correspond to smooth Calabi-Yau\ manifolds. We have considered complex dimension one spaces in this moduli space which join the ``large radius limit points'' in each region. In order to determine the value of the \sm\ measure\ at each of these limit points, we have considered the network of rational curves in the compactification divisor of the moduli space which connects them. Each such rational curve leads to an ordinary hypergeometric equation allowing for an analysis along the lines of \cite{CDGP:}. It is reasonable to expect that the extreme values of the \sm\ measure\ will occur at these limit points.
In this paper we have demonstrated this only in a limited sense by looking at the neighbourhoods within rational curves of some of the limit points, where we discover that {\it not}\/ all values in ${\bf R}^{h^{1,1}}$ are attained by the \sm\ measure. To be more precise, all necessarily negative\footnote{By ``necessarily negative'' areas we mean areas which are negative when measured with respect to {\it all}\/ of the birational models $X_i$ of $X$.} areas are eliminated as well as small positive sizes where all components of the \sm\ measure\ are small. Notice however that some Riemann surfaces {\em can\/} be shrunk down to zero area while other parts of $X$ are held at large-radius limit. More precisely, if we plot the ``$J$'' part of the \sm\ measure\ (i.e., the imaginary part of $B+iJ$) in ${\bf R}^{h^{1,1}}$ we do not find a cone structure for each of the 100 phase regions. The 5 cones of the smooth Calabi-Yau\ models are retained as cones asymptotically away from the origin since the \sm\ measure\ and the algebraic measure\ coincide there. All of the other 95 limit points are mapped somewhere within these 5 cones --- i.e., they all have non-negative areas with respect to at least one of these 5 models. Thus, assuming these limit points represent extreme values of $J$, the whole moduli space of \sm\ measure s maps into this union of 5 cones. We can represent this idea in figure~\ref{fig:mush}. We show roughly how the space of algebraic $J$'s as shown in figure~\ref{fig:as} is expected to be modified in going to the same diagram of the \sm\ measure\ $J$ in a hypothetical example where two of the cones give smooth Calabi-Yau\ manifolds. The two smooth Calabi-Yau\ regions are labeled $X_1$ and $X_2$ and the other regions are labeled a, b and c. 
\iffigs \begin{figure} \centerline{\epsfxsize=15cm\epsfbox{sigmod-fx.ps}} \caption{The different phases as they appear in $J$-space for} \vskip 2pt \centerline{\parbox{2.75in}{the algebraic measure\ (on the left) and the \sm\ measure\ (on the right).}} \label{fig:mush} \end{figure} \fi It would appear therefore that no negative areas appear in the space of conformal field theories describing non-linear $\sigma$-model s or at least if negative areas do occur then they can be redefined away by using a topologically different model for $X$. Thus, string theory does require us to enlarge the space of allowed K\"ahler forms beyond the usual classical K\"ahler cone, but it does so in a manner consistent with non-negative areas. In this way, we resolve the puzzle discussed at the beginning of this section and in the introduction. It is interesting to compare the results of this paper with the results of classical general relativity --- i.e., the moduli space of Ricci-flat metrics on $X$. It turns out that both the flop and the quotient singularity (and its blow-up) appear as limiting classical solutions to the Einstein equations. First we describe the flop. This case was studied in \cite{CD:flop} and we shall repeat here only an outline of the argument. With a $2\times2$ matrix representation of the coordinates, $W$, one can define a distance $r^2=\operatorname{tr}(W^\dagger W)$ from the double point when the $\P^1$ is blown down. The metric can then be written \begin{equation} ds^2={\Scr F}\,{}^\prime\operatorname{tr}(dW^\dagger\,dW)+{\Scr F}\,{}^{\prime\prime} \left|\operatorname{tr}(W^\dagger\,dW)\right|^2+4c\frac{|d\lambda|^2}{1+|\lambda|^2}, \end{equation} where ${\Scr F}$ is some function of $r$ and $\lambda$ is the coordinate on the $\P^1$. The real parameter $c$ in the above corresponds to the area of the flopped $\P^1$ and thus may be taken as the component of the (real) K\"ahler form which gives this area.
If $c=0$ then the above metric is degenerate (in the sense that some distinguished points are now separated by zero distance). If $c<0$ one may change coordinates to give a smooth metric with a $\P^1$ with area $-c$ \cite{CD:flop}. Thus we see that the only difference between this picture and the stringy picture we presented in terms of ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ is that we have an extra degree of freedom in the $B$-field which may be used to smooth out the singularity at $c=0$ as far as the conformal field theory is concerned \cite{AGM:I}. When we turn to the quotient singularity there is a bigger difference. For example, let us consider the blow-up of the singularity of the type considered in section \ref{ss:Z3}. Near the ${\bf Z}_3$ singularity, or its blow-up, we have \cite{FG:}: \begin{equation} ds^2 = 2\frac{(c^3+L^3)^{\frac13}}L\left\{dx^id\bar x^i -\frac{c^3}{L(c^3+L^3)}\bar x^idx^i x^jd\bar x^j\right\}, \end{equation} where $L=x^i\bar x^i$. This contains a real parameter $c>0$ for a smooth metric (with suitable change of coordinates). This is roughly the form of the metric irrespective of the global geometry so long as $x,\sqrt{c}\ll R$, where $R$ is some characteristic length of the global geometry of $X$. As shown in \cite{FG:} this geometry contains a $\P^2$ submanifold with the standard Fubini-Study metric. The line element, $ds^2$, on this $\P^2$ is proportional to the parameter $c$. This $\P^2$ is clearly the exceptional divisor with $c=0$ giving the quotient singularity. Varying this parameter thus corresponds to varying the component of the real K\"ahler form that gives the volume of the exceptional divisor. For a smooth metric we require $c>0$. This gives the classical moduli space a boundary. If we continue into the $c<0$ region then the $\P^2$ acquires negative size and part of $X$ becomes ``pinched off''.
This metric is still Ricci-flat and so depending on one's qualms about negative areas one might wish to consider this a solution of classical general relativity. When we look at the orbifold point in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ we see a different picture. Now the point in ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ which corresponds to a zero volume exceptional divisor is not on a boundary --- ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ has no boundary. As we move in any direction away from this point the volume of the exceptional divisor becomes positive (or remains zero) and so we never need to address the question of negative volumes. We are unable to pinch off regions of space in this manner. In general the singularities in the conformal field theories appear in a different location in the moduli space compared to the classical picture. In the case of a ${\bf Z}_2$-quotient singularity the singular theory appears at zero-volume exceptional divisor, i.e., just where the singular metric occurs but for the ${\bf Z}_3$-quotient singularity we need a small but non-zero exceptional divisor to have a singular theory. That is, the string theory is singular when the classical theory is smooth! We should be clear about our language of which regions are and are not included in our moduli space. The situation is analogous to the string on a circle of radius $R$ and the $R\leftrightarrow1/R$ duality. One point of view is to say that distances $<1$ exist but may be reinterpreted as distances $>1$. The other point of view is to say that string theory cuts off distances $<1$. We are implicitly assuming this second point of view in the above. This is because we have {\em defined\/} the \sm\ measure\ in terms of the large radius limit(s) of the $\sigma$-model\ and thus have fixed ourselves in the $R>1$ region. 
Any ruler which can measure distances on our large radius circle manifold and give the correct answer will be unable to measure distances $<1$. To regain the $R\leftrightarrow1/R$ picture of any distance existing but some being equivalent, one would take the moduli space ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ and form the simply-connected, smooth covering space. The group by which one mods out this covering space to form ${\hfuzz=100cm\hbox to 0pt{$\;\overline{\phantom{X}}$}\cM}$ is the {\em modular group\/} of the target space $X$. While there is nothing wrong with such a process from the mathematical point of view one should ask what the physical meaning of such a construction is. In terms of the \sm\ measure\ we have introduced negative areas but declared at the same time that they are entirely equivalent to positive areas. Do such areas really exist? The only way that they can be measured is to define the method of reading a ruler such that we get answers which would not agree with the large radius limit Calabi-Yau\ manifold but this is precisely where we try to make contact with our classical ideas of distance! In conclusion we have shown that when building the moduli space of allowed \sm\ measure s, all distances, at least for the limit points, are non-negative. In this statement we mean non-negative when measured according to at least one of the smooth models of $X$. Some of the limit points, such as large-radius limit orbifolds, do admit zero distances, however, showing that string theory does not cut off distances shorter than the Planck scale. \section*{Acknowledgements} We thank E. Witten for helpful discussions. P.S.A.\ would like to thank J. Schiff for useful conversations and for reminding him that one can sometimes solve partial differential equations. B.R.G.\ would like to thank S. Hosono for calling attention to the utility of \cite{Bat:var}.
The work of P.S.A.\ was supported by DOE grant DE-FG02-90ER40542, the work of B.R.G.\ was supported by a National Young Investigator award, the Ambrose Monell Foundation and the Alfred P. Sloan Foundation, and the work of D.R.M.\ was supported by an American Mathematical Society Centennial Fellowship. \pagebreak
The Challenges of Solid Waste Recycling in Sustainable Development in Palestine

Author: Musab Rushadi Ahmad Khalil

Section: 4. Economic Development and Growth

Citation: Khalil, Musab Rushadi Ahmad. The Challenges of Solid Waste Recycling in Sustainable Development in Palestine // Economic Science and Practice: Proceedings of the VII International Scientific Conference (Krasnodar, February 2019). Krasnodar: Novatsiya, 2019. Pp. 1-4. URL: https://moluch.ru/conf/econ/archive/323/14736/ (accessed 29.01.2023).

As a result of increasing global warming, humanity seeks to reduce greenhouse-gas emissions by rationalizing energy consumption and searching for renewable, environmentally friendly alternative energy. [7] Accordingly, the Palestinian Strategic Plan for Energy seeks to raise the use of renewable energy, which by 2020 was to cover 5 % of the total energy consumed in the country. In the medium term, the plan seeks to reduce the bill for energy imported from Israel, which in 2017 amounted to $1,609.4 million. [1]

Solid waste in many developed countries, after sorting and recycling, is an important resource, including for energy production. Scientific studies at the global level indicate that the energy that can be derived from such solid waste could cover 10 % of the world's energy consumption. Hence, when properly treated, this waste is no longer an environmental burden on these countries but a vital resource, as in Japan and Germany, for example. In Palestine the experience is recent and rudimentary. [3] In this article we discuss the challenges of solid waste recycling as part of achieving sustainable development.
[6]

Keywords: solid waste, sustainable development, welfare of the nation, natural resources, renewable energy, investments, national economy, government policy.

According to the Palestinian Central Bureau of Statistics (PCBS), in 2017 the Palestinian territories generated 2,551 tons of household solid waste per day: 1,835 tons daily in the West Bank and 716 tons in the Gaza Strip. In addition, economic facilities generated 20 thousand tons of waste per month, and health facilities 381 tons per month. This means that in 2017 the volume of solid waste in Palestine reached 1.2 million tons. In Palestine, waste is mostly disposed of in open dumps, burned, or dumped at random at the entrances to residential sites and along roads. [2] In the West Bank, nearly half of household waste ends up in 156 dumpsites, most of them unsanitary, while the other half is left on the sides of streets or burned. [16] In the Gaza Strip, 70 % of the waste is disposed of in three major dumps in Gaza, Deir al-Balah and Rafah, and the rest is burned or dumped along the roads.

A radical improvement of the solid waste situation formally requires the enactment of legislation that puts an end to the random disposal of this waste. [14] It also requires raising the population's awareness of the seriousness of the current situation and of the benefits that individuals and society can gain if a sound waste policy is adopted, based on the following grounds: [13]

Classification of waste by type (organic, paper, glass, plastic, metal, etc.) in dedicated containers to facilitate recycling. The Central Bureau of Statistics estimates the composition of household solid waste for 2017 as follows: 82.3 % organic waste, 14.7 % diapers, 2.1 % paper and cardboard, and no more than 1 % other materials (0.9 %). By comparison, domestic solid waste in developed countries is about 50 % organic material, the rest inorganic.
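A quick arithmetic check (our own back-of-the-envelope calculation, assuming the monthly figures hold for all twelve months) confirms that the PCBS numbers quoted above do add up to roughly 1.2 million tons per year:

```python
# Annual solid-waste total implied by the PCBS figures for 2017.
household_per_day = 1835 + 716    # tons/day: West Bank + Gaza Strip = 2551
economic_per_month = 20_000       # tons/month from economic facilities
health_per_month = 381            # tons/month from health facilities

annual_tons = household_per_day * 365 + (economic_per_month + health_per_month) * 12
print(annual_tons)  # 1175687, i.e. about 1.2 million tons
```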
Map 1: Distribution of vehicles in the West Bank. [4]

Collection of waste by specialized units or companies for each type of waste, covering all population centres: according to the Central Bureau of Statistics there are still 79 communities in the West Bank, with a population of about 40,000, where no waste collection service is provided, while in the Gaza Strip the service is available in all communities. Waste should then be recycled, compacted and minimized, energy should be extracted from it in multiple ways, and the remaining waste should be placed in sanitary landfills that protect the air and subsoil from pollution.

* Extraction of energy from solid waste

The extraction of energy from organic solid waste rests on the principle that the solar energy stored in this waste is released again when it is burned. [4] According to the experience of developed countries in dealing with solid waste, especially Russia, Germany and the United States, energy can be recovered from waste in several ways: [9]

Burning waste in furnaces at a temperature of 800-1000 °C generates heat that turns water into steam, which drives a turbine and generates electrical energy.

Burning solid waste at 800-900 °C turns it into combustible gases consisting of hydrogen, carbon monoxide and other gases, which drive gas turbines to generate electricity. This process is called gasification.

Heating solid waste to about 500 °C in the absence of oxygen converts it into bio-oils, char and gases. The bio-oils are often used as biofuel for heating, or to run gas turbines. This process is called pyrolysis.

Fermenting organic waste in buried, air-free digesters, where bacteria convert the waste into methane, carbon dioxide and other gases. This method is most efficient for high-moisture wastes.

Incineration is also carried out in plants using Plasma Gasification Melting (PGM), which produces synthesis gas (syngas).
[8] More than 1,000 plants operating worldwide use PGM, processing approximately 200 million tons of solid waste and producing 130 TWh. In total, one ton of burned solid waste yields energy equivalent to one barrel of oil. In addition, the carbon dioxide released by incineration is less than the amount of the same gas that would have been released from these wastes had they remained in landfills. [10] Treating solid waste in Palestine would bring many benefits, the most important of which are: [11] reducing air pollution caused by emissions from unsanitary landfills and the burning of solid waste; reducing the pollution of surface water and groundwater basins by liquids leaking from these dumps; reducing soil contamination with metals and non-biodegradable plastic; and reducing the harm caused by these dumps becoming breeding grounds for insects and stray animals. A further benefit is the formation of a sustainable source of environmentally friendly alternative energy that reduces energy dependence, pollution, and the energy bill in Palestine. The amount of electricity that could be obtained annually from burning 95% of the solid waste in the West Bank is estimated at 308 GWh. * Biogas is derived from the waste of livestock, birds and humans. The principle is the same as in extracting energy from organic solid waste: releasing the solar energy stored in these wastes, either by burning or by fermentation, as described above. [13] Obtaining energy from human and animal organic waste is one of the solutions applied in many developed and developing countries. In many countries this is done on a large national scale in big plants that treat these wastes, while in other countries it is done on a small household scale, as in India and southern African countries.
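The 308 GWh/year figure for the West Bank can be roughly cross-checked from the daily tonnage quoted earlier. The sketch below is illustrative only: the net electrical yield of about 500 kWh per ton of waste burned is an assumption for the sake of the check, not a number from this article.

```python
# Rough cross-check of the ~308 GWh/year estimate for the West Bank.
WEST_BANK_TONS_PER_DAY = 1835   # PCBS, 2017 (quoted above)
BURNED_FRACTION = 0.95          # "95% of the solid waste" (from the article)
KWH_PER_TON = 500               # ASSUMED net electrical yield per ton burned

tons_per_year = WEST_BANK_TONS_PER_DAY * 365
kwh_per_year = tons_per_year * BURNED_FRACTION * KWH_PER_TON
gwh_per_year = kwh_per_year / 1e6

# Lands in the same ballpark as the article's 308 GWh figure.
print(f"~{gwh_per_year:.0f} GWh/year")
```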
In China, animal and human waste is transformed in so-called anaerobic digestion systems into a source that supplies homes with gas for heating and cooking. [15] The experience of these countries would be useful in the countryside and towns of Palestine, where cattle, birds and other animals are raised in small numbers. Installing devices for treating human organic waste would provide another source of energy for lighting, heating, running electrical appliances, pumping water and more. [9] * Studies and experiments in Palestine are still modest in the field of biogas extraction, especially from human waste. However, livestock in Palestine and the treatment of sanitary sewage are an important resource for clean and sustainable alternative energy. [10]

Table: Livestock in Palestine, 2017 (number of heads, including turkeys ["Habash"]). [11,15]
Table: Energy that can be generated from bio-waste. [11,15] Columns: source of biogas (livestock manure; poultry manure; plant residues; solid waste); biogas produced (million m³); percentage of total waste (%); power produced annually (GWh).

Recommendations: In many developed countries solid waste, after sorting and recycling, is an important resource for many purposes, including energy production. Scientific studies at the global level indicate that the energy derived from solid waste could cover 10% of the world's energy consumption. The following must be done:
1. Classification of waste by type (organic, paper, glass, plastic, metal, etc.) in dedicated containers to facilitate recycling.
2. Waste collection by units or companies specialized in each of these waste types.
3. Recycling of waste.
4. Extracting energy from waste in multiple ways.
5. Reducing air pollution caused by emissions from unsanitary landfills and the burning of solid waste, as well as reducing the pollution of surface water and groundwater basins by liquids leaking from these dumps.
6.
Forming a sustainable source of environmentally friendly alternative energy that reduces energy dependence, which reduces pollution and the energy bill in Palestine.

References:
Palestinian Central Bureau of Statistics. Energy Use Statistics in the Household Sector. Ramallah, Palestine, 2017, pp. 14–21.
Ministry of Energy. Annual Energy Statistics. Ramallah, Palestine, 2017, pp. 15–19.
Palestinian Central Bureau of Statistics. Emissions to the Air. Ramallah, Palestine, 2017, pp. 114–123.
Wars, Saqr. Geography of Palestine: A Study in the Diversity of Place and Human Genius. Ramallah, Palestine, 2014, pp. 81–85.
Khadouri, Walid. "East Mediterranean Gas: Reality and Expectations." Journal of Palestinian Studies, Spring 2011, pp. 74–83.
Salama, Abdul Ghani. Alternative Energy Uses in Palestine. Echo of Silence, 2014, pp. 59–64. http://abedelghani.blogspot.com/2011/09/blog post_819.html
Palestinian Energy and Natural Resources Authority. Palestine Country Paper: Energy Sector. Presented at the 10th Arab Energy Conference, Abu Dhabi, United Arab Emirates, 21–23 December 2014, pp. 17–24.
Economic Policy Research Institute (MAS). Round table session: Renewable Energy in the Palestinian Territory: Opportunities and Challenges. Ramallah, 2016, p. 79.
Abu-Hamed, Tarek; Flamm, Hana; Isma'il, Lina. Assessing Renewable Energy Potential in Palestine. 2012, pp. 184–189. https://ases.conferenceservices.net/resources/252/2859/pdf/SOLAR2012_0027_full%20paper.pdf. Accessed 10/12/2018.
Alatawneh, Bader; Germana, Maria; Corrao, Rossella. Zero Energy House in Palestine: Identification of the Future Challenges. In: Palestine Engineering Association, Proceedings of the Fifth International Energy Conference, Al-Bireh, Palestine, pp. 47–50.
Fusun, Tatlidil; Zeki, Bayramoglu; Duygu, Akturk. Animal Manure as One of the Main Biogas Production Resources: Case of Turkey. Journal of Animal and Veterinary Advances, 8: 2473–2476, 2009, p.
117. http://medwelljournals.com/abstract/?doi=javaa.2009.2473.2476. Accessed 11/12/2018.
Ibrik, Imad. Energy Profile and the Potential of Renewable Energy Sources in Palestine. In: Renewable Energy in the Middle East: Enhancing Security through Regional Cooperation, ed. Mason, M.; Mor, A., pp. 71–89. Dordrecht, Netherlands: Springer, 2009.
Imraish, Ashraf; Abusafa, Abdelrahim. Potential of Biomass as an Alternative Fuel in Palestine: Amounts and Methods of Conversion. In: Palestine Engineering Association, Proceedings of the Fifth International Energy Conference, Al-Bireh, Palestine, 2015, pp. 42–47, 58–61, 209–212.
Kurdi, Majdi; Kurd, Khalil. Use of Solid Waste in Production of Electricity in the Area of the Palestinian National Authority (Sakhnin Model). 2015, pp. 17–24.
Yaseen, T. Q. Renewable Energy Applications in Palestine. 2014, pp. 33–34. http://scholar.najah.edu/sites/default/files/conferencepaper/renewable-energy-applications-palestine.pdf. Accessed 18/12/2018.
Rohan (French: Rohan) is a commune in France, in the Brittany region, Morbihan department. Population: — inhabitants (2011). The commune lies about 390 km west of Paris, 80 km west of Rennes, and 50 km north of Vannes. Demographics: population distribution by age and sex (2006). Economy: in 2010 the commune had 731 taxable households, home to 1,723.0 people; the median income was — euros per consumption unit. Image gallery. External links: official website of Rohan; Rohan on the website of the French National Geographic Institute; [location of the commune of Rohan on the map of France with neighbouring communes]. See also: list of communes of the Morbihan department. Notes. Communes of the Morbihan department.
A turnkey Internet business is a business that can be implemented or operated without additional work required of the customer: you simply "turn the key". A company sold as a turnkey business, also known as a business in a box, includes everything a buyer would need to get the business up and running quickly, without much work on the buyer's part. If you were to start your own Internet business from scratch, you would need to register a domain name, set up the site along with all the necessary pages, create a landing page, create other forms of advertising, maintain a support office, and so on; a great deal more work on your part would be involved to actually get the company up and running on a website. A turnkey home business, by contrast, allows the buyer to join a high-paying affiliate program that is ready to be promoted as their own e-commerce solution. The most common type of business sold as a turnkey business is a franchise. A franchise is a license to open a business that already has branches in other places. In the case of franchises, a turnkey business frequently includes a building built to the franchise's exact specifications and located in a particular region or territory. Anyone wishing to buy an Internet business in a box or a turnkey home business should be sure to exercise due diligence: the buyer should know exactly what the turnkey operation includes, because not all turnkey marketing solutions are created equal. There are online directories of turnkey businesses where buyers can bid on companies that are for sale, and there are sites that offer franchises for sale; there are many resources on the web for anyone looking to buy a turnkey business. So, as an entrepreneur, the buyer must exercise that diligence. A turnkey company, if implemented and integrated properly, can offer the buyer an instant cash machine: a way to start earning an income without the creative work, in a single package.
# How to plot a box chart with a normal distribution curve?

So far I am very obsessed with TikZ, though I am a newbie. I am wondering how to plot a box chart with a normal distribution curve, as in the picture below.

This screenshot is from the OriginPro gallery; I don't have the original data, so I assume data like this:

(0.12, 0.33, 0.34, 0.54, 0.68, 0.67, 0.78, 1.02, 1.11, 0.45)
(0.13, 0.34, 0.37, 0.33, 0.41, 0.45, 0.47, 0.43, 0.67, 0.87)
(0.31, 0.42, 0.35, 0.64, 0.72, 0.47, 0.67, 0.87, 0.58, 0.56)

Any help will be appreciated.

- Do you have the data for the dots? Please show us what you have tried. – user194703 Jan 14 '20 at 15:34
- Hi, Schrödinger. I am so sorry for that. This screenshot is from the OriginPro gallery; I don't have the original data, and I assumed a set of data. I want to try it, but I don't know where to get started. – Lerh Jan 15 '20 at 4:28

## Answer

I am not sure whether the points are a scatter plot, and it is easy to draw points, so I just plot the box chart with the normal distribution curve.

It is mainly based on "Rotated Normal Distribution", and some boxplot settings can be found in chapter 5.12.1 of the manual.

```latex
\documentclass{article}
\usepackage{pgfplots}
\usepackage{tikz}
\pgfplotsset{compat=1.8}
\usepgfplotslibrary{statistics}

\begin{document}
\pgfmathsetmacro{\offset}{0.05}
\begin{tikzpicture}[declare function={gauss(\x,\y,\z)=\offset+1/(\y*sqrt(2*pi))*exp(-((\x-\z)^2)/(2*\y^2));}]

\begin{axis}[samples=101,smooth,height=8cm,
boxplot/draw direction=y]
% ... (the rest of the answer's code is truncated in the source)
```
Cost is $750 for the summer season, which includes a uniform jersey, shorts, and a practice pinnie for new players. Payments will be broken into 2 payments, with the first payment due at team acceptance. Cost is $325 for the summer season, which includes a uniform jersey, shorts, and a practice pinnie for new players. Payments will be broken into 2 payments, with the first payment due at team acceptance. Practices will be held on Sunday evenings at Gainesville Middle School from April through July, with midweek practices added once the high schools have finished their season. *Tournaments are subject to change.
cask 'font-material-icons' do version '3.0.1' sha256 '722e3b09121b82a3746f3da2ecd3a2db8d7d24153b8433324315695a45f06a90' # github.com/google/material-design-icons was verified as official when first introduced to the cask url "https://github.com/google/material-design-icons/archive/#{version}.zip" appcast 'https://github.com/google/material-design-icons/releases.atom', checkpoint: '601c6c9de3ef09cd7127abcde40947cafc38607cd5a6e971194781fa582c9e1c' name 'Material Icons' homepage 'http://google.github.io/material-design-icons/' font "material-design-icons-#{version}/iconfont/MaterialIcons-Regular.ttf" end
Ariiel . RussianBoyForYou. BlueEyedXBlonde. Knopo4kka. EmiliSinNicoleAndJohnnyVanessaMelissaNikiSixxx420 .KaylaFerrxSophieRedmysteriousKEISHAGREENeyesPOISON .ReebeeccaaReebeeccaaSugarZNicoleAndJohnny .EduarjossNicol1AylyneMillerrXxXLilyMayer .BlueEyedXBlondeReebeeccaaLitlleBellaNollimitsCouple .SeducePrincessJaneHornyKHENZISLadyUmbra .KaylaFerrVanessaMelissaRumbbaMissSweetPinky .HotXSNicol1ValerieAngeloXXXMontanaXXX .MisKatherineKARLA11TroyePleasureSugarhot1 .HottyFirePetiteEmilyChatWithPrettyDahliaDoll .beautifulvalerieMereditheWifeSQTastyTreasureTastyTreasure . MilaMeowbeautifulvalerieMelissandraBlack11VSwhite11 .MilaMeowgatatravies4RumbbaMereditheWifeSQ .JulianFerrellJulianFerrellLitlleBellabeautifulvalerie .HotladyTanyBlack11VSwhite11AylyneMillerrXxXirisblanco .HeavenlyAngelTSSexySirenTS01BornToBeWILDHotladyTany .RumbbaKaylaFerrYoungJulyBabbyGia .VanessaBestHottyFireReebeeccaaSajani .TroyePleasurejulliettesSeducePrincessLitlleBella .MisKatherinelovellyamyyMakeMeRoarNicoleAndJohnny .NollimitsCoupleKeeleySweMissAliannaVanessaBest .XXXMontanaXXXLitlleBellaPa0LiiTaKloePunishGame .
# If space is "expanding" in itself, why then is there redshift?

The "kid's" way of understanding the expanding universe is that "space" is totally ordinary, and all the galaxies are expanding through it (like an explosion). Of course, that's wrong.

The usual better explanation is that "space itself is expanding." (Of course, on scales below clusters, gravity pulls "smaller" structures together.)

An even more up-to-date explanation is that the conceptual "metric of space" is expanding, which can perhaps be summarized as "the scale is changing".

So distant objects are redshifted. But why? Everything's just expanding; the very metric of spacetime is expanding.

Indeed, it would seem to me that you would only see redshift (or, if you prefer, time dilation of far-away things) strictly in the case of "everyday" motion within the metric of space; the very idea of the actual "metric of space changing" would seem to imply that those of us internal to that metric would have no clue that any such expansion was happening: the scale is just changing for everything.

What's the best way to understand this? Imagine simply a meter cube in a video game with a few things in it. There is no exterior; it is the universe. I expand the entire thing. (Note: the 'outside', shadows etc., added by the 3D presentation software to clarify the PNG, has utterly no meaning and does not exist in any way.) To all the beings inside, I believe absolutely nothing has changed; there would be no redshift between the objects there. What's the deal?

Note too this somewhat similar (related?) question, which came up with the recent 2016 gravitational wave discovery: "How is it that distortions in space can be measured as distances?"

- I believe you have both: a certain amount of expanding space, and objects moving relative to that space. Redshift restored. – Floris Dec 12 '15 at 2:06
- @JoeBlow: "the scale is just changing for everything" is not true: the fine structure constant seems to be a real constant (Planck satellite & WMAP); then, at least, there is no evidence that atoms expand. – user46925 Dec 12 '15 at 2:45
- Hi @Floris, hmm, I'm not sure I follow you. Of course there is ordinary everyday Doppler or gravitational redshift, and there is cosmological redshift (caused by the "metric expansion of the universe"); hence the many questions on how to tell the two apart. My question is of course only about cosmological redshift: why should THE WHOLE DAMNED METRIC expanding be detectable at all within that system? – Fattie Dec 12 '15 at 3:51
- @JoeBlow: Anna V answered this question. Expansion theory needs fewer changes. There is a significant correlation between the redshift as a distance and characteristics of observed objects that reflect their ages. – user46925 Dec 12 '15 at 12:38
- Hi igael, she DID answer my question, exactly by stating that "atoms do not take part in the metric expansion." Waves get metric-expanded; if atoms did take part in the metric expansion, we would see nothing, no change. I think. Indeed, just as you say, "atoms do not expand". – Fattie Dec 12 '15 at 16:01

## Answer 1

What are the observational/experimental facts?

1) Atoms have definite spectra, with a fixed pattern, a fingerprint of the atom.
2) The further away (measured by luminosity) the galaxies all around ours, the more the fingerprint pattern is shifted towards the red part of the spectrum.
3) This happens uniformly all around.

The model that fits these facts is general relativity, which predicted the behavior.

In the hierarchy of forces, the gravitational force is the weakest. This assures that atoms, and matter in general up to the size of galaxies, keep their structure (the raisin-bread analogy). Gravity is strong enough to keep even clusters of galaxies unaffected, and given some assumptions on the energy density and the solution of the general relativity equations, gravity can fight the expansion and lead to the big crunch.

Photons are elementary particles that have to obey energy and momentum conservation locally. The expansion of the universe changes their momentum, and thus the atomic spectra arrive shifted towards the infrared.

> the very idea of the actual "metric of space changing!" would seem to be that, those of us internal to that metric of space would have no clue that any such expansion is happening: the scale is just changing for everything.

It is the fact that matter is bound by forces that are not affected by the expansion that allows us to measure the expansion. Otherwise you are correct: our atoms would also be expanding and we would see no shift in the atomic thumbprints.

- @BenitoCiaro: All our explanations are ad hoc; they are called mathematical models, and they are accepted not because of their descriptive power (that would just be mapping) but because of their predictive power. Theoretical models stand as long as new data validates them. – anna v Dec 12 '15 at 7:21
- Hi Anna V. First, thanks as always. Let me try to get to the heart of the matter. As you say, (1) matter is bound by forces that are not affected by the expansion, (2) that allows us to measure the expansion, (3) otherwise our atoms would also be expanding and we would see no shift. The confusion for me is that the only thing that happens to "expand" is structures bigger than clusters. With other types of motion (caused by gravity, explosions, etc.) we simply say "it happens to be moving", and that motion is what causes the redshift in that case. However ... – Fattie Dec 12 '15 at 15:11
- ... it appears that strictly in the case of structures bigger than clusters, instead of just saying "they are moving apart" (for some reason: an as yet unknown force, whatever), we assert instead that the very nature of the underlying coordinate system is scale-changing and that the entirety of "spacetime" is indeed undergoing metric expansion. So that's what I'm wondering here. – Fattie Dec 12 '15 at 15:16
- Well, in the raisin-dough example the raisins stay a fixed size, right? If the raisins also increased in size (precisely as in my visual example: notice the two raisins), then there is no redshift. Indeed, you have explained to me that atoms are NOT expanding, which I suppose is precisely the exact answer to my specific question here. – Fattie Dec 12 '15 at 15:37
- Note too: it is always pointed out that due to GRAVITY, although the spacetime metric is expanding, structures below clusters are NOT expanding. However, I now realize this has absolutely nothing to do with the question here ("how come redshift if everything's expanding?"). The reason for redshift is that strictly ATOMS are not expanding. Say gravity were weaker and clusters, galaxies, even planetary systems WERE expanding. That would make no difference; we'd still see redshift, because ATOMS are NOT expanding with the spacetime metric. I think. – Fattie Dec 12 '15 at 15:40

## Answer 2

It's pretty easy to explain if we take a classical view of an electromagnetic wave. As an EM wave from a distant star propagates towards us, the space it propagates through is expanding. Since the space is expanding, the peaks and troughs of the EM wave are getting farther apart from each other. That corresponds to an increase in wavelength and a decrease in frequency; a redshift arises.

Edit: I think you're imagining "the very metric of space is expanding" to mean that the definition of distance is changing†, but that's not the case. The "metric of space expanding" means the distances themselves are changing. As an electron and a proton are subjected to an expansion of the space between them, the electrostatic attraction between them (speaking in a roughly non-quantum framework) pulls them "back together" and maintains the size of the atom.

The "rubber sheet" analogy is a little perilous, but in this case I think it's apt. Two objects on an expanding rubber sheet will experience an increase in the distance between them, but not if they're connected by a spring; in that case, they will maintain their relative distance.

It's not a magic exemption that allows the atoms of our measuring apparatus to be unaffected by an expanding universe; they're unaffected because the electrostatic interactions and associated quantum mechanics that determine the behavior of subatomic particles are not changed by the expansion of space.

† That would just mean the universe is changing from one set of units to another, which would (as you point out) be physically and philosophically undetectable, and therefore meaningless.

- But the thing is, IF the measuring equipment and literally everything were expanding, as one would perhaps expect from an astounding pedagogical exposition like "the very metric of spacetime is expanding", there would be no change whatsoever. – Fattie Dec 12 '15 at 15:30
- I see your point, but I would argue that measuring equipment is made of particles held together by electrostatic and nuclear forces. The same can't be said for the peaks of an electromagnetic wave; there is no restoring force maintaining the scale of an EM wave. Does that resolve your objection? – Brionius Dec 12 '15 at 15:38
- Hmm, if literally THE METRIC OF SPACETIME were actually expanding, which is nothing more than saying "the scale is changing", then the "measure" of distance between the peaks (or any measure!) would just be "the same"; there would be no redshift. But indeed, this would appear to be all resolved: it seems that when we say "the very metric of spacetime is increasing" we actually mean "BUT NOT ATOMS". So we can sort of "magically" measure the otherwise meaningless change in "scale-distance" between those peaks, because atoms are conveniently not changing :O – Fattie Dec 12 '15 at 15:47
- BTW, if you're still there, this was a great answer, @Brionius! – Fattie Dec 9 '19 at 19:39

## Answer 3

http://arxiv.org/abs/0707.0380v1 addresses the exact issue of this question, and indeed whether the whole pedagogical idea of "space expanding" is crap. For example, section 2.6.2 is a question identical to the OP here:

> 2.6.2 Is everything expanding?
> An extension of the argument against global expansion given in section 2.2 is that it should be undetectable, since everything will simply expand with it.

They essentially go on to say that because atoms don't expand we can measure redshift, which does seem to be the asymptotic answer here.

- "It should be undetectable": well, that depends on whether constants stay constant or not (c, fine structure, etc., especially all those that have lengths involved somewhere in their definition). – Fabrice NEYRET Dec 12 '15 at 22:22
- If certain constants (or anything) change over time, you would be able to see that change by looking at the past, as you suggest. In any event, the question at hand is why the metric expansion is detectable, since detectors etc. would themselves expand. Answer: atoms don't expand under the metric expansion. – Fattie Dec 12 '15 at 23:08
- My point about constants was just the opposite situation: if the metric scaled but not the constants, you would have the impression that the constants change (e.g. c decreases), which can be detected, as you point out. – Fabrice NEYRET Dec 13 '15 at 2:20

## Answer 4

> The "kid's" way of understanding the expanding universe is that "space" is totally "ordinary", and all the galaxies are expanding through it (like an explosion). Of course, that's wrong.

It's not wrong. There is no difference in general relativity between "expansion of space" and simple relative motion. They are the same phenomenon described with respect to different coordinates.

Here's an analogy. On a planet-sized ball of dirt (the earth without oceans, mountains, etc.), carve a bunch of straight (great-circle) footpaths from one pole to the other, all of a fixed width (say 1 meter). As you go away from the poles, the footpaths get farther away from each other, reaching a maximum distance at the equator, then reconverging at the other pole.

Now consider this situation from the perspective of polar coordinates (latitude and longitude). Each path is at a constant longitude. As the latitude changes, the paths don't move apart or together; they remain at fixed longitudinal separations, while the scale factor relating the longitude to physical distances changes.

Is the separation of paths now unobservable, because the metric itself is being rescaled? No. You don't change physical reality by choosing new coordinates for it. In polar coordinates, the coordinate width of the path decreases as the scale factor increases, meaning that the physical distance between the paths, measured using the physical path width as a meterstick, increases just as before. The decrease in coordinate width is not the result of a physical force acting against an expansion force. There is no expansion force. There are just paths of constant width that don't know or care about the properties of the coordinate system that you chose.

This is a very close analogy. FLRW cosmologies have an approximate symmetry similar to the symmetry of the earth around its axis of rotation, and FLRW coordinates are similar to polar coordinates. The cosmological time is the latitude, and the spatial position is the longitude.

The laws of physics are local. If you look at any small portion of the earth (away from the equator), the footpaths on it are diverging. That local divergence (relative motion) is all that the laws of physics actually "see". We humans recognize the overall shape, and the great circles, and choose global coordinates that respect the global symmetry. That's our choice. The universe doesn't care.

- I would have to think about this for a long time. Not internet time, a long actual time. – Fattie Aug 30 '20 at 0:33
- Unfortunately I still feel all answers here are, in a word, hopeless. – Fattie Aug 30 '20 at 0:34

## Answer 5

1: It is false to say "everything is expanding": space is expanding, but objects linked together by forces (like gravity) are kept at the same distance. So it's wrong to confuse expansion with a simple change of scale.

2: When a wave travels at some speed, front 2 lags front 1 and needs some time to cover the distance to the former position of front 1. But during this time expansion has enlarged the gap, so the distance between the fronts (which is the wavelength) increases: this is the redshift.

- Thanks, Fabrice. Actually, it seems to me that gravity is irrelevant (gravity keeps objects smaller than clusters together). The only key thing is whether atoms are expanding. If atoms were expanding (as in my visual example), our measuring equipment and the photons would all really be expanding, and there would be no redshift. The key would seem to be what Anna says: "Otherwise you are correct, our atoms would also be expanding and we would see no shift in the atomic thumbprints." – Fattie Dec 12 '15 at 15:27
- I said any forces. Atoms are bound by various forces, so they have no reason to expand. – Fabrice NEYRET Dec 12 '15 at 22:20

## Answer 6

Note that in your example there would actually be redshift. The two spheres are still proportionally equivalent distances apart, which means that they have moved away from each other in terms of a fixed, non-shifted frame.

- Your example claimed that everything expanded (in your diagram) remains the same. By your frame of reference (your diagram), the distance between the two spheres has increased even though they don't appear different due to scaling. A radio signal sent from one sphere to the other would take longer (that is the meaning of "expansion") in the expanded universe than it did before the expansion, even if everything appears to look the same. This is not a matter of scaling; it is a matter of the distance having changed, even if it looks the same. – M Willey Dec 14 '15 at 6:34
Biblioteka Imeni Lenina (Russian: Библиотека имени Ленина, Lenin Library) is a station on the Sokolnitsjeskaja line of the Moscow Metro. The station is named after the former Lenin Library (now the Russian State Library).

History

The station opened on 15 May 1935 as part of the first stage of the metro. This stage comprised thirteen stations, ten of them along the main line between Sokolniki and Tsentralnyi Park Koeltoery i Otdykha imeni Gorkovo and three along the branch from Ochotny Rjad to Smolenskaja. The station lies under Mokhovaya Street, parallel to the east facade of the library. It was given a concourse at each end of the platform and an above-ground entrance building. Although the stations were built at the same time, a passage to Oelitsa Komintern station was only added in 1937, making it one of the first two interchange stations of the Moscow Metro. On 13 March 1938 the branching at Ochotny Rjad was abolished, so that from then on there was an interchange between the blue and red lines. A renovation followed in 1946, and on 5 April 1953 the passage was closed and replaced by an escalator to the deeper-lying Arbatskaja. In 1958 the passage to the station, by then renamed Kalininskaja, was restored when the original branch was reopened as an independent line. At the end of February 1965 an extra exit via a bridge roughly in the middle of the platform was opened, with a capacity of 24,000 passengers per hour. Through it, passengers can reach both the exit and the stations Arbatskaja and Kalininskaja. In April a new ticket office followed, along with a pedestrian tunnel with several exits around the adjacent intersection and one in the Alexander Garden (Aleksandrovski Sad) near the ticket office at the entrance to the Kremlin. The entrance building at the intersection was then demolished. In 1975 construction began on the Kalininski radius in the east of the city. The plan at the time included a connection from Tretjakovskaja to the stations around Biblioteka Imeni Lenina, so that by linking to line 3 or 4 an east-west line would be created; this has not been carried out to date. In 1984 the southern entrance building was enlarged in connection with the construction of the Serpoechovsko-Timirjazevskaja line, and four escalators were installed between the enlarged concourse and the new station Borovitskaja. In 1991 it was proposed to rename the station Mochovyjoe.

Design and finish

The station is the first single-vault station of the Moscow Metro. To avoid disrupting road traffic it was built entirely underground. A single vault was chosen in view of the soil conditions and the available space. The vault, covered on top with bitumen paper against seeping groundwater, was made of cement reinforced with an iron framework. The entire excavation is only 19.8 meters wide and 11.7 meters high. The platform is 160 meters long, and the space between the vault and the street surface varies between only 2 and 3.5 meters. The platforms are connected to their respective concourses by a staircase between the tunnels at the end of the platform. The supporting arches of the vault sit behind the lighting and are clad in marble on the underside. The tunnel walls between these arches are tiled with yellow and black tiles. Besides the lamps on the supporting arches, several spherical lamps hang from the white coffered ceiling. The platform floor was in the past covered with both parquet and asphalt; nowadays the floors consist of grey granite. In the early 1970s the northern concourse was decorated with a mosaic of Lenin by the artist G.I. Oprysjko. Fossils can be seen in the marble used on the walls of the concourse. In his book "Entertaining Mineralogy" A.E. Fersman wrote about the station: "...and on the stairs to the platform we see, in the red marble of the Crimea, fossils of snails and shells that lived in the southern seas which for many millions of years lay on the site of the Crimea and the Caucasus."

Interchange

Together with the stations Arbatskaja on the Arbatsko-Pokrovskaja line, Borovitskaja on the Serpoechovsko-Timirjazevskaja line and Aleksandrovski Sad on the Filjovskaja line, the station forms the largest interchange complex of the Moscow Metro. The transfer between Aleksandrovski Sad and Arbatskaja runs through the northern concourse. This concourse also gives access to the platform of Biblioteka Imeni Lenina, while passengers from the Sokolnitsjeskaja line can reach it via the exit in the middle of the platform. The southern concourse and the adjacent entrance building next to the library also handle transfers to and from Borovitskaja. Formally, neither concourse is part of the Biblioteka Imeni Lenina station. In 2002 about 14,850 passengers per day passed through the entrances of the station at each entrance building. The number of transfer passengers is much larger: 190,000 for Borovitskaja, 44,000 for Arbatskaja and 84,000 for Aleksandrovski Sad. The station is open to passengers between 5:30 and 1:00; the first southbound train departs at 5:42, the first northbound at 5:56. During rehearsals for, and during, the Victory Parade itself, the station can only be exited with special passes from the Russian Ministry of Defence.

Gallery

Metro station in Moscow
Capable of carrying a single pallet (max weight 400 kg, max height 1 m). Great for quick delivery of documents, computers, pharmaceuticals, etc. Volkswagen Caddy, Ford Connect or similar.

Can carry two pallets (max total weight 800 kg, max height 1.4 m). VW Transporter, Vauxhall Vivaro or similar.

Able to transport heavy and long consignments, these vans have the highest weight capacity, making them ideal for bulk printed material, IBC liquid containers, or other heavy palletised loads. Ford Transit LWB, Mercedes Sprinter LWB or similar. Ford Transit XLWB, Mercedes Sprinter XLWB or similar.

The large cargo door of the Luton van allows very large, bulky items to be loaded. Great for office furniture, fabricated metalwork, etc. Available with or without tail lift.

From city cyclists to international haulage, we can arrange anything to suit your transportation needs.
package zip

import (
	"archive/zip"
	"bytes"
	"fmt"
	"io/ioutil"

	"github.com/dvyukov/go-fuzz-corpus/fuzz"
)

func Fuzz(data []byte) int {
	// Read in the archive.
	z, err := zip.NewReader(bytes.NewReader(data), int64(len(data)))
	if err != nil {
		if z != nil {
			panic("non nil z")
		}
		return 0
	}
	var headers []*zip.FileHeader
	var contents [][]byte
	for _, f := range z.File {
		r, err := f.Open()
		if err != nil {
			continue
		}
		if f.UncompressedSize64 < 1e6 {
			c, err := ioutil.ReadAll(r)
			if err != nil {
				continue
			}
			if uint64(len(c)) != f.UncompressedSize64 {
				println("bad size:", len(c), f.UncompressedSize64)
				panic("bad size")
			}
			hdr := f.FileHeader
			headers = append(headers, &hdr)
			contents = append(contents, c)
		}
		r.Close()
	}
	if len(headers) == 0 {
		return 1
	}
	// Write a new archive with the same files.
	buf := new(bytes.Buffer)
	w := zip.NewWriter(buf)
	for i, h := range headers {
		w1, err := w.CreateHeader(h)
		if err != nil {
			panic(err)
		}
		n, err := w1.Write(contents[i])
		if err != nil {
			panic(err)
		}
		if n != len(contents[i]) {
			panic("short write")
		}
	}
	err = w.Close()
	if err != nil {
		panic(err)
	}
	// Read in the new archive.
	z1, err := zip.NewReader(bytes.NewReader(buf.Bytes()), int64(len(buf.Bytes())))
	if err != nil {
		panic(err)
	}
	var headers1 []*zip.FileHeader
	var contents1 [][]byte
	for _, f := range z1.File {
		r, err := f.Open()
		if err != nil {
			panic(err)
		}
		if f.UncompressedSize64 >= 1e6 {
			panic("corrupted length")
		}
		c, err := ioutil.ReadAll(r)
		if err != nil {
			panic(err)
		}
		if uint64(len(c)) != f.UncompressedSize64 {
			println("bad size:", len(c), f.UncompressedSize64)
			panic("bad size")
		}
		hdr := f.FileHeader
		headers1 = append(headers1, &hdr)
		contents1 = append(contents1, c)
		r.Close()
	}
	// Compare that we have the same data after compress/decompress.
	for i, h := range headers {
		// These fields are set by archive/zip package.
		h.Flags |= 0x8
		h.CreatorVersion = headers1[i].CreatorVersion
		h.ReaderVersion = headers1[i].ReaderVersion
		// These are not set correctly initially.
		//h.CompressedSize = headers1[i].CompressedSize
		//h.CompressedSize64 = headers1[i].CompressedSize64
		if !fuzz.DeepEqual(h, headers1[i]) {
			fmt.Printf("hdr0: %#v\n", h)
			fmt.Printf("hdr1: %#v\n", headers1[i])
			panic("corrupted header")
		}
		if !fuzz.DeepEqual(contents[i], contents1[i]) {
			panic("corrupted data")
		}
	}
	return 1
}
\section{Introduction} For more than a century, Principal-Component Analysis (PCA) has been a core operation in data/signal processing \cite{Pearson,SVD_ref}. Conceptually, PCA can be viewed as the pursuit of a coordinate system (defined by the principal components) that reveals underlying linear trends of a data matrix. In its conventional form, the new coordinate system is calculated such that it preserves the energy content of the data matrix to the maximum possible extent. Conventionally, the energy content of a data point is expressed by means of its L2-norm, i.e., its Euclidean distance from the center of the coordinate system. Thus, for any complex data matrix $\m X = [\m x_1, \m x_2, \ldots, \m x_N] \in\mathbb C^{D\times N}$, PCA searches for the size-$K$ ($1\leq K<\text{rank}(\m X)$) orthonormal basis (or, $K$-dimensional coordinate system) that solves \begin{align}\label{L2Prob} \opt Q_{L2}=\argmax{\m Q\in\mathbb C^{D\times K};~\m Q^H\m Q=\m I_K}{\|\m Q^H\m X \|_2} \end{align} where, for any $\mathbf A \in \mathbb C^{m \times n}$, its L2-norm\footnote{The L2-norm of a matrix is also known as its Frobenius or Euclidean norm \cite{Golub, Meyer}.} is defined as $\| \mathbf A \|_2 = \sqrt{\sum_{i=1}^m \sum_{j=1}^n |A_{i,j}|^2}$, $| \cdot |$ denotes the magnitude of a complex number (coinciding with the absolute value of a real number), and $\m I_K$ is the size-$K$ identity matrix. Due to its definition in \eqref{L2Prob}, PCA is also commonly referred to as L2-norm PCA, or simply L2-PCA. A practical reason for the tremendous popularity of L2-PCA is the computational simplicity by which the solution to \eqref{L2Prob} can be obtained. Specifically, a solution matrix $\opt Q_{L2}$ can be formed by the $K$ dominant singular vectors of $\m X$ and is, thus, obtainable by means of Singular-Value Decomposition (SVD) of $\m X$, with quadratic complexity in the number of data samples $N$ \cite{Golub}.
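For concreteness, the SVD route to \eqref{L2Prob} can be sketched numerically as follows. This is a minimal illustration assuming numpy; the variable names are ours, not from the paper.

```python
# Minimal sketch of complex L2-PCA via SVD: the K dominant left singular
# vectors of X maximize ||Q^H X||_2 over all orthonormal Q (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
D, N, K = 5, 40, 2
X = rng.standard_normal((D, N)) + 1j * rng.standard_normal((D, N))

U, s, Vh = np.linalg.svd(X, full_matrices=False)
Q_l2 = U[:, :K]  # the K L2-norm principal components

# The attained metric equals the root-sum of the K largest squared
# singular values of X.
energy = np.linalg.norm(Q_l2.conj().T @ X)
```

Any other orthonormal basis attains at most this energy, which is the content of \eqref{L2Prob}.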
Moreover, L2-PCA is a scalable operation, in the sense that the $(k+1)$-th PC can be calculated directly from the first $k$ PCs, which are always preserved. In addition, there are several algorithms that can efficiently update the solution to \eqref{L2Prob} as new data points become available \cite{streamPCA}. Finally, by the Projection Theorem \cite{Golub} it is easy to show that the maximum-L2-norm-projection problem in \eqref{L2Prob} is equivalent to the familiar minimum-L2-norm-error problem \begin{align}\label{L2minErr} \underset{\substack{\m Q\in\mathbb C^{D\times K};~ \m Q^H\m Q=\m I_K \\ \m Z \in\mathbb C^{K\times N}}}{\min.}{\|\m X-\m Q\m Z\|_2}. \end{align} On the downside, conventional L2-PCA, seeking to maximize the L2-norm of the projected data points in \eqref{L2Prob}, is well known to be overly sensitive to outlying measurements in the processed matrix. Such outliers may leak into the data matrix due to a number of different causes, such as sensing/hardware malfunctions, external interference, and errors in data storage or transcription. Regardless of their cause, outliers are described as unexpected, erroneous values that lie far from the nominal data subspace and severely affect a number of data-analysis methods, including L2-PCA. Since the original conception of L2-PCA \cite{Pearson}, engineers and mathematicians have been trying to robustify it against outliers. Popular robust versions of PCA are weighted PCA (WPCA) \cite{WPCA,rank1WPCA}, influence-function PCA \cite{pFunctions}, and L1-norm PCA (or, simply, L1-PCA) \cite{L1PCA_0, L1PCA_m1,L1PCA_1, L1PCA_1v2, L1PCA_1v3, L1PCA_2,L1PCA_4,L1PCA_6,L1PCA_9,L1PCA_10,L1PCA_11,L1PCA_12,L1PCA_14,L1PCA_18,L1PCA_19,L1PCA_20,L1PCA_22,L1PCA_25,L1PCA_26,L1PCA_27,L1PCA_28, L1PCA_100, L1PCA_101, L1PCA_102, L1PCA_103, L1PCA_104, L1PCA_105, L1PCA_106, L1PCA_107, L1PCA_108}. From an algebraic viewpoint, of all robust versions of PCA, L1-PCA is arguably the most straightforward modification.
Mathematically, L1-PCA of real-valued data is formulated as \begin{align} \label{L1ProbR} \underset{\m Q\in\mathbb R^{D\times K};~\m Q^T\m Q=\m I_K}{\max.}{ \|\m Q^T\m X \|_1} \end{align} where $\| \cdot \|_1$ is the L1-norm operator, such that for any $\mathbf A \in \mathbb C^{m \times n}$, $\| \mathbf A \|_1 = \sum_{i=1}^m \sum_{j=1}^n |A_{i,j}|$. That is, L1-PCA derives from L2-PCA by substituting the L2-norm with the more robust L1-norm. By not placing squared emphasis on the magnitude of each point (as L2-PCA does), L1-PCA is far more resistant to outlying, peripheral points. Importantly, thorough recent studies have shown that, when the processed data are not outlier-corrupted, the solutions of L1-PCA and L2-PCA describe an almost identical subspace. Due to its outlier resistance, L1-PCA of real-valued data matrices has attracted increased documented research interest in the past decade. Interestingly, it was shown that real-valued L1-PCA can be converted into a combinatorial problem over antipodal binary variables ($\pm 1$), solvable with intrinsic complexity polynomial in the data record size, $\mathcal{O}(N^{\text{rank}(\m X)K-K+1})$ \cite{L1PCA_m1}. Despite its increasing popularity for outlier-resistant processing of real-valued data, L1-PCA for complex-data processing remains to date unexplored. Similar to \eqref{L1ProbR}, complex L1-PCA is formulated as \begin{align} \label{L1Prob} \opt Q_{L1}=\argmax{\m Q\in\mathbb C^{D\times K};~\m Q^H\m Q=\m I_K}{ \|\m Q^H\m X\|_1}. \end{align} Interestingly, in contrast to real-valued L1-PCA, complex L1-PCA in \eqref{L1Prob} has no obvious connection to a combinatorial problem. Moreover, no finite-step algorithm (exponential or otherwise) has ever been reported for optimally solving \eqref{L1Prob}.
Yet, as a robust analogue of complex L2-PCA, complex L1-PCA in \eqref{L1Prob} can be traced to many important applications that involve complex-valued measurements, e.g., in the fields of communications, radar processing, or general signal processing tailored to complex-domain transformations (such as Fourier) of real-valued data. Our contributions in this paper are summarized as follows. \begin{enumerate} \item We prove that \eqref{L1Prob} can be cast as an optimization problem over the set of unimodular matrices.\footnote{In this work a matrix is called unimodular if every entry takes values on the unitary complex circle. A unimodular matrix under our definition is not to be confused with the integer matrices with $\{-1,0,+1\}$-ternary minors.} \item We provide the first two fast algorithms to solve \eqref{L1Prob} suboptimally. \item We offer numerical studies that evaluate the performance of our complex L1-PCA algorithms. \end{enumerate} Importantly, our numerical studies illustrate that the proposed complex L1-PCA exhibits sturdy resistance against outliers, while it performs similarly to L2-PCA when the processed data are outlier-free. The rest of the paper is organized as follows. Section II offers a brief overview of technical preliminaries and notation. Section III is devoted to the presentation of our theoretical findings and the derivation of the proposed algorithms. Section IV presents our numerical studies. Finally, some concluding remarks are drawn in Section V. \section{Preliminaries and Notation} Our subsequent algebraic developments involve extensively the \emph{sign} of a complex number and the \emph{nuclear norm} of a matrix. In this section, we provide the reader with the definitions of these two measures, as well as useful pertinent properties. \subsection{The Sign of a Complex Number} Every complex number $z$ can be written as the product of its magnitude and a complex exponential.
The complex exponential part is what we call the ``sign'' of the complex number, denoted by $\s{z}$. That is, $\forall z\in\mathbb C: ~z = |z| \s{z}$, where $\s{z} \triangleq e^{j \angle z}$. Clearly, the sign of any complex number belongs to the unitary complex circle \begin{align} U\triangleq \{z\in\mathbb C:~ |z|=1\}. \end{align} Fig. \ref{Fig_sgn} shows that the sign of any non-zero complex number is unique and satisfies the property presented in Lemma \ref{signdist}. \begin{mylemma} For every $z \in \mathbb C$ with $|z|>0$, $\s z$ is the point on $U$ that lies nearest to $z$ in the magnitude sense. That is, $\s z = \argmin{a\in U}{|a-z|}$. \label{signdist} \end{mylemma} Through elementary algebraic manipulations, Lemma \ref{signdist} implies that \begin{align}\label{sgn_prop1} \s a = \argmax{b\in U}{\Re\{b^*a\}}. \end{align} In addition, the optimal value of \eqref{sgn_prop1} is the magnitude of $a$. The above definition and properties of the sign can be generalized to vectors and matrices. Let us define the sign of a matrix $\m A\in\mathbb C^{m\times n}$, for any $m$ and $n$, as the matrix that contains the signs of the individual entries of $\mathbf A$. That is, we define \begin{align} \s{\m A}\triangleq \left[ \begin{array}{ccc} \s{a_{1,1}} & \dots & \s{a_{1,n}} \\ \vdots & \ddots & \vdots \\ \s{a_{m,1}} & \dots & \s{a_{m,n}} \end{array} \right]. \end{align} In accordance with \eqref{sgn_prop1}, the conjugate-transposed sign of $\m A$ solves the maximization problem \begin{align} \label{sgn_prop1_mat} \s{\m A}^H = \argmax{\m B\in U^{n\times m}}{ \Re\{\tr{\m B \m A} \}}. \end{align} Moreover, the optimal objective value of \eqref{sgn_prop1_mat} is the L1-norm of $\m A$; that is, \begin{align} \label{sgn_prop1_mat2} \| \m A\|_1 = \underset{\m B\in U^{ n \times m}}{\max}~{ \Re\{\tr{\m B \m A} \}} = \tr{\text{sgn}(\m A)^H\m A} .
\end{align} Finally, by the above definitions it holds that the sign of the product of two complex numbers equals the product of the individual signs. In addition, it is clear that the sign of a number is $+1$ if-and-only-if the number is real and positive and $-1$ if-and-only-if the number is real and negative. \subsection{The Nuclear Norm} Consider matrix $\m A\in\mathbb C^{m\times n}$, with $m > n$ without loss of generality. Then, let $\m A$ admit SVD $\mathbf A \overset{svd}{=} \m U \text{Diag}(\boldsymbol \sigma) \m V^H$, where $\m U^H \m U = \m V^H \m V = \m I_n$ and $\boldsymbol \sigma \in \mathbb R_{\geq 0}^{n}$ contains the singular values of $\m A$ in descending order (i.e., $\sigma_{1} \geq \sigma_{2} \geq \ldots \geq \sigma_{n}$).\footnote{Consider $\m a \in \mathbb C^{m}$ and $\m A=\text{Diag}(\m a)$; it holds $A_{i,i}=a_i$ for every $i \in \{1, 2, \ldots, m\}$ and $A_{i,j}=0$ for every $i \neq j$.} The nuclear norm of $\mathbf A$ is then defined as the summation of its singular values, \begin{align} \|\m A\|_* \triangleq \sum_{i=1}^{n} \sigma_i = \| \boldsymbol \sigma \|_1. \end{align} Clearly, it holds that $\| \m A\|_* = \| \m A^H \|_*$. Being a fundamental quantity in linear algebra, the nuclear norm can be expressed in several different ways. For example, in connection to the \emph{Orthogonal Procrustes Theorem} \cite{procrustes,Golub}, it holds that \begin{align} \label{proc_label} \| \m A \|_*= \underset{\m Q\in\mathbb C^{m\times n};~\m Q^H\m Q=\m I_n}{\max}~{\Re\{\tr{\m Q^H\m A}\}}. \end{align} Moreover, denoting by $\text{unt}(\m A)$ the $m \times n$ unitary matrix that maximizes \eqref{proc_label} and assuming that $\m A$ has full column rank (i.e., $\text{rank}(\m A)=n$), it holds that \begin{align} \m A = \text{unt}(\m A)(\m A^H \m A)^\frac12, \end{align} which is known as the \emph{polar decomposition} of $\m A$ \cite{polar,Golub}.
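These identities are easy to sanity-check numerically. The snippet below is a hedged sketch assuming numpy; `sgn` and `unt` are our own helper names mirroring the operators of the text, and the identities verified are \eqref{sgn_prop1_mat2} and the polar decomposition.

```python
# Numerical check of two identities from this section (illustrative only):
#   ||A||_1 = Re tr(sgn(A)^H A)   and   A = unt(A) (A^H A)^{1/2}.
import numpy as np

def sgn(A):
    # Entrywise complex sign e^{j*angle(a)}; assumes no zero entries.
    return A / np.abs(A)

def unt(A):
    # Unitary Procrustes factor via thin SVD: unt(A) = U V^H.
    U, _, Vh = np.linalg.svd(A, full_matrices=False)
    return U @ Vh

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3)) + 1j * rng.standard_normal((6, 3))

l1_direct = np.abs(A).sum()
l1_via_sign = np.real(np.trace(sgn(A).conj().T @ A))

# (A^H A)^{1/2} through the eigendecomposition of the Gram matrix
# (A has full column rank with probability one here).
w, V = np.linalg.eigh(A.conj().T @ A)
gram_sqrt = (V * np.sqrt(w)) @ V.conj().T
A_polar = unt(A) @ gram_sqrt
```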
Finally, $\text{unt}(\m A)$ can be calculated via the SVD of $\m A$ as \begin{align} \unt{\m A} = \mathbf U \mathbf V^H. \label{untSVD} \end{align} Based on the above preliminaries, in the following section we present our developments on complex L1-PCA. \section{Complex L1-PCA} \subsection{Problem Connection to Unimodular Optimization} In view of \eqref{sgn_prop1_mat} and \eqref{proc_label} we can rewrite the complex L1-PCA problem in \eqref{L1Prob} as \begin{align} \underset{\m Q\in\mathbb C^{D\times K};~\m Q^H\m Q =\m I_K}{\max}~{\|\m Q^H \m X\|_1} & = \underset{{\m Q\in\mathbb C^{D\times K};~\m Q^H\m Q=\m I_K}}{\max}~{ \tr{\text{sgn}(\m Q^H \m X)^H\m Q^H\m X} } \label{nuc_norm_opt1} \\ & \overset{\eqref{sgn_prop1_mat}}= \underset{\substack{\m Q\in\mathbb C^{D\times K};~\m Q^H\m Q=\m I_K \\ \m B\in U^{N\times K}}}{\max}~{\Re\left\{ \tr{\m B \m Q^H\m X} \right\}} \label{nuc_norm_opt2} \\ & \overset{\eqref{proc_label} }= \underset{\m B\in U^{N\times K}}{\max}~{\|\m X\m B\|_*}. \label{nuc_norm_opt} \end{align} That is, complex L1-PCA is directly connected to a maximization problem over the set of $N \times K$ unimodular matrices. Interestingly, Markopoulos et al. \cite{L1PCA_m1,L1PCA_1} have proven a similar result for the real case. Specifically, \cite{L1PCA_m1} reformulated real-valued L1-PCA as a nuclear-norm maximization over the set of $N \times K$ $(\pm 1)$-valued matrices, $\{ \pm 1\}^{N \times K}$. Considering that $\{ \pm 1\}$ is in fact the intersection of $U$ with the axis of real numbers, we realize that the binary nuclear-norm maximization to which real-valued L1-PCA corresponds \cite{L1PCA_m1} constitutes a restriction of the unimodular nuclear-norm maximization in \eqref{nuc_norm_opt}. Due to the finite size of $\{ \pm 1\}^{N \times K}$, finite-step algorithms could be devised for the solution of real-valued L1-PCA. Regrettably, since $U^{N \times K}$ has uncountably many elements, this is not the case for complex L1-PCA.
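Each link of the chain \eqref{nuc_norm_opt1}-\eqref{nuc_norm_opt} can be probed numerically: for a fixed unimodular $\m B$, the maximization over $\m Q$ attains $\|\m X \m B\|_*$, while for a fixed unitary $\m Q$, the maximization over $\m B$ attains $\|\m Q^H \m X\|_1$. A minimal sketch follows, assuming numpy; `sgn` and `unt` are helper names mirroring the text's operators.

```python
# Check the two partial maximizations behind the unimodular reformulation:
#   max_Q Re tr(B Q^H X) = ||X B||_*    attained at Q = unt(X B),
#   max_B Re tr(B Q^H X) = ||Q^H X||_1  attained at B = sgn(X^H Q).
import numpy as np

def sgn(A):
    return A / np.abs(A)

def unt(A):
    U, _, Vh = np.linalg.svd(A, full_matrices=False)
    return U @ Vh

rng = np.random.default_rng(2)
D, N, K = 4, 12, 2
X = rng.standard_normal((D, N)) + 1j * rng.standard_normal((D, N))

# Fixed unimodular B: the Procrustes-optimal Q attains the nuclear norm.
B = sgn(rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K)))
Q = unt(X @ B)
attained_Q = np.real(np.trace(B @ Q.conj().T @ X))
nuclear = np.linalg.svd(X @ B, compute_uv=False).sum()

# Fixed unitary Q: the sign-optimal B attains the L1-norm.
B_best = sgn(X.conj().T @ Q)
attained_B = np.real(np.trace(B_best @ Q.conj().T @ X))
l1 = np.abs(Q.conj().T @ X).sum()
```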
Even though unimodular-nuclear-norm maximization in \eqref{nuc_norm_opt} cannot be solved exhaustively, there are still necessary optimality conditions that we can use to devise efficient algorithms for solving it at least locally. The following proposition introduces the first of these optimality conditions. \begin{myprop}\label{prop1} Let $(\opt{Q_{L1}}, \opt{B})$ be an optimal solution pair for \eqref{nuc_norm_opt2}. Then, it holds that \begin{align} \opt{Q_{L1}} = \unt{\m X\opt B} ~~\text{and}~~ \opt B=\s{\m X^H \opt{Q_{L1}} }. \label{opt_condition_K_general} \end{align} Moreover, $\opt{Q_{L1}}$ is a solution to \eqref{L1Prob} and $\opt B$ is a solution to \eqref{nuc_norm_opt}. \end{myprop} Proposition \ref{prop1} derives directly from \eqref{sgn_prop1_mat}, \eqref{proc_label}, and the fact that both \eqref{L1Prob} and \eqref{nuc_norm_opt} are equivalent to \eqref{nuc_norm_opt2}. Most importantly, this proposition establishes that, if $\opt B$ is a solution to \eqref{nuc_norm_opt}, then $\opt Q_{L1} = \unt{ \m X \opt B} $ is a solution to the L1-PCA in \eqref{L1Prob}. Thus, one can focus on solving \eqref{nuc_norm_opt} and then use its solution to derive the L1-PCs by means of simple SVD (see the definition of $\unt{\cdot}$ in \eqref{untSVD}). In addition, the two equations in \eqref{opt_condition_K_general} can be combined to form a new pair of necessary optimality conditions that concern the individual problems \eqref{L1Prob} and \eqref{nuc_norm_opt}. The new optimality conditions are presented in the following Corollary \ref{cor1}. \begin{mycol} \label{cor1} Let $\opt{Q_{L1}}$ be a solution to \eqref{L1Prob}; then, it holds that $\opt{Q_{L1}} = \unt{\m X\s{\m X^H \opt{Q_{L1}}}}$. Let $\opt B$ be a solution to \eqref{nuc_norm_opt}; then, it holds that $\opt B=\s{\m X^H \unt{\m X \opt B}}$. \end{mycol} \subsection{Complex L1-PCA when rank$(\m X) < D$} Consider $\m X \in \mathbb C^{D \times N}$ with $r=\text{rank}(\m X) < D$.
$\m X$ admits thin SVD\footnote{In ``thin SVD'' a matrix is written only in terms of its singular vectors that correspond to non-zero singular values.} $\m X \overset{svd}{=} \m U_x \m S_x \m V_x^H$, where $\m U_x^H \m U_x = \m V_x^H \m V_x = \m I_{r}$ and $\m S_x$ is the $r \times r$ diagonal matrix that contains the non-zero singular values of $\m X$. In accordance with Proposition \ref{prop1} above, to obtain the $K \leq r$ L1-PCs of $\m X$, $\opt Q_{L1}$, we can work in two steps: (i) obtain the solution $\opt B$ to \eqref{nuc_norm_opt} and (ii) conduct SVD on $\m X \opt B$ and return $\opt Q_{L1} = \unt{\m X \opt B}$. Let us focus for a moment on the first step. We observe that\footnote{ For any square matrix $\m Z \in \mathbb C^{n \times n}$, $\sqrt{\m Z}$ is defined such that $\m Z = \sqrt{\m Z} \sqrt{\m Z}$. Also, for any $\m A \in \mathbb C^{m \times n}$, it holds that $\| \m A\|_* = \text{Tr}(\sqrt{\m A^H \m A})$. } \begin{align} \| \m X \m B \|_* & = \| \m U_x \m S_x \m V_x^H \m B\|_* \label{short1}\\ & = \text{Tr} (\sqrt{ ( \m U_x \m S_x \m V_x^H \m B)^H\m U_x \m S_x \m V_x^H \m B }) \\ & = \text{Tr} (\sqrt{ \m B^H \m V_x \m S_x^H \m S_x \m V_x^H \m B }) \\ & = \| \m S_x \m V_x^H \m B\|_* = \| \m X_{short} \m B \|_* \label{short2} \end{align} where $\mathbf X_{short} \triangleq \m S_x \m V_x^H \in \mathbb C^{r \times N}$. Then, $\opt B$ maximizes both \eqref{short1} (by definition) and \eqref{short2} (by equivalence). Notice also that $\unt{\m X \opt B} = \m U_x \unt{\m X_{short} \opt B}$. By the above analysis, the following proposition holds true. \begin{myprop} Consider $\m X \in \mathbb C^{D \times N}$, with $r=\text{rank}(\m X) \leq \min \{D,N\}$, admitting thin SVD $\m X = \m U_x \m S_x \m V_x^H$ (i.e., $\m S_{x}$ is $r \times r$). Define $\m X_{short} = \m S_{x} \m V_x^H$.
Let $\opt Q_{L1, short} \in \mathbb C^{r \times K}$ be the $K < r$ L1-PCs of $\m X_{short}$, solution to $\underset{\m Q \in \mathbb C^{r \times K};~\m Q^H \m Q = \m I_{K}}{max.}~\|\m Q^H \m X_{short} \|_1$. Then, it holds that \begin{align} \opt Q_{L1} = \m U_x \opt Q_{L1, short} \end{align} is the solution to the L1-PCA in \eqref{L1Prob}. Moreover, $\| {\opt Q_{L1}}^H \m X\|_1 = \| \m X \opt B\|_* = \| \m X_{short} \opt B\|_* = \| {\opt Q_{L1, short}}^H \m X_{short} \|_1$. \label{prop3} \end{myprop} Proposition \ref{prop3} shows that the L1-PCs of a rank-$r$ $D \times N$ matrix can always be obtained through the L1-PCA of a rank-$r$ $r \times N$ matrix. Therefore, Proposition \ref{prop3} steers our algorithmic focus to problems where $\m X$ has full row-rank (i.e., $D=r=\text{rank}(\m X)$). \subsection{The Single-Component Case and L1-PCA Hardness} In its simplest non-trivial form, complex L1-PCA is the search for a single ($K=1$) component $\m q\in\mathbb C^{D\times 1}$ such that $\|\m q^H \m X\|_1$ is maximized. In accordance with our more generic developments for the multi-component ($K \geq 1$) case above, the pursuit of a single L1-PC can also be rewritten as a unimodular nuclear-norm maximization. That is, \begin{align} \underset{\m q \in C^{D}; \| \m q\|_2=1}{\max}~\| \m q^H \mathbf X \|_1 &= \underset{\m b \in U^{N \times 1}}{\max}~\| \mathbf X \m b\|_* \label{l1pca1} \\ & = \underset{\m b \in U^{N \times 1}}{\max}~\| \mathbf X \m b\|_2 = \sqrt{\underset{\m b \in U^{N \times 1}}{\max}~ \m b^H \mathbf X^H \mathbf X \m b} \label{maxQuad}. \end{align} The equality in \eqref{maxQuad} derives from the fact that any complex vector $\m a \in \mathbb C^{m}$ admits SVD $\m a \overset{svd}{=} \m u \sigma $, with $\m u=\mathbf a \| \m a\|_2^{-1}$ and $\sigma = \| \m a \|_2$ (trivially, the dominant right-hand singular vector is $1$).
By Proposition \ref{prop1}, for the L1-PC $\opt q_{L1}$ that solves \eqref{l1pca1} it holds \begin{align} \opt q_{L1} = \unt{\m X \opt b} = \m X \opt b \| \m X \opt b\|_2^{-1}, \label{b2q} \end{align} where $\opt b$ is the solution to \eqref{maxQuad}. Also, $\opt b = \s{\m X^H \opt q_{L1}}$. The unimodular quadratic maximization (UQM) problem in \eqref{maxQuad} is well studied in the literature, with several interesting applications, including among others the design of maximum-signal-to-interference-plus-noise-ratio (max-SINR) phased arrays and the design of unimodular codes \cite{Stoica}. For the real-data case, UQM in \eqref{maxQuad} takes the form of the well-known binary quadratic maximization \cite{Kary}, as proven in \cite{L1PCA_m1}. Certainly, the necessary optimality condition presented in Corollary \ref{cor1} for \eqref{nuc_norm_opt} also applies to \eqref{maxQuad}. Specifically, if $\opt b$ is a solution to \eqref{maxQuad}, then \begin{align} \opt b = \s{\m X^H \unt{\m X \opt b}} = \s{\m X^H \m X \opt b}. \label{cond} \end{align} However, in contrast to what has been conjectured for the real case \cite{L1PCA_10,L1PCA_11}, \eqref{cond} is not a sufficient condition for local optimality. The reason is that $\m b = \s{\m X^H \m X \m b}$ is also satisfied by some ``saddle'' points of $\| \m X \m b\|_2$. In this section we provide a stronger optimality condition than \eqref{cond}, which is necessary and sufficient for local optimality. For compactness in notation, we begin by defining \begin{align} \boldsymbol\omega(\m b) \triangleq\m b^*\odot (\m X^H \m X\m b) \in\mathbb C^{N\times 1}, \end{align} for any $\m b \in U^{N}$, where $(\cdot)^*$ performs complex conjugation and $\odot$ is the element-wise (Hadamard) product operator. Even though the entries of $\boldsymbol\omega(\m b)$ are complex, their summation $\sum_{n=1}^N \omega_n(\m b)$ is real and positive, equal to the quadratic form $\m b^H \m X^H \m X \m b$.
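As an illustration of these quantities, the fixed-point iteration $\m b \leftarrow \s{\m X^H \m X \m b}$ suggested by \eqref{cond} can be sketched as follows, assuming numpy. This is a simple ascent heuristic for \eqref{maxQuad}, not necessarily one of the algorithms proposed in this paper; each update can only increase $\|\m X \m b\|_2$, and at a fixed point the entries of $\boldsymbol\omega(\m b)$ are real and nonnegative.

```python
# Fixed-point iteration b <- sgn(X^H X b) for the K = 1 problem:
# a monotone (non-decreasing) ascent heuristic; omega(b) = conj(b) * (X^H X b).
import numpy as np

def sgn(a):
    return a / np.abs(a)

rng = np.random.default_rng(3)
D, N = 4, 15
X = rng.standard_normal((D, N)) + 1j * rng.standard_normal((D, N))
G = X.conj().T @ X  # Gram matrix X^H X

b = sgn(rng.standard_normal(N) + 1j * rng.standard_normal(N))
obj = [np.linalg.norm(X @ b)]
for _ in range(300):
    b = sgn(G @ b)
    obj.append(np.linalg.norm(X @ b))

q = X @ b / np.linalg.norm(X @ b)  # candidate L1-PC, built as in the text
omega = np.conj(b) * (G @ b)       # its entries sum to b^H G b = ||X b||_2^2
```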
The following Proposition \ref{prop2} presents a necessary condition for optimality in the UQM of \eqref{maxQuad} that is satisfied by all local maximizers, but not by minimizers or saddle points. \begin{myprop} \label{prop2} A unimodular vector ${\m b}$ is a local maximizer of \eqref{maxQuad}, if-and-only-if \begin{align}\label{key4} {\omega}_n(\m b)\in \mathbb R \text{~~and~~} {\omega}_n(\m b) \geq \|\m x_n\|_2^2~~~\forall n\in \{1, \ldots, N \}. \end{align} \end{myprop} \noindent \emph{Proof.} For any $\m b\in U^{N\times 1}$ there is an angle-vector $\boldsymbol\phi\in[0,2\pi)^{N\times 1}$ such that $\m b=e^{j\boldsymbol\phi}=[e^{j\phi_1},\dots,e^{j\phi_N}]^T$. The quadratic in the UQM of \eqref{maxQuad} can then be rewritten as $(e^{j\boldsymbol\phi})^H \m X^H \m X (e^{j\boldsymbol\phi})$, which is continuously twice differentiable in the angle-vector $\boldsymbol\phi$; the corresponding first and second derivatives are \begin{align} \m g(\m b) & =2\Im\{ \boldsymbol\omega (\m b) \} \text{~~and} \label{grad} \\ \m H(\m b) & =2\Re\Big\{\mathrm{Diag} (\m b)^H \m X^H\m X \mathrm{Diag}(\m b) -\mathrm{Diag}(\boldsymbol\omega (\m b)) \Big\}, \label{hessian} \end{align} respectively. Any local maximizer of \eqref{maxQuad} nulls the gradient and renders the Hessian in \eqref{hessian} negative-semidefinite. In the sequel we prove both directions of the equivalence (``if-and-only-if'' statement) in Proposition \ref{prop2}. \paragraph{Direct} If ${\m b}$ is a local maximizer of \eqref{maxQuad}, then $\m H (\m b) \preceq 0$. Therefore, for every $n \in \{1, 2, \ldots, N\}$, \begin{align} \m e_n^T\m H (\m b) \m e_n \leq 0 ~\Leftrightarrow~ 2\Re\Big\{ b_n^*\m x_n^H\m x_n b_n - \omega_n(\m b) \Big\} \leq 0 ~ \Leftrightarrow ~ \omega_n(\m b) \geq \|\m x_n\|_2^2, \end{align} where $\m e_n$ denotes the $n$-th column of $\mathbf I_{N}$. \paragraph{Reverse} Consider ${\m b}$ such that all entries of ${\boldsymbol\omega}(\m b)$ satisfy \eqref{key4}.
Then, for every $\mathbf z \in \mathbb R^{N}$, \begin{align} \m z^T\m H (\m b) \m z & = 2 \m z^T \left(\mathrm{Diag}( {\m b} )^H \m X^H\m X\mathrm{Diag} ({\m b}) -\mathrm{Diag}( {\boldsymbol\omega} (\m b))\right) \m z \\ & = 2\left\| \sum_{n=1}^N z_n b_n \m x_n \right\|_2^2 - 2\sum_{n=1}^N z_n^2 \omega_n(\m b) \\ & \leq 2\sum_{n=1}^N\left\| z_n b_n \m x_n \right\|_2^2 - 2\sum_{n=1}^N z_n^2 \omega_n(\m b) \\ & = 2\sum_{n=1}^Nz_n^2 \left( \left\| \m x_n \right\|_2^2 - \omega_n(\m b) \right) \leq 0 \end{align} which implies that the Hessian at $\m b$, $\m H (\m b)$, is negative-semidefinite. This, in turn, implies that ${\m b}$ is a local maximizer of \eqref{maxQuad}. \hfill $\square$ In the sequel, we present a direct Corollary of Proposition \ref{prop2}. \begin{mycol} A unimodular vector ${\m b}$ is a local maximizer to \eqref{maxQuad}, if-and-only-if \begin{align}\label{key5} {\m b}= \s{ \m A_d {\m b}} \end{align} where $\m A_d \triangleq \m X^H \m X - \text{Diag}([\| \m x_{1}\|_2^2, \ldots, \| \m x_{N}\|_2^2]^T)$. \end{mycol} \noindent \emph{Proof.} In view of the condition \eqref{key4} of Proposition \ref{prop2}, for every $n \in \{1, 2, \ldots, N\}$ it holds \begin{align} \omega_n(\m b) \geq \|\m x_n\|_2^2 & \Leftrightarrow \omega_n(\m b) -\|\m x_n\|_2^2 \geq 0 \\ ~ & \Leftrightarrow \s{\omega_n(\m b) -\|\m x_n\|_2^2}=1 \\ ~ & \Leftrightarrow \s{ b_n^* \m x_n^H\m X { \m b} -\|\m x_n\|_2^2}=1 \\ ~ & \Leftrightarrow b_n= \s{ \sum_{m\neq n}\m x_n^H\m x_m b_m } \end{align} which, by the definition of $\m A_d$, implies \eqref{key5}. \hfill $\square$ The quantitative difference between the conditions \eqref{cond} and \eqref{key5} lies in the corresponding $\boldsymbol\omega (\cdot)$ variables. On the one hand, the condition in \eqref{cond} guarantees that $\omega_n(\opt b)$ is positive; on the other hand, the condition in \eqref{key5} guarantees that, for every $n \in \{1, 2, \ldots, N\}$, $\omega_n(\opt b)$ is not only positive, but also greater than or equal to $\|\m x_n\|_2^2$.
Hence, \eqref{key5} is clearly a stronger condition than \eqref{cond}, in the sense that \eqref{key5} implies \eqref{cond} but not vice versa. For example, saddle points in $U^{N}$ could satisfy the mild condition in \eqref{cond} but not the necessary-and-sufficient local-optimality condition in \eqref{key5}. Proposition \ref{prop2} and the corollary condition \eqref{key5} bring us a step closer to solving \eqref{maxQuad} optimally. Specifically, based on \eqref{key5} we can also prove the following corollary. \begin{mycol}\label{cor2} The UQM in \eqref{maxQuad} can be equivalently rewritten as \begin{align} \label{norm1Ad_prob} \underset{\m b\in U^{N\times 1}}{\max.}~{\|\m A_d\m b\|_1}. \end{align} \end{mycol} \noindent \emph{Proof.} This problem equivalence results directly from \eqref{key5}. We have \begin{align*} \underset{\m b\in U^{N\times 1}}{\max}~{\m b^H\m X^H\m X\m b} & = \underset{\m b\in U^{N\times 1}}{\max}~{\m b^H\m A_d\m b+\tr{\m X^H\m X}} \\ ~ & \overset{\eqref{key5}}{=} \underset{\m b\in U^{N\times 1};~ \m b=\s{\m A_d \m b}}{\max} {\m b^H\m A_d\m b+\tr{\m X^H\m X}} \\ ~ & = \underset{\m b\in U^{N\times 1};~ \m b=\s{\m A_d \m b}}{\max}~{\|\m A_d\m b\|_1+\tr{\m X^H\m X}}, \end{align*} which implies that the UQM in \eqref{maxQuad} and the problem in \eqref{norm1Ad_prob} have identical optimal arguments and their optimal values differ by the constant $\tr{\m X^H\m X}$. \hfill $\square$ By Corollary \ref{cor2}, any effort to solve \eqref{norm1Ad_prob} counts toward solving the UQM and $K=1$ L1-PCA in \eqref{maxQuad}. Next, we discuss the hardness of UQM and complex L1-PCA ($K=1$). We notice that UQM in \eqref{maxQuad} is, in fact, a quadratically-constrained quadratic program (QCQP) with concave objective function and non-convex constraints \cite{PardalosVavasis}.\footnote{In its standard form, a QCQP is expressed as a minimization.
Accordingly, the function that we minimize here is $-\m b^H \m X^H \m X \m b$, which is concave.} Therefore, it is formally $\mathcal{NP}$-hard. Since UQM in \eqref{maxQuad} is $\mathcal{NP}$-hard, the equivalent complex L1-PCA for $K=1$ is also $\mathcal{NP}$-hard. Accordingly, complex L1-PCA in \eqref{L1Prob} must also be at least $\mathcal{NP}$-hard, as a generalization of \eqref{maxQuad} for $K \geq 1$. In conclusion, in contrast to the real-field case of \cite{L1PCA_m1}, complex L1-PCA remains $\mathcal{NP}$-hard in the sample size $N$, even for fixed dimension $D$. \subsection{Proposed Algorithms for Complex L1-PCA} Based on the theoretical analysis above, in the sequel we present two algorithms for complex L1-PCA. Both algorithms are iterative and guaranteed to converge. With proper initialization, both algorithms could return upon convergence the globally optimal solution of \eqref{nuc_norm_opt}. Our first algorithm relies on \eqref{cond} and can be applied for general $K$. Our second algorithm relies on the stronger condition \eqref{key5} and is applicable only to the $K=1$ case. \subsubsection{Algorithm 1} For any given data matrix $\m X\in\mathbb C^{D\times N}$ and number of sought-after L1-PCs $K < \text{rank}(\m X)$, the algorithm initializes at an arbitrary unimodular matrix $\m B^{(0)}\in U^{N\times K}$; then, in view of the mild optimality condition in \eqref{cond}, the algorithm performs the iteration \begin{align} \m B^{(i)}=\s{\m X^H\unt{\m X\m B^{(i-1)}}},~~i=1,2,\dots \label{algo1} \end{align} until the objective value in \eqref{nuc_norm_opt} converges. That is, the algorithm terminates at the first iteration $t$ that satisfies \begin{align}\label{conv} \|\m X\m B^{(t)}\|_*-\|\m X\m B^{(t-1)}\|_* \leq \delta \end{align} for some arbitrarily low convergence threshold $\delta \geq 0$.
Then, the algorithm returns $\hat{\m B} = \m B^{(t)}$ as (approximate) solution to \eqref{nuc_norm_opt} and, in accordance with \eqref{opt_condition_K_general}, $\hat{\m Q}_{L1} = \unt{\m X\m B^{(t)}}$ as (approximate) solution to the L1-PCA problem in \eqref{L1Prob}. Below we provide a proof of convergence for the iterations in \eqref{algo1}, for any initialization $\m B^{(0)}$. \noindent \emph{Proof of Convergence of \eqref{algo1}.} The convergence of \eqref{algo1} is guaranteed because the sequence $\{\|\m X\m B^{(i)}\|_*\}_{i=0}^{\infty}$ is (a) upper bounded by $\| \m X \opt B\|_*$ and (b) monotonically increasing; that is, for every $i$, $\|\m X\m B^{(i-1)}\|_*\leq \|\m X\m B^{(i)}\|_* \leq \| \m X \opt B\|_*$. The monotonicity of the sequence can be proven as follows. \begin{align} \|\m X\m B^{(i)}\|_* & = \max_{\m Q \in\mathbb C^{D\times K};~\m Q ^H\m Q = \m I_K}~{\Re\left\{ \tr{\m Q^H\m X\m B^{(i)}} \right\}} \label{aDummyLabel3} \\ ~ & \geq \Re\left\{ \tr{\unt{\m X\m B^{(i-1)}}^H \m X\m B^{(i)}}\right\} \label{aDummyLabel1} \\ ~ & = \Re\left\{ \tr{\unt{\m X\m B^{(i-1)}}^H \m X\s{\m X^H\unt{\m X\m B^{(i-1)}}}}\right\} \\ ~ & =\max_{\m B\in U^{N\times K}}~\Re\left\{ \tr{\unt{\m X\m B^{(i-1)}}^H \m X\m B}\right\} \label{aDummyLabel4} \\ ~ & \geq \Re\left\{ \tr{\unt{\m X\m B^{(i-1)}}^H \m X\m B^{(i-1)}}\right\} \label{aDummyLabel2} \\ ~ & = \|\m X\m B^{(i-1)}\|_*. \end{align} The inequality in \eqref{aDummyLabel1} holds because we have substituted $\m Q$ with a point in the feasibility set of the maximization in \eqref{aDummyLabel3}, but not necessarily the maximizer. Similarly, the inequality in \eqref{aDummyLabel2} holds because we have substituted $\m B$ with a point in the feasibility set of the maximization in \eqref{aDummyLabel4}, but not necessarily the maximizer. \hfill $\square$ A detailed pseudocode of the proposed Algorithm 1 is presented in Fig. \ref{fig:algo1}.
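For concreteness, a minimal NumPy sketch of the iteration \eqref{algo1} follows (illustrative only, not the paper's reference implementation). Here $\unt{\cdot}$ is computed via the thin SVD as the nearest matrix with orthonormal columns, the zero-maps-to-one sign convention is assumed, and all parameter values are arbitrary.

```python
import numpy as np

def csign(Z):
    """Elementwise unimodular sign; zeros mapped to 1 (assumed convention)."""
    out = np.ones_like(Z, dtype=complex)
    nz = np.abs(Z) > 0
    out[nz] = Z[nz] / np.abs(Z[nz])
    return out

def unt(M):
    """unt(M): nearest matrix with orthonormal columns, U V^H from the thin SVD of M."""
    U, _, Vh = np.linalg.svd(M, full_matrices=False)
    return U @ Vh

def l1pca_alg1(X, K, delta=1e-9, max_iter=500, seed=0):
    """Iterate B <- s{X^H unt(X B)} until the nuclear norm ||X B||_* stops increasing."""
    rng = np.random.default_rng(seed)
    N = X.shape[1]
    B = csign(rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K)))
    nuc = lambda M: np.linalg.svd(M, compute_uv=False).sum()
    vals = [nuc(X @ B)]
    for _ in range(max_iter):
        B = csign(X.conj().T @ unt(X @ B))
        vals.append(nuc(X @ B))
        if vals[-1] - vals[-2] <= delta:
            break
    Q = unt(X @ B)  # approximate K L1-PCs
    return Q, B, vals

rng = np.random.default_rng(1)
D, N, K = 6, 20, 3
X = rng.standard_normal((D, N)) + 1j * rng.standard_normal((D, N))
Q, B, vals = l1pca_alg1(X, K)
```

On random data the recorded sequence `vals` is nondecreasing, mirroring the convergence proof above.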
\subsubsection{Algorithm 2 (for $K=1$)} Our second algorithm has the form of converging iterations, similar to Algorithm 1. However, this algorithm relies on the strong optimality condition of \eqref{key5}, instead of the mild condition of \eqref{cond}. Specifically, given $\m X\in\mathbb C^{D\times N}$ and an initialization point $\m b^{(0)} \in U^{N}$, Algorithm 2 iterates as \begin{align} \m b^{(i)}=\s{\m A_d \m b^{(i-1)}},~~i=1,2,\dots \label{algo2} \end{align} until $ \|\m A_d\m b^{(t)}\|_1-\|\m A_d\m b^{(t-1)}\|_1 \leq \delta $ for some arbitrarily small threshold $\delta$ and converging-iteration index $t$. Then the algorithm returns $\hat{\m b} = \m b^{(t)}$ as (approximate) solution to \eqref{maxQuad} and, in accordance with Proposition \ref{prop1} and \eqref{b2q}, $\hat{\m q}_{L1}={\m X \hat{\m b}}{\|\m X \hat{\m b} \|_2}^{-1}$ as approximate solution to \eqref{l1pca1}. Clearly, the iteration in \eqref{algo2} will converge because the sequence $\{\|\m A_d\m b^{(i)}\|_1\}_{i=1}^{\infty}$ is (a) upper bounded by $\| \m A_d \opt b\|_1$ and (b) increases monotonically as \begin{align} \|\m A_d \m b^{(i)}\|_1 & = \max_{\m b'\in U^{N\times 1}}~\Re\left\{ {\m b'}^H \m A_d \m b^{(i)} \right\} \\ ~ & \geq \Re\left\{ {\m b^{(i-1)}}^H \m A_d \m b^{(i)} \right\} \\ ~ & = \Re\left\{ {\m b^{(i-1)}}^H \m A_d \s{\m A_d \m b^{(i-1)}} \right\} \\ ~ & = \|\m A_d \m b^{(i-1)}\|_1. \end{align} A detailed pseudocode for Algorithm 2 is offered in Fig. \ref{fig:algo2}. \section{Numerical Studies} \subsection{Convergence} The convergence of Algorithm 1 was formally proven in the previous Section. At this point, to visualize the convergence, we fix $D=10$, $N=100$, and $K=5$, and generate $\mathbf X \in \mathbb C^{D \times N}$ with entries drawn independently from $\mathcal {CN} (0, 1)$. Then, we run on $\m X$ Algorithm 1 (initialized at arbitrary $\mathbf B^{(0)}$) and plot in Fig. \ref{fig:conv1} $\| \mathbf X \m B^{(i)}\|_*$ versus the iteration index $i$.
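In the same spirit, the iteration \eqref{algo2} of Algorithm 2, presented above, admits an equally short NumPy sketch, which also lets one check numerically, at (near-)convergence, the identity underlying Corollary \ref{cor2}: $\m b^H \m X^H \m X \m b \approx \|\m A_d \m b\|_1 + \tr{\m X^H\m X}$ at a fixed point of \eqref{key5}. All names and parameter values below are illustrative assumptions, as is the zero-maps-to-one sign convention.

```python
import numpy as np

def csign(z):
    """Elementwise unimodular sign; zeros mapped to 1 (assumed convention)."""
    out = np.ones_like(z, dtype=complex)
    nz = np.abs(z) > 0
    out[nz] = z[nz] / np.abs(z[nz])
    return out

def l1pca_alg2(X, delta=1e-12, max_iter=5000):
    """Iterate b <- s{A_d b}, with A_d = X^H X - Diag(||x_1||^2, ..., ||x_N||^2)."""
    G = X.conj().T @ X
    Ad = G - np.diag(np.diag(G))
    b = np.ones(X.shape[1], dtype=complex)   # arbitrary unimodular start
    vals = [np.abs(Ad @ b).sum()]
    for _ in range(max_iter):
        b = csign(Ad @ b)
        vals.append(np.abs(Ad @ b).sum())
        if vals[-1] - vals[-2] <= delta:
            break
    q = X @ b
    q = q / np.linalg.norm(q)                # L1-PC, per the q = X b / ||X b||_2 mapping
    return q, b, vals, G, Ad

rng = np.random.default_rng(2)
D, N = 5, 12
X = rng.standard_normal((D, N)) + 1j * rng.standard_normal((D, N))
q, b, vals, G, Ad = l1pca_alg2(X)

# Corollary-2-style identity at (near-)convergence:
lhs = (b.conj() @ (G @ b)).real              # b^H X^H X b
rhs = np.abs(Ad @ b).sum() + np.trace(G).real
```

The metric sequence increases monotonically, and `lhs` and `rhs` agree up to the (tiny) distance of `b` from an exact fixed point.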
We observe that, indeed, the objective nuclear-norm-maximization metric increases monotonically. Next, we wish to examine the number of iterations needed for Algorithm 1 to converge, especially as the problem-size parameters $D$, $N$, and $K$ take different values. First, we set $D=10$ and $K=3$ and vary $N =10, 15, \ldots, 40$. We draw again the entries of $\m X$ independently from $\mathcal{CN}(0,1)$ and plot in Fig. \ref{fig:vsN} the average number of iterations needed for Algorithm 1 to converge (averaging is conducted over 1000 independent realizations of $\m X$). We observe that, expectedly, the number of iterations increases along $N$. However, importantly, it appears to increase sub-linearly in $N$. In Fig. \ref{fig:vsD} we fix $K=3$ and $N=30$ and plot the average number of iterations needed for convergence versus $D$; in Fig. \ref{fig:vsK} we fix $D=10$ and $N=20$ and plot the average number of iterations versus $K$. We observe that the number of iterations increases sub-linearly along $D$ and rather linearly along $K$. \subsection{Subspace Calculation} In this first experiment, we investigate and compare the outlier resistance of L2-PCA and L1-PCA. We consider the data matrix $\m X$ of \eqref{Xexp1}, consisting of $N=10$ data points of size $D=5$. A data processor wants to calculate the $(K=3)$-dimensional dominant subspace of $\m X$, spanned by its $K$ highest-singular-value left singular-vectors in $\mathbf Q_{n} \in \mathbb C^{D \times K}$. However, unexpectedly, 1 out of the $N=10$ measurements (say, the first one) has been additively corrupted by a random point $\m c $ drawn from $\mathcal{CN}(\m 0_{D}, \sigma^2 \m I_{D})$. Therefore, instead of $\mathbf X$, what is available is the corrupted counterpart $\m X_{cor}= [ \m x_{1} + \m c, \m x_2, \ldots, \m x_N]$ (the data processor is not aware of the corruption).
Instead of the nominal, sought-after $\text{span}(\m Q_{n})$, the data processor calculates the span of the $K$ L2-PCs of $\m X_{cor}$, $\m Q_{L2}$, and the span of the $K$ L1-PCs of $\m X_{cor}$, $\m Q_{L1}$. To quantify the corruption-resistance of the two subspace calculators, we measure the subspace proximity (SP) \begin{align} \text{SP}(\m Q, \m Q_{n}) = \frac{1}{\sqrt{K}}\| \m Q_{n}^H\m Q\|_2 \in [0,1], \end{align} for $\m Q = \m Q_{L2}$ and $\m Q=\m Q_{L1}$. Certainly, if $\text{span}(\m Q)$ is orthogonal to the sought-after $\text{span}(\m Q_{n})$, then $\m Q^H \m Q_{n} = \m 0_{K \times K}$ and $\text{SP}(\m Q, \m Q_{n})=0$. On the other hand, if-and-only-if $\text{span}(\m Q)$ coincides with $\text{span}(\m Q_{n})$, then $\text{SP}(\m Q, \m Q_{n})=1$. \begin{figure*}[t!] \footnotesize{ \begin{align} \mathbf X = \begin{bmatrix} -0.3003 - i~1.0117 & 0.4618 + i~0.0705 & -0.3924 - i~0.1602 & -0.5327 + i~0.1129 & 1.9368 - i~0.5685\\ -0.3886 - i~0.6530 & -0.6204 - i~0.3556 & 0.7040 - i~1.3574 & 0.3315 + i~0.9675 & -1.5390 - i~0.8711\\ -0.5961 + i~0.1708 & 0.6005 - i~1.8511 & -0.5541 - i~0.6086 & -0.4701 + i~0.3234 & 1.0896 + i~1.3071\\ -0.0893 + i~0.1863 & -0.6031 + i~0.3869 & -0.7038 + i~0.0123 & 1.0782 + i~1.4440 & 0.9593 - i~0.9096\\ -0.1678 + i~1.7097 & 0.5883 - i~0.7234 & -0.5185 - i~0.2924 & -0.3291 - i~1.7799 & -1.1252 - i~0.5569\\ 0.2485 + i~0.6433 & -1.3913 - i~1.7947 & 0.1189 + i~0.1334 & 0.0509 - i~0.1326 & -1.2163 + i~0.4921\\ 0.5302 - i~0.1632 & -0.9533 - i~0.3757 & 1.4074 - i~1.2147 & -0.4419 + i~0.8734 & -0.8092 - i~0.6724\\ 0.0428 + i~0.6675 & -1.1010 + i~0.6750 & 0.6385 - i~0.7620 & 0.4554 + i~0.5840 & -0.7863 + i~1.2148\\ -1.3608 + i~0.5011 & 1.0467 - i~0.1282 & 0.5043 + i~0.1808 & 0.2366 - i~0.8010 & 0.0459 - i~0.3441\\ 0.5409 - i~0.7822 & 0.0075 - i~1.5285 & 1.4829 + i~0.9075 & -0.5216 - i~0.0030 & 0.8504 + i~0.8860\\ \end{bmatrix}^T. \label{Xexp1} \end{align} } \hrule \end{figure*} In Fig. 
\ref{spexp1} we plot the average SP (over $10~000$ independent corruption realizations) for L2-PCA and L1-PCA versus the corruption variance $\sigma^2$. We observe that for weak corruption of variance $\sigma^{2}<0$dB, L1-PCA and L2-PCA exhibit almost identical performance, with SP approaching the ideal value of $1$. We also notice that for very strong corruption of variance $\sigma^2 > 35$dB, both L1-PCA and L2-PCA get similarly misled, converging to a minimum SP of about $0.82$. Interestingly, for all intermediate values of $\sigma^2$, L1-PCA exhibits significantly superior performance in calculating the nominal subspace. For example, for $\sigma^2=10$dB, L1-PCA attains $93\%$ SP, while L2-PCA attains $87\%$ SP. \subsection{Cognitive Signature Design} Next, we investigate an application example for complex L1-PCA, drawn from the field of wireless communications. We consider a system of $K$ single-antenna primary sources using unknown complex-valued spread-spectrum signatures of length $L$ chips. The signatures of the $K$ sources are linearly independent (possibly, orthogonal), so that they do not interfere with each other, spanning a $K$-dimensional subspace in $\mathbb C^{L}$. Now consider that $L-K$ secondary sources also wish to use the channel, with length-$L$ spread-spectrum signatures. Of course, the secondary sources should not interfere with the primary ones; for that reason, the $L-K$ signatures of the secondary sources should be orthogonal to the $K$ signatures of the primary sources --i.e., the secondary sources should transmit in the nullspace of the primary ones. Therefore, the secondary users wish to estimate the subspace spanned by the primary signatures and then design signatures in its orthogonal complement.
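The subspace-proximity metric used in this experiment is itself one line of code. In the sketch below (illustrative, outside the paper's development), $\|\cdot\|_2$ is read as the Frobenius norm, an assumption consistent with the stated normalization, under which $\text{SP}=1$ exactly when the two subspaces coincide and $\text{SP}=0$ when they are orthogonal:

```python
import numpy as np

def subspace_proximity(Q, Qn):
    """SP(Q, Qn) = ||Qn^H Q||_F / sqrt(K), a value in [0, 1]."""
    K = Qn.shape[1]
    return np.linalg.norm(Qn.conj().T @ Q) / np.sqrt(K)

rng = np.random.default_rng(3)
D, K = 6, 3
A = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
U, _, _ = np.linalg.svd(A)
Qn, Qperp = U[:, :K], U[:, K:]   # a subspace basis and its orthogonal complement

# a unitary rotation within span(Qn) leaves SP unchanged (SP stays 1)
R, _, _ = np.linalg.svd(rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K)))
sp_same = subspace_proximity(Qn @ R, Qn)
sp_orth = subspace_proximity(Qperp, Qn)
```

This basis-invariance is why SP compares subspaces rather than particular orthonormal bases.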
\subsubsection{Training Phase} With this goal in mind, we consider a collection of $N$ snapshots that correspond to primary transmissions in the presence of additive white Gaussian noise (AWGN); these snapshots will be used for estimating the primary-source signature subspace (and then designing secondary signatures in its orthogonal complement). To make the problem more challenging, we consider that while these snapshots are collected, an unexpected, strong interference source is also sporadically active. That is, the $n$-th recorded snapshot vector (after down-conversion and pulse-matching), $\m x (n)\in \mathbb C^{L}$, has the form \begin{align} \label{model1} \m x (n) = \underbrace{\sum_{k=1}^K \m s_k y_k(n) + \m n (n)}_{\text{nominal}} + \underbrace{\gamma (n) \m i(n)}_{ \begin{smallmatrix} \text{unexpected} \\ \text{corruption} \end{smallmatrix} } \in\mathbb C^{L\times 1}. \end{align} In \eqref{model1}, $y_k(n)$ accounts for the product of the $n$-th power-scaled information symbol transmitted by the $k$-th primary source with the flat-fading channel between the $k$-th source and the receiver, with $E\{|y_k(n)|^2\}=1$; $\m s_k \in \mathbb C^{L}$ is the signature of the $k$-th primary source, designed such that $\| \m s_{k}\|_2=1$ and $\m s_{k}^H \m s_{l}= 0$, for $k \neq l$; $\m n(n)$ is AWGN, drawn from $\mathcal{CN}\left(\m 0_{L\times 1},\frac{1}{L}\m I_L\right)$; and $\m i(n)$ accounts for unexpected sporadic interference, drawn from $\mathcal{CN}\left(\m 0_{L\times 1},\frac{100}{L}\m I_L\right)$. $\{\gamma (n)\}_{n=1}^N$ are independent and identically distributed (i.i.d.) $\{0,1\}$-Bernoulli($\epsilon$) variables that indicate interference activity. That is, each snapshot is corrupted by an unexpected interference signal with probability $\epsilon$. According to the chosen values of symbol and noise variance, the primary users operate at a signal-to-noise ratio (SNR) of $0$dB.
The recorded snapshots are organized in the complex data record $ \m X \triangleq [ \m x(1), \m x(2), \dots \m x (N) ] \in\mathbb C^{L\times N}. $ Then, we analyze $\m X$ to estimate the $K$-dimensional primary-source transmission subspace, $\mathcal S \triangleq \mathrm{span}( [\m s_{1}, $ $ \m s_{2}, \ldots, \m s_{K}] )$. Traditionally, $\mathcal S$ would be estimated as the span of $ \m Q_{L2} = \underset{\m Q \in \mathbb C^{L\times K}; ~\m Q^H \m Q=\m I_K}{\text{argmax}}{\| \m X^H \m Q \|_2}. $ The reason is that, if all snapshots are nominal (i.e., no unexpected impulsive interference), as $N$ tends to infinity the span of $\m Q_{L2}$ provably coincides with that of the $K$ dominant eigenvectors of the snapshot autocorrelation matrix $E \{ \m x(n) \m x(n)^H\}$, which, in turn, coincides with $\mathcal S$. To examine the performance of complex L1-PCA, in this experiment we also estimate $\mathcal S$ by the span of $ \m Q_{L1} = \underset{\m Q \in \mathbb C^{L\times K}; ~\m Q^H \m Q=\m I_K}{\text{argmax}}{\| \m X^H \m Q \|_1}. $ After $\mathcal S$ is estimated, we pick $L-K$ orthogonal secondary signatures from the orthogonal complement of $\text{span}(\m Q_{L2})$ (or $\text{span}(\m Q_{L1})$). The secondary users employ these signatures and conduct transmissions concurrently with the primary sources. \subsubsection{Concurrent Operation of Primary and Secondary Sources} At this point, we assume that the impulsive corruption that interfered with secondary-signature training is no longer active. As both the primary and secondary sources transmit, the receiver applies matched filtering for each primary source. To evaluate the ability of L2/L1-PCA to identify the actual primary-source subspace (and thus enable secondary signatures that do not interfere with the primary sources), we measure and plot the post-filtering signal-to-interference-plus-noise ratio (SINR) for the primary users.
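The design step above, picking $L-K$ orthonormal secondary signatures in the orthogonal complement of the estimated subspace, can be sketched in NumPy as follows. For illustration the subspace estimate is taken to be exact, and all names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
L, K = 8, 3
# K orthonormal primary signatures (QR of a random complex matrix)
Sprim, _ = np.linalg.qr(rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K)))

# Q_hat: the estimated primary subspace basis (taken exact here, for illustration)
Q_hat = Sprim
# the last L-K left singular vectors of Q_hat span its orthogonal complement
U, _, _ = np.linalg.svd(Q_hat, full_matrices=True)
Ssec = U[:, K:]                      # L-K orthonormal secondary signatures

cross = Sprim.conj().T @ Ssec        # all cross-correlations s_k^H s_l'
```

With an exact subspace estimate, every cross-correlation vanishes, so the secondary transmissions cause no interference to the primary users.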
It is expected that if $\text{span}(\m Q_{L2})$ (or $\text{span}(\m Q_{L1})$) is close to $\mathcal S$, then the interference of the secondary sources to the primary sources will be minimized and, accordingly, the aggregate post-matched-filtering SINR (sum-SINR) of all $K$ primary sources will be high. We denote by $\m s_{l}'$ the signature of the $l$-th secondary source. It holds $\| \m s_{l}'\|_2=1$ and $\m s_{l}'^H\m s_{k}=0$ for every $k$ and $l$. Also, we denote by $y_{k}'(n)$ the $n$-th symbol/channel compound for the $k$-th secondary source, with $E \{ | y_{k}'(n) |^2\}=\rho^2$ for all $k$. Sum-SINR is formally defined as $\text{sum-SINR}=\sum_{k=1}^K \text{SINR}_k$, where \begin{align} \text{SINR}_k & \triangleq \frac{ E\left\{ \left|\m s_k^H\m s_ky_k(n)\right|^2 \right\} }{E\left\{\left|\m s_k^H\left( \sum_{m\neq k}\m s_m y_m(n) + \sum_{l=1}^{L-K} \m s_l'y_l'(n) + \m n (n) \right)\right|^2\right\}} \\ ~ & = \frac{1}{1+ \rho^2 \sum_{l=1}^{L-K} \left| \m s_{k}^H\m s_l' \right|^2 }. \label{aDummyLabel5} \end{align} Certainly, sum-SINR is a decreasing function of the transmission energy of the secondary sources, $\rho^2$. We also observe that if $\mathcal S$ was perfectly estimated and, accordingly, $\{ \m s_{k}'\}_{k=1}^{L-K}$ were designed in its orthogonal complement, then $\m s_k^H\m s_l'=0$ for all $k,l$ and \eqref{aDummyLabel5} takes its maximum value $1$ (or $0$dB), independently of $\rho$. In our numerical simulation we set $N=200$, $L=8$, and $K=3$. In Fig. \ref{cogSigDes_fig} we plot the average value of sum-SINR (calculated over $10~000$ independent experiments) versus $\rho^2$, for snapshot-corruption probability $\epsilon = 0\%$ and $\epsilon = 1.2\%$. As a benchmark, we also plot the horizontal line of $10\log_{10}(K)$ dB, which corresponds to the sum-SINR if $\mathcal S$ was accurately estimated (i.e., $\m s_k^H\m s_l'=0$ for all $k,l$).
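The per-user SINR can be evaluated directly in code. The sketch below (hypothetical names and values) assumes independent secondary symbol streams, so the interference powers $|\m s_k^H \m s_l'|^2$ add; with an exact orthogonal-complement design it recovers the benchmark sum-SINR of $K$, i.e., $10\log_{10} K$ dB:

```python
import numpy as np

def sum_sinr(Sprim, Ssec, rho2):
    """sum-SINR = sum_k 1 / (1 + rho^2 * sum_l |s_k^H s_l'|^2)."""
    C2 = np.abs(Sprim.conj().T @ Ssec) ** 2    # |s_k^H s_l'|^2 for all k, l
    return np.sum(1.0 / (1.0 + rho2 * C2.sum(axis=1)))

rng = np.random.default_rng(5)
L, K = 8, 3
Sprim, _ = np.linalg.qr(rng.standard_normal((L, K)) + 1j * rng.standard_normal((L, K)))
U, _, _ = np.linalg.svd(Sprim, full_matrices=True)
Ssec_exact = U[:, K:]                          # perfect orthogonal-complement design
# a mismatched design: random orthonormal signatures, not orthogonal to Sprim
Ssec_bad, _ = np.linalg.qr(rng.standard_normal((L, L - K)) + 1j * rng.standard_normal((L, L - K)))

perfect = sum_sinr(Sprim, Ssec_exact, rho2=10.0)
mismatched = sum_sinr(Sprim, Ssec_bad, rho2=10.0)
```

Any leakage of the secondary signatures into the primary subspace strictly reduces the sum-SINR below the benchmark.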
We observe that if $\epsilon=0$ (i.e., all $N$ training snapshots are nominal), the L2-PCA-based and L1-PCA-based signature designs yield almost identical, high sum-SINR performance. However, when $\epsilon$ increases from $0$ to $1.2\%$, L2-PCA gets significantly impacted; accordingly, the sum-SINR performance of the L2-PCA-based signature design diminishes significantly. On the other hand, we observe that the L1-PCA-based design exhibits sturdy robustness against the corruption of the training snapshots, maintaining high sum-SINR performance close to the nominal one. \subsection{Direction-of-Arrival Estimation} Direction-of-Arrival (DoA) estimation is a key operation in many applications, such as wireless node localization and network topology estimation. Super-resolution DoA estimation relies, traditionally, on the L2-PCA of a collection of received snapshots --as, e.g., in MUltiple-SIgnal Classification (MUSIC) \cite{MUSIC}. In this numerical study, we consider a receiver equipped with a uniform linear array (ULA) of $D$ antenna elements which receives signals from $K$ sources of interest located at angles $\Theta=\{\theta_1, \theta_2, \ldots, \theta_K \}$ with respect to the broadside. The inter-element spacing of the array, $d$, is fixed at half the wavelength of the received signal and the array response vector for a signal that arrives from angle $\phi \in [\frac{-\pi}{2}, \frac{\pi}{2})$ is \begin{align} \m s(\phi) \triangleq [1, e^{-j\pi d \sin(\phi)}, \ldots, e^{-j (D-1)\pi d \sin(\phi)}]^T \in\mathbb C^{D\times 1}. \end{align} To make the problem more challenging, we assume that apart from the $K$ sources of interest, there are also $J$ unexpected, sporadically interfering sources (jammers), impinging on the ULA from angles $\Theta' = \{\theta_1', \theta_2', \ldots, \theta_J' \}$.
Therefore, the $n$th snapshot at the receiver is of the form \begin{align} \label{model2} \m x (n) = \underbrace{\sum_{k=1}^K \m s({\theta_k}) y_k(n) ~~+ \m n (n)}_{\text{nominal}} + \underbrace{\sum_{j=1}^J\gamma_{n,j}\m s({\theta_j}') y_j'(n)}_{\text{unexpected jamming}} \in\mathbb C^{D\times 1}. \end{align} In \eqref{model2}, $y_k(n)$ and $y_j'(n)$ account for the compound symbols of the $k$-th source and $j$-th jammer, respectively; $\gamma_{n,j}$ is a $\{0,1\}$-Bernoulli($\epsilon$) activity indicator for jammer $j$ at snapshot $n$; $\m n (n)$ is the AWGN component, drawn from $\mathcal{CN}(\mathbf 0_D, \sigma^2 \mathbf I_{D})$. The popular MUSIC DoA estimation method collects all $N $ snapshots in $\m X = [\m x(1), \m x(2), \ldots, \m x(N)]$ and calculates the source-signal subspace by the span of $ \m Q_{L2} = \underset{\m Q \in \mathbb C^{D\times K}; ~\m Q^H \m Q=\m I_K}{\text{argmax}}{\| \m X^H \m Q \|_2}. $ If all snapshots are nominal, as $N$ tends to infinity $\text{span}(\m Q_{L2})$ tends to coincide with $\text{span}([\m s(\theta_1), \ldots, \m s(\theta_K)])$ and allows for accurate DoA estimation, by identifying the $K$ peaks of the so-called MUSIC spectrum \begin{align} P(\phi; \m Q_{L2}) = \| (\mathbf I_{D} - \m Q_{L2} \m Q_{L2}^H ) \m s(\phi) \|_2^{-1}. \end{align} In this work, we also conduct DoA estimation by finding the peaks of the L1-PCA spectrum $P(\phi; \m Q_{L1})$, where $ \m Q_{L1} = \underset{\m Q \in \mathbb C^{D\times K}; ~\m Q^H \m Q=\m I_K}{\text{argmax}}{\| \m X^H \m Q \|_1} $. In this numerical study, we set $D=12$, $K=4$, $J=3$, source DoAs $\Theta = \{-40^\circ, -21^\circ, -7^\circ, 60^\circ\}$, and jammer DoAs $\Theta' = \{ 0^\circ, 20^\circ, 80^\circ \}$. The number of snapshots available for DoA estimation (i.e., the columns of $\m X$), $N$, varies from $10$ to $100$ with step $10$. The $K$ sources of interest operate at SNR 0dB; when active, jammers operate at the much higher SNR of 15dB.
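A minimal, noiseless NumPy sketch of the spectrum-peaking step is given below (illustrative only). It assumes half-wavelength spacing with per-element phase $\pi\sin\phi$, noise- and jammer-free snapshots (so the dominant left singular vectors span the source subspace exactly), and hypothetical angles; at the true DoAs the projection residual vanishes and the spectrum peaks sharply:

```python
import numpy as np

def steer(phi, D):
    """ULA response at broadside angle phi (half-wavelength spacing assumed)."""
    return np.exp(-1j * np.pi * np.sin(phi) * np.arange(D))

def music_spectrum(Q, phis):
    """P(phi; Q) = || (I - Q Q^H) s(phi) ||_2^{-1} over a grid of candidate angles."""
    D = Q.shape[0]
    P = np.eye(D) - Q @ Q.conj().T
    return np.array([1.0 / np.linalg.norm(P @ steer(p, D)) for p in phis])

rng = np.random.default_rng(6)
D, K, N = 12, 2, 50
thetas = np.deg2rad([-30.0, 20.0])             # true source DoAs (hypothetical)
S = np.column_stack([steer(t, D) for t in thetas])
Y = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
X = S @ Y                                      # noiseless snapshots: exact rank-K data
Q = np.linalg.svd(X)[0][:, :K]                 # K dominant left singular vectors

grid = np.deg2rad(np.linspace(-90.0, 90.0, 721))
spec = music_spectrum(Q, grid)
i1 = np.argmin(np.abs(grid - thetas[0]))
i2 = np.argmin(np.abs(grid - thetas[1]))
```

In the noisy, jammed setting of the experiment, only `Q` changes (L2-PCs versus L1-PCs of the recorded data); the spectrum-scanning step is identical.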
We conduct $10~000$ independent DoA estimation experiments and evaluate the average DoA estimation performance of the L2-PCA-based and L1-PCA-based methods by means of the standard root-mean-squared error (RMSE), defined as \begin{align} \text{RMSE} \triangleq \sqrt{\frac{1}{10000} \sum_{m=1}^{10000} \sum_{k=1}^K | \theta_{k}-\hat{\theta}_k(m) |^2} \end{align} where $\hat{\theta}_k(m)$ is the estimate of $\theta_{k}$ at the $m$-th experiment. In Fig. \ref{NoJammer_Fig} we set $\epsilon=0$ (i.e., no unexpected jamming) and plot the RMSE for L2-PCA and L1-PCA versus the number of snapshots, $N$. We observe that L2-PCA and L1-PCA have almost identical RMSE performance, which improves towards $0^\circ$ as $N$ increases. Then, in Fig. \ref{Fig_with_jammer} we increase $\epsilon$ to $2\%$. The performance of L2-PCA (standard MUSIC \cite{MUSIC}) changes strikingly. We observe that the performance of MUSIC now deteriorates as the sample support $N$ increases, converging to a plateau of poor performance at RMSE $=16^\circ$. On the other hand, quite interestingly, L1-PCA resists the directional corruption of the jammers and, as $N$ increases, it attains decreasing RMSE, which reaches as low as $3^\circ$ for $N=100$. That is, in contrast to L2-PCA, L1-PCA benefits from an increased number of processed data points when the corruption ratio remains constant. \section{Conclusions} We showed that, in contrast to the real-valued case, complex L1-PCA is formally $\mathcal{NP}$-hard in the number of data points. Then, we showed how complex L1-PCA can be cast and solved through a unimodular nuclear-norm maximization problem. We conducted optimality analysis and provided necessary conditions for global optimality. For the case of $K = 1$ principal component, we provided necessary and sufficient conditions for local optimality. Based on the optimality conditions, we presented the first two sub-optimal/iterative algorithms in the literature for complex L1-PCA.
Finally, we presented extensive numerical studies from the fields of data analysis and wireless communications and showed that, when the processed complex data are outlier-free, L1-PCA and L2-PCA perform very similarly. However, when the processed data are corrupted by faulty measurements, L1-PCA exhibits sturdy resistance against corruption and significantly robustifies applications that rely on principal-component data feature extraction. \newpage \bibliographystyle{IEEEbib}
Pull Requests are great! Readme/Documentation changes are ok in the master branch.

1. Fork the Repo on github.
2. If you are adding functionality or fixing a bug, please add a test!
3. Add your name to AUTHORS.txt

**Try to use the PEP8 style guide.**
Q: Unable to access Plesk 9 on a dedicated Linux server

It would seem that when I attempt to access my Plesk admin page it is unavailable. After doing some cursory checks I discovered that the server was not listening for incoming requests on port 8443 (verified remotely via telnet). I therefore assumed that the service was not running and issued the following command via SSH under the root account:

/etc/init.d/psa start

This results in the following output:

Starting xinetd service... done
Starting named service... done
Starting mysqld service... done
Starting postgresql service... done
Starting psa-spamassassin service... done
Plesk: Starting Mail Server... already started
Starting Plesk... failed
Starting drwebd service... not installed

As a possibly related issue, today I experienced a server outage for reasons yet unknown. As part of the investigation into this I used Plesk to reboot the server. This action completed successfully and was the last action I performed in Plesk. Since the reboot I can access all services other than Plesk itself. Having never encountered this issue before, and also being something of a newbie when it comes to administering a Linux-based server, I have no idea where to go next. Is there a log file I can check for service start errors? Is the solution staring me in the face? Or maybe something else. Thanks in advance.

EDIT Something else that may or may not be relevant: after rebooting, and before attempting to access Plesk again, I installed sysstat with yum in order to be able to use iostat. Other than that no other changes have been made to the server.

EDIT 2 The last entries in the /var/log/sw-cp-server/error_log file are

2010-12-10 18:35:34: (log.c.75) server started
2010-12-10 18:35:34: (network.c.336) SSL: error:00000000:lib(0):func(0):reason(0)

If I attempt to start Plesk with the command I mentioned earlier, 2 new entries, the same as the ones above, are added to the error log.
I have found some references to this error log entry; these relate to an updated version of SSL breaking Plesk, however I have not (to my knowledge) updated SSL (unless the install of sysstat did this in the background??)

Output from df -h is:

Filesystem            Size  Used Avail Use% Mounted on
/dev/md1              9.2G  5.2G  3.7G  59% /
/dev/md5              9.4G  2.2G  7.2G  23% /usr
/dev/md6              446G  5.2G  440G   2% /var
none                  2.0G  8.0K  2.0G   1% /tmp

EDIT 3 As the errors in the log seem to point to the issue whose resolution is detailed here: http://kb.parallels.com/8338 I have followed the guidelines in that KB, with the following outputs:

[root@s15421692 dumping]# wget -c http://kb.parallels.com/Attachments/12669/Attachments/sw-cp-server-1.0-6.201004011105.centos5.i386.rpm
--2010-12-10 19:05:53-- http://kb.parallels.com/Attachments/12669/Attachments/sw-cp-server-1.0-6.201004011105.centos5.i386.rpm
Resolving kb.parallels.com... 64.131.90.47
Connecting to kb.parallels.com|64.131.90.47|:80... connected.
HTTP request sent, awaiting response...
200 OK
Length: 429868 (420K) [application/x-redhat-package-manager]
Saving to: `sw-cp-server-1.0-6.201004011105.centos5.i386.rpm'

100%[======================================>] 429,868 509K/s in 0.8s

2010-12-10 19:05:54 (509 KB/s) - `sw-cp-server-1.0-6.201004011105.centos5.i386.rpm' saved [429868/429868]

[root@s15421692 dumping]# rpm -Uhv sw-cp-server-1.0-6.201004011105.centos5.i386.rpm
error: Failed dependencies:
    libbz2.so.1 is needed by sw-cp-server-1.0-6.201004011105.centos5.i386
    libcrypto.so.6 is needed by sw-cp-server-1.0-6.201004011105.centos5.i386
    libpcre.so.0 is needed by sw-cp-server-1.0-6.201004011105.centos5.i386
    libssl.so.6 is needed by sw-cp-server-1.0-6.201004011105.centos5.i386
[root@s15421692 dumping]# yum install sw-cp-server-1.0-6.201004011105.centos5.i386.rpm
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * atomic: www6.atomicorp.com
addons | 951 B 00:00
atomic | 1.9 kB 00:00
atomic/primary_db | 412 kB 00:00
base | 2.1 kB 00:00
extras | 2.1 kB 00:00
updates | 1.9 kB 00:00
Setting up Install Process
Examining sw-cp-server-1.0-6.201004011105.centos5.i386.rpm: sw-cp-server-1.0-6.201004011105.centos5.i386
Marking sw-cp-server-1.0-6.201004011105.centos5.i386.rpm as an update to sw-cp-server-1.0-3.200811141432.centos5.x86_64
Resolving Dependencies
--> Running transaction check
---> Package sw-cp-server.i386 0:1.0-6.201004011105.centos5 set to be updated
--> Processing Dependency: libbz2.so.1 for package: sw-cp-server
--> Processing Dependency: libcrypto.so.6 for package: sw-cp-server
--> Processing Dependency: libpcre.so.0 for package: sw-cp-server
--> Processing Dependency: libssl.so.6 for package: sw-cp-server
--> Running transaction check
---> Package bzip2-libs.i386 0:1.0.3-6.el5_5 set to be updated
---> Package bzip2-libs.x86_64 0:1.0.3-6.el5_5 set to be updated
---> Package openssl.i686 0:0.9.8e-12.el5_4.6 set to be updated
---> Package pcre.i386 0:6.6-2.el5_1.7 set to be updated
--> Finished Dependency Resolution
Dependencies Resolved ================================================================================ Package Arch Version Repository Size ================================================================================ Updating: sw-cp-server i386 1.0-6.201004011105.centos5 /sw-cp-server-1.0-6.201004011105.centos5.i386 1.2 M Installing for dependencies: bzip2-libs i386 1.0.3-6.el5_5 updates 37 k openssl i686 0.9.8e-12.el5_4.6 base 1.4 M pcre i386 6.6-2.el5_1.7 base 112 k Updating for dependencies: bzip2-libs x86_64 1.0.3-6.el5_5 updates 35 k Transaction Summary ================================================================================ Install 3 Package(s) Update 2 Package(s) Remove 0 Package(s) Total size: 2.8 M Total download size: 1.6 M Is this ok [y/N]: y Downloading Packages: (1/4): bzip2-libs-1.0.3-6.el5_5.x86_64.rpm | 35 kB 00:00 (2/4): bzip2-libs-1.0.3-6.el5_5.i386.rpm | 37 kB 00:00 (3/4): pcre-6.6-2.el5_1.7.i386.rpm | 112 kB 00:00 (4/4): openssl-0.9.8e-12.el5_4.6.i686.rpm | 1.4 MB 00:00 -------------------------------------------------------------------------------- Total 6.0 MB/s | 1.6 MB 00:00 Package sw-cp-server-1.0-6.201004011105.centos5.i386.rpm is not signed [root@s15421692 dumping]# /etc/init.d/psa start Starting xinetd service... done Starting named service... done Starting mysqld service... done Starting postgresql service... done Starting psa-spamassassin service... done Plesk: Starting Mail Server... already started Starting Plesk... failed Starting drwebd service... not installed A: I'm going to put my money on the partition containing /var being full. Check df -h and see if that's the case. There is a log file at (from memory so use tab complete if I'm slightly off) /var/log/sw-cp-server/error_log
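If you want to automate that disk check, here is a small sketch (illustrative only — the threshold and the sample data below are made up, not taken from this server) that parses `df -P`-style output and flags filesystems at or above a usage threshold:

```python
# Sketch: flag nearly-full filesystems from `df -P` output.
# The 90% threshold and the sample output are illustrative, not from the server above.
def full_filesystems(df_output, threshold=90):
    """Return (mount point, use%) pairs at or above the threshold."""
    flagged = []
    for line in df_output.strip().splitlines()[1:]:  # skip the header row
        parts = line.split()
        use_pct = int(parts[4].rstrip("%"))  # "Capacity" column, e.g. "60%"
        if use_pct >= threshold:
            flagged.append((parts[5], use_pct))
    return flagged

sample = """Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/md1 9641092 5452616 3698612 60% /
/dev/md6 467665712 5452616 440000000 2% /var
none 2097152 2097000 152 100% /tmp"""

print(full_filesystems(sample))
```

A full `/var` (or `/tmp`) is a common reason for sw-cp-server refusing to start, so a check like this in cron can catch it before the panel dies.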
Bathypterois longipes is a species of fish described by Günther in 1878. Bathypterois longipes belongs to the genus Bathypterois and the family Ipnopidae. No subspecies are listed. Image gallery Sources External links Aulopiformes longipes
<!DOCTYPE html>
<html>
<head>
</head>
<body>
<style>
.form-outer{
background: lightgrey;
width: 600px;
height: 80vh;
}
</style>
<div id="random-interface">
<style>
#random-interface{
position: absolute;
z-index: 1000;
top: 0px;
left: 0px;
}
#random-interface button{
color: red;
}
</style>
</div>
<div class="form-outer">
<!-- Conversational Form will auto-run because of attribute "cf-form" -->
<form id="form" action="">
<div class="field name-firstname">
<label for="billing:firstname" class="required"><em>*</em>Fornavn</label>
<div class="input-box">
<input type="text" id="billing:firstname" name="billing[firstname]" value="" title="Fornavn" maxlength="255" class="input-text required-entry" cf-validation="window.testValidation" cf-questions="Put on the sherlock holmes hat to get past this step, and look at the custom validation method window.testValidation" cf-error="This value goes through the window JavaScript method called testValidation, check that for guidance." >
</div>
</div>
<div class="radio-control" cf-questions="Check flowStepCallback for hints" >
<input required cf-label="Sign-in with Facebook" type="radio" name="login-type" tabindex="1" value="facebook">
<input required cf-label="Sign-in with Twitter" type="radio" name="login-type" tabindex="2" value="twitter">
<input required cf-label="Sign-in with E-mail" type="radio" name="login-type" tabindex="3" value="email">
<input required cf-label="Sherlock will sign in for you" type="radio" name="login-type" tabindex="4" value="sherlock">
</div>
</form>
</div>
<!-- <script src="https://code.jquery.com/jquery-3.1.1.min.js" integrity="sha256-hVVnYaiADRTO2PzUGmuLJr8BLUSjGIZsDYGmIJLv2b8=" crossorigin="anonymous"></script> -->
<script type="text/javascript" src="../build/scripts/bower_components/promise-polyfill/promise.js" ></script>
<script type="text/javascript" src="../build/scripts/bower_components/custom-event-polyfill/custom-event-polyfill.js" ></script>
<script
id="conversational-form-development" type="text/javascript" src="../build/cf/ConversationalForm.js" development ></script> <script type="text/javascript" src="../build/cf/ConversationalForm.plugin.js" development ></script> <script type="text/javascript" src="../build/cf/logic/Helpers.js" ></script> <script type="text/javascript" src="../build/cf/ui/BasicElement.js" ></script> <script type="text/javascript" src="../build/cf/ui/control-elements/ControlElement.js" ></script> <script type="text/javascript" src="../build/cf/ui/control-elements/ControlElements.js" ></script> <script type="text/javascript" src="../build/cf/ui/ScrollController.js" ></script> <script type="text/javascript" src="../build/cf/data/Dictionary.js" ></script> <script type="text/javascript" src="../build/cf/form-tags/Tag.js" ></script> <script type="text/javascript" src="../build/cf/form-tags/TagGroup.js" ></script> <script type="text/javascript" src="../build/cf/form-tags/InputTag.js" ></script> <script type="text/javascript" src="../build/cf/form-tags/SelectTag.js" ></script> <script type="text/javascript" src="../build/cf/form-tags/ButtonTag.js" ></script> <script type="text/javascript" src="../build/cf/form-tags/OptionTag.js" ></script> <script type="text/javascript" src="../build/cf/ui/control-elements/Button.js" ></script> <script type="text/javascript" src="../build/cf/ui/control-elements/RadioButton.js" ></script> <script type="text/javascript" src="../build/cf/ui/control-elements/CheckboxButton.js" ></script> <script type="text/javascript" src="../build/cf/ui/control-elements/OptionButton.js" ></script> <script type="text/javascript" src="../build/cf/ui/control-elements/OptionsList.js" ></script> <script type="text/javascript" src="../build/cf/ui/control-elements/UploadFileUI.js" ></script> <script type="text/javascript" src="../build/cf/ui/UserInput.js" ></script> <script type="text/javascript" src="../build/cf/ui/chat/ChatResponse.js" ></script> <script type="text/javascript" 
src="../build/cf/ui/chat/ChatList.js" ></script> <script type="text/javascript" src="../build/cf/logic/FlowManager.js" ></script> <link type="text/css" rel="stylesheet" href="../build/cf/cf.css"/> <link type="text/css" rel="stylesheet" href="../build/cf/ui/control-elements/cf-control-elements.css"/> <link type="text/css" rel="stylesheet" href="../build/cf/ui/control-elements/cf-button.css"/> <link type="text/css" rel="stylesheet" href="../build/cf/ui/control-elements/cf-radio-button.css"/> <link type="text/css" rel="stylesheet" href="../build/cf/ui/control-elements/cf-checkbox-button.css"/> <link type="text/css" rel="stylesheet" href="../build/cf/ui/control-elements/cf-options-list.css"/> <link type="text/css" rel="stylesheet" href="../build/cf/ui/control-elements/cf-upload-file-ui.css"/> <link type="text/css" rel="stylesheet" href="../build/cf/ui/cf-input.css"/> <link type="text/css" rel="stylesheet" href="../build/cf/ui/cf-info.css"/> <link type="text/css" rel="stylesheet" href="../build/cf/ui/cf-list-button.css"/> <link type="text/css" rel="stylesheet" href="../build/cf/ui/chat/cf-chat-response.css"/> <link type="text/css" rel="stylesheet" href="../build/cf/ui/chat/cf-chat.css"/> <!-- OR --> <!--<script type="text/javascript" id="conversational-form-development" src="../dist/conversational-form.min.js" crossorigin></script>--> <!-- OR --> <!--<script type="text/javascript" id="conversational-form-development" src="https://conversational-form-0iznjsw.stackpathdns.com/conversational-form.min.js" crossorigin></script>--> <script> var testValidation = function(dto, success, error){ console.log("testValidation, dto:", dto); if(dto.text.indexOf("holmes") != -1) return success(); return error(); }; (function(){ var conversationalForm = new cf.ConversationalForm({ formEl: document.getElementById("form"), flowStepCallback: function(dto, success, error){ console.log("flowStepCallback, dto:", dto); if(dto.text.indexOf("holmes") != -1 || 
dto.input.currentTag.value[0].indexOf("sherlock") != -1)
success();
else
error();
},
submitCallback: function(){
// remove Conversational Form
conversationalForm.remove();
alert("Custom submit callback reached, removing Conversational Form, see markup of this file");
}
});
})();
</script>
</body>
</html>
\section{Introduction} General existence questions involving the Ricci curvature of compact K\"ahler manifolds go back at least to work of Calabi in the 1950's \cite{kn:Cal1}, \cite{kn:Cal2}. We begin by recalling some very basic notions in K\"ahler geometry. \begin{itemize}\item All the K\"ahler metrics in a given cohomology class can be described in terms of some fixed reference metric $\omega_{0}$ and a potential function, that is \begin{equation}\omega= \omega_{0}+ i \dbd \phi. \end{equation} \item A hermitian holomorphic line bundle over a complex manifold has a unique {\it Chern connection} compatible with both structures. A Hermitian metric on the anticanonical line bundle $K_{X}^{-1}=\Lambda^{n}TX$ is the same as a volume form on the manifold. When this volume form is derived from a K\"ahler metric the curvature of the Chern connection can be identified with the Ricci tensor. (In general in this article we will not distinguish between metrics and Ricci tensors regarded as symmetric tensors or $(1,1)$-forms.) \item If the K\"ahler class is a $2\pi$ times an integral class a metric can be regarded as the curvature of the Chern connection on a holomorphic line bundle. The K\"ahler potential parametrising metrics in (1) has a geometrical meaning as a change in the Hermitian metric on the line bundle: $\vert \ \vert =e^{\phi}\vert\ \vert_{0}$. \end{itemize} One of the questions initiated by Calabi was that of prescribing the volume form of a K\"ahler metric in a fixed cohomology class. By the $\dbd$-lemma this is the same as prescribing the Ricci tensor (as a closed $(1,1)$-form in the class $c_{1}(X)$). This Calabi conjecture was established by Yau in 1976 \cite{kn:Y}. In particular when $c_{1}(X)=0$ this gives the existence of {\it Calabi-Yau metrics}, with vanishing Ricci curvature. Another question raised by Calabi involved {\it K\"ahler-Einstein metrics} with \lq\lq cosmological constant'' $\lambda$, so $ {\rm Ricci}=\lambda \omega$. 
The case $\lambda=0$ is the Calabi-Yau case as above and if $\lambda$ is non-zero we may assume that it is $\pm 1$. The K\"ahler-Einstein condition can then be expressed as saying that we require that the Hermitian metric on the holomorphic line bundle $K_{X}^{-1}$ given by the volume form of $\omega$ realises $\pm \omega$ as the curvature form of its Chern connection. Explicitly, in complex dimension $n$ the equation to be solved for a K\"ahler potential $\phi$ is \begin{equation} (\omega_{0}+ i \dbd\phi)^{n}= \omega_{0}^{n} \exp(\pm \phi +h_{0})\end{equation} where $h_{0}$ is the solution of $\dbd h_{0}+\omega_{0}=\pm {\rm Ricci}(\omega_{0}) $ given by the $\dbd$-lemma. In the case when $\lambda=-1$ there is a straightforward existence theorem, established by Aubin and Yau. In fact the proof is significantly simpler than that when $\lambda=0$. The point is that the nonlinearity from the exponential in (2) occurs with a favourable sign. The condition on the K\"ahler class means that the manifold $X$ is of general type and the metrics are generalisations of metrics of constant curvature $-1$ on Riemann surfaces of genus 2 or more. The harder case is when $\lambda=1$. Then the condition on the K\"ahler class means that $X$ is a {\it Fano manifold}. A result of Matsushima \cite{kn:Mat}, also from the 1950's, shows that existence can fail in this case. Matsushima showed that if there is a solution the holomorphic automorphism group of $X$ is reductive (the complexification of a compact Lie group). This means that Fano manifolds, such as the projective plane blown up in one or two points, with non-reductive automorphism groups cannot support such a metric. The same Calabi-Aubin-Yau scheme which proved existence in the cases $\lambda\leq 0$---the \lq\lq continuity method'' (see 4.2 below)---can be set up in the positive case, but must break down for these manifolds.
More precisely, the difference arises because of the absence of a $C^{0}$-estimate for the K\"ahler potential due to the unfavourable sign in the equation (2). On the other hand there are many cases where the existence of a solution has been established, using arguments exploiting detailed geometric features of the manifolds and the theory of \lq\lq log canonical thresholds''. The question which arises is to characterise in terms of the complex geometry of the manifold $X$ exactly when a solution exists. In the early 1990's Yau conjectured that the appropriate criterion should be in terms of the {\it stability} of the manifold $X$ and, after two decades of work by many mathematicians, this is now known to be the case. The precise formulation is in terms of an algebro-geometric notion of {\it K-stability} and the statement is that a Fano manifold admits a K\"ahler-Einstein metric if and only if it is K-stable. One of the attractive features of this problem is the range of techniques which can be brought to bear. By its nature, the statement involves an interaction between algebraic geometry and complex differential geometry and, as we shall see, there are important connections with global Riemannian geometry, with pluripotential theory in complex analysis and with nonlinear PDE. Four different proofs of the main result have appeared up to the time of writing. \begin{enumerate} \item Deformation of cone singularities (Chen, Donaldson and Sun \cite{kn:CDS}); \item The continuity method (Datar, Sz\'ekelyhidi \cite{kn:DaS}); \item Proof via K\"ahler-Ricci flow (Chen, Sun, Wang \cite{kn:CSW}); \item Proof by the variational method (Berman, Boucksom, Jonsson \cite{kn:BBJ} --- the statement proved here is slightly different). \end{enumerate} The purpose of this article is to survey these recent developments. \section{K-stability} The notion of \lq\lq stability'', in this context, arose from the study of moduli problems in algebraic geometry and geometric invariant theory.
It is not usually possible to give a good structure to a set of all isomorphism classes of algebro-geometric objects. For example, generic 4-tuples of points in the projective line up to the action of projective transformations are classified by the cross-ratio, but there is no satisfactory way to define the cross-ratio if three or more points coincide. The idea is that the isomorphism classes of a suitable restricted class of stable objects do form a good space. There is a general circle of ideas relating the algebraic approach to these questions to metric structures and differential geometry involving the notion of a moment map and the \lq\lq equality of symplectic and complex quotients''. In particular there is a large literature, going back to the early 1980's, developing these notions in the framework of gauge theories and results such as the existence of Hermitian Yang-Mills connections on stable vector bundles \cite{kn:UY}. But the author recently wrote a survey which emphasised this side of the story \cite{kn:D2}, so we will not go into it here beyond noting that the K\"ahler-Einstein problem fits naturally into wider questions of the existence of constant scalar curvature and extremal K\"ahler metrics; questions which remain largely open. Typically, stability is defined by a {\it numerical criterion} on {\it degenerations} of the objects in question. In our situation, we consider polarised varieties $(X,L)$, so $L$ is an ample line bundle over $X$ and the sections of $L$ embed $X$ as a projective variety in some projective space. The relevant degenerations are {\it test configurations} which are defined as follows. For an integer $m>0$ a test configuration of exponent $m$ for $(X,L)$ is a flat family of schemes $\pi:{\cal X}\rightarrow \bC$, with a relatively ample line bundle ${\cal L}\rightarrow {\cal X}$ and a $\bC^{*}$-action on ${\cal X},{\cal L}$ covering the standard action on $\bC$. 
For $t\in \bC$ we write $X_{t}$ for the scheme-theoretic fibre $\pi^{-1}(t)$ and $L_{t}$ for the restriction of ${\cal L}$ to $X_{t}$. We require that for all non-zero $t$ the pair $(X_{t}, L_{t})$ is isomorphic to $(X,L^{m})$. (Note that, due to the $\bC^{*}$-action, it suffices to know this for {\it some} non-zero $t$.) For technical reasons we also suppose that the total space ${\cal X}$ is normal. The numerical criterion, in our situation, is provided by the Futaki invariant. In its general form, \cite{kn:D0}, this is defined for any $n$-dimensional projective scheme $Z$ with $\bC^{*}$-action and $\bC^{*}$-equivariant ample line bundle $\Lambda\rightarrow Z$, as follows. For each integer $k\geq 0$ we have a vector space $H^{0}(Z;\Lambda^{k})$ with an induced $\bC^{*}$ action. Write $d(k)$ for the dimension of the space and $w(k)$ for the sum of the weights of the action. For large $k$, $d(k)$ is given by a Hilbert polynomial which has degree exactly $n$ (since $\Lambda$ is ample), while $w(k)$ is given by a polynomial of degree at most $n+1$. Thus $w(k)/(k\, d(k))$ is bounded and has an expansion, for large $k$: $$ \frac{w(k)}{k\, d(k)} = F_{0} + k^{-1} F_{1} + \dots, $$ and the Futaki invariant is defined to be the coefficient $F_{1}$. In our situation, we define the Futaki invariant ${\rm Fut}({\cal X})$ of a test configuration of exponent $m$ to be $m^{-1}$ times the Futaki invariant of the central fibre, with line bundle $L_{0}$ and the induced $\bC^{*}$-action. With these definitions in place we can state the main definition of this section. \begin{defn} A polarised variety $(X,L)$ is K-semistable if for any test configuration ${\cal X}$ we have ${\rm Fut}({\cal X})\geq 0$. It is $K$-stable if equality holds only when ${\cal X}$ is a product $X\times \bC$. \end{defn} Note that in the last clause we allow the $\bC^{*}$-action on $X\times \bC$ to be induced from a non-trivial action on $X$. What we have called K-stability is often called K-polystability in the literature.
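The expansion coefficients can be checked symbolically. Writing $d(k)=a_{0}k^{n}+a_{1}k^{n-1}+\cdots$ and $w(k)=b_{0}k^{n+1}+b_{1}k^{n}+\cdots$, the bounded ratio $w(k)/(k\,d(k))$ has $F_{0}=b_{0}/a_{0}$ and $F_{1}=(b_{1}a_{0}-a_{1}b_{0})/a_{0}^{2}$. The following sketch (a toy check, not taken from the text) verifies this with exact rational arithmetic for $(\bP^{1},{\cal O}(1))$ with the standard $\bC^{*}$-action, where $d(k)=k+1$, $w(k)=\tfrac12 k(k+1)$ and the Futaki invariant vanishes:

```python
# Toy check of the expansion w(k)/(k d(k)) = F0 + F1/k + ... with exact rationals.
# Example: (P^1, O(1)) with the standard C*-action: d(k) = k+1 (= dim H^0(O(k))),
# w(k) = 0+1+...+k = k(k+1)/2 (total weight on the monomials x^i y^{k-i}).
from fractions import Fraction

def F0_F1(a, b):
    """Leading expansion coefficients of w(k)/(k d(k)), given the top two
    coefficients of d(k) = a[0] k^n + a[1] k^{n-1} + ... and
    w(k) = b[0] k^{n+1} + b[1] k^n + ...  (any n)."""
    a0, a1 = Fraction(a[0]), Fraction(a[1])
    b0, b1 = Fraction(b[0]), Fraction(b[1])
    return b0 / a0, (b1 * a0 - a1 * b0) / a0**2

# d(k) = k + 1, w(k) = (1/2)k^2 + (1/2)k
F0, F1 = F0_F1([1, 1], [Fraction(1, 2), Fraction(1, 2)])
print(F0, F1)  # 1/2 0 : the Futaki invariant of this product action vanishes

# direct check at one value of k: the ratio is exactly 1/2
k = 10**6
assert Fraction(k * (k + 1), 2) / (k * (k + 1)) == Fraction(1, 2)
```

Here $F_{1}=0$ is consistent with the product test configuration $\bP^{1}\times\bC$ in the definition above.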
The precise statement of the result mentioned in the previous section, verifying Yau's conjecture, is \begin{thm} A Fano manifold $X$ admits a K\"ahler-Einstein metric if and only if $(X,K_{X}^{-1})$ is K-stable. \end{thm} Here the \lq\lq only if'' is usually regarded as the easier direction and is due to Berman \cite{kn:Berm}, following related results of various authors. The uniqueness of the metric, modulo holomorphic automorphisms, is a relatively old result of Bando and Mabuchi \cite{kn:BM}. We will not say more about these results here but focus on the \lq\lq if'' direction. To give some background to the technical aspects of the proofs sketched in Section 4 below we will now try to explain why Theorem 1 is plausible. First we go back to the definition of the Futaki invariant of $(Z,\Lambda)$ in the case when $Z$ is a manifold, which was in fact the original context for Futaki's definition \cite{kn:Fut}. Choose a K\"ahler metric $\omega$ on $Z$ in the class $c_{1}(\Lambda)$ preserved by the action of $S^{1}\subset \bC^{*}$. Viewing $\omega$ as a symplectic structure, this action is generated by a Hamiltonian function $H$ on $Z$. Then the Futaki invariant can be given by a differential geometric formula \begin{equation} \int_{Z} (R-\hat{R}) H \frac{\omega^{n}}{n!}, \end{equation} where $R$ is the scalar curvature of $\omega$ and $\hat{R}$ is the average value of $R$ over $Z$. This formula can be derived from the equivariant Riemann-Roch theorem and can also be understood in terms of the asymptotic geometry of sections of $\Lambda^{k}$ as $k\rightarrow \infty$, in the vein of quasi-classical asymptotics in quantisation theory. What this formula shows immediately is that if $\omega$ can be chosen to have constant scalar curvature---in particular if it is a K\"ahler-Einstein metric---then the Futaki invariant vanishes. This gives another way, different from the Matsushima theorem, of ruling out K\"ahler-Einstein metrics on 1 or 2 point blow-ups of $\bC\bP^{2}$.
The definition of K-stability employs the Futaki invariant in a more subtle way; it is not just the automorphisms of $X$ which need to be considered but those of its degenerations. The {\it Mabuchi functional} gives a way to understand this phenomenon. This is a functional ${\cal F}$ on the space ${\cal H}$ of K\"ahler metrics in a given cohomology class on a manifold $X$ defined via its first variation \begin{equation} \delta {\cal F} = \int_{X} (R-\hat{R}) \delta \phi \frac{\omega_{\phi}^{n}}{n!}. \end{equation} Here $\delta \phi$ is an infinitesimal variation in the K\"ahler potential and one shows that such a functional ${\cal F}$ is well-defined, up to the addition of an arbitrary constant. By construction a critical point of ${\cal F}$ is exactly a constant scalar curvature metric which, in the setting of Theorem 1, can be shown to be K\"ahler-Einstein. (We mention here that there is another functional, the {\it Ding functional}, which has many similar properties to the Mabuchi functional and plays an important part in many developments. This is only defined for manifolds polarised by $K^{\pm 1}$.) There are three possibilities: \begin{itemize} \item ${\cal F}$ is bounded below on ${\cal H}$ and attains its infimum; \item ${\cal F}$ is bounded below but does not attain its infimum; \item ${\cal F}$ is not bounded below. \end{itemize} An extension of Theorem 1 is the statement that these three possibilities correspond to $X$ being respectively $K$-stable, $K$-semistable (but not $K$-stable) and not $K$-semistable. Now suppose that ${\cal X}$ is a test configuration for $(X,K_{X}^{-1})$ and, for simplicity, that the total space is smooth. Choose a K\"ahler metric on this total space, invariant under $S^{1}\subset \bC^{*}$.
Pulling back by the $\bC^{*}$-action the restrictions of this metric to the fibres $X_{t}$ for non-zero $t$ can be regarded as a family of metrics $\omega(t)$ on the fixed manifold $X$ parametrised by $t\in \bC^{*}$, but these metrics have no limit, among metrics on $X$, as $t\rightarrow 0$. It is natural to think of this limit as a \lq\lq point at infinity'' in the space ${\cal H}$ of K\"ahler metrics on $X$. As we discuss further in Section 4.4 below, the role of the Futaki invariant is to determine the asymptotic behaviour as $t\rightarrow 0$ of the Mabuchi functional in such families obtained from test configurations. Theorem 1 can be understood roughly as saying that if there is no minimum of ${\cal F}$ this can be detected by studying the asymptotics at points at infinity of this kind (derived from algebro-geometric data).
\section{Riemannian convergence theory and projective embeddings} In this section we will discuss some ideas which play an important role in three of the four proofs considered in Section 4 below. The general context can be explained as follows. In solving a PDE problem {\it compactness}---the ability to take limits in some kind of approximating scheme---is usually crucial. On the other hand in our problem we need to exhibit the obstruction to solving the problem (the existence of a K\"ahler-Einstein metric) as an algebro-geometric object (a test configuration with non-positive Futaki invariant). In the framework of Ricci curvature in Riemannian geometry there is a well-developed convergence theory of {\it Gromov-Hausdorff limits}; thus, in the K\"ahler situation, we would like to relate these limits to algebraic geometry and that is the topic of this section. We begin by recalling some of the main results from the Riemannian theory of manifolds with a lower bound on the Ricci curvature. The foundation of the theory is the link between the Ricci curvature and volume expressed by the Bishop comparison theorem. For simplicity we just consider the case of an $m$-dimensional Riemannian manifold $M$ with ${\rm Ricci}\geq 0$. Then Bishop's theorem states that for each $p\in M$ the volume ratio \begin{equation} v_{p}(r) = \frac {{\rm Vol}(B_{p}(r))}{\Omega_{m} r^{m}},\end{equation} is a weakly decreasing function of $r$. Here $B_{p}(r)\subset M$ is the metric ball of radius $r$, and we introduce the normalising constant $\Omega_{m}$---the volume of the unit ball in $\bR^{m}$---so that $v_{p}(r)$ tends to $1$ as $r$ tends to $0$. If $M$ is compact with total volume $V$ and diameter $\leq D$ it follows that \begin{equation} {\rm Vol}(B_{p}(r)) \geq \kappa r^{m}, \end{equation} with $\kappa= V/D^{m}$. 
Recall that the {\it Gromov-Hausdorff distance} between two compact metric spaces $A,B$ is defined as the infimum of the numbers $\delta$ such that there is a metric on the disjoint union $A\sqcup B$ which extends the given metrics on $A,B$ and such that both $A,B$ are $\delta$-dense in $A\sqcup B$. If $(M_{i}, g_{i})$ is a sequence of compact Riemannian $m$-manifolds with ${\rm Ricci}\geq 0$, ${\rm Vol}(M_{i})=V$, ${\rm diam}(M_{i})\leq D$ then Gromov's compactness theorem asserts that there is a subsequence which converges in the sense of this Gromov-Hausdorff distance to some limiting metric space $(Z, d_{Z})$. (More generally, the same result applies if we have any fixed lower bound on the Ricci curvatures.) The proof is an elementary argument based on ball-packing considerations and the lower bound (6). It is sometimes convenient to express this Gromov-Hausdorff convergence in terms of a natural topology on the disjoint union $$ {\cal M}= Z\cup \bigsqcup_{i}M_{i} . $$ Thus for $q\in Z$ it makes sense to talk about points $p_{i}\in M_{i}$ which are close to $q$. Results of Anderson \cite{kn:And}, Cheeger-Colding \cite{kn:CC} and Cheeger-Colding-Tian\cite{kn:CCT} give finer information about such \lq\lq non-collapsed'' Gromov-Hausdorff limits. (Here the non-collapsing refers to the volume lower bound, which rules out the collapse of the sequence of $m$-dimensional manifolds to some lower dimensional space.) The notion of Gromov-Hausdorff convergence can be extended to sequences of spaces with base points: the metric balls of any fixed radius centred at the base points are required to converge as above. For each point $q\in Z$ and sequence of real numbers $\lambda_{j}\rightarrow \infty$ we consider the sequence of based metric spaces $(q, Z, \lambda_{i} d_{Z})$. After perhaps passing to a subsequence we have a based Gromov-Hausdorff limit which is a metric cone, a {\it tangent cone} of $Z$ at $q$. 
The regular set $R\subset Z$ is defined to be the subset where some tangent cone is ${\bf R}^{m}$ and the complement $Z\setminus R$ is the singular set $S$. If the manifolds $M_{i}$ satisfy a fixed bound on the Ricci curvature $\vert {\rm Ricci}\vert \leq \Lambda$ then more can be said. Anderson showed that there are fixed $\delta_{m},\kappa_{m}$, depending only the dimension, such that if $p\in M_{i}$ and $r\leq \Lambda^{-1/2}$ then if the if the volume ratio $v_{p}(r)$ is greater than $1-\delta_{m}$ there are harmonic co-ordinates on the sub-ball $B_{p}(\kappa r)$ in which the metric satisfies $C^{1,\nu}$ estimates. Together with results of Cheeger and Colding this shows that the regular set is open in $Z$ and the Riemannian metrics converge to a $C^{1,\nu}$-Riemannian metric $g_{\infty}$ on $R$. ( This means that if $U$ is a pre-compact open set in $R$ there are $C^{2,\nu}$ diffeomorphisms $\chi_{i}:U\rightarrow M_{i}$ which, regarded as maps into ${\cal M}$, converge to the inclusion $U\rightarrow Z$ and such that the $\chi^{*}_{i}(g_{i})$ converge in $C^{1,\nu}$ to $g_{\infty}$.) The singular set $S$ has Hausdorff codimension at least $4$. (In the general Riemannian context, this is a recent result of Cheeger and Naber \cite{kn:CN}, but in the K\"ahler case which will be our concern it goes back to Cheeger, Colding and Tian \cite{kn:CCT}.) The corresponding statements apply to tangent cones: each has a smooth Ricci-flat metric outside a closed singular set of codimension at least 4. We want to relate these ideas to algebraic geometry and in this section we will focus on the case considered in \cite{kn:DS}. Thus we suppose that $(X_{i}, \omega_{i})$ are K\"ahler manifolds of complex dimension $n$ with fixed volumes $V$, diameters $\leq D$ and $\vert {\rm Ricci}\vert\leq \Lambda$ and with a Gromov-Hausdorff limit $Z$, as above. 
We suppose that that these are polarised manifolds, so that $\omega_{i}$ is the curvature of a Hermitian holomorphic line bundle $L_{i}\rightarrow X_{i}$. The main result is that $Z$ can be endowed with the structure of an algebraic variety. More precisely, we allow passage to a subsequence and form the disjoint union ${\cal M}$ as above. Then there is a continuous map $I:{\cal M}\rightarrow \bC\bP^{N}$ with the two properties. \begin{itemize}\item There is some fixed $k$ such that for sufficiently large $i$, the restriction of $I$ to $X_{i}$ is an embedding defined by the holomorphic sections of $L_{i}^{k}\rightarrow X_{i}$; \item The restriction of $I$ to $Z$ is a homeomorphism to its image, which is a normal projective variety in $\bC\bP^{N}$. \end{itemize} This result can be seen as an extension of the Kodaira embedding theorem to singular limit spaces and the proof extends some the ideas in one approach to the Kodaira theorem (an approach which seems to be well-known to experts but does not feature in standard textbooks). Suppose that $L\rightarrow X$ is a holomorphic line bundle over a compact complex manifold and $\sigma_{0}$ is a holomorphic section of $L$ over an open subset $V\subset X$. Let $\beta$ be a cut-off function with compact support in $V$, extended by $0$ over $X$ and with $\beta=1$ on some interior region $V_{0}\subset V$. Then we can regard $\sigma=\beta \sigma_{0}$ as a smooth section of $L$ over $X$ in an obvious way. This will not be a holomorphic section because we have a term coming from the cut-off function $$ \db \sigma = (\db \beta) \sigma_{0}, $$ but if we have Hermitian metrics on $L$ and $X$ we can project $\sigma$ to the space of holomorphic sections using the $L^{2}$ inner product on sections of $L$, arriving at a holomorphic section $s$. Of course in this generality the construction need not be useful---the projection $s$ could be $0$. 
The idea is that under suitable hypotheses we can arrange that $s$ is not zero and is very close to the original section $\sigma_{0}$ over $V_{0}$. The key is to estimate the error term $\eta=\sigma-s$, which is given by a Hodge Theory formula $$ \eta= \db^{*} \Delta^{-1} \db \sigma, $$ where $\Delta$ is the $\db$-Laplacian on the $L$-valued $(0,1)$ forms. Of course this formula only makes sense if $\Delta$ is invertible, i.e. if the cohomology group $H^{1}(X;L)$ is zero. Then we have $$ \Vert \eta\Vert^{2}_{L^{2}}= \langle \db^{*} \Delta^{-1} \db \sigma, \db^{*}\Delta^{-1} \db \sigma\rangle= \langle \db\db^{*}\Delta^{-1}\db \sigma, \Delta^{-1}\db\sigma\rangle=\langle \db\sigma, \Delta^{-1}\db\sigma\rangle, $$ and so $$ \Vert \eta\Vert_{L^{2}}^{2}\leq \Vert \Delta^{-1}\Vert \ \Vert \db\sigma\Vert^{2}, $$ where $\Vert \Delta^{-1}\Vert$ is the $L^{2}$-operator norm. Now suppose that $L$ is a positive line bundle and the metric on $X$ is the K\"ahler metric $\omega$ given by the curvature of $L$. There is then a formula of Weitzenb\"ock type \begin{equation} \Delta = P^{*} P + {\rm Ric} + 1 , \end{equation} where $P$ is the $(0,1)$-component of the covariant derivative on $L$-valued $(0,1)$ forms. But all we use is that $P^{*}P$ is a non-negative operator, so if ${\rm Ric} \geq -1/2$ (say) then $\Delta\geq 1/2$ and $\Delta^{-1}$ is defined and has $L^{2}$-operator norm at most $2$. So we get $$\Vert \eta\Vert_{L^{2}}\leq \sqrt{2} \Vert \db \sigma \Vert_{L^{2}}.$$ So if we can arrange that $\db\sigma$ is small, in $L^{2}$ norm compared with $\sigma$, then we get a non-trivial holomorphic section $s$. By construction, $\eta$ is holomorphic over $V_{0}$ and the $L^{2}$ norm of $\eta$ controls all derivatives there, by the usual elliptic estimates, and we can hope to show that $s$ is a small perturbation of $\sigma_{0}$ in any $C^{r}$ norm. 
The whole approach extends easily to a case when the original section $\sigma_{0}$ is not exactly holomorphic but approximately so, measured in terms of bounds on the $L^{2}$ norm of $\db \sigma_{0}$. To prove the usual Kodaira embedding theorem in this way we first make the simple observation that changing $L$ to $L^{k}$ for some $k>1$ corresponds to rescaling the metric by a factor $k$---i.e. scaling lengths of paths by a factor $\sqrt{k}$. Under this rescaling the Ricci curvature is multiplied by $k^{-1}$ so if we start with any positive line bundle and take a suitable power we can arrange that the condition ${\rm Ric}\geq - 1/2$ holds after rescaling. We take $U\subset X$ to be a small co-ordinate ball centred on a point $p\in X$. The flat model is given by the trivial line bundle $\Lambda$ over $\bC^{n}$ with a Hermitian structure corresponding to the Euclidean metric on $\bC^{n}$. In this structure the trivialising section $\tau$ of $\Lambda$ has Gaussian norm $$ \vert \tau(z) \vert = e^{-\vert z\vert^{2}/4}, $$ which decays very rapidly at infinity. After rescaling, the geometry of the manifold $X$ in the small ball $U$ is close to the flat model and we get an approximately holomorphic section $\sigma_{0}$ modelled on $\tau$. By suitable choice of parameters one arranges a cut-off function $\beta$ with the support of $\nabla \beta$ contained in the region where $\sigma_{0}$ is very small, so that $\Vert \db \sigma\Vert$ is also very small. Here the \lq\lq suitable choice of parameters'' involves choosing $k$ large. The upshot is that for a suitable $k$ and for each $p\in X$ we construct a holomorphic section of $L^{k}$ \lq\lq peaked'' around $p$ and in particular not vanishing at $p$. Thus the sections of $L^{k}$ define a holomorphic map of $X$ to a projective space and one can go further to show that this map is an embedding, once $k$ is sufficiently large. 
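As a consistency check on the flat model (with one common choice of sign and normalisation conventions, which may differ by constant factors from other references), the Gaussian weight does reproduce the Euclidean K\"ahler form as curvature:

```latex
% Trivialising section \tau with  |\tau(z)|^{2} = e^{-|z|^{2}/2}.
% Curvature of the Chern connection of this Hermitian metric:
F \;=\; -\partial\db \log \vert\tau\vert^{2}
  \;=\; \partial\db\,\tfrac{1}{2}\vert z\vert^{2}
  \;=\; \tfrac{1}{2}\sum_{k} dz_{k}\wedge d\bar{z}_{k},
% which is, up to the normalising factor of i, the Euclidean Kahler form.
% Replacing L by L^{k} multiplies the curvature by k, so lengths scale by
% \sqrt{k}, as in the rescaling used above.
```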
(Note that the formula (7) is the same as that used in the proof of the Kodaira-Nakano vanishing theorem, which is invoked in the usual proof of the embedding theorem via blow-ups.) Returning to our main discussion, we do not want to do analysis directly on the Gromov-Hausdorff limit $Z$ but instead establish uniform estimates on the converging sequence $X_{i}$. Consider any polarised manifold $(X,L)$ with K\"ahler metric given by the curvature of $L$. We endow $H^{0}(X,L)$ with the standard $L^{2}$ metric and for $x$ in $X$ we consider the evaluation map $${\rm ev}_{x}: H^{0}(X,L)\rightarrow L_{x}.$$ We define $\rho_{L}(x)$ to be the square of the norm of this map, so the statement that $\rho_{L}(x)>0$ for all $x\in X$ is the same as saying that the sections of $L$ define a map $\tau: X\rightarrow \bP(H^{0}(X,L)^{*})=\bP$. More generally, a lower bound on $\rho_{L}(x)$ gives metric control of this map. The operator norm of $$ d\tau_{x}: TX_{x}\rightarrow T\bP_{\tau(x)}$$ is $\rho_{L}(x)^{-1/2} \max (\vert (\nabla s)_{x}\vert) $, where the maximum is taken over holomorphic sections $s$ of $L$ with $L^{2}$ norm $1$ vanishing at $x$. In our situation, with Ricci curvature and diameter bounds, there is a well-known upper bound on $\vert \nabla s\vert$, so a strictly positive lower bound on $\rho_{L}(x)$ gives a Lipschitz bound on the map $\tau$. Replacing $L$ by $L^{k}$ we see that the crucial point is to find some $k$ and $b>0$ so that for all $i$ and for all $x$ in $X_{i}$ we have \begin{equation} \rho_{L^{k}}(x)\geq b. \end{equation} (Such a bound is sometimes referred to as a {\it partial $C^{0}$-estimate}.) It is straightforward to reduce to the case when the dimension of $H^{0}(X_{i},L_{i}^{k})$ is independent of $i$, so that we can regard $\tau_{i}:X_{i}\rightarrow \bP$ as mapping into a fixed projective space.
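In terms of an orthonormal basis, $\rho_{L}$ is the familiar density (or Bergman) function; this standard reformulation is the form in which the bound (8) is usually checked:

```latex
% If s_{1},...,s_{N} is an L^{2}-orthonormal basis of H^{0}(X,L) then the
% norm of the evaluation map gives
\rho_{L}(x) \;=\; \sum_{\alpha=1}^{N}\,\vert s_{\alpha}(x)\vert^{2},
% so the partial C^{0}-estimate (8) asserts a uniform positive lower bound
% on this sum, over all x in X_{i} and all i, for a fixed power k.
```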
If the bound (8) holds then, after possibly taking a subsequence, we can pass to the Gromov-Hausdorff limit and define a continuous map $\tau_{\infty}:Z\rightarrow \bP$ with image a projective variety. Further arguments then show that after perhaps increasing $k$ this map $\tau_{\infty}$ is a homeomorphism from $Z$ to a normal projective variety. The central issue then is to establish the lower bound (8), as has been emphasised in many places by Tian. Let $q$ be a point in $Z$ and $C(Y)$ be some tangent cone to $Z$ at $q$. If $q$ is a smooth point then $C(Y)$ is $\bC^{n}$ and we have the model Gaussian section as discussed above. In general we always have a Hermitian line bundle $\Lambda$ over the regular part of $C(Y)$ with a holomorphic section $\tau$ satisfying exactly the same Gaussian decay with respect to the distance to the vertex of the cone. We choose a suitable open subset $U$ of the regular part of the cone. It follows from the definitions that for large $i$ there are diffeomorphisms $\chi_{i}:U\rightarrow X_{i}$ which are approximately holomorphic isometries with respect to rescalings of the metrics $k\omega_{i}$ for some suitable large $k$. (Here the approximation can be made as close as we like by taking $k$ large.) There are then two main technical points to address. \begin{itemize} \item We want to have lifts $\tilde{\chi}_{i}:\Lambda \rightarrow \chi^{*}_{i}(L^{k})$ to approximate isomorphisms of Hermitian line bundles over $U$. \item We want a suitable cut-off function $\beta$ on $U$ with $\vert \db \beta \tau\vert$ small in $L^{2}$. \end{itemize} Given these, we can transport the section $\beta \tau$ to an approximately holomorphic section of $L_{i}^{k}$ over $\chi_{i}(U)$ and follow the projection procedure to get a holomorphic section $s$ of $L_{i}^{k}$ modelled on $\tau$. The derivative bounds on $s$ give a lower bound on $\vert s\vert$ over all points in $X_{i}$ close to $q$ and an elementary covering argument establishes the bound (8).
The first technical point involves considerations of the holonomy of the connection on $\Lambda^{*}\otimes \chi_{i}^{*}(L_{i}^{k})$, which has very small curvature by construction---this is straightforward if $U$ is simply connected. The second technical point involves the singular set in $C(Y)$, and in particular the fact that this has Hausdorff codimension strictly greater than $2$ (see 4.1 below). \section{Four proofs} We now come to the core of this survey in which we discuss four different proofs of the equivalence between stability and K\"ahler-Einstein metrics. In total these proofs run to many hundreds of pages so it is impossible to give any kind of thorough account of them here. All we can do is to explain general strategies and some salient points in the arguments. \subsection{The proof by cone singularities} (Note that the announcement \cite{kn:CDS0} contains an outline of this proof.) Given a Fano manifold $X$ we fix some suitable $m\geq 1$ and a smooth divisor $D$ in $\vert -m K_{X}\vert$. For $0<\beta\leq 1$ we can define a class of K\"ahler metrics on $X$ with cone singularity of angle $2\pi \beta$ along $D$ and extend the whole theory to this case. (When $\beta= r^{-1}$ for an integer $r$ these are orbifold metrics, and hence well-established. There are also close analogies with the theory of parabolic structures and singular Hermitian Yang-Mills connections as developed in \cite{kn:Biq} for example.) We can define a modified Mabuchi functional \begin{equation}\delta {\cal F}_{\beta}= \int_{X} (R-\hat{R}) \delta \phi + (1-\beta) \int_{D} \delta \phi - c \int_{X}\delta \phi \end{equation} where the constant $c=(1-\beta)\,{\rm Vol}(D)/{\rm Vol}(X)$ is chosen so that the right hand side vanishes when $\delta \phi$ is a constant.
Roughly speaking, we have a family of functionals ${\cal F}_{\beta}$ with critical points the K\"ahler-Einstein metrics with cone angle $\beta$ and the strategy of the proof is to follow a family of such critical points as $\beta$ increases. We want to show that either the family continues up to $\beta=1$, which gives our desired K\"ahler-Einstein metric on $X$, or that the critical point moves off to infinity and that this yields a test configuration violating the K-stability condition. To begin we need to show that a solution exists for {\it some} $\beta$. Take $m>1$ and $\beta= r^{-1}$ so that we are in the orbifold case. If $r>m/(m-1)$ then characteristic class arguments show that we are in the situation of negative Ricci curvature and the desired solution follows from a straightforward orbifold extension of the standard Aubin-Yau theory. Next one shows that the set of $\beta$ for which a solution exists is {\it open}. This can be achieved using a suitable linear elliptic theory on manifolds with cone singularities \cite{kn:D1}. Suppose then that there is some $\beta_{\infty}\in (0,1]$ such that a solution $\omega_{\beta}$ exists for $\beta<\beta_{\infty}$ but that there is no solution for $\beta=\beta_{\infty}$. We need to extend the theory sketched in Section 3, for smooth metrics with bounded Ricci curvature, to K\"ahler-Einstein metrics with cone singularities. To begin we show that a metric with cone singularity can be approximated in the Gromov-Hausdorff sense by smooth metrics with Ricci curvature bounded below (\cite{kn:CDS}, Part I). Then the Cheeger-Colding theory implies that there is a subsequence $\beta_{i}$ increasing to $\beta_{\infty}$ and a Gromov-Hausdorff limit $Z$ of the $(X,\omega_{\beta_{i}})$ and $Z$ has metric tangent cones at each point.
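The threshold $r>m/(m-1)$ in the orbifold existence statement comes from a sign computation with the \lq\lq log'' canonical class; the following sketch suppresses the analytic details:

```latex
% Cone angle 2\pi\beta along D \in |-mK_{X}| corresponds to the log
% canonical class  K_{X} + (1-\beta)D.  In cohomology
%   -(K_{X} + (1-\beta)D) = (1-(1-\beta)m)\,(-K_{X}),
% and -K_{X} > 0 since X is Fano.  So the relevant Ricci class is
% negative exactly when (1-\beta)m > 1, i.e. \beta < (m-1)/m.  For
% \beta = r^{-1} this reads
%   r^{-1} < (m-1)/m  \iff  r > m/(m-1),
% which is where the (orbifold) Aubin-Yau theory for negative Ricci
% curvature applies.
```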
We call a tangent cone $C(Y)$ \lq\lq good'' if the regular set $C(Y_{\reg})$ is open, the metric is induced by a smooth K\"ahler metric there and for each compact subset $K\subset Y_{\reg}$ and each $\eta>0$ there is a cut-off function $\gamma$ of compact support in $Y_{\reg}$, equal to $1$ on $K$ and with \begin{equation} \Vert \nabla \gamma\Vert_{L^{2}}\leq \eta. \end{equation} The main technical result is that all tangent cones to $Z$ are good. Given this, an extension of the arguments outlined in Section 3 above shows that $Z$ is naturally a normal projective variety, carrying a singular K\"ahler-Einstein metric $\omega_{\infty}$. Moreover if we write $X_{i}$ for the metric space $(X,\omega_{\beta_{i}})$ and $D_{i}\subset X_{i}$ for the divisor $D$ then there is a divisor $\Delta\subset Z$ such that the pairs $(X_{i}, D_{i})$ converge to $(Z,\Delta)$. The new feature in this case, which leads to the difficulty in proving that tangent cones are \lq\lq good'', is the possibility of codimension 2 singular sets. This is the critical dimension with respect to the cut-off control (10). If $\psi$ is a compactly-supported function on $\bR^{m}$ and if $\psi_{\lambda}$ is the rescaled function $\psi_{\lambda}(x)=\psi(\lambda^{-1} x)$ then $$ \Vert \nabla \psi_{\lambda}\Vert_{L^{2}} = O(\lambda^{(m-2)/2})$$ which tends to $0$ with $\lambda$ if $m>2$. This allows one to construct cut-off functions with derivative arbitrarily small in $L^{2}$ adapted to a compact set $A$ of Hausdorff codimension strictly greater than $2$. In the codimension 2 case one needs appropriate control of the volume of the $\lambda$-neighbourhood $N_{\lambda}(A)$: $$ {\rm Vol}(N_{\lambda}(A))\leq C \lambda^{2}. $$ This is equivalent to the notion of {\it Minkowski codimension} $\geq 2$, i.e. for any $r$ the set $A$ can be covered by $O(r^{2-m})$ balls of radius $r$.
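The scaling behaviour quoted for the cut-off $\psi_{\lambda}$ is a one-line substitution:

```latex
\Vert \nabla \psi_{\lambda}\Vert_{L^{2}}^{2}
  \;=\; \int_{\bR^{m}} \lambda^{-2}\,
        \vert (\nabla\psi)(\lambda^{-1}x)\vert^{2}\,dx
  \;=\; \lambda^{m-2}\int_{\bR^{m}} \vert \nabla\psi\vert^{2}\,dy
  \;=\; \lambda^{m-2}\,\Vert \nabla \psi\Vert_{L^{2}}^{2},
% substituting y = \lambda^{-1}x, dx = \lambda^{m} dy; this tends to 0
% with \lambda precisely when m > 2, which is why codimension 2 is
% critical.
```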
To complete the proof of the Theorem one wants to show that if indeed the family of solutions breaks down at some $\beta_{\infty}$ as considered above then there is a test configuration with central fibre $Z$ and with non-positive Futaki invariant, so the original manifold $X$ is not K-stable. To this end one can extend the whole theory of stability and test configurations to pairs consisting of a variety and divisor, with a real parameter $\beta$. There is a modified Futaki invariant ${\rm Fut}_{\beta}$ which compares with the usual formula (3) (in the smooth case) just as the modified Mabuchi functional (9) compares with (4). One wants to construct a test configuration $({\cal X}, {\cal D})$ for the pair $(X,D)$ with central fibre $(Z,\Delta)$ and show that \begin{equation} {\rm Fut}_{\beta_{\infty}}({\cal X}, {\cal D})= 0. \end{equation} The Futaki invariant ${\rm Fut}_{\beta}$ depends linearly on $\beta$ so the fact that $(X,D)$ is stable for small $\beta$ implies that ${\rm Fut}({\cal X})={\rm Fut}_{1}({\cal X}, {\cal D})\leq 0$. Let $G$ be the automorphism group of the pair $(Z,\Delta)$---a complex Lie group. The existence of this test configuration $({\cal X}, {\cal D})$ follows from general principles once it is established that $G$ is {\it reductive}, i.e. the complexification of a compact subgroup $K\subset G$. Using projective embeddings in some $\bC\bP^{N}$, the pairs correspond to points $[X,D]$, $[Z,\Delta]$ in a suitable Hilbert scheme ${\bf S}$ which in turn is embedded in some large projective space $\bP$. The group $PGL(N+1,\bC)$ acts on ${\bf S}$ and $\bP$ and what we know from the convergence discussion above is that $[Z,\Delta]$ is in the closure of the orbit of $[X,D]$. The group $G$ can be identified with the stabiliser of $[Z,\Delta]$ in $PGL(N+1,\bC)$.
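The linearity argument for ${\rm Fut}_{1}\leq 0$ can be made explicit; this is just the elementary step behind the assertion in the text:

```latex
% Write  Fut_{\beta}({\cal X},{\cal D}) = a + b\beta  (linear in \beta).
% Stability of (X,D) for small \beta gives  a + b\beta > 0  there, while
% (11) gives  a + b\beta_{\infty} = 0,  so  a = -b\beta_{\infty}  and
%   Fut_{\beta}({\cal X},{\cal D}) = b\,(\beta - \beta_{\infty}).
% Positivity for small \beta < \beta_{\infty} forces b < 0, hence
%   Fut({\cal X}) = Fut_{1}({\cal X},{\cal D}) = b\,(1-\beta_{\infty}) \leq 0.
```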
If $G$ is reductive a version of the Luna slice theorem gives a slice for the action of $PGL(N+1,\bC)$ at $[Z,\Delta]$ (one takes a $G$-invariant complement to the action of $G$ on the tangent space of $\bP$ at $[Z,\Delta]$). A well-known result of Hilbert and Mumford, applied to the $G$ action on this slice, shows that there is a $1$-parameter subgroup $\bC^{*}\subset G$ such that $[Z,\Delta]$ is in the closure of the $\bC^{*}$-orbit of $[X,D]$ and this is equivalent to the desired test configuration. The reductivity of $G$ is an extension of the Matsushima result to the case of pairs and singular varieties. Likewise the vanishing of the Futaki invariant (11) is an extension of the simple case considered in Section 2 above, for manifolds of constant scalar curvature. For the proofs one has to work with the singular metric $\omega_{\infty}$ using techniques from pluripotential theory. \subsection{The proof by the continuity method} In the continuity method one fixes a positive $(1,1)$-form $\alpha$ representing $c_{1}(X)$ and tries to solve the family of equations for $\omega_{s}$, with parameter $s\in [0,1]$: \begin{equation} {\rm Ricci}(\omega_{s})= (1-s) \alpha + s \omega_{s}. \end{equation} Yau's solution of the Calabi conjecture shows that there is a solution for $s=0$ and it is well-known that the set of parameters for which a solution exists is open; if the solution can be continued up to $s=1$ we have a K\"ahler-Einstein metric, so the problem is to prove closedness (under the stability hypothesis). Thus we suppose that for some $S\leq 1$ there are solutions for $s<S$ but none for $s=S$. The set-up is very similar to that of cone singularities above, indeed the latter can be regarded as a variant of the continuity method, replacing the smooth form $\alpha$ with the current defined by a divisor.
If one knew that the fixed form $\alpha$ was bounded with respect to $\omega_{s}$ then the $\omega_{s}$ would have bounded Ricci curvature and the results discussed in Section 3 above would apply immediately to give a limiting metric on a normal projective variety. So the major difficulty is that we do not have such a bound. The Ricci curvature of the $\omega_{s}$ is positive so the fundamentals of the Cheeger-Colding theory apply and we obtain a Gromov-Hausdorff limit $Z$ (more precisely, a limit of some sequence $(X, \omega_{s_{i}})$ with $s_{i}$ increasing to $S$). But this theory does not ensure that the regular set in $Z$ is open, or give the good convergence properties over the regular set exploited in the argument of Section 3. This is one of the main problems overcome by Sz\'ekelyhidi in \cite{kn:Gabor2}. To explain some of Sz\'ekelyhidi's arguments, we restrict attention to the case when $S<1$. Consider first a unit ball $B$ centred at a point $p$ in a K\"ahler manifold, with metric $\omega$, and a vector-valued holomorphic function $ f:B\rightarrow \bC^{m}$. Suppose that the Ricci curvature is bounded in $B$, say $\vert {\rm Ric}\vert \leq 4$, and that the pair $(\omega, f)$ satisfy the equation \begin{equation} {\rm Ric}(\omega) = \sigma \omega + f^{*}(\OmegaEuc), \end{equation} for some $\sigma\in [0,1]$ where $\OmegaEuc$ is the standard Euclidean K\"ahler form on $\bC^{m}$. We claim that for any $\epsilon>0$ we can find a $\delta$ (independent of $\sigma$) such that for any such $\omega$, if the volume ratio $v_{p}(1)$ exceeds $(1-\delta)$ then in fact $$\vert {\rm Ric}(p)\vert \leq \epsilon. $$ First, the Ricci curvature is non-negative so by the Bishop inequality we can pass to a smaller ball with centre $p$ (rescaled) and preserve the volume bound. By the results of Anderson we may as well assume that the metric on $B$ is $C^{1,\nu}$-close to the Euclidean metric in harmonic co-ordinates. 
By a suitable version of the Newlander-Nirenberg integrability theorem we can also suppose that these co-ordinates are actually holomorphic. The bound on the Ricci curvature and the equation (13) mean that $\vert \nabla f\vert^{2} \leq 2n $ and since $f$ is holomorphic we get interior bounds on all higher derivatives of $f$ and hence on the Ricci tensor, in these holomorphic co-ordinates. Thus if $\vert{\rm Ricci}(p)\vert>\epsilon$ we will have $\vert {\rm Ricci}\vert>\epsilon/2$ over a ball of definite size centred at $p$. The $C^{1,\nu}$ bound on the metric tensor gives $C^{0,\nu}$ bounds on the Christoffel symbols. From this Sz\'ekelyhidi shows that there is some unit tangent vector $v$ at $p$ such that, in geodesic polar co-ordinates centred at $p$, there is a definite lower bound on ${\rm Ric} (\frac{\partial}{\partial r}, \frac{\partial}{\partial r})$ at all points close to $p$ and along geodesics starting from $p$ at a sufficiently small angle to $v$. Then the proof of the Bishop inequality shows that this Ricci curvature {\it reduces} the volume of $B$ by a definite amount (compared with the Euclidean ball) determined by $\epsilon$. So ${\rm Vol}(B)\leq (1-\delta(\epsilon))\Omega_{2n}$, say. Choosing $\delta=\delta(\epsilon)$, this contradicts the hypothesis that the volume ratio $v_{p}(1)$ exceeds $1-\delta$, so the supposition $\vert {\rm Ricci}(p)\vert>\epsilon$ fails and the claim is established. Clearly the result extends (with a suitable $\delta(\epsilon)$) to the case when $\OmegaEuc$ is replaced by any smooth positive $(1,1)$-form $A$ defined over a suitable neighbourhood of the image of $f$. In the case at hand one can cover $X$ by a finite number of holomorphic co-ordinate charts. Working near a given point in $X$ such a chart yields the holomorphic map $f$ above (with $m=n$) and we take the form $A$ corresponding to $(1-s)\alpha$ in this chart. If $s\leq S<1$ we get a $\delta(\epsilon)$ such that the discussion above applies to any rescaling of a small ball in $(X,\omega_{s})$.
Now let $q$ be a point in the regular set of the Gromov-Hausdorff limit $Z$ and let $p_{i}\in (X,\omega_{s_{i}})$ be a sequence converging to $q$ in the sense of the Gromov-Hausdorff convergence. Let $B_{i}$ be the unit ball obtained by rescaling a small ball (of fixed radius $\rho$, independent of $i$) about $p_{i}$. For any given $\delta>0$ we can suppose that for all subballs $\tilde{B}\subset B_{i}$ (not necessarily centred at $p_{i}$) the \lq\lq volume defect'' of $\tilde{B}$ is less than $\delta$. We choose $\delta$ as above, for some $\epsilon<1$. Now let $$M={\rm max}_{x\in B_{i}} \left(\vert {\Ric}(x)\vert\ d(x,\partial B_{i})^{2}\right). $$ A standard line of argument shows that in fact $M\leq 4$. For if not, let $\tilde{p}\in B_{i}$ be a point where the maximum is attained and $\tilde{B}$ be the ball of radius $\tilde{d}/\sqrt{M}$ centred at $\tilde{p}$, where $\tilde{d}$ is the distance from $\tilde{p}$ to the boundary of $B_{i}$. Rescaling $\tilde{B}$ to unit size we get a ball to which the previous results apply. After rescaling, those results give $\vert {\rm Ricci}(\tilde{p})\vert\leq \epsilon \tilde{d}^{-2} M< \tilde{d}^{-2}M$, which contradicts the choice of $\tilde{p}$. Thus $M\leq 4$ and in particular $\vert {\rm Ricci}(p_{i})\vert \leq 4 \rho^{-2}$. The conclusion is that Sz\'ekelyhidi is able to show that, when $S<1$, the Ricci curvature is bounded near points in the regular set in $Z$. It follows that the regular set is open and carries a $C^{1,\nu}$ K\"ahler metric. Going further, he extends the discussion to tangent cones and shows that these are all \lq\lq good'' in the sense discussed in 4.1 above. Note that in this situation the singular set can have real codimension 2, different from the simpler situation considered in Section 3. (In fact results of C. Li \cite{kn:Li} in the toric case suggest that the limit as $s\rightarrow S$ will develop cone singularities along a divisor.)
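The point-picking step behind $M\leq 4$ can be spelled out; here $\tilde{d}=d(\tilde{p},\partial B_{i})$ and $\tilde{r}=\tilde{d}/\sqrt{M}$ is the radius of $\tilde{B}$:

```latex
% If M > 4 then every x in \tilde{B} has
%   d(x,\partial B_{i}) \geq \tilde{d} - \tilde{r} \geq \tilde{d}/2,
% so by the definition of M
%   |Ric(x)| \leq M\, d(x,\partial B_{i})^{-2} \leq 4M\tilde{d}^{-2}
%            = 4\,\tilde{r}^{-2}.
% Rescaling \tilde{B} to unit size multiplies Ric by \tilde{r}^{2}, so the
% rescaled ball satisfies |Ric| \leq 4 together with the volume hypothesis,
% and the claim of the previous paragraph gives
%   |Ric(\tilde{p})| \leq \epsilon\,\tilde{r}^{-2}
%      = \epsilon M \tilde{d}^{-2} < M\tilde{d}^{-2},
% contradicting |Ric(\tilde{p})|\,\tilde{d}^{2} = M.  Hence M \leq 4.
```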
In \cite{kn:Gabor2}, Sz\'ekelyhidi established the partial $C^{0}$ estimate along the continuity method and used this to show that another notion of stability, introduced by S. Paul, implies the existence of a K\"ahler-Einstein metric. The corresponding result for $K$-stability was established in the subsequent paper \cite{kn:DaS} of Datar and Sz\'ekelyhidi. The output from the convergence theory is that if the continuity method breaks down at $S\in (0,1]$ there is a limiting projective variety $Z$, a singular K\"ahler metric $\omega_{S}$ and a closed non-negative $(1,1)$ current $\alpha_{\Psi}$ on $Z$ satisfying the equation $$ {\Ric}(\omega_{S})= (1-S)\alpha_{\Psi}+ S \omega_{S}. $$ The possible presence of singularities means that this equation needs to be interpreted. The current $\alpha_{\Psi}$ is locally written as $i\dbd\psi$ where $\psi$ is an $L^{1}$ plurisubharmonic function. Globally, $\Psi$ is a singular Hermitian metric on the anticanonical bundle. Datar and Sz\'ekelyhidi set up a theory for pairs consisting of a variety and a $(1,1)$-current, analogous to the theory for pairs (variety, divisor) discussed above. The new feature is that their space of pairs is infinite dimensional. They are able to carry through a similar strategy to that outlined in 4.1 above by approximating $(1,1)$-currents by those defined by divisors. The form $\alpha$ on $X$ can be taken to be the restriction of the Fubini-Study metric under an embedding $X\subset \bP$; then there is an integral geometry formula $$ \alpha= \int_{\bP^{*}} [H\cap X] d\mu(H), $$ where $\bP^{*}$ is the dual projective space parametrising hyperplanes $H\subset \bP$, $\mu$ is the standard measure on $\bP^{*}$ and $[H\cap X]$ is the current of the divisor $H\cap X$ in $X$. Replacing the integral by a finite sum gives the approximation procedure which is the starting point for these arguments. The results of Datar and Sz\'ekelyhidi go further than the statement of Theorem 1 in two directions.
First they prove an analogous result for solutions of the {\it K\"ahler-Ricci soliton} equation. Recall that this equation is \begin{equation} {\rm Ricci}(\omega) - \omega= L_{v}\omega, \end{equation} where $v$ is a holomorphic vector field and $L_{v}$ is the Lie derivative. Such metrics are the appropriate analogues of K\"ahler-Einstein metrics on Fano manifolds with non-vanishing Futaki invariant and represent fixed points of the K\"ahler-Ricci flow (modulo holomorphic diffeomorphisms), which we discuss further below. In another direction, Datar and Sz\'ekelyhidi's proof is compatible with group actions so they prove that to test K-stability of a Fano manifold $X$ it suffices to consider test configurations with an additional compatible action of ${\rm Aut}(X)$. This is important because an outstanding defect of the general theory is that it is very hard to verify K-stability of a polarised variety. The problem becomes more tractable for manifolds with large symmetry groups. Toric manifolds, with a complex torus action having a dense orbit, can be described in terms of polytopes. Both K\"ahler metrics invariant under the action of the corresponding real torus and toric test configurations can be described by convex functions on the polytope and the stability condition is relatively explicit. However in the toric Fano case the existence problem for K\"ahler-Einstein metrics and K\"ahler-Ricci solitons was completely settled by Wang and Zhu in 2004 \cite{kn:WZ}, and no interesting phenomena arise from the point of view of stability. Ilten and S\"uss \cite{kn:IS} consider $n$-dimensional varieties with an action of an $(n-1)$-dimensional complex torus and develop a combinatorial description of these. In this way they are able to produce new examples of manifolds which are K-stable, and the theorem of Datar and Sz\'ekelyhidi gives corresponding explicit new results about the existence of K\"ahler-Einstein metrics.
In a similar vein, Delcroix studied group compactifications \cite{kn:Delcroix1}; that is, he considered a manifold $X$ which contains a complex reductive Lie group $G$ as a dense open subset and such that both left and right translations on $G$ extend to $X$. These can be described by polytopes in the Lie algebra of a maximal compact real torus in $G$ and Delcroix extends the arguments of Wang and Zhu to find an explicit condition for the existence of a K\"ahler-Einstein metric. This work of Delcroix was essentially self-contained and did not invoke general existence results such as Theorem 1. A subsequent paper \cite{kn:Delcroix2} showed that his condition emerges from an analysis of equivariant degenerations of $X$ and extended the results to the larger class of spherical varieties, using the theorem of Datar and Sz\'ekelyhidi. \subsection{The proof by Ricci flow} If $X$ is a Fano manifold the relevant version of the Ricci flow is the evolution equation \begin{equation} \frac{\partial \omega_{t}}{\partial t} = \omega_{t} -{\rm Ricci}(\omega_{t}), \end{equation} for a one-parameter family of metrics $\omega_{t}$ in the class $c_{1}(X)$. This can be expressed in terms of the K\"ahler potential. For each $t$ there is a unique $h_{t}$ such that $$ \omega_{t}-{\rm Ricci}(\omega_{t}) = i\dbd h_{t}, $$ normalised so that $\max_{X} h_{t}=0$ and we can write $\omega_{t}=\omega_{0}+ i \dbd \phi_{t}$ where $\phi_{t}$ evolves by: \begin{equation} \frac{\partial\phi_{t}}{\partial t} = h_{t}. \end{equation} It has been known for many years that this equation has a solution for all $t\in[0,\infty)$, starting with any initial condition. The main result of the paper of Chen, Sun and Wang \cite{kn:CSW} is that this flow converges as $t\rightarrow \infty$ to a \lq\lq weak'' K\"ahler-Ricci soliton metric $\omega_{\infty}$ on a normal projective variety $X_{\infty}$ (in fact a \lq\lq Q-Fano variety'').
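That (16) really is the potential form of the flow (15) is a direct check (constants are killed by $\dbd$, so the normalisation of $h_{t}$ does not interfere):

```latex
\frac{\partial \omega_{t}}{\partial t}
  \;=\; i\dbd\,\frac{\partial \phi_{t}}{\partial t}
  \;=\; i\dbd\, h_{t}
  \;=\; \omega_{t} - {\rm Ricci}(\omega_{t}),
% which is exactly (15); \phi_{t} is then determined up to a function of
% t alone.
```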
That is to say there is an algebraic torus action on $X_{\infty}$, an element $\xi$ in the Lie algebra of this torus and a metric on the regular part of $X_{\infty}$ (locally in $X_{\infty}$ given by a bounded potential) which satisfies the equation (14) with respect to the holomorphic vector field generated by $\xi$. In the case when the limiting metric is weak K\"ahler-Einstein but $X_{\infty}$ is not isomorphic to $X$ the same arguments as in 4.1 above, using the reductivity of the automorphism group, show that there is a test configuration for $X$ with central fibre $X_{\infty}$ and Futaki invariant zero. When the limit is a genuine K\"ahler-Ricci soliton the statement of \cite{kn:CSW} is slightly more subtle. They show that there is a destabilising test configuration for $X$, with central fibre $\overline{X}$ and strictly negative Futaki invariant, and a further degeneration (which might be trivial) of $\overline{X}$ to $X_{\infty}$. The upshot then is a trichotomy: \begin{itemize} \item $X$ is K-stable: the Ricci flow converges to a K\"ahler-Einstein metric on $X$; \item $X$ is K-semistable but not K-stable: the limit is K\"ahler-Einstein (possibly singular) but $X_{\infty}$ is not isomorphic to $X$; \item $X$ is not K-semistable: the limit is a genuine Ricci soliton and $X_{\infty}$ is not isomorphic to $X$. \end{itemize} The foundation for these results is the subsequential convergence of the flow established by Chen and Wang in the earlier paper \cite{kn:CW}. That is, they prove that for any sequence $t_{i}\rightarrow \infty$ there is a subsequence $i'$ such that $(X, \omega_{t_{i'}})$ converges (in the same sense as in the previous subsections) to a weak K\"ahler-Ricci soliton metric on a Q-Fano variety. A fundamental difficulty in proving this is that it is not known that the Ricci curvature is bounded, either above or below, along the flow. This prevents a direct application of the Cheeger-Colding convergence theory.
Results from the previous literature on K\"ahler-Ricci flow, including those of Perelman elaborated and extended by Sesum and Tian \cite{kn:ST}, yield three important pieces of information. \begin{enumerate}\item The scalar curvature $R_{t}$ is bounded along the flow: $\vert R_{t}\vert\leq C_{1}$. \item The potential $h_{t}$ is bounded along the flow: $\vert h_{t}\vert\leq C_{2}$. \item There is a uniform Sobolev inequality \cite{kn:Ye}, \cite{kn:Zhang}: $$ \Vert f \Vert_{L^{2n/(n-1)}}\leq C_{3}\left( \Vert \nabla f \Vert_{L^{2}} + \Vert f \Vert_{L^{2}}\right) . $$ \end{enumerate} The general strategy of Chen and Wang is to establish a compactness theorem for segments of the flow over a fixed time interval, say $T-1\leq t\leq T+1$. That is, they show that if $T_{i}$ is a sequence tending to infinity then, after passing to a subsequence, these segments of the flow converge (to a possibly singular limit). They show that the limit is \lq\lq stationary'', in that it is the solution of the K\"ahler-Ricci flow given by a K\"ahler-Ricci soliton, evolving by the action of holomorphic automorphisms. The proofs of Chen and Wang are based on a blow-up argument and the comparison with suitable \lq\lq canonical neighbourhoods''. For $\kappa>0$, Chen and Wang define a class $\canon$ of non-compact length spaces. A space $W$ in $\canon$ is a smooth Ricci-flat K\"ahler $n$-manifold outside a closed singular set of codimension $>3$, and with asymptotic volume ratio $\geq \kappa$, that is $$ {\rm liminf}_{r\rightarrow \infty} v_{p}(r)\geq \kappa . $$ In their application $\kappa=\kappa(C_{3})$ is determined by the constant $C_{3}$ in the Sobolev bound. They prove from their definition that spaces in $\canon$ have many of the properties of the limit spaces treated by the Cheeger-Colding theory. In particular, they adapt that theory to show the existence of metric tangent cones. They also establish a compactness property of $\canon$, with respect to based convergence over bounded sets.
This compactness means that spaces in $\canon$ satisfy certain uniform estimates. Chen and Wang's blow-up argument is governed by a canonical radius $cr(p)\in (0,\infty]$ which they define for any point $p$ in a Riemannian manifold $M$. This notion is in the same order of ideas as others in the literature such as the harmonic radius and curvature scale, but Chen and Wang's definition is tailored to the particular case at hand. The general idea is that $cr(p)\geq r$ if the $r$-ball centred at $p$, scaled to unit size, satisfies various definite estimates. The parameters in these estimates are chosen in line with the uniform estimates established in $\canon$, so that roughly speaking $cr(p)=\infty$ for a point in a space $W$ in $\canon$ (or more precisely $cr(p)$ is arbitrarily large for a point in a Riemannian manifold which is sufficiently close to some $W\in \canon$). Now Chen and Wang establish a lower bound $cr(p)\geq \epsilon>0$ for any $p$ in a manifold $(X,\omega_{t})$ along the Ricci flow. In outline, the argument is to suppose not, so there is a sequence of times $t_{i}$ and points $p_{i}\in (X,\omega_{t_{i}})$ such that $r_{i}=cr(p_{i})\rightarrow 0$. Rescaling by $r_{i}^{-1}$ they arrive at a sequence of based manifolds $(p_{i}, M_{i})$ with $cr$ bounded below by $1$ and with $cr(p_{i})=1$. They show that these manifolds converge to a space in $\canon$ and derive a contradiction from the fact that $cr=\infty$ on $\canon$. In this argument the bound $\vert R\vert\leq C_{1}$ on the scalar curvature enters in the following way. Under the Ricci flow the scalar curvature $R$ evolves by $$ \frac{\partial R}{\partial t}= \Delta R + \vert {\rm Ric}\vert^{2} - n $$ After rescaling a portion of the Ricci flow, by a large factor $r_{i}^{-1}$ in the space direction and by $r_{i}^{-1/2}$ in the time direction, the scalar curvature $R'$ satisfies $$ \frac{\partial R'}{\partial t}= \Delta R' + \vert {\rm Ric'}\vert^{2} - n r_{i}^{2} $$ and $\vert R'\vert \leq C_{1} r_{i}^{2}$.
Thus on any region where the rescaled flows, with a sequence of scalings $r_{i}^{-1}\rightarrow \infty$, converge in $C^{\infty}$ the limit must be a stationary Ricci-flat manifold. This is the fundamental mechanism which leads to the singular Ricci-flat spaces in $\canon$. The parameter $\kappa=\kappa(C_{3})$ is determined by a standard relation between the Sobolev constant and the volume ratio. (If a space has small asymptotic volume ratio one can write down a compactly supported function $f$ with $\Vert\nabla f\Vert_{L^{2}}$ small compared with $\Vert f \Vert_{L^{2n/(n-1)}}$. If the volume ratio is less than $\kappa(C_{3})$, such a space cannot arise as a blow-up limit of manifolds with Sobolev bound $C_{3}$.) The $L^{2}$ construction of holomorphic sections features in two ways in Chen and Wang's arguments. One is global, to produce a projective embedding of the limit space. The tangent cone information from the blow-up limit in $\canon$ is transferred to the manifolds in the limiting sequence and the techniques outlined in Section 3 apply. The other is local, to produce local holomorphic co-ordinates as ratios of suitable holomorphic sections. The bound on the potential $\vert h_{t}\vert \leq C_{2}$ is important here. First, since there is no lower bound on the Ricci curvature the argument based on the formula (7) does not immediately apply. Changing the metric on the line bundle by a factor $e^{h_{t}}$ introduces an extra term which precisely cancels the Ricci curvature contribution in (7), and the bound on the $h_{t}$ means that this change does not substantially affect the estimates. Second, the evolution equation (13) gives control of the change in the metric on the line bundle in time, and Chen and Wang are able to use this to obtain local holomorphic co-ordinates that are adapted to the metrics $\omega_{t}$ over definite time intervals. Alongside these complex geometry arguments they also use the Ricci flow techniques of Perelman. 
The existence of a limit of the K\"ahler-Ricci flow, as established by Chen, Sun and Wang in \cite{kn:CSW} introduces further new ideas. The Chen and Wang result leaves open the possibility that different sequences of times $t_{i}\rightarrow \infty$ could lead to different limits. Let ${\cal C}$ be the set of all limits that arise. By general principles this set is connected. One major step is to show that all $X_{\infty}$ in ${\cal C}$ can be embedded in projective space in such a way that the soliton vector fields are generated by the same fixed 1-parameter subgroup. This uses an algebro-geometric characterisation of the vector field of a Ricci soliton, via a generalisation of the Futaki invariant theory, which leads to a rigidity property. \subsection{ Proof by variational method} The result proved by Berman, Boucksom and Jonsson in \cite{kn:BBJ} involves a notion of \lq\lq uniform K-stability''. Let ${\cal X}$ be a test configuration for a polarised manifold $(X,L)$, so we have a $\bC^{*}$-action on the central fibre $X_{0}$. Suppose first that $X_{0}$ is smooth and we fix an $S^{1}$-invariant K\"ahler metric in the class $c_{1}({\cal L}_{0})$ which yields a symplectic structure. The $S^{1}$-action is generated by a Hamiltonian function $H$ which we can normalise to have maximum value $0$. Then we define $\Vert {\cal X}\Vert$ to be the $L^{1}$ norm of $H$. This is a quantity which is independent of the choice of metric in the cohomology class. The definition can be extended to any scheme, using the asymptotics of the trace of the action on sections of ${\cal L}^{k}$ as $k\rightarrow\infty$, similar to the definition of the Futaki invariant. Then $(X,L)$ is said to be {\it uniformly K-stable} if there is some $\epsilon>0$ such that \begin{equation} {\rm Fut}({\cal X})\geq \epsilon \Vert {\cal X}\Vert \end{equation}for all non-trivial test configurations ${\cal X}$. 
The main result of \cite{kn:BBJ} is that for a Fano manifold $X$, polarised by $K_{X}^{-1}$ and with finite automorphism group, the existence of a K\"ahler-Einstein metric is equivalent to uniform $K$-stability. Note that {\it a priori} uniform $K$-stability is a stronger condition than $K$-stability although {\it a posteriori} they are equivalent, for Fano manifolds with finite automorphism group. One can consider many other norms on test configurations and the general notion of uniform stability goes back to the thesis of Szekelyhidi. It is also related to another variant of $K$-stability developed by Szekelyhidi which considers filtrations of the co-ordinate ring $\bigoplus H^{0}(X,L^{k})$ \cite{kn:Gabor1}. A definition of uniform stability which turns out to be equivalent to that in \cite{kn:BBJ} was given by Dervan \cite{kn:Dervan}. To give some indication of the proof in \cite{kn:BBJ} we begin by considering an analogous situation in finite dimensions. Let $V$ be a complete (finite-dimensional) Riemannian manifold with the property that each two points can be joined by a unique geodesic segment---for example a simply connected manifold of non-positive curvature. Let $v_{0}\in V$ be a base point and let $F$ be a function on $V$ which is convex along geodesics. If $\gamma:[0,\infty)\rightarrow V$ is a geodesic ray emanating from $v_{0}$, parametrised by arc length, the ratio $F(\gamma(t))/t$ is increasing with $t$ and we can define the {\it asymptotic slope} $ S_{\gamma}\in [0,\infty]$ to be the limit as $t\rightarrow \infty$. If $S_{\gamma}>0$ then for any $\delta< S_{\gamma}$ there is a $C_{\gamma}$ such that \begin{equation} F(\gamma(s)) \geq \delta s -C_{\gamma}. 
\end{equation} An elementary argument, hinging on the compactness of the set of geodesics through $v_{0}$, shows that if there is some $\delta>0$ such that $S_{\gamma}\geq \delta$ for all such rays $\gamma$, then given some $\delta'<\delta$ we can find $C$ such that \begin{equation} F(v)\geq \delta' d(v,v_{0}) - C \end{equation} for all $v\in V$. It also follows easily that $F$ attains a minimum in $V$. The relevance of this to the K\"ahler-Einstein problem on a Fano manifold $X$ is that, as we have discussed in Section 2, a K\"ahler-Einstein metric can be seen as a critical point of the Mabuchi functional ${\cal F}$ on the space of K\"ahler metrics ${\cal H}$. The pair $({\cal H}, {\cal F})$ has many properties analogous to $(V,F)$ above. There is a Mabuchi metric which makes ${\cal H}$ formally a symmetric space of non-positive curvature and ${\cal F}$ is convex along geodesics. The programme, roughly speaking, is to extend the arguments above to this infinite-dimensional setting and to relate the asymptotics of the Mabuchi functional, analogous to the asymptotic slope $S_{\gamma}$, to the condition (17) on test configurations. A point $\omega$ in ${\cal H}$ defines a volume form on $X$ and the tangent space to ${\cal H}$ at $\omega$ can be identified with the functions $\delta\phi$ of integral $0$. For each $p\geq 1$ the $L^{p}$ norm defines a Finsler structure on ${\cal H}$. The case $p=2$ gives the infinite dimensional Riemannian structure first considered by Mabuchi \cite{kn:Mab1} but the case $p=1$ is also important, as shown by Darvas \cite{kn:Darvas}. The completion, in the metric defined by this Finsler structure, is a space ${\cal E}^{1}$ of currents defined by \lq\lq finite energy'' potentials. Geodesics in ${\cal H}$ also have a good geometric meaning. 
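The monotonicity behind the definition of $S_{\gamma}$ and the bound (18) is a routine consequence of convexity; spelling it out (this verification is ours, not taken from the text, with the normalisation $g(0)=F(v_{0})$ for $g(t)=F(\gamma(t))$):

```latex
% g(t) = F(\gamma(t)) is convex, so its difference quotients are monotone:
% for 0 < s \le t,
\[
\frac{g(s)-g(0)}{s} \;\le\; \frac{g(t)-g(0)}{t},
\]
% hence (g(t)-g(0))/t has a limit S_\gamma as t \to \infty.
% Given \delta < S_\gamma, choose T with (g(T)-g(0))/T \ge \delta;
% monotonicity gives, for all s \ge T,
\[
g(s) \;\ge\; \delta s + g(0),
\]
% and enlarging the constant to cover the compact range [0,T]
% yields g(s) \ge \delta s - C_\gamma for every s \ge 0, i.e. (18).
```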
Smooth geodesic segments parametrised by an interval $[a,b]$ correspond to $S^{1}$-invariant closed $(1,1)$-forms $\Omega$ on the product $X\times S^{1}\times [a, b]$ which satisfy $\Omega^{n+1}=0$ and which restrict to a metric in ${\cal H}$ on each copy of $X$ in the product. (The different Finsler structures share the same geodesics.) If we stay in the smooth category it is not true that any two points in ${\cal H}$ can be joined by a geodesic \cite{kn:Lempert} but this is the case if one relaxes the definitions to allow singularities---in fact forms with $C^{1,1}$ potentials---as shown by Chen \cite{kn:XC}. It is an elementary calculation that the Mabuchi functional is convex along smooth geodesics. The convexity in the general case is a deep recent result of Berman and Berndtsson \cite{kn:BB}. Now fix a base point $\omega_{0}$ in ${\cal H}$---analogous to $v_{0}\in V$. According to Darvas, the distance to $\omega_{0}$ in the $L^{1}$-Finsler structure is equivalent to a functional $J$ which is well-known in the literature and which can be characterised by the property that $J(\omega_{0})=0$ and $$ \delta J = \int_{X} \delta \phi (\omega^{n}-\omega_{0}^{n}) . $$ Thus the analogue of (19) is the inequality \begin{equation} {\cal F}(\omega)\geq \delta' J(\omega)-C\end{equation} for some fixed $\delta'>0$ and $C$, and all $\omega$. This is sometimes referred to as the \lq\lq properness'' of the Mabuchi functional. Tian showed \cite{kn:Ti} that if this inequality (20) holds then a K\"ahler-Einstein metric exists. The Mabuchi functional is decreasing along the continuity path, as discussed in 4.2 above, and so ${\cal F}(\omega_{s})$ controls $J(\omega_{s})$, and Tian showed that this allows the continuity path to be continued to $s=1$. Darvas and Rubinstein have recently given another proof of this result, and generalisations \cite{kn:DR}. There is a large circle of results relating geodesic rays in ${\cal H}$ to test configurations, see \cite{kn:PS} for example. 
Using the conformal equivalence between $S^{1}\times [0,\infty)$ and the punctured disc $\Delta^{*}$, geodesic rays correspond to $S^{1}$-invariant closed $(1,1)$-forms $\Omega$ on the product $X\times \Delta^{*}$ with $\Omega^{n+1}=0$ and which are positive on each $X\times \{t\}$. For certain purposes one can work with {\it subgeodesics} which correspond to positive definite forms $\Omega$. In particular if ${\cal X}$ is a test configuration we can consider a \lq\lq smooth'' metric $\Omega$ on ${\cal X}$. For example we can embed ${\cal X}$ in some $\Delta\times \bC\bP^{N}$ and take the restriction of a metric on the ambient manifold. The ${\bf C}^{*}$-action on ${\cal X}$ gives an open embedding of $X\times \Delta^{*}$ and we get a subgeodesic ray $\omega_{s}$ in ${\cal H}$. Boucksom, Hisamoto and Jonsson \cite{kn:BHJ} prove that (if the central fibre is reduced) $$ J(\omega_{s}) \sim \Vert {\cal X}\Vert s\ \ \ ;\ \ \ \ {\cal F}(\omega_{s})\sim {\rm Fut}({\cal X}) s , $$ as $s\rightarrow \infty$. (There are many earlier results of this kind in the literature, under various hypotheses.) Thus the uniform stability condition is equivalent to the statement that there is a $\delta>0$ such that for any such subgeodesic ray, arising from a test configuration, we have \begin{equation} {\cal F}(\omega_{s})\geq \delta J(\omega_{s})- C, \end{equation} where $C$ depends on the ray. There are now two main aspects to the proof. (The exposition in \cite{kn:BBJ} involves some sophisticated techniques which go well beyond this writer's knowledge, and indeed \cite{kn:BBJ} is described by the authors as an outline to be followed by a more detailed version. What we write below is extremely sketchy.) \begin{itemize} \item To pass from the subgeodesic rays arising from test configurations to general geodesic rays and establish an inequality (21) along any geodesic ray. 
A sub-geodesic ray comes with a family of K\"ahler potentials which can be viewed as a metric on the pull-back of $L$ to $X\times \Delta^{*}$ or as a singular metric on the pull-back to $X\times \Delta$. This metric defines a multiplier ideal sheaf: the local holomorphic sections which are in $L^{2}$ with respect to the metric. Fundamental results of Nadel show that this is a coherent sheaf and so one can construct corresponding blow-ups of $X\times \Delta$ along the powers of this ideal sheaf, which yield test configurations. Berman, Boucksom and Jonsson use these to approximate the original ray by those arising from test configurations and eventually to pass from the algebro-geometric uniform K-stability hypothesis to the estimate (18) on general geodesic rays. For these purposes they also work with the Ding functional (which we mentioned briefly in Section 2). They also use ideas and results from non-Archimedean geometry. \item To carry the elementary arguments from the finite dimensional model over to the infinite dimensional situation. Results of Berman, Boucksom, Eyssidieux, Guedj and Zeriahi \cite{kn:BBEGZ} are used here to give the relevant compactness property in ${\cal E}^{1}$ for geodesic segments with bounded Mabuchi functional. \end{itemize} These variational techniques based on convex geometry in the space ${\cal H}$ of K\"ahler metrics have been used by Darvas and Rubinstein \cite{kn:DR} and Berman, Darvas and Lu \cite{kn:BDL} to produce interesting results in the more general framework of constant scalar curvature metrics. The outstanding problem is to prove the regularity of weak solutions produced by minimising the Mabuchi functional.
package org.elasticsearch.common;

import java.util.Objects;

/**
 * Holds a value that is either:
 * a) set implicitly e.g. through some default value
 * b) set explicitly e.g. from a user selection
 *
 * When merging conflicting configuration settings such as
 * field mapping settings it is preferable to preserve an explicit
 * choice rather than a choice made only implicitly by defaults.
 *
 */
public class Explicit<T> {

    private final T value;
    private final boolean explicit;

    /**
     * Create a value with an indication if this was an explicit choice
     * @param value a setting value
     * @param explicit true if the value passed is a conscious decision, false if using some kind of default
     */
    public Explicit(T value, boolean explicit) {
        this.value = value;
        this.explicit = explicit;
    }

    public T value() {
        return this.value;
    }

    /**
     *
     * @return true if the value passed is a conscious decision, false if using some kind of default
     */
    public boolean explicit() {
        return this.explicit;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        Explicit<?> explicit1 = (Explicit<?>) o;
        return explicit == explicit1.explicit && Objects.equals(value, explicit1.value);
    }

    @Override
    public int hashCode() {
        return Objects.hash(value, explicit);
    }
}
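The merge preference described in the class comment can be sketched independently of Elasticsearch (a purely hypothetical illustration, not the actual mapper-merging code):

```python
# Illustrative sketch (hypothetical, not Elasticsearch's merge logic):
# when two settings conflict, an explicit choice beats an implicit default.
class Explicit:
    def __init__(self, value, explicit):
        self.value = value
        self.explicit = explicit

def merge(current, update):
    # Keep the current value only when it was explicit and the
    # incoming value is just a default; otherwise take the update.
    if current.explicit and not update.explicit:
        return current
    return update

default = Explicit(True, False)   # implicit default
chosen = Explicit(False, True)    # conscious user decision

assert merge(default, chosen).value is False  # explicit update wins
assert merge(chosen, default).value is False  # default cannot override
```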
Q: It is given $\Vert m\Vert = 4$, $\Vert n\Vert = \sqrt{2}$, $\langle m,n\rangle = 135$. Find the norm of the vector $m+3n$.

A: $\Vert m+3n\Vert^{2} = \langle m+3n, m+3n\rangle = \langle m,m\rangle + \langle m,3n\rangle + \langle 3n,m\rangle + \langle 3n,3n\rangle = \Vert m\Vert^{2} + 6\langle m,n\rangle + 9\Vert n\Vert^{2} = 16 + 6\times 135 + 18 = 844$

$\Vert m+3n\Vert = \sqrt{844}$
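Reading the given value $\langle m,n\rangle = 135$ literally, as the solution above does (if $135$ were instead the angle between the vectors in degrees, the inner product would be $\Vert m\Vert\,\Vert n\Vert\cos 135^{\circ} = -4$ and the answer would change), the expansion can be checked numerically:

```python
# Check the bilinear expansion ||m + 3n||^2 = ||m||^2 + 6<m,n> + 9||n||^2
# with the values given in the problem, taking <m,n> = 135 at face value.
import math

norm_m_sq = 4 ** 2   # ||m||^2 = 16
norm_n_sq = 2        # ||n||^2 = (sqrt(2))^2
inner_mn = 135

norm_sq = norm_m_sq + 6 * inner_mn + 9 * norm_n_sq
print(norm_sq)             # 844
print(math.sqrt(norm_sq))  # ≈ 29.05
```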
The 1975 Rutgers Scarlet Knights football team represented Rutgers University in the 1975 NCAA Division I football season. In their third season under head coach Frank R. Burns, the Scarlet Knights compiled a 9–2 record while competing as an independent and outscored their opponents 347 to 91. The team's statistical leaders included Jeff Rebholz with 715 passing yards, Curt Edwards with 1,157 rushing yards, and Mark Twitty with 544 receiving yards. The Scarlet Knights played their home games at Rutgers Stadium in Piscataway, New Jersey, across the river from the university's main campus in New Brunswick, New Jersey. Schedule References Rutgers Rutgers Scarlet Knights football seasons Rutgers Scarlet Knights football
Q: MS ACCESS - How to detect arrow keys on Form.KeyDown or KeyUp events

I want to detect the right and left arrow keys on a form in VBA, in order to use the Form.KeyDown or KeyUp events for some purpose. I tried this:

Private Sub Form_KeyDown(KeyCode As Integer, Shift As Integer)
    If KeyCode = vbKeyRight Then
        KeyCode = 0 'Suppress normal effect
        On Error GoTo exitsub 'ActiveControl causes a runtime error if none is active
        DoCmd.GoToRecord , , acNext
    ElseIf KeyCode = vbKeyLeft Then
        KeyCode = 0 'Suppress normal effect
        On Error GoTo exitsub 'ActiveControl causes a runtime error if none is active
        DoCmd.GoToRecord , , acPrevious
    End If
exitsub:
End Sub

This doesn't work, of course. Help?

A: You need to set the Form.KeyPreview property to True. Otherwise only the current control sees the KeyDown event. But note that if you have editable textboxes etc. on this form, your users may hate this behavior. They need the left/right arrow keys while editing.
Q: Exchange parameters between adorner and adorned control

I need to pass some parameters between an adorner and the adorned control. How can this be done? Should I remove the adorner and add a new one with new parameters every time the parameters change?

For example, one of my parameters:

public static readonly DependencyProperty ThetaProperty =
    DependencyProperty.Register("Theta", typeof(double), typeof(SplitControl),
        new PropertyMetadata(default(double), SetTheta));

public double Theta
{
    get { return (double)GetValue(ThetaProperty); }
    set { SetValue(ThetaProperty, value); }
}

private static void SetTheta(DependencyObject d, DependencyPropertyChangedEventArgs e)
{
    _adorner.Theta = (double)e.NewValue;
}

In the adorner, Theta:

public double Theta
{
    get
    {
        return Math.Atan((_middleTop - _middleBottom) / AdornedElement.DesiredSize.Height) * 180 / Math.PI;
    }
    set
    {
        double deltaX = Math.Tan((Math.PI / 180) * value) * (AdornedElement.DesiredSize.Height / 2);
        _middleTop = _middle + deltaX;
        _middleBottom = _middle - deltaX;
    }
}

A: Here's a sample (type something into the textbox and watch the adorner):

Code:

using System;
using System.Globalization;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Media;

namespace Adorners
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();
            this.Loaded += (o, e) =>
            {
                AdornerLayer layer = AdornerLayer.GetAdornerLayer(this.t);
                MyAdorner myAdorner = new MyAdorner(this.t);
                layer.Add(myAdorner);
                this.t.Text = "Modified Value";
            };
        }
    }

    public class MyAdorner : Adorner
    {
        public static DependencyProperty TextProperty =
            DependencyProperty.Register("Text", typeof(string), typeof(MyAdorner),
                new PropertyMetadata("Default Text", (o, e) => { ((MyAdorner)o).InvalidateVisual(); }));

        // Be sure to call the base class constructor.
        public MyAdorner(UIElement adornedElement)
            : base(adornedElement)
        {
            this.DataContext = this.AdornedElement;
            this.SetUpBindings();
        }

        private void SetUpBindings()
        {
            BindingOperations.SetBinding(this, MyAdorner.TextProperty, new Binding()
            {
                Path = new PropertyPath("Text"),
                UpdateSourceTrigger = UpdateSourceTrigger.PropertyChanged
            });
        }

        public string Text
        {
            get { return (string)this.GetValue(MyAdorner.TextProperty); }
            set { this.SetValue(MyAdorner.TextProperty, value); }
        }

        protected override void OnRender(DrawingContext drawingContext)
        {
            Rect adornedElementRect = new Rect(this.AdornedElement.DesiredSize);
            drawingContext.DrawText(
                new FormattedText(this.Text, CultureInfo.CurrentCulture, FlowDirection.LeftToRight,
                    new Typeface("Arial"), 20, Brushes.Black),
                new Point(0, 150));
        }
    }
}

Markup:

<Window x:Class="Adorners.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="350" Width="525">
    <Grid x:Name="AdornedGrid">
        <TextBox x:Name="t" Width="200" Height="100" Background="Green"></TextBox>
    </Grid>
</Window>
Q: An extension of the birthday problem

The birthday problem (or paradox) has been done in many ways, with around a dozen threads on math.stackexchange alone. The way it is expressed is usually the following:

"Let us take $n$ people "independently" (no twins, etc.). What is the probability that no two people share the same birthday?"

It is abstracted in the following way:

"Let $X_1, \cdots, X_n$ be $n$ i.i.d. random variables taken uniformly in $[[1, 365]]$. What is the probability that all the $X_i$'s are distinct?"

There are many generalizations, for instance:

"Let $n$, $d$ be two positive integers, $n \leq d$. Let $X_1, \cdots, X_n$ be $n$ i.i.d. random variables taken uniformly in $[[1, d]]$. What is the probability $p(n,d)$ that all the $X_i$'s are distinct?"

One can show that in the regime $1 \ll n \ll d$, the probability $p(n,d)$ is logarithmically equivalent to something like $e^{-\frac{n^2}{2d}}$ (Wikipedia) or $e^{-\frac{n^2}{d}}$ (my computations)*. This problem can be reduced to simple combinatorics, and Stirling's formula (for instance) gives the solution.

However, in the real world, birthdays are not distributed that way. One might also want to estimate the probability that two people are born the same half-day, the same hour, etc. The following generalization seems natural:

"Let $\mu$ be a probability measure on $[0,1]$ absolutely continuous with respect to the Lebesgue measure. Let $n$, $d$ be two positive integers. For $k \in [[0,d-1]]$, let $a_k := [k/d, (k+1)/d]$. Let $X_1, \cdots, X_n$ be $n$ i.i.d. random variables in $[0,1]$ with distribution $\mu$. What is the probability $p(n,d)$ that all the $X_i$'s lie in different elements of the partition?"

I would expect the solution to be something like $e^{-C(\mu) \frac{n^2}{d}}$, with perhaps some explicit expression for $C(\mu)$. But the combinatorial solutions do not work as well in this setting, and all I can get are very crude bounds when the density of $\mu$ is bounded. In addition, I would expect $C(\mu)$ to be minimal when $\mu$ is the Lebesgue measure, but I don't know how to prove it. One might wonder what happens when $\mu$ is no longer absolutely continuous, but this might be a bit too broad a generalization.

I am sure this problem has been done to death, but I don't have access to the literature right now, and a quick search didn't yield anything (the generalizations of the birthday problem I found are quite different). Any result/proof/reference related to the problems above would be nice.

* By the way, any rigorous proof of either of the two facts (or of any similar-sounding result) is appreciated. I don't know which I can trust more, between my computations and Wikipedia.

A: Let's rephrase the problem:

There are $m$ bins, and $n$ items placed independently into a random bin, with probability $p_k$ of going into bin $k\in\{1,\ldots,m\}$. What is the likelihood that no two items go into the same bin?

Let's start with an approximate solution which is good for giving some intuition about the problem and works well as long as $n\ll m$.

For any two items, the probability of them going into the same bin is $q=\sum_{k=1}^m p_k^2$. Hence, the probability that they do not go into the same bin is $1-q$. Since there are $n(n-1)/2$ different pairs of items, if we make the approximation that any two pairs of items are independent (true if the pairs have no items in common and almost true if the two pairs have one item in common), we find the likelihood that no pair go into the same bin to be $\approx(1-q)^{n(n-1)/2}\approx e^{-qn(n-1)/2}$.

Now, let's do this a bit more formally. If we define the polynomial $$F(x)=\prod_{k=1}^m (1+p_kx)=\sum_{j=0}^m \frac{f_j x^j}{j!},$$ the coefficient $f_n$ is the likelihood that $n$ items go into $n$ distinct bins: it sums over all different ways to pick $n$ bins and the likelihood that the $n$ items be placed into these bins in any order (the factor $n!$).

We can now make an approximation: $$\begin{split} F(x) =&\prod_{k=1}^m (1+p_kx) = \exp\left\{\sum_{j=1}^m\ln(1+p_jx)\right\}\\ =&\exp\left\{\sum_{j=1}^m p_jx-\frac{p_j^2x^2}{2}+\frac{p_j^3x^3}{3}-\cdots\right\}\\ =&\exp\left\{x-\frac{q_2x^2}{2}+\frac{q_3x^3}{3}-\cdots\right\}\\ =&e^x\cdot e^{-q_2x^2/2}\cdot e^{q_3x^3/3}\cdot e^{-q_4x^4/4}\cdots\\ \end{split}$$ where $q_r=\sum_{k=1}^m p_k^r$ (so the above $q=q_2$ while $q_1=1$). We can show that $q_k\le q_2^{k-1}$ (e.g. from $q_r^2\ge q_{r-1}q_{r+1}$), which tells us $q_2$ is the dominant adjustment.

If we make the expansion $$\begin{split} \exp\left\{-\frac{q_2x^2}{2}+\frac{q_3x^3}{3}-\cdots\right\} =&\sum_{k=0}^\infty\frac{1}{k!} \left(-\frac{q_2x^2}{2}+\frac{q_3x^3}{3}-\cdots\right)^k\\ =&1-a_2x^2+a_3x^3-a_4x^4+\cdots\\ \end{split}$$ we get $a_2=q_2/2$, $a_3=q_3/3$, $a_4=q_4/4-q_2^2/8$, etc. Entering these into the power expansion for $F(x)$, we get $$\begin{split} F(x)=&e^x\cdot\left(1-a_2x^2+a_3x^3-a_4x^4+\cdots\right)\\ =&\sum_{n=0}^\infty\left(\frac{1}{n!} -\frac{a_2}{(n-2)!}+\frac{a_3}{(n-3)!}-\cdots\right)\cdot x^n\\ =&\sum_{n=0}^\infty\big(1-a_2n(n-1)+a_3n(n-1)(n-2)-\cdots\big)\cdot\frac{x^n}{n!}\\ \end{split}$$ so the effect of $a_rx^r$ is an adjustment $a_rn(n-1)\cdots(n-r+1)$, which for $r\ll n$ has the same magnitude as $a_rn^r$.

If we ignore $q_r$ for $r>2$ and take the effect of $a_{2r}x^{2r}$ to be $\approx a_{2r}[n(n-1)]^r$ (which is true for $r=1$ but only approximate for $r>1$), we get the original approximation: $e^{-q_2n(n-1)/2}$.

A: I have no idea about your main question, but I know how you got, for the uniform distribution case, a formula distinct from that of Wikipedia -- because I made the same mistake earlier today.

You select $n$ values out of a space of size $d$, each selection being uniformly random. We assume that $n \ll d$ (but not necessarily that $n^2 \ll d$, which is the crucial point). The probability of there being no collision is:

$$p(n,d) = \frac{d!}{d^n(d-n)!}$$

(there are $\frac{d!}{(d-n)!}$ possible selections with no collisions, out of $d^n$ if we allow collisions).

Using Stirling's formula ($k! \approx \sqrt{2\pi k}(\frac{k}{e})^k$), we get:

$$p(n, d) \approx d^{-n} \sqrt{\frac{d}{d-n}} \left(\frac{d}{e}\right)^d\left(\frac{d-n}{e}\right)^{n-d}$$

Since $n \ll d$, the part with the square root is very close to $1$, so we ignore it. The expression then becomes:

$$p(n, d) \approx e^{-n} \left(1-\frac{n}{d}\right)^{n-d} = e^{-n + (n-d)\ln (1-\frac{n}{d})}$$

Now we replace the log with its Taylor approximation, and, that's the tricky point, you have to use degree 2. This means that:

\begin{eqnarray} -n + (n-d)\ln (1-\frac{n}{d}) &=& -n + (n-d)\left(-\frac{n}{d} - \frac{n^2}{2d^2} + O\left(\frac{n^3}{d^3}\right)\right) \\ &=& -\frac{n^2}{d} - \frac{n^3}{2d^2} + \frac{n^2}{2d} + O\left(\frac{n^3}{d^2}\right) \\ &=& -\frac{n^2}{2d} + O\left(\frac{n^3}{d^2}\right) \end{eqnarray}

Hence the result:

$$p(n,d) \approx e^{-\frac{n^2}{2d}}$$

which is the formula given in the Wikipedia page on the birthday "paradox". In the expression above, when we replace the log with its approximation, the degree 1 terms cancel out, which is why we have to go to degree 2. Stopping at degree 1 was my mistake and, I presume, yours too.
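As a quick numerical check of the uniform case discussed above (our own illustration, not part of the original thread), one can compare the exact product formula for $p(n,d)$ with the $e^{-n^2/2d}$ asymptotic at the classical values $n=23$, $d=365$:

```python
# Exact probability that n uniform draws from d values are all distinct,
# versus the asymptotic approximation exp(-n^2 / (2d)).
import math

def p_distinct(n: int, d: int) -> float:
    prob = 1.0
    for k in range(n):
        prob *= (d - k) / d
    return prob

exact = p_distinct(23, 365)
approx = math.exp(-(23 ** 2) / (2 * 365))
print(exact)   # ≈ 0.4927, the classical birthday threshold
print(approx)  # ≈ 0.4845
```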
Q: Problems-exercises on complex analysis

Are there any books, PDFs, problem sheets, etc., where I can find problems and exercises on standard undergraduate complex analysis? I know every book has exercises; I am asking whether there is a specific book with only reasonably hard and easy problems, so I can understand the course better. I didn't find anything better, which is why I am asking here. Thank you in advance.
Fladså Pastorat is a pastorate in Næstved Provsti, Roskilde Stift, with the following four parishes:

Næstelsø Sogn
Mogenstrup Sogn
Hammer Sogn
Vester Egesborg Sogn

The pastorate has four churches:

Næstelsø Kirke
Mogenstrup Kirke
Hammer Kirke
Vester Egesborg Kirke

Pastorates in Roskilde Stift
Q: If 9/5 L of a gas at room temperature exerts a pressure of 15 kPa on its container, what pressure will the gas exert if the container's volume changes to 7/3 L?

A: We use the equation $p_{1}V_{1}=p_{2}V_{2}$.

$15 \times \frac{9}{5} = p_{2} \times \frac{7}{3}$

Divide by $\frac{7}{3}$, i.e. multiply by $\frac{3}{7}$:

$15 \times \frac{9}{5} \times \frac{3}{7} = p_{2}$

$\to p_{2} = \frac{3 \times 9 \times 3}{7} = \frac{81}{7} \approx 11.6\ \mathrm{kPa}$
{"url":"http:\/\/www.chegg.com\/homework-help\/questions-and-answers\/orbited-moon-apollo-11-spacecraft-s-mass-12600-kg-mean-distance-moon-s-center-239367-106-m-q2692573","text":"When it orbited the Moon, the Apollo 11\nspacecraft\ufffds mass was 12600 kg, and its\nmean distance from the Moon\ufffds center was\n2.39367 \ufffd 106 m.\nFind the orbital speed of the spacecraft.\nAssume its orbit to be circular and the\nMoon to be a uniform sphere of mass\n7.36 \ufffd 1022 kg. The gravitational constant\nis 6.67259 \ufffd 10?11 N \ufffd m2\/kg2.","date":"2016-05-27 08:10:27","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8647955060005188, \"perplexity\": 13440.844886291157}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2016-22\/segments\/1464049276543.81\/warc\/CC-MAIN-20160524002116-00069-ip-10-185-217-139.ec2.internal.warc.gz\"}"}
null
null
We all know that Whatsapp, Facebook, Instagram etc how they are in top places and how much people showing interest in Using these social apps. Like this newly, One app enters into the market that is Snapchat app. Now it is giving tuff competition to above all social apps. This tutorial, we are going to discuss how to Install Snapchat for PC. Basically, SnapChat is made for Android devices and iOS devices. But Most of the people searching for Pc versions in the Google search engine. So that personally we are searching for the windows platforms and giving you clear downloading and Installing process for PC. Snapchat is One of the best social apps for Sharing your Images and your story's and Live moments with your Friends and family members. Easily can edit your Images before posting as per your requirements. And you can see your Friends post also. Moreover, you can react to Their Post's also. You will definitely fall in love with this app Once you start using this. Very simple way to Install and Usage. Less occupancy in your device memory. Easily can invite to friends and contact members. We all are Know that, If you wanna run any Android apps in Pc platform you have to take help From Android emulators. Once you install Emulator Your PC will be like an Android device. You can Install or download Any android apps into PC. So why you late. First of all, download android Emulator in Your PC. Lot of Emulators are there in the market and Choose good one As per your requirements. Here I can suggest you " BlueStack " or "NOX app Player". But We don't get in any App, We have to take it from their official Website. So, Open your Chrome and type "https://bluestacks.com" Official website on the address bar. You can see Official website From their download Pc version. Based on your internet speed it will take time for Downloading in your PC. Next step, Give double click for Installing in your PC. After installing open the app. 
Here You need to log in for entering inside of the App like Google Account play store. Previously if your a Member of BlueStack then directly you can log in. The final step, Open your BlueStack app and top side You will find the Google Play store. Just open it and Search for Snapchat App. Here you can choose install option for Installing. Within a span of time, It will Install in BlueStack app. Now you can Open the Snapchat app use it Asper your Wish. If you wanna use this app you need to enter BlueStack app because You installed Snapchat in the BlueStack app, not in your PC device. Hope you get the Clear picture about the Snapchat for PC with freely. Don't hesitate to write a mail to us if you facing any installation issues. Read also: How to install Kik for pc?
{ "redpajama_set_name": "RedPajamaC4" }
9,334
\section{\label{introduction}Introduction} The Big Bang theory successfully explains the expansion of the Universe, the cosmic microwave background (CMB), and light element abundances. It requires a baryon-to-entropy ratio of order $10^{-10}$ as an initial condition at a temperature above $1 \ {\rm MeV} $. When we consider the earlier Universe, there is an era of exponential expansion, called inflation, which solves cosmological problems related to the initial conditions of the Universe, such as the horizon problem, the flatness problem, and the origin of large scale structure. However, any pre-existing baryon asymmetry is washed out by inflation, so that we need a mechanism to generate the observed amount of baryon asymmetry after inflation. In supersymmetric (SUSY) theories, baryon asymmetry can be generated by Affleck-Dine baryogenesis (ADBG) using a $B-L$ charged flat direction called an AD field~\cite{AD, DRT}. The AD field is assumed to have a negative effective mass term, called a Hubble-induced mass term, due to the finite energy density of the Universe via supergravity effects, which implies that it obtains a large VEV during and after inflation. As the energy density of the Universe decreases, the effective mass decreases. Eventually, the effective mass becomes comparable to the soft mass of the AD field, and then the AD field starts to oscillate around the origin of its potential. At the same time, its phase direction is kicked by its A-term potential. Since the $B-L$ number density is proportional to the phase velocity of the AD field, the $B-L$ asymmetry is generated through this dynamics. Finally, the coherent oscillation of the AD field decays and dissipates into the thermal plasma and the $B-L$ asymmetry is converted to the desired baryon asymmetry through the sphaleron effects~\cite{Kuzmin:1985mm, Fukugita:1986hr}.
There are many applications of ADBG (e.g., Refs.~\cite{Anisimov:2000wx, Fujii:2001zr, Mazumdar:2001nw, Baer:2009ms, Choi:2011rs, Kasuya:2014bxa, Yamada:2015rza}). It could solve the baryon-DM coincidence problem~\cite{EnMc, Fujii:2001xp, Roszkowski:2006kw, Kitano:2008tk, ShKu2, Kane:2011ih, Kamada:2012bk, Harigaya:2014tla, Kawasaki:2015cla} and the moduli problem~\cite{Felder:2007iz, Kawasaki:2007yy, Choi:2009qd, Furuuchi:2011wa, Higaki:2012ba, Garcia:2013bha, Hayakawa:2015fga}. The mechanism can also be used to generate asymmetry in the dark sector~\cite{Bell:2011tn, Cheung:2011if, Fischler:2014jda}. The inflaton may play the role of the AD field in non-SUSY models~\cite{Hertzberg:2013mba, Hertzberg:2013jba}. As mentioned above, the AD field obtains a Hubble-induced mass due to the finite energy density of the Universe during and after inflation (see Refs.~\cite{Kasuya:2006wf, Dutta:2010sg, Marsh:2011ud, Dutta:2012mw} for recent works on Hubble-induced terms). In the conventional scenario of ADBG, the sign of the Hubble-induced mass term is assumed to be negative during and after inflation. However, the sign of the Hubble-induced mass term can change after inflation because the source of the energy density of the Universe generically changes after inflation. In this paper, we investigate a new scenario in which the AD field obtains a negative Hubble-induced mass term during inflation while it obtains a positive one after inflation.% \footnote{ A similar scenario has been considered in the case of D-term inflation models in Refs.~\cite{McDonald:1999nc, Kawasaki:2015cla}, where the Hubble-induced mass is absent during D-term inflation and arises with a positive coefficient after inflation. In this paper, we focus on F-term hybrid and chaotic inflation models. }% \footnote{ The opposite case, where the Hubble-induced mass term is positive during inflation and is negative after inflation, has been considered in Refs.~\cite{Kamada:2014qja, Kamada:2015iga}.
Although $B-L$ asymmetry cannot be generated via the dynamics of the flat direction, topological defects form after inflation and emit gravitational waves. } In this case, the AD field starts to oscillate around the origin of the potential due to the positive Hubble-induced mass term just after the end of inflation. At the same time, its phase direction is kicked by an A-term and $B-L$ asymmetry is generated. We calculate the produced amount of baryon asymmetry and show that it can be consistent with that observed. The whole scenario is much simpler than the conventional scenario of ADBG. This is because the dynamics of the AD field is determined only by the Hubble-induced terms and the low-energy potential of the AD field does not affect the resulting $B-L$ asymmetry. This means that the scenario and our calculations in this paper can be applied to many SUSY models, including gravity-mediated and gauge-mediated SUSY breaking models. In particular, the scenario does not result in the formation of non-topological solitons called Q-balls even in gauge-mediated SUSY breaking models~\cite{Coleman, Qsusy, EnMc, KuSh, KK}. This is one of the advantages of our scenario because Q-balls are sometimes problematic due to their long lifetime. In addition, thermal effects on the dynamics of the AD field can be neglected in our scenario because it starts to oscillate before the thermal plasma grows. This is the case even for the so-called $L H_u$ flat direction. However, the resulting $B-L$ asymmetry depends on the energy scale of inflation because the dynamics of the AD field is determined by Hubble-induced terms. In particular, the A-term depends on inflation models, so that we need to calculate the $B-L$ asymmetry for each inflation model. Since the resulting $B-L$ asymmetry depends on parameters in the inflaton sector, we could check the consistency of the scenario by observing predictions of inflation models, such as the spectral index and tensor-to-scalar ratio.
This paper is organised as follows. In the next section, we briefly review the conventional scenarios of ADBG. Then we consider our scenario of ADBG in the case that the AD field obtains a positive Hubble-induced mass term after inflation. We first overview the scenario in Sec.~\ref{ADBG just after inf}. Then we apply it to a hybrid inflation model in Sec.~\ref{hybrid} and a chaotic inflation model in Sec.~\ref{chaotic}. Finally, we conclude in Sec.~\ref{conclusion}. \section{\label{ADBG}Conventional scenario of ADBG} In this section, we review the conventional scenario of ADBG to clarify the difference from our scenario explained in the subsequent sections. \subsection{Preliminary} In SUSY theories, there are SUSY partners of quarks and leptons, called squarks and sleptons, which are complex scalar fields carrying $B-L$ charges. Let us consider one of them and denote it as $\phi$. When we write its $B-L$ charge as $q$, the number density of $B-L$ asymmetry associated with $\phi$ is written as \begin{eqnarray} n_{B-L} = i q \lmk \dot{\phi}^* \phi - \phi^* \dot{\phi} \right) = 2 q {\rm Im} \lkk \phi^* \dot{\phi} \right]. \label{n_B-L} \end{eqnarray} This implies that we can obtain a large amount of $B-L$ asymmetry when the field $\phi$ rotates in the complex plane with a large amplitude. Thus we focus on a $B-L$ charged scalar field that has a very flat potential. In SUSY theories, there are two types of potentials for scalar fields: D-term and F-term potentials. Although gauged scalar fields have D-term potentials, it is known that D-terms are cancelled for gauge-singlet combinations of scalar fields. For example, when the field $\phi$ consists of the following combination, D-term potentials are cancelled: \begin{eqnarray} (u^c)^R_i = \frac{1}{\sqrt{3}} \phi, \quad (d^c)^G_j = \frac{1}{\sqrt{3}} \phi, \quad (d^c)^B_k = \frac{1}{\sqrt{3}} \phi, \end{eqnarray} where the upper indices represent color and the lower ones represent flavours ($j \ne k$). 
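As a cross-check of this cancellation (an illustrative script, not part of the original analysis), the following verifies numerically that the $u^c d^c d^c$ configuration above gives vanishing SU(3) and U(1)$_Y$ D-terms; the anti-fundamental generators $-T^{a*}$ and the hypercharges $Y(u^c)=-2/3$, $Y(d^c)=+1/3$ are the standard assignments.

```python
import numpy as np

# Gell-Mann matrices; SU(3) generators are T^a = lambda^a / 2
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1] = -1j; lam[1][1, 0] = 1j
lam[2][0, 0] = 1; lam[2][1, 1] = -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2] = -1j; lam[4][2, 0] = 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2] = -1j; lam[6][2, 1] = 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)
T = lam / 2

# u^c, d^c, d^c are colour anti-triplets; each VEV has a single colour
# component phi/sqrt(3), in colours R, G, B respectively
phi = 1.0
vevs = [phi / np.sqrt(3) * np.eye(3)[i] for i in range(3)]

# SU(3) D-terms in the anti-fundamental rep: D^a = sum_i v_i^dag (-T^a*) v_i
D_su3 = [sum(v.conj() @ (-T[a].conj()) @ v for v in vevs) for a in range(8)]
assert max(abs(D) for D in D_su3) < 1e-12

# U(1)_Y D-term with Y(u^c) = -2/3 and Y(d^c) = +1/3
D_Y = sum(Y * np.vdot(v, v) for Y, v in zip([-2/3, 1/3, 1/3], vevs))
assert abs(D_Y) < 1e-12
print("all D-terms vanish")
```

Each anti-triplet VEV picks out one diagonal entry of $T^a$, so the sum is proportional to ${\rm tr}\, T^a = 0$, while the hypercharges sum to $-2/3 + 1/3 + 1/3 = 0$.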
The fields $u^c$ and $d^c$ are $u$-type and $d$-type right-handed squarks, respectively. This D-flat direction is sometimes called the $u^c d^c d^c$ flat direction. The following combination is another famous example of flat directions, called the $L H_u$ flat direction~\cite{Murayama:1993em}: \begin{eqnarray} L_i = \frac{1}{\sqrt{2}} \lmk \begin{array}{ll} 0 \\ \phi \end{array} \right), \quad H_u = \frac{1}{\sqrt{2}} \lmk \begin{array}{ll} \phi \\ 0 \end{array} \right), \end{eqnarray} where $L$ and $H_u$ are the left-handed slepton and up-type Higgs, respectively. F-term potentials are determined by the superpotential $W$ as \begin{eqnarray} V_F (\phi) = \abs{\frac{\del W}{\del \phi} }^2. \end{eqnarray} In the minimal SUSY Standard Model (MSSM), the superpotential is given by \begin{eqnarray} W^{(\rm MSSM)} = y_u Q H_u u^c - y_d Q H_d d^c - y_e L H_d e^c + \mu H_u H_d, \label{W_MSSM} \end{eqnarray} within the renormalizable level, where we omit flavour indices. Here we implicitly assume R-parity conservation to avoid disastrous proton decay. Fortunately, many D-flat directions, including the $u^c d^c d^c$ flat direction, have no F-term potential within the renormalizable level. The D- and F-flat directions with nonzero $B-L$ charge are listed in Table~\ref{table}~\cite{Gherghetta:1995dv}.% \footnote{ Although the $L H_u$ flat direction has a potential coming from the Higgs $\mu$-term, $\mu$ is assumed to be of order the soft mass scale and is absorbed into the definition of $m_\phi$ [see Eq.~(\ref{A-term})]. } It is expected that the dynamics of such a flat direction can generate a large amount of $B-L$ asymmetry. \begin{table} \caption{\label{table} Flat directions in the MSSM and $B-L$ charges~\cite{Gherghetta:1995dv}.
} \begin{center} \begin{tabular}{ll} flat directions & $B-L$ \\ \hline \hline $ L H_u $ & -1 \\ \hline $ u^c d^c d^c $ & -1 \\ \hline $ LLe^c $ & -1 \\ \hline $ Qd^c L $ & -1 \\ \hline $d^cd^cd^cLL $ & -3 \\ \hline $ u^cu^cu^ce^ce^c $ & 1 \\ \hline $ Qu^cQu^ce^c $ & 1 \\ \hline $ QQQQu^c $ & 1 \\ \hline $ (QQQ)_4 LLLe^c $ & -1 \\ \hline $ u^cu^cd^c Qd^cQd^c $ & -1 \\ \hline \end{tabular} \end{center} \end{table} At low energy, the AD field obtains soft terms coming from the low-energy SUSY breaking effect. In this section, we consider gravity-mediated SUSY breaking models for simplicity. Note that the conventional scenario of ADBG depends on mediation models,% \footnote{ When we consider a SUSY model with a gauge-mediated SUSY breaking effect, the soft mass of the AD field is suppressed for a VEV larger than the messenger scale~\cite{deGouvea:1997tn}. In this case, we have to take into account the formation of non-topological solitons called Q-balls~\cite{ Coleman, Qsusy, EnMc, KuSh, KK}. The baryon number should be released from Q-balls to explain the observed amount of baryon asymmetry and the scenario is completely different from the one explained in this section~\cite{Fujii:2001xp, ShKu2, Kamada:2012bk, Harigaya:2014tla}. } but our scenario does not, as explained in the subsequent sections. We write the soft terms of the AD field as \begin{eqnarray} V_{\rm soft} &=& m_\phi^2 \abs{\phi}^2 + \lmk a m_{3/2} W^{(\rm AD)} + {\rm c.c.} \right) \label{A-term} \end{eqnarray} where $m_\phi$ ($\simeq m_{3/2}$) is the soft mass of the AD field, $m_{3/2}$ is the gravitino mass, and $a$ ($= \mathcal{O}(1)$) is a constant. We can assume $a = a^*$ without loss of generality. The higher-dimensional superpotential of the AD field $W^{(\rm AD)}$ is determined below. During and after inflation, the AD field obtains effective potentials from the energy density of the inflaton $I$ via supergravity effects.
In supergravity, the potential of scalar fields is determined by \begin{eqnarray} V_{\rm SUGRA} = e^{K/M_{\text{Pl}}^2} \lkk \lmk D_i W \right) K^{i \bar{j}} \lmk D_j W \right)^* - \frac{3}{M_{\text{Pl}}^2} \abs{W}^2 \right], \label{SUGRA potential} \end{eqnarray} where $K$ is a K\"{a}hler potential and $D_i W \equiv W_i + K_i W / M_{\text{Pl}}^2$. The subscripts represent the derivatives with respect to the corresponding fields, e.g., $W_i = \del W / \del \phi$ for $i = \phi$, and $K^{i \bar{j}}$ is defined by the inverse of $K_{i \bar{j}}$. We introduce an inflaton $I$ with a K\"{a}hler potential of \begin{eqnarray} K = \abs{\phi}^2 + \abs{I}^2 + \frac{c}{M_{\text{Pl}}^2} \abs{\phi}^2 \abs{I}^2, \end{eqnarray} where $c$ is an $\mathcal{O}(1)$ constant. We assume that the F-term potential of $I$ drives inflation and satisfies $\abs{W_I}^2 \simeq 3 H_{\rm inf}^2 M_{\text{Pl}}^2$, where $H_{\rm inf}$ is the Hubble parameter during inflation. The supergravity potential of Eq.~(\ref{SUGRA potential}) includes the following interaction: \begin{eqnarray} V &\supset& \exp \lmk \frac{K}{M_{\text{Pl}}^2} \right) W_I K^{I \bar{I}} W_I^* \\ &\simeq& \abs{F_I}^2 \lmk 1 + (1 - c) \frac{\abs{\phi}^2}{M_{\text{Pl}}^2} \right), \label{H-mass during inf} \end{eqnarray} where we assume $\la \phi \right\rangle, \la I \right\rangle \ll M_{\text{Pl}}$ and neglect irrelevant higher-dimensional terms. Thus the AD field $\phi$ obtains an effective mass term of order the Hubble parameter during inflation: \begin{eqnarray} V &\supset& c_H H_{\rm inf}^2 \abs{\phi}^2 \\ c_H &=& - 3 (c - 1). \end{eqnarray} This is called a Hubble-induced mass term.% \footnote{ There is sometimes a Hubble-induced A-term during inflation, but this is not the case in general (see Ref.~\cite{Kasuya:2008xp}). } After inflation ends, the inflaton starts to oscillate around the potential minimum and its oscillation energy dominates the Universe.
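The coefficient $c_H = -3(c-1)$ can be verified symbolically. The sketch below (an illustration, treating $|\phi|$ as a real expansion parameter at $|I| = 0$ and holding $|W_I|^2 \simeq 3 H_{\rm inf}^2 M_{\text{Pl}}^2$ fixed) expands $e^{K/M_{\text{Pl}}^2}\, |W_I|^2 K^{I \bar{I}}$ to quadratic order in $|\phi|$:

```python
import sympy as sp

phi, c, Mpl, F2 = sp.symbols('phi c M_Pl F2', positive=True)

# Kaehler metric component K_{I Ibar} = 1 + c |phi|^2 / Mpl^2;
# its inverse K^{I Ibar} enters the scalar potential.
K_II = 1 + c * phi**2 / Mpl**2

# e^{K/Mpl^2} |W_I|^2 K^{I Ibar}, evaluated at |I| = 0  (F2 = |W_I|^2)
V = sp.exp(phi**2 / Mpl**2) * F2 / K_II

# coefficient of |phi|^2 in units of F2 / Mpl^2 = 3 H_inf^2
coeff = sp.series(V, phi, 0, 3).removeO().coeff(phi, 2) * Mpl**2 / F2
assert sp.simplify(coeff - (1 - c)) == 0
print("mass term = 3(1 - c) H_inf^2 |phi|^2, i.e. c_H = -3(c - 1)")
```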
During this inflaton-oscillation dominated era, the Hubble-induced mass also comes from higher-dimensional kinetic interactions, which are determined by the K\"{a}hler potential as \begin{eqnarray} \mathcal{L}_{\rm kin} = K_{i \bar{j}} \del_\mu \varphi^i \del^\mu \varphi^{* j}, \label{kinetic term} \end{eqnarray} where $\varphi^i$ generically denotes the fields $\phi$ and $I$. There is a kinetic interaction of \begin{eqnarray} \mathcal{L}_{\rm kin} \supset K_{I \bar{I}} \abs{\dot{I}}^2 \supset \frac{c}{M_{\text{Pl}}^2} \abs{\dot{I}}^2 \abs{\phi}^2. \end{eqnarray} A typical time scale of the dynamics of the AD field is at most of order the Hubble parameter, as shown below. That of the inflaton is set by the curvature of its potential, which is larger than the Hubble parameter during the inflaton-oscillation dominated era. Thus we can take a time average over the inflaton-oscillation time scale to investigate the dynamics of the AD field. Assuming that the inflaton oscillates in a quadratic potential after inflation, we obtain an effective Hubble-induced mass for $\phi$ after inflation: \begin{eqnarray} V_H = c_H H^2 (t) \abs{\phi}^2 \\ c_H = - 3 \lmk c - \frac{1}{2} \right), \label{H-mass after inf} \end{eqnarray} where we use the Virial theorem and include the contribution from the F-term potential.% \footnote{ Inflation may be driven by a D-term potential of the inflaton. In this case, the Hubble-induced mass is absent during inflation but the AD field stays at a nonzero VEV due to the Hubble-friction effect~\cite{Kolda:1998kc, Enqvist:1998pf, Kawasaki:2001in}. The inflaton obtains a nonzero F-term after inflation ends, so that the AD field obtains a Hubble-induced mass during the inflaton-oscillation dominated era. Thus the scenario of ADBG and the resulting $B-L$ asymmetry are the same as in F-term inflation. } In the conventional scenario, $c_H$ is assumed to be negative during and after inflation.
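The virial average used in this step can be illustrated numerically: for an inflaton oscillating in a quadratic potential, $\langle \dot{I}^2 \rangle = \langle m^2 I^2 \rangle$, so the kinetic coupling above averages to a definite fraction of the inflaton energy density. A minimal sketch (Hubble friction neglected for brevity; in the expanding background the same averages hold adiabatically over each oscillation):

```python
import numpy as np
from scipy.integrate import solve_ivp

m = 1.0                                    # inflaton mass (arbitrary units)
rhs = lambda t, y: [y[1], -m**2 * y[0]]    # I'' = -m^2 I

T_end = 200 * np.pi                        # an integer number of periods
sol = solve_ivp(rhs, (0, T_end), [1.0, 0.0], rtol=1e-10, atol=1e-12,
                dense_output=True)
t = np.linspace(0, T_end, 200001)
I, Idot = sol.sol(t)
rho = 0.5 * Idot**2 + 0.5 * m**2 * I**2    # conserved energy density

# virial theorem for a quadratic potential: <Idot^2> = <m^2 I^2> = <rho>
assert abs(np.mean(Idot**2) - np.mean(m**2 * I**2)) < 1e-3
assert abs(np.mean(Idot**2) / np.mean(rho) - 1) < 1e-3
print("<Idot^2> / <rho> =", round(np.mean(Idot**2) / np.mean(rho), 3))
```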
This means that the AD field has a large tachyonic mass and obtains a large VEV during the time of $H(t) \mathop{}_{\textstyle \sim}^{\textstyle >} m_\phi$. Since the AD field has a large VEV, we have to take into account non-renormalizable terms to investigate its dynamics. Although the superpotential of the AD field is absent within the renormalizable level, it may have a higher-dimensional superpotential such as \begin{eqnarray} W^{(\rm AD)} = \lambda \frac{\phi^n}{n M_{\text{Pl}}^{n-3}}, \label{W_AD} \end{eqnarray} where $n$ ($ \ge 4$) is an integer depending on the flat direction and $M_{\text{Pl}}$ ($\simeq 2.4 \times 10^{18} \ {\rm GeV} $) is the reduced Planck scale. For example, since the neutrinos have nonzero masses (denoted as $m_{\nu_i}$), we introduce a superpotential of \begin{eqnarray} W^{(L H_u)} &=& \frac{m_{\nu_i}}{2 \la H_u \right\rangle^2} \lmk L_i H_u \right)^2, \\ &\equiv& \frac{\lambda}{4 M_{\text{Pl}}} \phi^4 ~~~~~~\text{for}~~~~ \frac{\phi^2}{2} = L H_u, \end{eqnarray} where $\la H_u \right\rangle = \sin \beta \times 174 \ {\rm GeV}$ and $\tan \beta \equiv \la H_u \right\rangle / \la H_d \right\rangle$. Thus the $L H_u$ flat direction corresponds to the case of $n=4$ in Eq.~(\ref{W_AD}). We can also write a superpotential of $(u^c d^c d^c)^2$, so that $n=6$ for the $u^c d^c d^c$ flat direction. The superpotential leads to an F-term potential of $\phi$ as \begin{eqnarray} V_F (\phi) = \lambda^2 \frac{\abs{\phi}^{2n-2}}{M_{\text{Pl}}^{2n-6}}, \label{V_F} \end{eqnarray} where we neglect irrelevant higher-dimensional terms in the supergravity potential. \subsection{Case without thermal effects} Let us explain the dynamics of the AD field and calculate the $B-L$ asymmetry. In this section, we neglect the thermal log potential, which is introduced in the next subsection.
As explained in the previous subsection, the potential of the AD field is given by \begin{eqnarray} V (\phi) &=& V_{\rm soft} + V_H + V_F \\ &=& m_\phi^2 \abs{\phi}^2 + \lmk a m_{3/2} \lambda \frac{\phi^n}{n M_{\text{Pl}}^{n-3}} + {\rm c.c.} \right) + c_H H^2(t) \abs{\phi}^2 + \lambda^2 \frac{\abs{\phi}^{2n-2}}{M_{\text{Pl}}^{2n-6}}, \end{eqnarray} during the inflaton-oscillation dominated era. When we decompose the AD field as $\phi = \varphi e^{i \theta} / \sqrt{2}$, the equations of motion are written as \begin{eqnarray} \ddot{\varphi} + 3 H \dot{\varphi} - \dot{\theta}^2 \varphi + \frac{\del V(\varphi)}{\del \varphi} &=& 0 \\ \ddot{\theta} + 3 H \dot{\theta} + 2 \frac{\dot{\varphi}}{\varphi} \dot{\theta} + \frac{1}{\varphi^2} \frac{\del V}{\del \theta} &=& 0, \label{EOM for phase direction} \end{eqnarray} where $H = 2/(3t)$ during the inflaton-oscillation dominated era. Note that the phase direction has a Hubble-friction term ($3 H \dot{\theta}$). The coefficient $c_H$ is assumed to be negative in the conventional scenario of ADBG. In this case, the AD field has a tachyonic mass, so that it obtains a large VEV. The VEV of the AD field at the potential minimum is given by \begin{eqnarray} \la \abs{\phi} \right\rangle \simeq \lmk \frac{\abs{c_H} H^2(t) M_{\text{Pl}}^{2n-6}}{\lambda^2 (n-1)} \right)^{1/(2n-4)}, \label{VEV} \end{eqnarray} for $H(t) \mathop{}_{\textstyle \sim}^{\textstyle >} m_\phi$. The AD field follows this potential minimum. The phase of the flat direction stays at a certain value due to the Hubble-friction term. We denote the initial phase of the AD field as $\theta_0$, which is expected to be of order unity. When the Hubble parameter decreases to $m_\phi$, the potential of the AD field is dominated by the soft mass term and it starts to oscillate around the origin of the potential. Here we denote the Hubble parameter at the beginning of the oscillation as $H_{\rm osc}$: \begin{eqnarray} H_{\rm osc} \simeq \frac{m_\phi}{\sqrt{\abs{c_H}}}.
\label{H_osc 1} \end{eqnarray} The VEV of the AD field at that time is given by \begin{eqnarray} \phi_{\rm osc} \simeq \lmk \frac{\abs{c_H} H^2_{\rm osc} M_{\text{Pl}}^{2n-6}}{\lambda^2 (n-1)} \right)^{1/(2n-4)}. \label{VEV2} \end{eqnarray} At the same time, its phase direction is kicked by the A-term, so that it starts to rotate in the phase space. This is the dynamics that generates the $B-L$ asymmetry [see Eq.~(\ref{n_B-L})]. The evolution equation for the $B-L$ number density is written as \begin{eqnarray} \dot{n}_{B-L} +3 H n_{B-L} = - q \frac{\del V}{\del \theta}, \end{eqnarray} where $q$ denotes the $B-L$ charge of the AD field. We semi-analytically and numerically solve this equation and obtain \begin{eqnarray} a^3 n_{B-L} (t) &=& - \int \mathrm{d} t \, q a^3(t) \frac{\del V}{\del \theta} \\ &\equiv& \epsilon q H_{\rm osc} \phi_{\rm osc}^2 a^3 (t_{\rm osc}) \\ \epsilon &\simeq& (2-4) \times \frac{a}{\sqrt{n-1}(1+(n-4)/(n-2))} \frac{m_{3/2}}{m_\phi} \sin \lmk n \theta_0 \right) ~~~~ \text{ for }~~ \epsilon \mathop{}_{\textstyle \sim}^{\textstyle <} 1, \label{result in conventional scenario 1} \end{eqnarray} where we assume $c_H = -1$ in the last line. We define the ellipticity parameter $\epsilon$ ($\le 1$), which represents the efficiency of baryogenesis. Since the $B-L$ number density has to be smaller than the total number density of the AD field times its $B-L$ charge $q$, $\epsilon$ is at most unity. We have numerically solved the equation of motion for $\phi$ and have obtained the numerical factor of $(2-4)$ in Eq.~(\ref{result in conventional scenario 1}) for $c_H = -1$ and $\epsilon \mathop{}_{\textstyle \sim}^{\textstyle <} 1$. One of the numerical results is shown in Fig.~\ref{fig1}, where we set $n=6$, $c_H = -1$, $a m_{3/2} / m_\phi = -1$, and $\theta_0 = \pi / 10$. One can see that the phase direction is kicked and the $B-L$ asymmetry is generated at $t \sim m_\phi^{-1} \simeq H_{\rm osc}^{-1}$.
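The qualitative behaviour of this numerical solution can be reproduced with a short script. The following sketch (not the production code behind Fig.~\ref{fig1}) integrates the equation of motion for the complex field in units $m_\phi = 1$, with $n=6$, $c_H=-1$, $a m_{3/2} = -1$, and $\theta_0 = \pi/10$ as in the figure, and with illustrative values $\lambda = 1$ and an artificially low cutoff $M_{\text{Pl}} \to 10$ so that all scales fit in a single run:

```python
import numpy as np
from scipy.integrate import solve_ivp

n, cH, A, lam, M, q = 6, -1.0, -1.0, 1.0, 10.0, 1.0   # m_phi = 1 units
theta0 = np.pi / 10

H = lambda t: 2.0 / (3.0 * t)       # inflaton-oscillation (matter) domination

def dV_dphiconj(t, p):              # derivative of V with respect to phi*
    return ((cH * H(t)**2 + 1.0) * p                      # Hubble + soft mass
            + A * lam * np.conj(p)**(n - 1) / M**(n - 3)  # A-term
            + (n - 1) * lam**2 * abs(p)**(2*n - 4) * p / M**(2*n - 6))

def rhs(t, y):                      # phi'' + 3 H phi' + dV/dphi* = 0
    p, pd = y[0] + 1j * y[1], y[2] + 1j * y[3]
    pdd = -3 * H(t) * pd - dV_dphiconj(t, p)
    return [pd.real, pd.imag, pdd.real, pdd.imag]

# start deep in the Hubble-induced-mass era, at the instantaneous minimum
t0 = 0.01
amp = (abs(cH) * H(t0)**2 * M**(2*n - 6) / (lam**2 * (n - 1)))**(1 / (2*n - 4))
p0 = amp * np.exp(1j * theta0)
sol = solve_ivp(rhs, (t0, 300.0), [p0.real, p0.imag, 0.0, 0.0],
                rtol=1e-9, atol=1e-11, dense_output=True)

def comoving_nBL(t):                # a^3 n_{B-L}, with a proportional to t^(2/3)
    y = sol.sol(t)
    p, pd = y[0] + 1j * y[1], y[2] + 1j * y[3]
    return t**2 * 2 * q * np.imag(np.conj(p) * pd)

late, later = comoving_nBL(150.0), comoving_nBL(300.0)
print(late, later)                  # nonzero and nearly equal: frozen asymmetry
```

The asymmetry is generated around $t \sim m_\phi^{-1}$ and the comoving value then freezes, as in Fig.~\ref{fig1}.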
The amplitude of the flat direction decreases as time evolves due to the Hubble expansion, and the $B-L$ breaking effect (i.e., the A-term) becomes irrelevant soon after the oscillation starts. Thus, the generated $B-L$ asymmetry within a comoving volume is conserved soon after the AD field starts to oscillate, as one can see in Fig.~\ref{fig1}. \begin{figure}[t] \centering \begin{tabular}{l l} \includegraphics[width=.45\textwidth, bb=0 0 450 305 ]{fig1} \quad \includegraphics[width=.45\textwidth, bb=0 0 450 191 ]{fig2} \end{tabular} \caption{\small Evolution of $B-L$ number density in a comoving volume (left panel) and the AD field (right panel) in the conventional scenario of ADBG. We set $n=6$, $c_H = -1$, $a m_{3/2} / m_\phi = -1$, and $\theta_0 = \pi / 10$. The dimensionful quantities are rescaled as $t \to m_\phi t$ and $\phi \to \phi / \la \abs{\phi} \right\rangle_{t=H_{\rm osc}^{-1}}$. } \label{fig1} \end{figure} Then, the oscillating AD field decays and dissipates into radiation~\cite{Mukaida:2012qn} and the sphaleron effect relates the $B-L$ asymmetry to the baryon asymmetry~\cite{Kuzmin:1985mm, Fukugita:1986hr}.% \footnote{ For simplicity, in this section we assume that Q-balls do not form after ADBG. Note that in our scenario explained in the subsequent sections, Q-balls do not form. } Since the sphaleron process is in thermal equilibrium, the resulting baryon asymmetry is related to the $B-L$ asymmetry as~\cite{Harvey:1990qw} \begin{eqnarray} n_b \simeq \frac{8}{23} n_{B-L}. \end{eqnarray} We can calculate the resulting baryon-to-entropy ratio $Y_b$ as \begin{eqnarray} Y_b &\equiv& \frac{n_b}{s} \simeq \left. \frac{8}{23} \frac{n_{B-L}}{s} \right\vert_{\rm RH} \\ &\simeq& \left.
\frac{8}{23} \frac{3 T_{\rm RH} n_{B-L}}{4 \rho_{\rm inf}} \right\vert_{\rm osc} \\ &\simeq& \frac{8}{23} \frac{\epsilon q T_{\rm RH}}{4 H_{\rm osc}} \lmk \frac{\phi_{\rm osc}}{M_{\text{Pl}}} \right)^2 \\ \label{Y_b 1} &\simeq& 1.2 \times 10^{-10} \epsilon q \lambda^{-1/2} \lmk \frac{T_{\rm RH}}{100 \ {\rm GeV} } \right) \lmk \frac{m_\phi}{1 \ {\rm TeV} } \right)^{-1/2}~~~~\text{for}~~n=6, \label{Y_b conventional} \end{eqnarray} where $\rho_{\rm inf}$ ($\simeq 3 H^2(t) M_{\text{Pl}}^2$) is the energy density of the inflaton and $T_{\rm RH}$ is the reheating temperature. In the last line, we use Eq.~(\ref{VEV2}). The resulting baryon asymmetry can be consistent with the observed baryon asymmetry of $Y_b^{(\rm obs)} \simeq 8.7 \times 10^{-11}$~\cite{pdg}. Since we expect $\epsilon q \sim 1$, a relatively low reheating temperature is required to explain the observed amount of baryon asymmetry unless the parameter $\lambda$ is much larger than unity. \subsection{Case with thermal effects: $L H_u$ flat direction} In this section, we take into account the thermal log potential. It is particularly important for the case of $n=4$, including the case of the $L H_u$ flat direction. After inflation ends and before reheating completes, the inflaton gradually decays into radiation. Since the energy density of radiation is given by $\rho_{\rm rad} \simeq (3/5) \rho_{\rm inf} \Gamma_I t$, there is a background plasma with a temperature of \begin{eqnarray} T = \lmk \frac{36 H(t) \Gamma_I M_{\text{Pl}}^2}{g_* (T) \pi^2} \right)^{1/4}, \label{T during osc.} \end{eqnarray} where $g_*$ is the effective number of relativistic degrees of freedom in the thermal plasma. The decay rate of the inflaton $\Gamma_I$ is related to the reheating temperature as \begin{eqnarray} T_{\rm RH} \simeq \lmk \frac{90}{g_* (T_{\rm RH}) \pi^2 } \right)^{1/4} \sqrt{\Gamma_I M_{\text{Pl}}}. \end{eqnarray} Here we explain the origin of the thermal log potential, focusing on the $L H_u$ flat direction.
The free energy of the thermal plasma $F$ depends on the QCD coupling $g_s$ at next-to-leading order as \begin{eqnarray} F = \frac{3}{8} (1 + N_f^{({\rm th})}) g_s^2 (T) T^4, \end{eqnarray} where $N_f^{({\rm th})}$ is the number of families in the thermal plasma. Here, the quark multiplets obtain effective masses via the Yukawa interactions when the $L H_u$ flat direction has a large VEV [see Eq.~(\ref{W_MSSM})]. When its VEV is larger than the temperature of the plasma, the renormalization running of $g_s$ is affected and its value at the energy scale of $T$ depends on the VEV of the $L H_u$ flat direction: $g_s(T) = g_s (T, \phi)$. Therefore the free energy depends on $\phi$ and the $L H_u$ flat direction acquires a temperature-dependent potential. Since the renormalization running has a logarithmic dependence, it is written as~\cite{Anisimov:2000wx, Fujii:2001zr} \begin{eqnarray} V_T (\phi) \simeq c_T \alpha_s^2 T^4 \log \lmk \frac{\abs{\phi}^2}{T^2} \right), \end{eqnarray} with $c_T = 45/ 32$ for $y \abs{\phi} \gg T$, where $\alpha_s \equiv g_s^2 / 4 \pi$ and $y$ generically stands for the quark Yukawa couplings. This is sometimes called the thermal log potential. In the previous subsection, we neglected the thermal potential, in which case the AD field starts to oscillate around the origin of the potential at $H(t) \simeq m_\phi / \sqrt{\abs{c_H}}$. When we take into account the thermal log potential, it starts to oscillate at \begin{eqnarray} H_{\rm osc} \simeq \text{Max} \lkk \frac{m_\phi}{\sqrt{\abs{c_H}}}, \ \sqrt{\phi^{-1} V_T'} \right]. \label{H_osc 1-2} \end{eqnarray} Using Eqs.~(\ref{VEV2}) and (\ref{T during osc.}), this can be rewritten as \begin{eqnarray} H_{\rm osc} \simeq \text{Max} \lkk m_\phi , \ 0.6 \alpha_s \sqrt{\lambda} T_{\rm RH} \right], \label{H_osc 2} \end{eqnarray} where we assume $\abs{c_H} = 1$ and $n=4$.
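The coefficient $0.6$ quoted in Eq.~(\ref{H_osc 2}) follows from combining $H_{\rm osc}^2 \simeq \phi^{-1} V_T'$ with Eq.~(\ref{T during osc.}) and the $n=4$ minimum $\abs{\phi}^2 \simeq H M_{\text{Pl}} / (\sqrt{3} \lambda)$. A sketch of the numerics, assuming the MSSM value $g_* = 228.75$ (the result depends only weakly on this choice):

```python
import numpy as np

g_star, c_T = 228.75, 45.0 / 32.0   # MSSM d.o.f.; thermal-log coefficient

# H_osc^2 = phi^{-1} V_T' = 2 c_T alpha_s^2 T^4 / phi^2, with
#   T^4     = 36 H Gamma_I Mpl^2 / (g_* pi^2),
#   phi^2   = H Mpl / (sqrt(3) lambda),
#   Gamma_I = (g_* pi^2 / 90)^(1/2) T_RH^2 / Mpl,
# gives H_osc = coeff * alpha_s * sqrt(lambda) * T_RH with
coeff = (np.sqrt(72 * np.sqrt(3) * c_T / (g_star * np.pi**2))
         * (g_star * np.pi**2 / 90) ** 0.25)
print(round(coeff, 2))              # -> 0.62, i.e. the 0.6 of Eq. (H_osc 2)
```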
We numerically solve the equation of motion for $\phi$ and obtain the ellipticity parameter as \begin{eqnarray} \epsilon &=& (0.4 - 3.5) \times a \sin \lmk n \theta_0 \right) \frac{m_{3/2}}{H_{\rm osc}} \label{result in conventional scenario 2} \\ &\equiv& \tilde{\epsilon} \frac{m_{3/2}}{H_{\rm osc}}, \end{eqnarray} where we define $\tilde{\epsilon}$, which is expected to be of order unity. Here we assume $T_{\rm RH} \mathop{}_{\textstyle \sim}^{\textstyle >} m_\phi / (\alpha_s \sqrt{\lambda})$, which implies $H_{\rm osc} \simeq 0.6 \alpha_s \sqrt{\lambda} T_{\rm RH}$ [see Eq.~(\ref{H_osc 2})]. One of our results is shown in Fig.~\ref{fig2}, where we set $c_H = -1$, $a m_{3/2} / H_{\rm osc} = -0.01$, and $\theta_0 = \pi / 10$. The ellipticity parameter $\epsilon$ is much smaller than unity in this numerical calculation, so that the phase direction is only slightly kicked. It is difficult to see in the right panel of Fig.~\ref{fig2} that the AD field rotates in the phase space, though it actually does. \begin{figure}[t] \centering \begin{tabular}{l l} \includegraphics[width=.45\textwidth, bb=0 0 450 305 ]{fig3} \quad \includegraphics[width=.45\textwidth, bb=0 0 450 191 ]{fig4} \end{tabular} \caption{\small Evolution of $B-L$ number density in a comoving volume (left panel) and the phase direction of the AD field (right panel) in the conventional scenario of ADBG. We set $c_H = -1$, $a m_{3/2} / H_{\rm osc} = -0.01$, and $\theta_0 = \pi / 10$. The dimensionful parameters are rescaled as $t \to H_{\rm osc} t$ and $\phi \to \phi / \la \abs{\phi} \right\rangle_{t=H_{\rm osc}^{-1}}$.
} \label{fig2} \end{figure} The baryon-to-entropy ratio is calculated as \begin{eqnarray} Y_b &\simeq& \frac{8}{23} \frac{q \tilde{\epsilon} m_{3/2}}{4 \alpha_s \lambda^{3/2} M_{\text{Pl}}} \\ &\simeq& 3.7 \times 10^{-10} \tilde{\epsilon} \lmk \frac{\lambda}{10^{-4}} \right)^{-3/2} \lmk \frac{m_{3/2}}{1 \ {\rm TeV} } \right), \label{Y_b 2} \end{eqnarray} where we assume $T_{\rm RH} \mathop{}_{\textstyle \sim}^{\textstyle >} m_\phi / (\alpha_s \sqrt{\lambda})$, $\abs{c_H} = 1$, and $\alpha_s = 0.1$ and use $\epsilon = \tilde{\epsilon} m_{3/2} / H_{\rm osc}$. This result is independent of the reheating temperature~\cite{Fujii:2001zr}. The observed baryon asymmetry can be explained when the coupling $\lambda$ satisfies \begin{eqnarray} \lambda \simeq 2.6 \times 10^{-4} \lmk \frac{m_{3/2}}{1 \ {\rm TeV} } \right)^{2/3}, \end{eqnarray} where we assume $\tilde{\epsilon} = 1$. When we identify the AD field as $L H_u$ flat direction, this result implies that the lightest left-handed neutrino has a tiny mass of \begin{eqnarray} m_{\nu} &\simeq& 1.6 \times 10^{-9} \ {\rm eV} \lmk \frac{\lambda}{2.6 \times 10^{-4} } \right) \\ &\simeq& 1.6 \times 10^{-9} \ {\rm eV} \lmk \frac{m_{3/2}}{1 \ {\rm TeV} } \right)^{2/3}. \end{eqnarray} \subsection{\label{isocurvature}Baryonic isocurvature constraint} In many cases, the phase direction of the AD field is massless during inflation. This implies that the phase direction has quantum fluctuations during inflation~\cite{Enqvist:1998pf, Kawasaki:2001in, Kasuya:2008xp, Harigaya:2014tla}: \begin{eqnarray} \abs{\delta \theta_0} \simeq \frac{\sqrt{2} H_{\rm inf} }{2 \pi \abs{\phi}_{\rm inf}}. 
\end{eqnarray} Since the resulting baryon asymmetry is related to $\theta_0$ [see Eqs.~(\ref{result in conventional scenario 1}) and (\ref{result in conventional scenario 2})], ADBG predicts baryonic isocurvature perturbations such as \begin{eqnarray} \mathcal{S}_{b \gamma} \equiv \frac{\delta Y_B}{Y_B} \simeq n \cot \lmk n \theta_0 \right) \delta \theta_0. \label{S_b} \end{eqnarray} Since the density perturbations of the CMB are predominantly adiabatic, the baryonic isocurvature perturbation is tightly constrained as~\cite{Ade:2015lrj} \begin{eqnarray} \left\vert \mathcal{S}_{{\rm b} \gamma} \right\vert \mathop{}_{\textstyle \sim}^{\textstyle <} 5.0 \times 10^{-5}. \end{eqnarray} Therefore, this constraint puts an upper bound on the energy scale of inflation: \begin{eqnarray} H_{\rm inf} \mathop{}_{\textstyle \sim}^{\textstyle <} 5.3 \times 10^{14} \ {\rm GeV} \frac{\tan (n \theta_0)}{n} \frac{\abs{\phi}_{\rm inf}}{M_{\text{Pl}}}. \end{eqnarray} This can be rewritten as \begin{eqnarray} H_{\rm inf} \mathop{}_{\textstyle \sim}^{\textstyle <} \left\{ \begin{array}{ll} 1.6 \times 10^{13} \ {\rm GeV} \lmk \frac{\lambda}{2.6 \times 10^{-4}} \right)^{-1} &~~~~\text{for}~~ n = 4 \\ 2.3 \times 10^{12} \ {\rm GeV} \lambda^{-1/3} &~~~~\text{for}~~ n = 6, \end{array} \right. \label{isocurvature constraint} \end{eqnarray} where we use Eq.~(\ref{VEV}) and assume $\abs{c_H} = 1$ and $\tan (n \theta_0)=1$. \section{\label{ADBG just after inf}Affleck-Dine baryogenesis just after inflation} In this section, we explain a new scenario of ADBG where the AD field starts to oscillate around the origin of the potential just after the end of inflation. 
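The isocurvature estimate above is simple enough to check numerically. The sketch below combines $\delta\theta_0 \simeq \sqrt{2} H_{\rm inf}/(2\pi \abs{\phi}_{\rm inf})$ with Eq.~(\ref{S_b}); the sample inputs are illustrative assumptions.

```python
import math

def s_b_gamma(n, theta0, h_inf, phi_inf):
    """Baryonic isocurvature amplitude |n cot(n theta0) delta_theta0|,
    with delta_theta0 = sqrt(2) H_inf / (2 pi |phi|_inf)."""
    d_theta0 = math.sqrt(2.0) * h_inf / (2.0 * math.pi * phi_inf)
    return abs(n / math.tan(n * theta0) * d_theta0)

# Illustrative point: n = 4, theta0 = pi/16 so tan(n theta0) = 1,
# H_inf = 1e13 GeV, |phi|_inf = 1e17 GeV; the result exceeds the
# observational bound 5e-5, so this H_inf would be excluded.
print(s_b_gamma(4, math.pi / 16, 1e13, 1e17))
```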
In general, this scenario is realized when the K\"{a}hler potential is given by \begin{eqnarray} K = \abs{\phi}^2 + \abs{S}^2 + \abs{\psi}^2 + \frac{c_1}{M_{\text{Pl}}^2} \abs{\phi}^2 \abs{S}^2 - \frac{c_2}{M_{\text{Pl}}^2} \abs{\phi}^2 \abs{\psi}^2, \label{Kahler} \end{eqnarray} where $S$ is the field whose F-term drives inflation and $\psi$ is the field whose oscillation energy dominates the Universe after inflation. Here, we assume that $S$ and $\psi$ are different fields, which is actually the case in hybrid and chaotic inflation models as shown in the subsequent sections. During inflation, the AD field acquires the Hubble-induced mass via the F-term potential of the field $S$ as Eq.~(\ref{H-mass during inf}). After inflation ends, the Hubble-induced mass also comes from higher-dimensional kinetic interactions between $\phi$ and $\psi$ as Eq.~(\ref{H-mass after inf}). Therefore, the Hubble-induced mass term for the AD field $\phi$ is given by \begin{eqnarray} V_H &=& c_H H^2 (t) \abs{\phi}^2 \\ c_H &=& \left\{ \begin{array}{ll} - 3 (c_1 - 1) &~~~~\text{during \ inflation} \\ 3 \lmk - (1-r ) c_1 + r c_2 + \frac{1}{2} \right) &~~~~\text{after \ inflation}, \\ \end{array} \right. \end{eqnarray} where $r$ ($0 \le r \le 1$) is the fraction of the energy density of $\psi$ to the total energy after inflation. Therefore the sign of the Hubble-induced mass term can change after inflation. If its sign continues to be negative after inflation, the conventional scenario of ADBG is realized, as explained in the previous section. In the rest of this paper, we consider the case that the coefficient is negative during inflation and positive after inflation. In this case, the AD field starts to oscillate around the origin of the potential just after the end of inflation. In contrast to the conventional scenario of ADBG, the dynamics of its phase direction depends on inflation models, so that the resulting $B-L$ asymmetry depends on parameters in the inflaton sector. 
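The sign flip of $c_H$ can be made concrete with a two-line sketch of the formulas above; the values of $c_1$, $c_2$, and $r$ in the example are illustrative assumptions.

```python
def c_h_during(c1):
    """Coefficient of the Hubble-induced mass during inflation: -3(c1 - 1)."""
    return -3.0 * (c1 - 1.0)

def c_h_after(c1, c2, r):
    """Coefficient after inflation: 3(-(1 - r) c1 + r c2 + 1/2),
    r = energy fraction carried by psi."""
    return 3.0 * (-(1.0 - r) * c1 + r * c2 + 0.5)

# Illustrative couplings: c1 = 2 gives a negative (tachyonic) mass during
# inflation; with r = 1 (psi dominates) and c2 = 0 the sign flips positive.
print(c_h_during(2.0), c_h_after(2.0, 0.0, 1.0))
```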
In the subsequent sections, we consider hybrid and chaotic inflation models to investigate this scenario and calculate the amount of $B-L$ asymmetry. Before investigating the details of the dynamics of the AD field, we outline its rough behaviour in this section. In the above scenario, the dynamics of the AD field is determined by the potential of \begin{eqnarray} V(\phi) = c_H H^2(t) \abs{\phi}^2 + \lambda^2 \frac{\abs{\phi}^{2n-2}}{M_{\text{Pl}}^{2n-6}} + V_A (\phi), \end{eqnarray} where $c_H < 0 $ during inflation and $c_H > 0$ after inflation. The A-term potential $V_A$ depends on inflation models and is explicitly derived in the subsequent sections. The low-energy soft terms of Eq.~(\ref{A-term}) are irrelevant for the dynamics of the AD field. This makes our calculation simple and independent of low-energy SUSY models. In particular, the resulting $B-L$ asymmetry is independent of how the SUSY breaking effect is mediated to the visible sector. Since we consider the case that $c_H < 0 $ during inflation and $c_H > 0$ after inflation, the AD field starts to oscillate around the origin just after the end of inflation. At the same time, its phase direction is kicked by an A-term. The origin of the A-term depends on the inflation model, and thus so does the resulting $B-L$ asymmetry. Here we simply write the generated $B-L$ asymmetry as \begin{eqnarray} \frac{a^3 (t)}{a^3 (t_{\rm osc})} n_{B-L} (t) \equiv q \epsilon H_{\rm osc} \abs{\phi}^2_{\rm osc}, \end{eqnarray} and derive $\epsilon$ in the subsequent sections. The resulting baryon-to-entropy ratio is thus written as \begin{eqnarray} Y_b &\simeq& \left. \frac{8}{23} \frac{3 T_{\rm RH} n_{B-L}}{4 \rho_{\rm inf}} \right\vert_{\rm osc} \\ &\simeq& \frac{8}{23} \frac{\epsilon q T_{\rm RH}}{4 H_{\rm osc}} \lmk \frac{\phi_{\rm osc}}{M_{\text{Pl}}} \right)^2. \end{eqnarray} This is the same as Eq.~(\ref{Y_b 1}), but $H_{\rm osc}$ is no longer given by Eqs.~(\ref{H_osc 1}) and (\ref{H_osc 1-2}). 
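As a sanity check of the baryon-to-entropy formula above, a minimal numerical sketch (all inputs are illustrative; $M_{\rm Pl}$ is taken as the reduced Planck mass):

```python
M_PL = 2.4e18  # reduced Planck mass in GeV (assumption: reduced-mass convention)

def y_b(eps, q, t_rh, h_osc, phi_osc):
    """Baryon-to-entropy ratio Y_b = (8/23) eps q T_RH / (4 H_osc)
    * (phi_osc / M_Pl)^2, all mass scales in GeV."""
    return (8.0 / 23.0) * eps * q * t_rh / (4.0 * h_osc) * (phi_osc / M_PL) ** 2

# Illustrative inputs: eps = 1e-3, q = 1, T_RH = 1e9 GeV,
# H_osc = 1e12 GeV, phi_osc = 1e17 GeV.
print(y_b(1e-3, 1.0, 1e9, 1e12, 1e17))  # ~1.5e-10
```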
Since the AD field starts to oscillate just after the end of inflation in this scenario, $H_{\rm osc}$ is given by the Hubble parameter at the end of inflation. Here, let us emphasise the differences from the conventional scenario of ADBG. The Hubble parameter at the onset of oscillation $H_{\rm osc}$ is determined by the energy scale of inflation, by neither $m_\phi$ nor $T_{\rm RH}$ [see Eqs.~(\ref{H_osc 1}) and (\ref{H_osc 2})]. This is because the flat direction starts to oscillate just after the end of inflation due to the positive Hubble-induced mass term. In addition, $\phi_{\rm osc}$ depends only on $H_{\rm osc}$ and $\lambda$ via Eq.~(\ref{VEV2}). Therefore, the resulting $B-L$ asymmetry is independent of parameters in low-energy SUSY models, such as $m_\phi$ and $m_{3/2}$. There are some advantages in this scenario. First, as we explain above, the resulting $B-L$ asymmetry is independent of the masses of the AD field and gravitino. The result is also independent of how the SUSY breaking effect is mediated to the visible sector. Secondly, non-topological solitons, called Q-balls, may form and affect the cosmological scenario after the conventional scenario of ADBG~\cite{Coleman, Qsusy, EnMc, KuSh, KK}, while they do not form in our scenario. This makes the discussion much simpler. In particular, Q-balls usually form in gauge mediated SUSY breaking models after the conventional scenario of ADBG and they are sometimes problematic in cosmology due to their long lifetime~\cite{KK, Harigaya:2014tla}. Our scenario does not suffer from this problem. Thirdly, the thermal effect on the AD field can be neglected because the AD field starts to oscillate just after the end of inflation and before the thermal plasma grows sufficiently~\cite{Mukaida:2015ria}. This also makes calculations simpler. In particular, the thermal log potential can be neglected even for the $L H_u$ flat direction. 
Finally, our results imply that ADBG works in a broader range of parameter space. Since the sign of the Hubble-induced mass term cannot be determined by the underlying physics, it is equally possible that the sign becomes positive after inflation. In addition, viable parameter regions for some parameters, e.g., the reheating temperature, are different from the ones in the conventional scenario of ADBG. These facts imply that the Affleck-Dine mechanism works well in more cases than expected in the literature. \section{\label{hybrid}Hybrid inflation} In this section, we consider our scenario of ADBG in the simplest hybrid inflation model~\cite{Copeland:1994vg, Dvali:1994ms} and calculate the $B-L$ asymmetry. The superpotential in the inflaton sector is given by \begin{eqnarray} W^{(\rm inf)} = \kappa S \lmk \psi \bar{\psi} - \mu^2 \right), \end{eqnarray} where $S$ is the inflaton, and $\psi$ and $\bar{\psi}$ are the waterfall fields. The F-term potentials are thus given as \begin{eqnarray} \left. V_{\rm inf} \right\vert_{\rm tree} = \kappa^2 \abs{\psi \bar{\psi} - \mu^2}^2 + \kappa^2 \abs{S}^2 \lmk \abs{\psi}^2 + \abs{\bar{\psi}}^2 \right). \end{eqnarray} The inflaton $S$ is assumed to have a large initial VEV so that the waterfall fields stay at the origin due to effective masses of $\kappa \la S \right\rangle$. Then the F-term of $S$ is nonzero and drives inflation, where the energy scale of inflation is given by $3 H_{\rm inf}^2 M_{\text{Pl}}^2 \simeq \kappa^2 \mu^4$. The inflaton $S$ slowly rolls toward the origin due to the 1-loop Coleman-Weinberg potential: \begin{eqnarray} \left. V_{\rm inf} \right\vert_{\rm 1-loop} = \frac{\kappa^4 \mu^4}{32 \pi^2} \lkk \lmk x^2 + 1 \right)^2 \ln \lmk x^2 + 1 \right) + \lmk x^2 - 1 \right)^2 \ln \lmk x^2 - 1 \right) - 2 x^4 \ln x^2 - 3 \right], \label{CW} \end{eqnarray} where we define $x \equiv \abs{S}/\mu$. Inflation ends when its VEV decreases to the critical value of $S_{cr} \equiv \mu$. 
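The rolling of $S$ toward the critical value can be checked directly from Eq.~(\ref{CW}): the sketch below evaluates the one-loop potential for $x > 1$ and confirms that it grows with $x$, so the slope pushes $\abs{S}$ down toward $\mu$. The default $\kappa$ and $\mu$ are illustrative.

```python
import math

def v_cw(x, kappa=0.05, mu=1e15):
    """1-loop Coleman-Weinberg potential of Eq. (CW) for x = |S|/mu > 1
    (mu in GeV; the overall scale is kappa^4 mu^4 / 32 pi^2)."""
    bracket = ((x**2 + 1.0)**2 * math.log(x**2 + 1.0)
               + (x**2 - 1.0)**2 * math.log(x**2 - 1.0)
               - 2.0 * x**4 * math.log(x**2) - 3.0)
    return kappa**4 * mu**4 / (32.0 * math.pi**2) * bracket

# The potential increases with x for x > 1, driving |S| toward mu.
print(v_cw(2.0) > v_cw(1.5) > v_cw(1.2))
```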
The Hubble parameter at the end of inflation is given by \begin{eqnarray} H_{\rm osc} \simeq H_{\rm inf} \simeq \frac{\kappa \mu^2}{\sqrt{3} M_{\text{Pl}}}. \end{eqnarray} After that, the waterfall fields as well as the inflaton start to oscillate around the minimum of the potential and their oscillation energy dominates the Universe. Around the minimum of the potential, the masses of the inflaton and waterfall fields are given by $\sqrt{2} \kappa \mu$. Although the simplest hybrid inflation model predicts a spectral index inconsistent with the observed value, some modifications can make it consistent. For example, we may introduce a higher-dimensional K\"{a}hler potential for the inflaton to induce a small negative mass term, which can result in a consistent spectral index~\cite{BasteroGil:2006cm, Nakayama:2010xf}. Since our discussion below is not affected, at least qualitatively, by this modification, we calculate the $B-L$ asymmetry in the simplest model above. \subsection{Dynamics of the AD field} The inflaton $S$ is identified with the field $S$ in Eq.~(\ref{Kahler}) and the waterfall fields $\psi$ and $\bar{\psi}$ play the role of the field $\psi$ in Eq.~(\ref{Kahler}). Thus the coefficient of the Hubble-induced mass $c_H$ can change after inflation. In this subsection, we consider the dynamics of the AD field in the hybrid inflation model and calculate the $B-L$ asymmetry. Let us first consider the dynamics of the phase direction of the AD field. 
Using Eq.~(\ref{SUGRA potential}) with the total superpotential of $W^{(\rm AD)} + W^{(\rm inf)}$, we find that there is an A-term potential coming from \begin{eqnarray} W^{(\rm inf)}_S K^{\bar{S} {\phi}} W^{(\rm AD)}_{\bar{\phi}} + K_\phi W^{(\rm inf)} \lmk W^{(\rm AD)}_\phi \right)^* + K_S W^{(\rm AD)} \lmk W^{(\rm inf)}_S \right)^* - 3 W^{(\rm inf)} \lmk W^{(\rm AD)} \right)^* + {\rm c.c.} \end{eqnarray} The A-term is written as \begin{eqnarray} V_A &=& - \lmk 1- c_1 - \frac{2}{n} \right) \frac{\kappa \mu^2 \lambda}{M_{\text{Pl}}^{n-1}} S^* \phi^n + {\rm c.c.} \\ &=& - a \frac{H^2_{\rm inf}}{M_{\text{Pl}}} \abs{S} \abs{\phi}^2 \cos \lmk \theta_S - n \theta_\phi \right), \label{V_A hybrid}\\ a &\equiv& - 2 \lmk c_1 - 1 + \frac{2}{n} \right) \sqrt{\frac{3 \abs{c_H}}{n-1} }, \end{eqnarray} where $\theta_S$ and $\theta_\phi$ are the complex phases of the fields $S$ and $\phi$, respectively. We use Eq.~(\ref{VEV}) and $H_{\rm inf}^2 = \kappa^2 \mu^4 / 3 M_{\text{Pl}}^2$ in the second line. This term is linear in the inflaton $S$, so that its slope should not be larger than that of the Coleman-Weinberg potential~\cite{ Buchmuller:2000zm, Nakayama:2010xf, Buchmuller:2014epa}. Otherwise the inflaton cannot reach the critical VEV and inflation cannot terminate, unless we allow a fine-tuning of the initial phase of the inflaton. 
Referring to Ref.~\cite{Buchmuller:2014epa}, we introduce a parameter to describe the relative importance of the two contributions to the slope of the potential: \begin{eqnarray} \xi &\equiv& \frac{1}{2} \lmk 1- c_1 - \frac{2}{n} \right) \frac{16 \pi^2}{\kappa^3 \ln 2} \frac{\la \abs{\phi} \right\rangle^n}{\mu M_{\text{Pl}}^{n-1}} \\ &\simeq& \frac{8 \pi^2 a}{3 \ln 2} \frac{\mu \la \abs{\phi} \right\rangle^2}{\kappa^2 M_{\text{Pl}}^3}, \label{constraint3} \end{eqnarray} which should be smaller than unity so that the inflaton can roll towards the critical value without fine-tuning.% \footnote{ When the VEV of the AD field is so large that the parameter $\xi$ becomes of order unity (but below unity), the A-term of Eq.~(\ref{V_A hybrid}) affects the inflaton dynamics. As a result, the spectral index can be consistent with the observed value~\cite{Yamada:2015rza}. } In the above minimal setup, there is no term other than Eq.~(\ref{V_A hybrid}) that affects the dynamics of the phase directions. Therefore, there is only one massive phase direction during inflation. For simplicity, let us assume that the inflaton and the AD field have approximately constant VEVs and $(\theta_S - n \theta_\phi) \ll 1$. In this case, the unitary matrix to diagonalise the squared mass matrix for the phase directions is given by \begin{eqnarray} \frac{1}{\sqrt{n^2 \abs{S}^2 + \abs{\phi}^2}} \lmk \begin{array}{ll} \abs{\phi}~~~ - n \abs{S} \vspace{0.2cm}\\ n \abs{S} ~~~\abs{\phi} \end{array} \right), \label{unitary matrix} \end{eqnarray} in the $( \abs{S} \theta_S / \sqrt{2} , \abs{\phi} \theta_\phi / \sqrt{2} )^T$ basis. 
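The matrix of Eq.~(\ref{unitary matrix}) can be verified to be orthogonal for any field values, as the following sketch does numerically; the sample inputs are arbitrary.

```python
import math

def mixing_matrix(abs_s, abs_phi, n):
    """Orthogonal (real unitary) matrix of Eq. (unitary matrix) in the
    (|S| theta_S / sqrt2, |phi| theta_phi / sqrt2) basis."""
    norm = math.hypot(n * abs_s, abs_phi)
    return [[abs_phi / norm, -n * abs_s / norm],
            [n * abs_s / norm, abs_phi / norm]]

def is_orthogonal(u, tol=1e-12):
    """Check U U^T = 1 entrywise for a 2x2 matrix."""
    prods = [sum(u[i][k] * u[j][k] for k in range(2))
             for i in range(2) for j in range(2)]
    expected = [1.0, 0.0, 0.0, 1.0]
    return all(abs(p - e) < tol for p, e in zip(prods, expected))

print(is_orthogonal(mixing_matrix(0.3, 1.0, 4)))  # True
```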
Thus, the massive direction denoted by $f_m \theta_m$ can be written as \begin{eqnarray} f_m \theta_m = \frac{\sqrt{2} \abs{S} \abs{\phi}}{\sqrt{n^2 \abs{S}^2 + \abs{\phi}^2}} \lmk \theta_S - n \theta_\phi \right), \end{eqnarray} and its mass $m_{\theta_m}$ is given by \begin{eqnarray} m_{\theta_m} = \sqrt{\frac{a H^2}{2} \frac{\abs{\phi}}{M_{\text{Pl}}} \lmk \frac{\abs{\phi}}{\abs{S}} + n^2\frac{\abs{S}}{\abs{\phi}} \right)}. \end{eqnarray} If the curvature of the phase direction is larger than the Hubble parameter during inflation, it stays at the minimum of the A-term, i.e., $\theta_m = 0$, and the phase direction cannot be kicked in the complex plane after inflation. In this case, $B-L$ asymmetry cannot be generated. Thus, we require $m_{\theta_m} \ll H$, which can be rewritten as \begin{eqnarray} a \abs{\phi}^2 &\ll& \abs{S} M_{\text{Pl}}, \label{constraint1}\\ a n^2 \abs{S} &\ll& M_{\text{Pl}}, \label{constraint2} \end{eqnarray} in order that the phase direction can stay at a different phase from the minimum due to the Hubble friction effect. We denote the initial phase as $\theta_m^{\rm ini}$. After inflation ends, the AD field acquires a positive Hubble-induced mass term and starts to oscillate around the origin of the potential. At the same time, the massive phase direction is kicked by the above A-term. Since the radial direction decreases with time due to the Hubble expansion, the A-term is relevant just after the beginning of oscillation. Thus we can estimate the angular velocity of massive phase direction such as \begin{eqnarray} \dot{\theta}_m \approx \frac{m_{\theta_m}^2}{H} \theta_m^{\rm ini}, \end{eqnarray} [see Eq.~(\ref{EOM for phase direction})]. 
Using the inverse of the unitary matrix of Eq.~(\ref{unitary matrix}), we obtain the angular velocity of the phase of the AD field such as \begin{eqnarray} \dot{\theta}_\phi &=& \frac{ - n \abs{S}}{\sqrt{n^2 \abs{S}^2 + \abs{\phi}^2}} \frac{f_m}{\sqrt{2} \abs{\phi}} \dot{\theta}_m \\ &\approx& \frac{m_{\theta_m}^2}{H} \frac{ - n \abs{S}}{\sqrt{n^2 \abs{S}^2 + \abs{\phi}^2}} \frac{f_m \theta_m^{\rm ini}}{\sqrt{2} \abs{\phi}} \\ &=& \frac{m_{\theta_m}^2}{H} \frac{ - n \abs{S}}{\sqrt{n^2 \abs{S}^2 + \abs{\phi}^2}} \frac{ \abs{S}}{\sqrt{n^2 \abs{S}^2 + \abs{\phi}^2}} \lmk \theta_S - n \theta_\phi \right)^{\rm ini} \\ &=& - \frac{a n}{2} \frac{\abs{S}}{M_{\text{Pl}}} H \lmk \theta_S - n \theta_\phi \right)^{\rm ini}. \end{eqnarray} Thus we obtain \begin{eqnarray} \frac{a^3(t)}{a^3 (t_{\rm osc})} n_{B-L} (t) &=& \left. 2 \dot{\theta}_\phi \abs{\phi}^2 \right\vert_{\rm osc} \\ &\equiv& \epsilon q H_{\rm osc} \phi^2 \\ \epsilon &\equiv& \tilde{\epsilon} \frac{S_{\rm cr}}{M_{\text{Pl}}} \label{epsilon}\\ \tilde{\epsilon} &\simeq& (0.1-0.2) a n \sin \lmk n \theta_\phi - \theta_S \right)_{\rm osc}, \label{epsilon-factor} \end{eqnarray} where we define $\tilde{\epsilon}$ which is expected to be of order unity. The numerical factor of $(0.1-0.2)$ is determined from our numerical calculations explained below. Note that the resulting ellipticity parameter $\epsilon$ is consistent with a naive estimation of $\epsilon \sim V_A' / \phi H_{\rm osc}^2$.% \footnote{ We implicitly assume that $(S_{cr} / M_{\text{Pl}} ) \mathop{}_{\textstyle \sim}^{\textstyle >} m_{3/2} / H_{\rm osc}$ so that we can neglect an A-term of $m_{3/2} W_\phi$ [see Eq.~(\ref{A-term})]. Otherwise $\epsilon$ may be of order $m_{3/2} / H_{\rm osc}$. } The ellipticity parameter $\epsilon$, which describes the efficiency of baryogenesis, is much smaller than unity because of the condition of Eq.~(\ref{constraint2}). 
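Putting Eqs.~(\ref{epsilon}) and (\ref{epsilon-factor}) together, a small sketch (using the mid-range value $0.15$ for the numerical factor $(0.1-0.2)$; all inputs are illustrative) shows how suppressed $\epsilon$ is for a typical $S_{\rm cr} = \mu \ll M_{\rm Pl}$:

```python
import math

M_PL = 2.4e18  # reduced Planck mass in GeV (assumption)

def ellipticity(a, n, phase, s_cr, coeff=0.15):
    """Ellipticity parameter eps = eps_tilde * S_cr / M_Pl with
    eps_tilde = coeff * a * n * sin(n theta_phi - theta_S)_osc."""
    eps_tilde = coeff * a * n * math.sin(phase)
    return eps_tilde * s_cr / M_PL

# Illustrative: a = 1, n = 6, phase ~ pi/2, S_cr = mu = 1e15 GeV.
print(ellipticity(1.0, 6, math.pi / 2, 1e15))  # ~3.75e-4, far below unity
```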
This is because the phase direction of the AD field is kicked by the A-term, which is suppressed by the VEV of the inflaton. After the oscillation begins, the amplitude of the radial direction of the inflaton $S$ decreases with time as $\abs{S} \propto a^{-3/2}$. That of the AD field decreases as $\abs{\phi} \propto a^{-3/4}$, so that its number density ($H(t) \abs{\phi}^2/2$) decreases as $\propto a^{-3}$. Since the A-term, i.e., the $B-L$ number violating interaction, is a higher-dimensional term, it is turned off soon after the AD field starts to oscillate after inflation. The generated $B-L$ asymmetry is then conserved in a comoving volume and thus $n_{B-L} \propto a^{-3}$ for $t > t_{\rm osc}$. We have numerically solved the equations of motion together with the Friedmann equation, where the waterfall fields are collectively described by a real scalar field $\tilde{\psi}$ via $\psi = \bar{\psi} \equiv \tilde{\psi}/\sqrt{2}$. We assume $\abs{S}^2 / M_{\text{Pl}}^2$, $\abs{\phi}^2 / M_{\text{Pl}}^2$, $\tilde{\psi}^2 / M_{\text{Pl}}^2 \ll 1$ and take into account next-to-leading order terms in terms of them. We use the full kinetic terms for $S$ and $\phi$ [see Eq.~(\ref{kinetic term})], while we assume a canonical one for $\psi$ for simplicity. One of the results is shown in Fig.~\ref{fig3}, where the generated $B-L$ asymmetry is consistent with Eq.~(\ref{epsilon}). Taking parameters such as $n=4, 6$, $\kappa = 0.02-0.5$, $\mu = 0.0004 - 0.02$, $\lambda = 0.01 - 100$, and $\theta_\phi^{\rm ini} = 0.001 - 0.1$, we confirm the above parameter dependences and obtain the numerical uncertainty of $(0.1-0.2)$ in Eq.~(\ref{epsilon-factor}). We assume $c_H = -1$ and $c_2=0$ in our calculations, but we check that nonzero values of $c_2$ ($= \mathcal{O}(1)$ and $\ge 0$) do not change our results even quantitatively. 
\begin{figure}[t] \centering \includegraphics[width=.45\textwidth, bb=0 0 360 240 ]{fig5} \caption{\small Evolution of the $B-L$ number after hybrid inflation. The dashed curve is our prediction of Eq.~(\ref{epsilon-factor}) with a numerical factor of $0.2$. We assume $\lambda=1$, $n=6$, $c_H = -1$, $c_2=0$, $\kappa = 0.05$, $\mu = 0.001$, and $\theta_\phi^{\rm ini} = 0.01$. } \label{fig3} \end{figure} \subsection{Baryon asymmetry} The AD field starts to oscillate just after inflation and generates the $B-L$ asymmetry. The oscillating AD field decays and dissipates into radiation~\cite{Mukaida:2012qn} and the sphaleron effect relates the $B-L$ asymmetry to the baryon asymmetry~\cite{Kuzmin:1985mm, Fukugita:1986hr}. Using Eq.~(\ref{epsilon}), we can calculate the baryon-to-entropy ratio $Y_b$ as \begin{eqnarray} Y_b &\simeq& \frac{8}{23} \frac{\epsilon q T_{\rm RH}}{4 H_{\rm osc}} \lmk \frac{\phi_{\rm osc}}{M_{\text{Pl}}} \right)^2 \\ &\simeq& \left\{ \begin{array}{ll} 0.05 \sqrt{\abs{c_H}} q \frac{\epsilon}{\lambda} \frac{T_{\rm RH}}{M_{\text{Pl}}} ~~~~\text{for}~~ n = 4 \vspace{0.2cm}\\ 0.06 \abs{c_H}^{1/4} q \frac{\epsilon}{\lambda^{1/2}} \frac{T_{\rm RH}}{\sqrt{H_{\rm osc} M_{\text{Pl}}}} ~~~~\text{for}~~ n = 6, \end{array} \right. \label{Y_b} \end{eqnarray} Since $\epsilon \equiv \tilde{\epsilon} S_{\rm cr} / M_{\text{Pl}}$, $S_{\rm cr} = \mu$, and $H_{\rm osc}^2 \simeq \kappa^2 \mu^4 / (3 M_{\text{Pl}}^2)$, this is rewritten as \begin{eqnarray} Y_b &\simeq& \left\{ \begin{array}{ll} 0.05 \frac{\mu T_{\rm RH}}{\lambda M_{\text{Pl}}^2} ~~~~\text{for}~~ n = 4 \vspace{0.2cm}\\ 0.08 \frac{T_{\rm RH}}{ \sqrt{\kappa \lambda} M_{\text{Pl}}} ~~~~\text{for}~~ n = 6, \end{array} \right. \end{eqnarray} where we assume $\abs{c_H} = 1$, $q=1$, and $\tilde{\epsilon} = 1$. 
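The $n=4$ branch of the expression above can be evaluated directly; for the reference values of Eq.~(\ref{result in hybrid}) it reproduces $Y_b \sim 9 \times 10^{-11}$:

```python
M_PL = 2.4e18  # reduced Planck mass in GeV (assumption)

def y_b_n4(mu, t_rh, lam):
    """Y_b = 0.05 mu T_RH / (lambda M_Pl^2) for n = 4 (masses in GeV)."""
    return 0.05 * mu * t_rh / (lam * M_PL ** 2)

# Reference point: mu = 1e15 GeV, T_RH = 1e9 GeV, lambda = 1e-4.
print(y_b_n4(1e15, 1e9, 1e-4))  # ~9e-11
```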
For typical parameters, it is given by \begin{eqnarray} Y_b &\simeq& \left\{ \begin{array}{ll} 9 \times 10^{-11} \lmk \frac{\mu}{10^{15} \ {\rm GeV} } \right) \lmk \frac{T_{\rm RH}}{10^{9} \ {\rm GeV} } \right) \lmk \frac{\lambda }{10^{-4}} \right)^{-1} &~~~~\text{for}~~ n = 4 \vspace{0.2cm}\\ 1 \times 10^{-10} \lambda^{-1/2} \lmk \frac{\kappa}{10^{-3} } \right)^{-1/2} \lmk \frac{T_{\rm RH}}{10^{7} \ {\rm GeV} } \right) &~~~~\text{for}~~ n = 6, \end{array} \right. \label{result in hybrid} \end{eqnarray} We check that the constraints of Eqs.~(\ref{constraint1}) and (\ref{constraint2}) and $\xi \le 1$ [see Eq.~(\ref{constraint3})] are satisfied for the above reference parameters. Thus, we can explain the observed baryon asymmetry of $Y_b^{\rm obs} \simeq 8.7 \times 10^{-11}$~\cite{pdg} in this scenario. Since a linear combination of phase directions is massless during inflation, our scenario predicts nonzero baryonic isocurvature fluctuations like the case in Sec.~\ref{isocurvature}. However, the energy scale of hybrid inflation can be lower than the constraint of Eq.~(\ref{isocurvature constraint}). In fact, for the above reference parameters, our scenario is consistent with the present upper bound on the isocurvature mode. \subsection{Reheating temperature} As we can see in Eq.~(\ref{result in hybrid}), the resulting baryon asymmetry depends on reheating temperature $T_{\rm RH}$. To determine it, let us consider the decay of inflaton. There is a lower bound on the reheating temperature because the inflaton decays into the MSSM particles via supergravity effects. The decay rate is calculated as \begin{eqnarray} \Gamma_{\rm inf}^{\rm SUGRA} = \frac{3}{128 \pi^3} \abs{y_t}^2 \lmk \frac{\mu}{M_{\text{Pl}}} \right)^2 \frac{m_{\rm inf}^3}{M_{\text{Pl}}^2}, \end{eqnarray} where $m_{\rm inf} = \sqrt{2} \kappa \mu$ is the inflaton mass and $y_t$ is the top Yukawa coupling constant. 
The lower bound on the reheating temperature is thus given by~\cite{Nakamura:2006uc} \begin{eqnarray} T_{\rm RH}^{(\rm min)} \simeq 3 \times 10^3 \ {\rm GeV} \abs{y_t} \lmk \frac{\mu}{10^{15} \ {\rm GeV} } \right) \lmk \frac{m_{\rm inf}}{10^{12} \ {\rm GeV} } \right)^{3/2}. \end{eqnarray} If there is an interaction between the inflaton and the Higgs fields such as \begin{eqnarray} W \supset y S H_u H_d, \end{eqnarray} then the inflaton decay rate and the reheating temperature are estimated as \begin{eqnarray} \Gamma_{\rm inf} &=& \frac{y^2}{4 \pi} m_{\rm inf} \\ T_{\rm RH} &\simeq& 2 \times 10^{10} \ {\rm GeV} \lmk \frac{y}{10^{-4}} \right) \lmk \frac{m_{\rm inf}}{10^{12} \ {\rm GeV} } \right)^{1/2}. \label{T_RH in hybrid} \end{eqnarray} Note that the coupling constant $y$ should be smaller than $\kappa$ so as not to affect the Coleman-Weinberg potential of Eq.~(\ref{CW}). Thus the reheating temperature cannot be higher than that of Eq.~(\ref{T_RH in hybrid}) with $y \approx \kappa$. We have to take into account the constraint on $T_{\rm RH}$ from the gravitino overproduction problem. The inflaton also decays into gravitinos via supergravity effects. Its production rate is given by~\cite{Nakamura:2006uc} \begin{eqnarray} \Gamma_{3/2} \simeq \frac{1}{96 \pi} \lmk \frac{\mu}{M_{\text{Pl}}} \right)^2 \frac{m_{\rm inf}^3 }{M_{\text{Pl}}^2}. \end{eqnarray} The resulting gravitino-to-entropy ratio from this contribution is given by \begin{eqnarray} Y_{3/2}^{\rm (decay)} \simeq \frac{3}{2} \lmk \frac{90}{g_* \pi^2} \right)^{1/2} \frac{\Gamma_{3/2} M_{\text{Pl}}}{m_{\rm inf} T_{\rm RH}}. \end{eqnarray} Gravitinos are also produced from scatterings in the thermal plasma after reheating completes. 
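The quoted $T_{\rm RH}$ can be cross-checked with the standard estimate $T_{\rm RH} \simeq (90/\pi^2 g_*)^{1/4} \sqrt{\Gamma_{\rm inf} M_{\text{Pl}}}$; this relation and the MSSM value $g_* \simeq 228.75$ are standard assumptions, not taken from the text.

```python
import math

M_PL = 2.4e18     # reduced Planck mass, GeV (assumption)
G_STAR = 228.75   # MSSM relativistic degrees of freedom (assumption)

def t_rh_from_yukawa(y, m_inf):
    """Reheating temperature from Gamma = y^2 m_inf / (4 pi), using
    T_RH ~ (90 / pi^2 g_*)^(1/4) sqrt(Gamma M_Pl)."""
    gamma = y**2 / (4.0 * math.pi) * m_inf
    return (90.0 / (math.pi**2 * G_STAR)) ** 0.25 * math.sqrt(gamma * M_PL)

print(t_rh_from_yukawa(1e-4, 1e12))  # ~2e10 GeV, matching Eq. (T_RH in hybrid)
```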
Its abundance is given by~\cite{Bolz:2000fu, Pradler:2006qh, Buchmuller:2011mw} \begin{eqnarray} Y_{3/2}^{\rm (thermal)} \simeq 0.26 \frac{\rho_c}{m_{3/2} s_0} \lmk \frac{T_{\rm RH}}{10^{10} \ {\rm GeV} } \right) \lkk 0.13 \lmk \frac{m_{3/2}}{100 \ {\rm GeV} } \right) + \lmk \frac{100 \ {\rm GeV} }{m_{3/2}} \right) \lmk \frac{m_{\tilde{g}}}{1 \ {\rm TeV} } \right)^2 \right], \end{eqnarray} where $s_0$ ($\simeq 2.9 \times 10^3 \ {\rm cm}^{-3}$) and $\rho_c$ ($\simeq 1.052 \times 10^{-5} h^2 \ {\rm GeV} / \ {\rm cm}^3$) are the present entropy density and critical energy density, respectively. The parameter $m_{\tilde{g}}$ is the gluino mass and $h$ is the present Hubble parameter in units of $100 \ {\rm km} \, s^{-1} \ {\rm Mpc}^{-1}$. Stringent bounds on the reheating temperature are obtained when we assume that the gravitino is the lightest SUSY particle (LSP) and is stable. In this case, its abundance should not exceed the observed DM abundance: \begin{eqnarray} m_{3/2} \lmk Y_{3/2}^{\rm (decay)} + Y_{3/2}^{\rm (thermal)} \right) \le \frac{\rho_c}{s_0} \Omega_{\rm DM} \simeq 0.4 \ {\rm eV} , \label{gravitino problem2} \end{eqnarray} where $\Omega_{\rm DM} h^2$ ($\simeq 0.12$) is the DM relic density.% \footnote{ If the gravitino mass is about $1 \ {\rm TeV} $ and it is unstable, its decay products interact with the light elements and destroy them during the BBN epoch. Then the upper bound on the gravitino abundance is about four orders of magnitude more severe than the bound of Eq.~(\ref{gravitino problem2})~\cite{Kawasaki:1999na}. } For example, in the case of $m_{3/2} = 100 \ {\rm GeV} $, the reheating temperature is bounded as \begin{eqnarray} 2 \times 10^{7} \ {\rm GeV} \lmk \frac{\mu}{10^{15} \ {\rm GeV} } \right)^2 \lmk \frac{m_{\rm inf}}{10^{12} \ {\rm GeV} } \right)^2 \mathop{}_{\textstyle \sim}^{\textstyle <} T_{\rm RH} \mathop{}_{\textstyle \sim}^{\textstyle <} 9 \times 10^9 \ {\rm GeV} , \end{eqnarray} where we use $h \simeq 0.67$. 
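The upper end of this window can be reproduced from the thermal gravitino yield alone. The sketch below inverts the bound $m_{3/2} Y_{3/2} \le 0.4 \ {\rm eV}$ for $m_{3/2} = 100$ GeV and $m_{\tilde{g}} = 1$ TeV, neglecting the decay contribution (an assumption for this cross-check):

```python
# rho_c / s_0 in GeV, with rho_c = 1.052e-5 h^2 GeV/cm^3, s_0 = 2.9e3 /cm^3,
# and h = 0.67 as in the text.
RHO_C_OVER_S0 = 1.052e-5 * 0.67 ** 2 / 2.9e3

def m32_y32_thermal(t_rh, m32=100.0, m_gluino=1e3):
    """m_{3/2} * Y_{3/2}^(thermal) in GeV (all masses in GeV)."""
    return (0.26 * RHO_C_OVER_S0 * (t_rh / 1e10)
            * (0.13 * (m32 / 100.0) + (100.0 / m32) * (m_gluino / 1e3) ** 2))

# Invert m Y <= 0.4 eV = 4e-10 GeV for T_RH (the yield is linear in T_RH):
t_max = 0.4e-9 / m32_y32_thermal(1e10) * 1e10
print(t_max)  # close to the quoted upper bound of ~9e9 GeV
```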
We can see that the reference parameters used in Eq.~(\ref{result in hybrid}) are consistent with this bound. Note that for the case of $n=4$, the coupling constant in the superpotential of the AD field cannot be much larger than $10^{-4}$ because of the upper bound on the reheating temperature. For the case of $n=6$, we can naturally explain the observed baryon asymmetry for $\lambda = \mathcal{O}(1)$ with a reheating temperature consistent with the gravitino problem. This is in contrast to the result in the conventional scenario of ADBG [see Eq.~(\ref{Y_b conventional})], where an extremely large value of $\lambda$ is required to be consistent with the lower bound on reheating temperature. In the case of such a large value of $\lambda$, the thermal log potential has to be taken into account even for $n=6$. \section{\label{chaotic}Chaotic inflation} In this section, we consider our scenario of ADBG in a chaotic inflation model with a shift symmetry in supergravity~\cite{Kawasaki:2000yn, Kallosh:2010ug}. The inflaton $I$ has a shift symmetry in the K\"{a}hler potential and the minimal K\"{a}hler potential is written as \begin{eqnarray} K_{\rm inf} = c_0 M_{\text{Pl}} \lmk I + I^* \right) + \frac{1}{2} \lmk I + I^* \right)^2 + \abs{X}^2 - \frac{c_3}{4} \frac{ \abs{X}^4}{M_{\text{Pl}}^2}, \end{eqnarray} where $X$ is a stabiliser field. Note that $c_0$ is an order parameter of $Z_2$ symmetry, under which the fields $I$ and $X$ are odd, so that we take $c_0$ as a free parameter that may be smaller than unity. We include the $\abs{X}^4$ term in the K\"{a}hler potential, which cannot be suppressed by any symmetries. The other higher dimensional terms do not change our discussion qualitatively, so that we neglect them in the following analysis. To realize chaotic inflation in a quadratic potential, the superpotential is assumed to break the shift symmetry such as \begin{eqnarray} W_{\rm inf} = m_{\rm inf} I X, \end{eqnarray} where $m_{\rm inf}$ is inflaton mass. 
The field $I$ has a quadratic potential from the F-term of $X$. Its imaginary component can have a larger VEV than the Planck scale thanks to the shift symmetry in the K\"{a}hler potential and is identified with inflaton. The real component of $I$ obtains a Hubble-induced mass and stays at a VEV of ${\rm Re} [ I ] = - c_0 / 2$~\cite{Kawasaki:2000ws}. When the VEV of the inflaton decreases down to the Planck scale, the real component of $I$ as well as the inflaton start to oscillate around the origin of the potential and inflation ends. The dynamics is illustrated in Fig.~\ref{fig4}, where we numerically solve the equation of motion of the field $I$ and plot its trajectory for the case of $c_0 = 1$. The field $I$ slowly rolls along the imaginary axis during inflation, where ${\rm Re} [ I ] = - c_0 / 2$ is approximately satisfied. After it reaches the red point, inflation ends and it starts to oscillate and rotate around the origin. The Hubble parameter at the end of inflation is given by \begin{eqnarray} H_{\rm osc} \simeq \frac{m_{\rm inf} }{ \sqrt{3}}. \end{eqnarray} \begin{figure}[t] \centering \includegraphics[width=.25\textwidth, bb=0 0 309 541 ]{fig6} \caption{\small Dynamics of the field $I$ in the complex plane in the chaotic inflation model. We set $c_0 = 1$. The field $I$ slowly rolls along the line of ${\rm Re} [ I ] = - c_0 / 2$ during inflation. After it reaches the red point, inflation ends and it starts to oscillate and rotate around the origin. } \label{fig4} \end{figure} The stabiliser field $X$ obtains a Hubble-induced mass via the higher dimensional K\"{a}hler potential such as \begin{eqnarray} V \supset c_3 m^2 \frac{ \abs{I}^2}{M_{\text{Pl}}^2} \abs{X}^2 \simeq 3 c_3 H^2 \abs{X}^2. \label{Hubble induced mass of X} \end{eqnarray} This implies that the dynamics of $X$ is qualitatively different from the case with $c_3 = 0$. 
We should include this term because the higher dimensional K\"{a}hler potential cannot be suppressed by any symmetries. To realize chaotic inflation, we assume $c_3 > 0$. Then the field $X$ stays at the origin. However, when we take into account the backreaction of the AD field, $X$ obtains a small VEV as shown in the next subsection. \subsection{Dynamics of the AD field} Taking into account the AD field, we consider the K\"{a}hler potential of \begin{eqnarray} K = K_{\rm inf} + \abs{\phi}^2 + c_1 \abs{X}^2 \abs{\phi}^2 - \frac{c_2}{2} \lmk I + I^* \right)^2 \abs{\phi}^2. \end{eqnarray} Although we introduce a shift symmetry for the field $I$, the fields $X$ and $I$ basically correspond to the fields $S$ and $\psi$ in Eq.~(\ref{Kahler}), respectively. The AD field acquires the Hubble-induced mass term from the F-term of $X$ during inflation. After inflation ends, the Hubble-induced mass term partially comes from kinetic interactions. In fact, the K\"{a}hler potential of $- c_2 /2 ( I+ I ^*)^2 \abs{\phi}^2$ induces a kinetic interaction of \begin{eqnarray} \mathcal{L} \supset - c_2 \frac{1}{M_{\text{Pl}}^2} \abs{\phi}^2 \abs{\del_\mu I}^2. \end{eqnarray} We obtain the effective Hubble-induced mass term of $(3 c_2/2) H^2 (t) \abs{\phi}^2$ from this kinetic interaction. To sum up, the Hubble-induced mass term is given by \begin{eqnarray} V_H &=& c_H H^2(t) \abs{\phi}^2 \\ c_H &=& \left\{ \begin{array}{ll} - 3 (c_1 -1 ) &~~~~\text{during \ inflation} \\ \frac{3}{2} \lmk c_2 - c_1 + 1 \right) &~~~~\text{after \ inflation}, \end{array} \right. \end{eqnarray} where the terms other than the one proportional to $c_2$ come from the potential energy. Thus we can consider the case in which the coefficient $c_H$ is negative during inflation and is positive after inflation. 
There is also an A-term such as \begin{eqnarray} V_A &=& \frac{1}{n} \lmk n(1-c_1) - 2 \right) \frac{\lambda m_{\rm inf}}{M_{\text{Pl}}^{n-1}} I X (\phi^*)^n + {\rm c.c.} \\ &=& \frac{2}{n} \lmk n(1-c_1) - 2 \right) \frac{\lambda m_{\rm inf}}{M_{\text{Pl}}^{n-1}} \abs{I} \abs{X} \abs{\phi}^n \cos \lmk \theta_I + \theta_X - n \theta_\phi \right) \\ &\simeq& - a H^2(t) \frac{ \abs{X}}{M_{\text{Pl}}} \abs{\phi}^2 \cos \lmk \theta_I + \theta_X - n \theta_\phi \right), \label{A-term in chaotic inflation} \end{eqnarray} where we use Eq.~(\ref{VEV}) and $H (t) \simeq m_{\rm inf} \abs{I} / \sqrt{3} M_{\text{Pl}}$ in the last line and $\theta_I$, $\theta_X$, and $\theta_\phi$ are the complex phases of the fields $I$, $X$, and $\phi$, respectively. The coefficient $a$ is given by \begin{eqnarray} a = 2 \sqrt{\frac{3\abs{c_H}}{n-1}} \lmk c_1 - 1 + \frac{2}{n} \right). \end{eqnarray} The A-term can be regarded as a linear term for $X$. Since the field $X$ has a positive Hubble-induced mass term of Eq.~(\ref{Hubble induced mass of X}), it stays at the following minimum during inflation: \begin{eqnarray} \la \abs{X} \right\rangle \simeq \frac{a}{6 c_3} \frac{1}{M_{\text{Pl}}} \abs{\phi}^2. \end{eqnarray} A linear combination of the phase directions has a mass of order the Hubble parameter due to the A-term, so that it stays at the following minimum during inflation: \begin{eqnarray} \la \theta_X - n \theta_\phi \right\rangle = - \la \theta_I \right\rangle \simeq - {\rm sign} [c_0 ] \frac{\pi}{2}, \end{eqnarray} where we use ${\rm Re} [ I ] \ll {\rm Im} [I]$ during inflation. After inflation ends, the field $I$ starts to rotate in the phase space as shown in Fig.~\ref{fig4} and its phase $\theta_I$ has a nonzero velocity. This implies that a linear combination of the phases $\theta_X$ and $\theta_\phi$ obtains a nonzero velocity to follow its potential minimum. Since the A-term involves the phase direction of the inflaton, the full dynamics is difficult to follow analytically. 
In fact, one may estimate $\epsilon \approx a \abs{X}_{\rm osc}/ M_{\text{Pl}}$ as in the hybrid inflation model considered in the previous section [see Eq.~(\ref{epsilon})], but we find that this estimate is incorrect. We perform numerical calculations to solve the equations of motion for the complex scalar fields $I$, $X$, and $\phi$. We use the full supergravity potential for $I$, $X$, and $\phi$. The kinetic terms of $I$ and $X$ are taken to be canonical for simplicity. We take into account the kinetic interactions for $\phi$ associated with $c_2$, which are needed to change the sign of its Hubble-induced mass term. The parameters are taken in the intervals of $\lambda = 10^{-3} - 10^4$ and $c_0 = 10^{-5} - 1$ for $n=4$ and $6$. The $\mathcal{O}(1)$ coefficients in the K\"{a}hler potential are assumed to be $c_1=2$, $c_2 = 1$, and $c_3=1$. From our numerical calculations, we obtain the following results: \begin{eqnarray} \frac{a^3(t)}{a^3 (t_{\rm osc})} n_{B-L} (t) &\equiv& \epsilon q H_{\rm osc} \phi_{\rm osc}^2 \\ \epsilon &\equiv& \tilde{\epsilon} c_0 \label{result in chaotic inf}\\ \tilde{\epsilon} &\simeq& (0.01-0.1) a, \label{numerical factor in chaotic inf} \end{eqnarray} where the factor of $0.01 - 0.1$ is a numerical uncertainty. One example of our results is shown in Fig.~\ref{fig5}, where we set $\lambda=1$, $n=6$, $c_0 = 0.5$, $c_1 = 2$, $\abs{c_2} = 1$, and $c_3 = -1$. The blue curve represents the time evolution of the $B-L$ number after the end of inflation, while the orange dashed curve corresponds to Eq.~(\ref{numerical factor in chaotic inf}) with a numerical factor of $0.01$. The oscillation behaviour of $B-L$ number density may come from the effect of the oscillating inflaton through supergravity effects and is irrelevant for our discussion.% \footnote{ We have investigated the possibility of generating $B-L$ asymmetry via this effect in Ref.~\cite{Takahashi:2015ula}. 
Note that in this paper we do not introduce a $B-L$ violating operator associated with the right-handed neutrino, so that the net $B-L$ asymmetry vanishes for this effect. Even if we introduce the $B-L$ violating operator, the resulting $B-L$ asymmetry generated from this effect is much smaller than that generated from ADBG. } The $c_0$ dependence in our result of Eq.~(\ref{result in chaotic inf}) comes from the ellipticity of the dynamics of the inflaton in the complex plane. This means that $B-L$ asymmetry cannot be generated for $c_0 = 0$, in which case no CP odd component of the field $I$ is excited. \begin{figure}[t] \centering \includegraphics[width=.45\textwidth, bb=0 0 450 302 ]{fig7} \caption{\small Evolution plot for $B-L$ number density in our scenario of ADBG in the chaotic inflation model. The dashed curve is our prediction of Eq.~(\ref{numerical factor in chaotic inf}) with a numerical factor of $0.01$. We take $\lambda=1$, $n=6$, $c_0 = 0.5$, $c_1 = 2$, $\abs{c_2} = 1$, and $c_3 = -1$. } \label{fig5} \end{figure} One might wonder why there is no factor of $\abs{X}$ in our result of Eq.~(\ref{result in chaotic inf}) in contrast to the one in the case of hybrid inflation [see Eq.~(\ref{epsilon})]. Although we perform numerical calculations with the full supergravity potential with some kinetic interactions to derive the above results, we also check the same parameter dependence in the following toy model: \begin{eqnarray} \ddot{\phi} + 3 H(t) \dot{\phi} + H^2 (t) \phi - n a_H I X ( \phi^{*})^{n-1} &=& 0 \\ \ddot{I} + 3 H(t) \dot{I} + m^2 I - a_H X^* \phi^{n} &=& 0 \\ \ddot{X} + 3 H(t) \dot{X} + m^2 X - a_H I^* \phi^{n} &=& 0, \end{eqnarray} where $H(t) = 2/(3t)$. The initial conditions are taken as \begin{eqnarray} \phi ( t_0) = 1, ~~~~ I (t_0) = 1, ~~~~ X (t_0) = X_0, \\ \dot{\phi} (t_0) = 0, ~~~~ \dot{I} (t_0) = i c_0, ~~~~ \dot{X} (t_0) = 0. 
\end{eqnarray} We confirm that the resulting $B-L$ density is proportional to $c_0$ and $a_H$, and is almost independent of $X_0$. \subsection{Baryon asymmetry} Using the results obtained in the previous subsection, we calculate the baryon-to-entropy ratio as follows: \begin{eqnarray} Y_b &\simeq& \frac{2 \tilde{\epsilon} q}{23} c_0 \frac{T_{\rm RH}}{H_{\rm osc}} \lmk \frac{\phi_{\rm osc}}{M_{\text{Pl}}} \right)^2 \label{result in chaotic 0} \\ &\simeq& \left\{ \begin{array}{ll} 0.005 c_0 \frac{T_{\rm RH} }{\lambda M_{\text{Pl}}} ~~~~\text{for}~~ n = 4 \vspace{0.3cm} \\ 0.006 c_0 \frac{T_{\rm RH}}{\sqrt{\lambda H_{\rm osc} M_{\text{Pl}}}} ~~~~\text{for}~~ n = 6, \end{array} \right. \end{eqnarray} where we assume $\tilde{\epsilon} q = 0.1$ and $\abs{c_H} = 1$ in the last line. For typical parameters, it is given by \begin{eqnarray} Y_b &\simeq& \left\{ \begin{array}{ll} 2 \times 10^{-10} \lmk \frac{c_0 T_{\rm RH}}{10^{7} \ {\rm GeV} } \right) \lmk \frac{\lambda }{10^{-4}} \right)^{-1} ~~~~\text{for}~~ n = 4 \vspace{0.2cm}\\ 1 \times 10^{-10} \lmk \frac{c_0 T_{\rm RH}}{10^{6} \ {\rm GeV} } \right) \lmk \frac{\lambda}{10^{-4} } \right)^{-1/2} ~~~~\text{for}~~ n = 6, \end{array} \right. \label{result in chaotic} \end{eqnarray} where we use $H_{\rm osc} \simeq m_{\rm inf} \approx 10^{13} \ {\rm GeV} $. Thus, we can explain the observed baryon asymmetry of $Y_b^{\rm obs} \simeq 8.7 \times 10^{-11}$~\cite{pdg}. Since the COBE normalisation of the amplitude of density perturbations requires the energy scale of chaotic inflation to be $H_{\rm inf} \simeq 10^{14} \ {\rm GeV} $, the baryonic isocurvature constraint of Eq.~(\ref{isocurvature constraint}) is much more severe than in the hybrid inflation case. It requires the coupling $\lambda$ in the superpotential to be smaller than about $10^{-4}$. This means that the VEV of the AD field is as large as the Planck scale during inflation. 
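The qualitative claim above, that the generated asymmetry is proportional to $c_0$ and vanishes for $c_0 = 0$, can be checked with a quick numerical integration of the toy-model equations of motion. The sketch below is illustrative only: the values of $m$, $a_H$, $X_0$, the time range, the step size, and the fixed choice $n = 4$ are arbitrary assumptions, not the parameters used in the paper. With all couplings real, $c_0 = 0$ keeps every field exactly real, so no charge can appear; a nonzero $c_0$ excites the phase directions.

```python
def rhs(t, y, m=10.0, a_H=0.05):
    """Toy-model equations of motion for n = 4; y = (phi, I, X, dphi, dI, dX)."""
    phi, I, X, dphi, dI, dX = y
    H = 2.0 / (3.0 * t)                 # matter-dominated era: H(t) = 2/(3t)
    pc = phi.conjugate()
    pc3 = pc * pc * pc                  # (phi*)^(n-1)
    p4 = phi * phi * phi * phi          # phi^n
    return (dphi, dI, dX,
            -3*H*dphi - H*H*phi + 4*a_H*I*X*pc3,
            -3*H*dI - m*m*I + a_H*X.conjugate()*p4,
            -3*H*dX - m*m*X + a_H*I.conjugate()*p4)

def axpy(y, k, h):
    """Return y + h*k for six-component states."""
    return tuple(yi + h * ki for yi, ki in zip(y, k))

def peak_comoving_asymmetry(c0, X0=0.1, t0=1.0, t1=15.0, dt=2e-3):
    """Fixed-step RK4 integration; returns the peak over time of |a^3 n_phi|,
    with a^3 ~ t^2 and n_phi = i(phi dphi* - phi* dphi) = 2 Im(phi* dphi)."""
    y = (1.0 + 0j, 1.0 + 0j, X0 + 0j, 0j, 1j * c0, 0j)
    t, peak = t0, 0.0
    while t < t1:
        k1 = rhs(t, y)
        k2 = rhs(t + dt/2, axpy(y, k1, dt/2))
        k3 = rhs(t + dt/2, axpy(y, k2, dt/2))
        k4 = rhs(t + dt, axpy(y, k3, dt))
        y = tuple(yi + dt/6*(a + 2*b + 2*c + d)
                  for yi, a, b, c, d in zip(y, k1, k2, k3, k4))
        t += dt
        n_phi = 2.0 * (y[0].conjugate() * y[3]).imag
        peak = max(peak, abs(t*t * n_phi))
    return peak

# c0 = 0 keeps every field exactly real, so no asymmetry is generated;
# a nonzero c0 excites the phase directions and a net charge appears.
print(peak_comoving_asymmetry(0.0), peak_comoving_asymmetry(0.2))
```

The first call returns exactly zero (all arithmetic stays real), while the second returns a nonzero comoving charge, in line with the $\epsilon \propto c_0$ scaling found numerically.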
In this case, the backreaction of the AD field on inflaton dynamics might be relevant. As a result, the tensor-to-scalar ratio can be consistent with the present constraint within $2 \sigma$~\cite{Yamada:2015rza}. Note that the number density of the AD field decreases with time as $\propto a^{-3}$ due to the expansion of the Universe. This means that its energy density decreases as $a^{-9/2}$ because its effective mass is of order the Hubble parameter, which decreases as $a^{-3/2}$. Thus its energy density never dominates that of the Universe and the result of Eq.~(\ref{result in chaotic 0}) is applicable even for the case of $\phi_{\rm osc} \simeq M_{\text{Pl}}$. \subsection{Reheating temperature} The inflaton can decay into the MSSM particles via supergravity effects. Its decay rate is calculated in Ref.~\cite{Nakamura:2006uc} and is given as \begin{eqnarray} \Gamma_{\rm inf}^{\rm (SUGRA)} = \frac{3 c_0^2}{256 \pi^3} \abs{y_t}^2 \frac{m_{\rm inf}^3}{M_{\text{Pl}}^2}. \label{inflaton decay rate in chaotic inf} \end{eqnarray} This implies that the reheating temperature is given by \begin{eqnarray} T_{\rm RH} \simeq 2 \times 10^8 \ {\rm GeV} \, c_0 \abs{y_t} \lmk \frac{m_{\rm inf}}{10^{13} \ {\rm GeV} } \right)^{3/2}. \end{eqnarray} Together with Eq.~(\ref{result in chaotic}), we find that the observed baryon asymmetry can be explained when $c_0 = \mathcal{O} (0.1)$. Note that there may be a renormalizable coupling such as \begin{eqnarray} W \supset y X H_u H_d. \end{eqnarray} If $c_0$ is sufficiently small, the decay rate is determined by this term and is given by \begin{eqnarray} T_{\rm RH} \simeq 6 \times 10^8 \ {\rm GeV} \lmk \frac{ y}{10^{-6}} \right) \lmk \frac{m_{\rm inf}}{10^{13} \ {\rm GeV} } \right)^{1/2}. \end{eqnarray} However, the coupling constant $y$ should be suppressed by a factor of $m_{\rm inf} / M_{\text{Pl}}$ so as not to affect the inflaton potential, so that the reheating temperature is at most $10^{9} \ {\rm GeV} $~\cite{Nakayama:2013txa}. 
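As a rough numerical cross-check of the estimates in this section, the benchmark values of $Y_b$ quoted in Eq.~(\ref{result in chaotic}) and the reheating temperature following from the decay rate of Eq.~(\ref{inflaton decay rate in chaotic inf}) can be evaluated directly. The sketch below uses the standard sudden-decay relation $T_{\rm RH} = (90/\pi^2 g_*)^{1/4}\sqrt{\Gamma M_{\text{Pl}}}$; the reduced Planck mass $M_{\rm Pl} = 2.4\times 10^{18}$ GeV and $g_* = 228.75$ (MSSM) are assumed inputs, not stated in the text.

```python
from math import pi, sqrt

M_PL = 2.4e18     # reduced Planck mass [GeV] (assumed input)
G_STAR = 228.75   # MSSM relativistic degrees of freedom (assumed input)

def Yb_n4(c0_TRH, lam):
    """Y_b ~ 0.005 * c0 * T_RH / (lambda * M_Pl) for n = 4."""
    return 0.005 * c0_TRH / (lam * M_PL)

def Yb_n6(c0_TRH, lam, H_osc=1e13):
    """Y_b ~ 0.006 * c0 * T_RH / sqrt(lambda * H_osc * M_Pl) for n = 6."""
    return 0.006 * c0_TRH / sqrt(lam * H_osc * M_PL)

def T_RH_sugra(m_inf=1e13, c0=1.0, y_t=1.0):
    """Sudden-decay reheating temperature from the supergravity decay rate."""
    gamma = 3.0 * c0**2 / (256.0 * pi**3) * abs(y_t)**2 * m_inf**3 / M_PL**2
    return (90.0 / (pi**2 * G_STAR))**0.25 * sqrt(gamma * M_PL)

print(Yb_n4(1e7, 1e-4))    # ~ 2e-10
print(Yb_n6(1e6, 1e-4))    # ~ 1e-10
print(T_RH_sugra())        # ~ 2e8 GeV
```

All three numbers land on the benchmark values quoted in the text for $c_0 T_{\rm RH} = 10^7$ GeV ($n=4$), $c_0 T_{\rm RH} = 10^6$ GeV ($n=6$), and $m_{\rm inf} = 10^{13}$ GeV.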
In order to kick the phase direction and generate $B-L$ asymmetry, we need a nonzero value of the $Z_2$ breaking parameter $c_0$. However, the $Z_2$ breaking term makes the inflaton decay into gravitinos efficiently via supergravity effects and its decay rate is of the same order as that of Eq.~(\ref{inflaton decay rate in chaotic inf}). Therefore, there is a gravitino problem from inflaton decay. We can avoid the problem by assuming that the gravitino is sufficiently heavy ($m_{3/2} \mathop{}_{\textstyle \sim}^{\textstyle >} 100 \ {\rm TeV} $) so that it decays before the BBN epoch, and that R-parity is violated so that the LSP does not overclose the Universe. Or, we can assume that the gravitino is sufficiently light ($m_{3/2} \mathop{}_{\textstyle \sim}^{\textstyle <} 2 \ {\rm keV} $), in which case gravitinos do not overclose the Universe. The former possibility might be well motivated partly because the observed $125 \ {\rm GeV} $ Higgs mass favours a heavy squark mass of order $100 \ {\rm TeV} $ for a small $\tan \beta$~\cite{ArkaniHamed:2004fb, Wells:2004di, Hall:2011jd, Ibe:2011aa}. \section{\label{conclusion}Discussion and conclusions} We have investigated a new scenario in which the Affleck-Dine mechanism works just after the end of inflation. The AD field stays at a large VEV due to a negative Hubble-induced mass term during inflation and then starts to oscillate around the origin due to a positive one after inflation. At the same time, its phase direction is kicked by an A-term and $B-L$ asymmetry is generated. Since its dynamics is determined by Hubble-induced terms, the resulting $B-L$ asymmetry is independent of parameters in low-energy SUSY models. This fact makes our scenario very simple. In particular, Q-balls, which sometimes form in the conventional scenario of ADBG, do not form in our scenario. The A-term depends on the inflation model, and so does the resulting $B-L$ asymmetry in our scenario. 
We have investigated the scenario and calculated the produced amount of $B-L$ asymmetry in F-term hybrid and chaotic inflation models in supergravity. We have found that our scenario requires a higher reheating temperature than the one required in the conventional scenario. This implies that ADBG works in larger parameter spaces than expected in the literature. In particular, in the F-term hybrid inflation model, the required reheating temperature is naturally consistent with the gravitino overproduction bounds. The required reheating temperature is not unnaturally small even if the VEV of the AD field is so large that its backreaction to inflaton dynamics becomes relevant. Since the backreaction can make the spectral index and tensor-to-scalar ratio consistent with observations~\cite{Yamada:2015rza}, this is another advantage of this scenario. \vspace{1cm} \section*{Acknowledgments} M.Y. thanks W. Buchm\"{u}ller for kind hospitality at DESY. This work is supported by World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan, the Program for the Leading Graduate Schools, MEXT, Japan, and the JSPS Research Fellowships for Young Scientists, No. 25.8715. \vspace{1cm}
// ----------------------------------------------------------------------------------
//
// Copyright Microsoft Corporation
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
// http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// ----------------------------------------------------------------------------------

using System;
using Microsoft.Azure.Commands.Common.Authentication;
using Microsoft.Azure.Commands.Common.Authentication.Abstractions;
using Microsoft.Azure.Commands.ResourceManager.Common;
using Microsoft.Azure.Management.Internal.Resources;
using Microsoft.Azure.Management.Internal.Resources.Models;
using Microsoft.Azure.Management.ManagementGroups;
using Microsoft.Rest.Azure;
using Microsoft.WindowsAzure.Commands.Utilities.Common;

namespace Microsoft.Azure.Commands.Resources.ManagementGroups.Common
{
    /// <summary>
    /// Base class of Azure Management Groups Cmdlet.
    /// </summary>
    public abstract class AzureManagementGroupsCmdletBase : AzureRMCmdlet
    {
        private IManagementGroupsAPIClient _managementGroupsApiClient;

        /// <summary>
        /// Gets or sets the Groups RP client.
        /// </summary>
        public IManagementGroupsAPIClient ManagementGroupsApiClient
        {
            get
            {
                return _managementGroupsApiClient ??
                       (_managementGroupsApiClient =
                           AzureSession.Instance.ClientFactory.CreateArmClient<ManagementGroupsAPIClient>(
                               DefaultProfile.DefaultContext, AzureEnvironment.Endpoint.ResourceManager));
            }
            set { _managementGroupsApiClient = value; }
        }

        public void PreregisterSubscription(string subscriptionId)
        {
            IAzureContext context;
            if (TryGetDefaultContext(out context) && context.Account != null && context.Subscription != null)
            {
                if (subscriptionId == context.Subscription.Id)
                {
                    return;
                }

                short RetryCount = 10;
                string providerName = "Microsoft.Management";
                try
                {
                    var rmclient = new ResourceManagementClient(
                        context.Environment.GetEndpointAsUri(AzureEnvironment.Endpoint.ResourceManager),
                        AzureSession.Instance.AuthenticationFactory.GetServiceClientCredentials(context, AzureEnvironment.Endpoint.ResourceManager))
                    {
                        SubscriptionId = subscriptionId
                    };

                    var provider = rmclient.Providers.Get(providerName);
                    if (provider.RegistrationState != RegistrationState.Registered)
                    {
                        short retryCount = 0;
                        do
                        {
                            if (retryCount++ > RetryCount)
                            {
                                throw new TimeoutException();
                            }

                            provider = rmclient.Providers.Register(providerName);
                            TestMockSupport.Delay(2000);
                        } while (provider.RegistrationState != RegistrationState.Registered);
                    }
                }
                catch (Exception e)
                {
                    if (e.Message?.IndexOf("does not have authorization") >= 0 &&
                        e.Message?.IndexOf("register/action", StringComparison.InvariantCultureIgnoreCase) >= 0)
                    {
                        throw new CloudException(e.Message);
                    }
                }
            }
        }

        public void PreregisterSubscription()
        {
            IAzureContext context;
            if (TryGetDefaultContext(out context) && context.Account != null && context.Subscription != null)
            {
                short RetryCount = 10;
                string providerName = "Microsoft.Management";
                try
                {
                    var rmclient = new ResourceManagementClient(
                        context.Environment.GetEndpointAsUri(AzureEnvironment.Endpoint.ResourceManager),
                        AzureSession.Instance.AuthenticationFactory.GetServiceClientCredentials(context, AzureEnvironment.Endpoint.ResourceManager))
                    {
                        SubscriptionId = context.Subscription.Id
                    };

                    var provider = rmclient.Providers.Get(providerName);
                    if (provider.RegistrationState != RegistrationState.Registered)
                    {
                        short retryCount = 0;
                        do
                        {
                            if (retryCount++ > RetryCount)
                            {
                                throw new TimeoutException();
                            }

                            provider = rmclient.Providers.Register(providerName);
                            TestMockSupport.Delay(2000);
                        } while (provider.RegistrationState != RegistrationState.Registered);
                    }
                }
                catch (Exception e)
                {
                    if (e.Message?.IndexOf("does not have authorization") >= 0 &&
                        e.Message?.IndexOf("register/action", StringComparison.InvariantCultureIgnoreCase) >= 0)
                    {
                        throw new CloudException(e.Message);
                    }
                }
            }
        }
    }
}
Anti-Israel Bias at UN Human Rights Council

[Photo caption: Overview of the U.N. Human Rights Council during the emergency debate on human rights and the humanitarian situation in Syria, at the United Nations in Geneva.]

At the 34th session of the UN Human Rights Council, the United States refused to participate in the debate session devoted to "human rights situation in Palestine and other occupied Arab territories." The discussion took place in accordance with Agenda Item 7, which makes Israel -- and only Israel -- a permanent target of examination and discussion at each of the three yearly meetings of the Human Rights Council. Traditionally, countries with deplorable human rights records have blasted Israel during these sessions. In a written statement, U.S. Permanent Representative to the UN Ambassador Nikki Haley criticized the fact that Israel is the only country permanently on the Human Rights Council's agenda. "It is not Syria, where the regime has systematically slaughtered and tortured its own people. It is not Iran, where public hangings are a regular occurrence. It is not North Korea, where the regime uses forced labor camps to crush its people into submission. It is Israel." Israel is a strong and long-standing democracy in the Middle East. Ambassador Haley noted that the "so-called 'Agenda Item 7' discredits the standing of the only UN body designed to address the state of global human rights by allowing actions to distract from their own abuses back home by churning out anti-Israel propaganda." In a separate statement, Acting State Department Spokesperson Mark Toner said that the "continued existence of this agenda item is among the largest threats to the credibility of the [Human Rights] Council." He called the U.S. decision not to attend the Council's Item 7 General Debate session, "an expression of our deeply-held conviction that this bias must be addressed in order for the Council to realize its legitimate purpose." Mr. 
Toner emphasized that it "does not serve the interests of the Council to single out one country in an unbalanced manner," and that the U.S. "will vote against every resolution put forth under this agenda item and is encouraging other countries to do the same." "The U.S.," said Spokesperson Toner, "is dedicated to the pursuit of respect for international human rights by all countries in the world, and we call on all member states and international partners who are committed to human rights to work with us to pursue much needed reforms in the UN Human Rights Council."
WC '96 P1 - Fishes in a Tank
https://dmoj.ca/problem/wc96p1

Points: 5
Time limit: 1.0s
Memory limit: 16M

Problem type
Woburn Challenge 1996

In a cubic fish tank some fishes are swimming around. Little Billy-Bob watched the fishes move around and around until he became dizzy. At some time during the day the fishes were all stacked together in one column, at other times they were scattered randomly in the tank and at still other times they were in an eclipsed configuration. It was this configuration that Billy-Bob enjoyed watching the most.

In this eclipsed configuration, the fishes were grouped together in vertical columns of varying length, and in each group the fishes are one on top of another. Never were there two or more groups in the same vertical column, though. Thus the tank looked like it had chains of fish hanging vertically.

Little Billy-Bob would try to figure out which level (or depth) in the tank had the most number of fishes when it was in this configuration. But he had to do it fast before the fishes moved past one another and ruined all his fun.

You can help Billy-Bob by writing a program that does the work for him. The fish tank will be an N x N x N cube and hence in the eclipsed configuration it will have at most N^2 groups as it has only N^2 separate vertical columns. You are to find the level (or depth) at which there is the largest number of fishes. The top level of the tank is level 1, and the bottom level of the tank is level N. If multiple levels each have the most number of fish, give the topmost of these levels.

For example, consider a 3 x 3 x 3 cube with a group of 3 fish on the center vertical column, and two single fish on opposite corners. If we label the columns with co-ordinates from (1, 1) to (3, 3) this could be described as:

• In column (2, 2): A stack of fish from level 1 to level 3
• In column (3, 3): A stack of fish from level 1 to level 1 (i.e. only in level 1)
• In column (1, 1): A stack of fish from level 3 to level 3 (i.e. only in level 3)

In this case there are two fish each at levels 1 and 3, so we give the topmost (1).

Input Specification

In the data file there are 5 data sets. The first line of each data set is N, the size of the cubic fishtank to be considered. The next lines each contain four integers x, y, t, b, which indicate a column of fish at (x, y) whose top fish is in level t and whose bottom fish is in level b. (And of course there are fish at each level from t to b, since Billy only likes these particular fish-stack configurations as mentioned above.) This set of fish-columns is terminated by a line containing all zeroes.

Output Specification

Give the topmost level of all those containing the biggest number of fish.

Sample Input

3
1 1 3 3
2 2 1 3
3 3 1 1
0 0 0 0

(and 4 more data sets)

Sample Output

1
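A sketch of one way to solve this problem: a per-level difference array lets each fish column be processed in O(1), with a single prefix-sum pass to find the topmost fullest level. The I/O loop over the 5 data sets is omitted; `best_level` handles one parsed data set.

```python
def best_level(n, columns):
    """columns: iterable of (x, y, top, bottom) fish stacks in an n x n x n tank.
    Returns the topmost level containing the most fish."""
    diff = [0] * (n + 2)           # difference array over levels 1..n
    for _x, _y, top, bottom in columns:
        diff[top] += 1             # a stack adds one fish to every level top..bottom
        diff[bottom + 1] -= 1
    best, best_count, running = 1, -1, 0
    for level in range(1, n + 1):
        running += diff[level]
        if running > best_count:   # strict '>' keeps the topmost level on ties
            best, best_count = level, running
    return best

# Sample data set: levels 1 and 3 tie with two fish each, so the answer is 1.
print(best_level(3, [(1, 1, 3, 3), (2, 2, 1, 3), (3, 3, 1, 1)]))
```

On the sample data set this prints `1`, matching the expected output.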
Enjoy safer, quicker access to your funds. Have your net pay electronically deposited. Safely and securely transfer money from one bank to another, locally or internationally. Protect yourself from declined transactions. It's free to sign up. Easily transfer money, confirm balances, make a loan payment and more with your telephone.
If a set has 20 elements, how many ordered pairs having distinct elements can be formed?
A. 150  B. 170  C. 190  D. 210

(https://learn.careers360.com/engineering/question-if-a-set-has-20-elementshow-many-ordered-pairs-having-distinct-elements-can-be-formed-a-150-b-170-c-190-d-210/)

e.g. (a, b), $a \in A$ and $b \in B$
The number of ordered pairs = $^{20}C_{2} = 190$
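A quick sanity check of the arithmetic. Note that the posted key counts the pairs as unordered selections, $^{20}C_2 = 190$; a strictly ordered count would instead give $20 \times 19 = 380$, which is not among the options.

```python
from math import comb

# Selections of two distinct elements from a 20-element set (the posted key):
print(comb(20, 2))   # 190, option C

# If the pairs were counted as ordered, the count would instead be:
print(20 * 19)       # 380, not among the options
```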
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:id="@+id/root_view"
    android:orientation="vertical">

    <LinearLayout
        android:id="@+id/top_lay"
        android:layout_width="match_parent"
        android:layout_height="@dimen/y150"
        android:orientation="horizontal"
        android:paddingLeft="@dimen/width_48"
        android:paddingRight="@dimen/width_48">

        <com.xiaoshangxing.utils.customView.CircleImage
            android:id="@+id/head_image"
            android:layout_width="@dimen/x100"
            android:layout_height="@dimen/x100"
            android:layout_gravity="center_vertical" />

        <TextView
            android:id="@+id/name"
            style="@style/black_16sp"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_gravity="center_vertical"
            android:paddingLeft="@dimen/x24" />

        <TextView
            android:id="@+id/ex"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_gravity="center_vertical"
            android:paddingLeft="@dimen/x24"
            android:singleLine="true"
            android:textColor="@color/g0"
            android:textSize="13sp" />
    </LinearLayout>

    <ImageView
        android:id="@+id/divider2"
        android:layout_width="match_parent"
        android:layout_height="1px"
        android:layout_marginLeft="@dimen/width_48"
        android:background="@color/g1" />
</LinearLayout>
The whites had come, cadged a little land to build their forts, and then because of them nothing was ever the same again. They had brought with them things never before heard of here, and people had fought over them, nation against nation, brother against brother. And now the whites' ambition knew no bounds. Where would it end? (p. 249)

Segu is a story of first contact. Or, to put it more accurately, it is a story of many first contacts. Beginning in 1797, and spanning the first half of the 19th century, it tells of the last days of the Bambara Empire (that spanned present-day Mali), and of a West African society disintegrating under the twin forces of Islam and colonialism. From the time that Tiekoro, the eldest son of Dousika and Nya Traore, announces his conversion to the new religion of Islam, a curse hangs over the Traore family, intent on claiming each one of their sons. From Timbucktoo to the Middle Passage, from London to plantation Brazil, and then back to the capital city of Segu, it haunts them – and through them, the Bambara Empire itself. As Islam becomes more dominant in West Africa, transforming itself into a militant and exclusionary religion, as the slave trade begins to spread its tentacles inwards from the Gold Coast, and as the French and the British begin to make the first moves in their eventual "scramble for Africa" (the book opens with Mungo Park's arrival at the capital city), Segu must make the agonising choice between destruction as the price of maintaining the Bambara way of life, and assimilation or dissolution for survival. Written in 1985, Segu predates by quite a few years the more recent expositions of this genre. 
Chinua Achebe's Things Fall Apart remains, of course, perhaps the most well-known novel about the disruption of social structures under the pressures of colonialism; and Tayeb Salih's Season of Migration to the North continues to be the classic exploration of the ambiguous personal relationship between coloniser and colonised. The influences of these works are evident on Conde (in fact, a visit to London by one of her characters bears strong resemblances to Season of Migration to the North), but perhaps what is more interesting is how Segu has strong echoes in contemporary work. Its skilful use of the family saga to tell the story of a nation will put readers in mind of Jennifer Makumbi's Kintu (2014, Uganda); and its sensitive portrayal of a society that struggles to preserve itself, but knows only too well that its destruction is inevitable, foretells Naivo's Beyond the Rice Fields (2017, Madagascar) and Patrice Nganang's Mount Pleasant (2011, Cameroon) (indeed, all three novels are set in the first half of the 19th century, their events separated by a few years). More than these novels, however, Segu bears the imprints of an epic: it is concerned not merely with the breakdown of Bambara society, but with the transformation of a world, and it is that sense that gives Conde's prose an almost transcendent quality, at times:

The child pondered. "How many languages shall I be able to speak?" Naba stroked the little head, with its knobby curls. "I hope you'll only speak the languages of your heart," he said. (p. 259)

Segu is also a story about motherhood. While the role of women in a patriarchal society remains circumscribed and limited, by the end of the book, Nya – the chief wife of the deceased Dousika, and (therefore) mother of the Traore children, who are doomed to wander the corners of the earth – has emerged as one of its most important characters. 
Her character is illumined by her bearing the loss of her children (and the eventual return of some), losses that finally become unbearable, and trigger some of the most memorable lines in the book: And so Nya was brought low, like a tree eaten away from within by termites … 'Nya, daughter of Fale,' they repeated, 'your ancestors bent the world like a bow and unbent it again like a straight road. Nya, stand erect again too.' (p. 375) These lines are also representative of a particular feature of Segu, which stood out for me: language that is so rich in metaphor that it often engages multiple senses at the same time. "Her heart was bitter," writes Conde at one point, "bitter as cahuchu, the wood that weeps, which the seringueiros, the rubber gatherers, stabbed with their knives in the forest." (p. 204) The smell of bitterness, the taste of tears, the tactile sense of stabbing, all come together within an image of men plunging their knives into a tree, combining to create an immensely powerful sensory experience. And so it is elsewhere. "[Is] the spirit … as bitter as the bark of mahogany?" (p. 306), it is asked on one occasion. "Why was life this swamp into which you were drawn in spite of yourself, to emerge defiled, your hands dripping?" (p. 377), it is asked on another. "There are times when a man's life disgusts him, staring him in the face with its pitted skin and its bad teeth in their rotten gums" (p. 396), it is observed on a third. The sentiments that these lines express are simple enough, but with Conde's use of language, they hit the reader with all the force of physical sensation. That is not to say, of course, that reading Segu is like an extended sensory excursion. The novel carries depth and an emotional charge, and it is suffused with a sense of tragic yet inevitable loss.
This comes through in specific lines that are delivered, rapier-like, cutting into the recesses of the soul ("He suddenly felt sorry for her, and his compassion created the illusion of desire" (p. 292)), but more than that, it is present in the atmosphere of the novel. In an essay called 'The Futurists', the Russian literary critic Viktor Shklovsky wrote of the poet Vladimir Mayakovsky, on the eve of the Russian Revolution, thus: I remember walking with Mayakovsky, whom even now in my mind I must call Vladimir Vladimirovich, and not Volodya, along the paved streets of Petersburg, the sun-speckled avenues of the Summer Garden, the Neva embankments, the Zhukovskaya Street, where the woman lived whom the poet loved. Bits of landscape melted into – burned themselves into – Mayakovsky's poems. The poet was quiet, sad, ironic, calm. He was sure – he knew – that the revolution would happen soon. He looked at the things around him the way one does when the thing is about to disappear. (The Shklovsky Reader, p. 236). It is a bit like this. Bits of the landscape melt into – burn themselves into – Segu. And Segu is all about looking at the things around one the way one does when they are about to disappear. And if the disappearance is of the Bambaras, of Segu itself, then it is presaged in the lives of the Traore sons, who are driven from wherever they go, unable to find or make a home anywhere in the world, until they are driven back to a Segu that itself faces the threat of extinction. Segu is a novel about leave-taking – whether one would or no – and all that that brings with it. But for all that, it is a novel that resists saying goodbye, even until the last page – and, after we have put it down, if we want to know how things really played out, we have to consult history books. Conde and Segu, however, refuse to deliver that, perhaps to affirm that as long as we have literature, there will always be another way to imagine an ending.
The Geography Curriculum Area at Sarah Bonnell School is a vibrant and active learning community. We believe Geography to be an exciting and dynamic subject that affects us all in this rapidly changing world. We strive to ensure every student is given every opportunity to realise their potential. The Geography area is well-established and very successful, with last year's GCSE results continuing to significantly exceed national averages at 80% A*-C and 31% A*-A. All students at Key Stage 3 study Geography and it is a very popular option for GCSE, with a KS4 cohort of more than 220 students. At Key Stage 3, Geography at Sarah Bonnell covers all aspects of the National Curriculum and explores links between the physical and human world in which we live. The team works hard to ensure lessons are engaging, challenging, current and relevant to the girls. We aim to promote curious and inspired minds. To foster independent learning, the team encourages the students to get involved in project work and use the ICT resources available to access the departmental website and MLE page. There is a strong focus on fully preparing the girls for GCSEs at Key Stage 3, by introducing GCSE topics, GCSE style exam questions and GCSE assessments. In Year 10, the Geography Curriculum Area uses the new AQA GCSE specification, which includes two elements of fieldwork, and topics such as the Living World, Economic World and Urban Challenges. In Year 11, the Geography Curriculum Area uses Edexcel Specification A. The units taught build on existing Key Stage 3 learning whilst also introducing new geographical knowledge, understanding and skills. The topics studied are River Landscapes; Coastal Landscapes; Population; Tourism; Tectonics; Wasteful World; Economic Change; Settlement Change; Challenges to the Planet; Skills and a Controlled Assessment.
Q: JFileChooser icons on 2K Displays

Any idea how to make the Java Swing file chooser look better on 2K displays where the Windows font scaling is > 125%? I am using ordinary code such as:

    JFileChooser fc = new JFileChooser();
    if (settings.currentdir != null)
        fc.setCurrentDirectory(new File(settings.currentdir));
    int returnVal = fc.showOpenDialog((Window) holder);
    if (returnVal == JFileChooser.APPROVE_OPTION) {

But the file chooser is only displaying tiny icons for the listed files and directories. I am using JDK 8. What is going wrong?

P.S.: The scope of the question is only Windows, not Unixes. On Windows, the two default L&Fs scale the font, but they don't scale icons. The application has to do that, since it might use different bitmap resources at higher scales. It seems that JFileChooser is not coded this way, but it might be that JFileChooser can be instructed to do so. I don't see that the other question addresses icon size and the JFileChooser on Windows: How to set the DPI of Java Swing apps on Windows/Linux? That question deals with font size, which is not an issue for the JFileChooser on Windows with one of the two Windows L&Fs.

A: Just a quick idea while I came across this thread. You can try to deliver your own icon set:

    new JFileChooser().setFileView(new FileView() {
        @Override
        public Icon getIcon(File f) {
            // StringUtils.substringAfterLast takes the string to cut as its first argument
            return fancy2kIconForExtension(StringUtils.substringAfterLast(f.getName(), "."));
        }
    });

Be careful to load your icons from a cache, as this method is called very often from inside JFileChooser; otherwise you end up reloading the icon all the time.

A: I very recently ran into the same problem. The only workaround is not to use Java's built-in ImageIcon class but to write one yourself. The class below takes the provided image, scales it to fit the current component size and paints it.
I tried to make it as simple as possible and as close to the original class as I was able, but it's not perfect and needs improvement, especially in the component-icon alignment:

    import java.awt.Component;
    import java.awt.Graphics;
    import java.awt.Graphics2D;
    import java.awt.RenderingHints;
    import java.awt.geom.AffineTransform;
    import javax.swing.AbstractButton;
    import javax.swing.ImageIcon;

    /**
     * @author Rastislav
     */
    public class ScaleToFitAndAntialiasIcon extends ImageIcon {

        private final ImageIcon icon;

        public ScaleToFitAndAntialiasIcon(ImageIcon icon) {
            this.icon = icon;
        }

        @Override
        public int getIconWidth() {
            return icon.getIconWidth();
        }

        @Override
        public int getIconHeight() {
            return icon.getIconHeight();
        }

        @Override
        public void paintIcon(Component c, Graphics g, int x, int y) {
            Graphics2D g2d = (Graphics2D) g.create();
            AffineTransform at = g2d.getTransform();
            // scale the icon so that its height fits the component height
            double scaleToFit = (double) c.getHeight() / (double) icon.getIconHeight();
            if ((int) (icon.getIconHeight() * scaleToFit) == c.getHeight()) {
                scaleToFit = ((double) c.getHeight() / (double) icon.getIconHeight()) - 0.1;
            }
            at.concatenate(AffineTransform.getScaleInstance(scaleToFit, scaleToFit));
            g2d.setTransform(at);
            // component-icon alignment still needs improvement:
            // int lineupMinus = (int) (icon.getIconWidth() * ((double) c.getHeight() / icon.getIconHeight()));
            // int lineup = (int) (icon.getIconWidth() * scaleToFit);
            g2d.setRenderingHint(RenderingHints.KEY_TEXT_ANTIALIASING, RenderingHints.VALUE_TEXT_ANTIALIAS_ON);
            g2d.setRenderingHint(RenderingHints.KEY_INTERPOLATION, RenderingHints.VALUE_INTERPOLATION_BICUBIC);
            icon.paintIcon(c, g2d, x, 4);
            if (c instanceof AbstractButton) {
                // the original used instanceof pattern matching (Java 16+); a cast keeps this JDK 8 compatible
                ((AbstractButton) c).setIconTextGap(-icon.getIconWidth() / 2);
            }
            g2d.dispose();
        }
    }
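Building on the caching advice in the first answer, a minimal sketch of such a cached FileView might look as follows (the class name, the cache keyed by file extension, and the blank placeholder icon are my own illustrative choices; a real implementation would load a per-extension bitmap sized for the current display scale):

```java
import java.awt.image.BufferedImage;
import java.io.File;
import java.util.HashMap;
import java.util.Map;
import javax.swing.Icon;
import javax.swing.ImageIcon;
import javax.swing.filechooser.FileView;

// Caches one icon per file extension so that getIcon(), which JFileChooser
// calls for every visible row, does not rebuild or reload icons repeatedly.
public class CachedIconFileView extends FileView {

    private final Map<String, Icon> cache = new HashMap<>();
    private final int size; // icon edge length in pixels, e.g. 32 at 200% scaling

    public CachedIconFileView(int size) {
        this.size = size;
    }

    @Override
    public Icon getIcon(File f) {
        // computeIfAbsent builds the icon at most once per extension
        return cache.computeIfAbsent(extensionOf(f), this::createIconFor);
    }

    private static String extensionOf(File f) {
        String name = f.getName();
        int dot = name.lastIndexOf('.');
        return dot < 0 ? "" : name.substring(dot + 1).toLowerCase();
    }

    // Placeholder factory: returns a blank ARGB image of the requested size.
    // Replace this with loading a real, DPI-appropriate bitmap per extension.
    private Icon createIconFor(String ext) {
        return new ImageIcon(new BufferedImage(size, size, BufferedImage.TYPE_INT_ARGB));
    }
}
```

Installed via fc.setFileView(new CachedIconFileView(32)), this keeps the chooser responsive while letting you supply icons of whatever pixel size matches the display scaling.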
Flying Eagle Country Retreat
Our records show that this inn is closed.

Belle Casa
Our records show that this inn is closed.

125 Verde Place
Our records show that this inn is closed.

Clarkdale bed and breakfast travel guide for romantic, historic and adventure B&Bs. Browse through the iLoveInns.com database of Clarkdale, Arizona bed and breakfasts and country inns to find detailed listings that include room rates, special deals and area activities. You can click on the 'check rates and availability' button to contact the innkeeper.
Q: Build Android APK programmatically

I need to build a web environment which creates signed Android .apk files from given Android application source code. How can I achieve this? The web environment will be running on a Linux machine. I was thinking of a Linux shell script which builds the signed .apk file. However, I've read some information about Apache Ant, but I don't understand exactly what its purpose is and whether it could do the trick.
\section{Historical background and present motivations for holography} No other theory in the history of physics has been able to cover such a wide range of phenomena with impressive precision as QFT. However its amazing predictive power stands in a worrisome contrast to its weak ontological status. In fact QFT is the only theory of immense epistemic strength which, even after more than 80 years, has remained on shaky mathematical and conceptual grounds. Unlike any other area of physics, including QM, there are simply no interesting mathematically controllable interacting models which would show that the underlying principles remain free of internal contradictions in the presence of interactions. The faith in e.g. the Standard Model is based primarily on its perturbative descriptive power; outside the perturbative domain there are more doubts than supporting arguments. The suspicion that this state of affairs may be related to the conceptual and mathematical weakness of the method of Lagrangian quantization, rather than to a shortcoming indicating an inconsistency of the underlying principles in the presence of interactions, can be traced back to its discoverer Pascual Jordan. It certainly was behind all later attempts of e.g. Arthur Wightman and Rudolf Haag to find a more autonomous setting away from the quantization parallelism with classical theories, which culminated in Wightman's axiomatic setting in terms of vacuum correlation functions and the Haag-Kastler theory of nets of operator algebras. The distance of such conceptual improvements to the applied world of calculations has unfortunately persisted. Nowhere is the contrast between computational triumph and conceptual misery more visible than in renormalized perturbation theory, which has remained our only means to explore the Standard Model.
Most particle physicists have a working knowledge of perturbation theory, and at least some of them took notice of the fact that the renormalized perturbative series can be shown to diverge and that in certain cases these divergent series are Borel resummable. Here I will add some more comments without going into details. The Borel re-summability property unfortunately does not lead to an existence proof; the correct mathematical statement in this situation is that if the existence can be established\footnote{The existence for models with a finite wave-function renormalization constant has been established in the early 60s and this situation had not changed until recently. The old results only include superrenormalizable models whereas the new criterion is not related to short-distance restrictions but rather requires a certain phase space behavior (modular nuclearity).} by nonperturbative methods, then the Borel-resummed series would indeed acquire an asymptotic convergence status with respect to the solution, and one would for the first time be allowed to celebrate the numerical success as having a solid ontological basis\footnote{This is actually the present situation for the class of d=1+1 factorizing models \cite{Lech}.}. But the whole issue of model existence attained the status of an unpleasant fact, something which is often kept away from newcomers, so that as a result there is a certain danger of confusing the existence of a model with the ability to write down a Lagrangian or a functional integral and apply some computational recipe. Fortunately important but unfashionable problems in particle physics never disappear completely. Even if they have been left on the wayside as "un-stringy", "unsupersymmetrizable" or too far removed from the "Holy Grail of a TOE" and therefore not really career-improving, there will always be individuals who return to them with new ideas.
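For orientation we recall the textbook definition of Borel resummation referred to above (stated here as an aid to the reader; conventions vary between references). Given a formal power series $f(g)\sim\sum_{n\geq0}a_{n}g^{n}$ one sets
\begin{align*}
B(t):=\sum_{n\geq0}\frac{a_{n}}{n!}t^{n},\qquad f_{B}(g):=\frac{1}{g}\int_{0}^{\infty}e^{-t/g}B(t)\,dt,
\end{align*}
which formally reproduces the series term by term since $\frac{1}{g}\int_{0}^{\infty}e^{-t/g}t^{n}dt=n!\,g^{n}$. If $B(t)$ has a finite radius of convergence, extends analytically to a neighborhood of the positive real axis and the integral exists, the series is called Borel summable. The point made in the text is that even then a nonperturbative existence proof is needed before one may identify $f_{B}$ with the actual solution.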
Indeed there has been some recent progress on the aforementioned existence problem from a quite unexpected direction. Within the setting of d=1+1 factorizing models the use of modular operator theory has led to a control over phase space degrees of freedom which in turn paved the way to an existence proof. Those models are distinguished by their simple generators for the wedge-localized algebra \cite{Sch}; in fact these generators turned out to possess Fourier transforms with mass-shell creation/annihilation operators which are only slightly more complicated than free fields. An important additional idea on the way to an existence proof is the issue of the cardinality of degrees of freedom. In the form of the phase space in QFT as opposed to QM this issue goes back to the 60s \cite{Ha-Sw} and underwent several refinements \cite{Bu-Wi} (a sketch of the history can be found in \cite{legacy}). The remaining problem was to show that the simplicity of the wedge generators leads to a "tame" phase space behavior which guarantees the nontriviality as well as the additional expected properties of the double cone localized algebras obtained as intersections of wedge-localized algebras \cite{Lech}. Although these models have no particle creation through on-shell scattering, they exhibit the full infinite vacuum polarization clouds upon sharpening the localization from wedges to compact spacetime regions such as double cones \cite{Bo-Bu-Sc}. Their simplicity is only manifest in the existence of simple wedge generators; for compact localization regions their complicated infinite vacuum polarization clouds are not simpler than in other QFTs. Similar simple-minded Ans\"{a}tze for wedge algebras in higher dimensions cannot work, since interactions which lead to nontrivial elastic scattering without also causing particle creation cannot exist; such a No-Go theorem for 4-dim. QFT was established already in \cite{Aks}.
Nevertheless it is quite interesting to note that even though with such a simple-minded Ansatz for wedge generators in higher dimensions one does not get to compactly localized local observables, one can in some cases go to certain subwedge intersections \cite{Bu-Su}\cite{Gr-Le} before the increase in localization leads to trivial algebras. Whereas in the Lagrangian approach one starts with local fields and their correlations and moves afterwards to less local objects such as global charges, incoming fields\footnote{Incoming/outgoing free fields are only local with respect to themselves. The physically relevant notion of locality is \textit{relative locality to the interacting fields}. If incoming fields are relatively local/almost local, the theory has no interactions.} etc., the modular localization approach goes the opposite way, i.e. one starts from the wedge region (the best compromise between particles and fields), which is closest to the particle mass-shell and the S-matrix, and then works one's way down. The pointlike local fields only appear at the very end and play the role of \textit{coordinatizing generators} of the double cone algebras for arbitrarily small sizes. Nonlocal models are automatically "noncommutative" in the sense that the maximal commutativity of massive theories allowed by the principles of QFT, namely spacelike commutativity, is weakened by allowing various degrees of violations of spacelike commutativity. In this context the non-commutativity associated with the deformation of the product to a star-product using the Weyl-Moyal formalism is only a very special (but very popular) case. The motivation for studying non-commutative QFT for its own sake comes from string theory, and one should not expect this motivation to be better than for string theory itself.
My motivation for having been interested in noncommutative theory during the last decade comes from the observation that non-commutative fields can have \textit{simpler properties than commutative ones}. More concretely: complicated two-dimensional local theories may lead to wedge-localized algebras which are generated by non-commutative fields, where the latter only fulfill the much weaker wedge-locality (see above). Whereas in d=1+1 such constructions \cite{Sch} may lead via algebraic intersections to nontrivial, nonperturbative local fields, it is known that in higher dimensions this simple kind of wedge generating field without vacuum polarization is not available. But interestingly enough one can improve the wedge localization somewhat \cite{new} before the further sharpening of localization via algebraic intersections ends in trivial algebras. These recent developments combine the useful part of the history of S-matrix theory and formfactors with very new conceptual inroads into QFT (modular localization, phase space properties of LQP). The idea to divide the difficult full problem into a collection of simpler smaller ones is also at the root of the various forms of holography in the two subsequent sections. The predecessor of lightfront holography was the so-called "lightcone quantization" which started in the early 70s; it was designed to focus on short distances and forget temporarily about the rest. The idea to work with fields which are associated to the lightfront $x_{-}=0$ (not the light cone, which is $x^{2}=0$) as a submanifold in Minkowski spacetime looked very promising, but unfortunately the connection with the original problem of analyzing the local theory in the bulk was never addressed and, as the misleading name "lightcone quantization" reveals, the approach was considered as a different quantization rather than a different method for looking at the same local QFT in Minkowski spacetime.
It is not really necessary to continue a separate criticism of "lightcone quantization" because its shortcomings will become obvious after the presentation of lightfront holography (more generally \textit{holography onto null-surfaces}). Whereas the more elaborate and potentially more important lightfront holography has not led to heated discussions, the controversial potential of the simpler AdS-CFT holography has been enormous, and to the degree that it contains interesting messages which increase our scientific understanding it will be presented in these notes. Since all subjects have been treated in the existing literature, our presentation should be viewed as a guide through the literature with occasional additional and (hopefully) helpful remarks. \section{Lightfront holography, holography on null-surfaces and the origin of the area law} Free fields offer a nice introduction into the bulk-holography relation which, despite its simplicity, remains conceptually non-trivial. We seek generating fields $A_{LF}$ for the lightfront algebra $\mathcal{A}(LF)$ by following the formal prescription $x_{-}=0$ of the old "lightfront approach" \cite{Leut}.
Using the abbreviations $x_{\pm}=x^{0}\pm x^{3},~p_{\pm}=p^{0}\pm p^{3}\simeq e^{\mp\theta},$ with $\theta$ the momentum space rapidity:
\begin{align}
& A_{LF}(x_{+},x_{\perp}):=A(x)|_{x_{-}=0}\simeq\int\left( e^{i(p_{-}(\theta)x_{+}+p_{\perp}x_{\perp})}a^{\ast}(\theta,p_{\perp})d\theta dp_{\perp}+h.c.\right) \label{LF}\\
& \left\langle \partial_{x_{+}}A_{LF}(x_{+},x_{\perp})\partial_{x_{+}^{\prime}}A_{LF}(x_{+}^{\prime},x_{\perp}^{\prime})\right\rangle \simeq\frac{1}{\left( x_{+}-x_{+}^{\prime}+i\varepsilon\right)^{2}}\cdot\delta(x_{\perp}-x_{\perp}^{\prime})\nonumber\\
& \left[ \partial_{x_{+}}A_{LF}(x_{+},x_{\perp}),\partial_{x_{+}^{\prime}}A_{LF}(x_{+}^{\prime},x_{\perp}^{\prime})\right] \simeq\delta^{\prime}(x_{+}-x_{+}^{\prime})\delta(x_{\perp}-x_{\perp}^{\prime})\nonumber
\end{align}
The justification for this formal manipulation\footnote{We took the derivatives for technical reasons (in order to write the formulas without test functions).} follows from the fact that the equivalence class of test functions $\left[ f\right]$, which have the same mass shell restriction $\tilde{f}|_{H_{m}}$ to the mass hyperboloid of mass m, is mapped to a unique test function $f_{LF}$ which "lives" on the lightfront \cite{Dries}\cite{S1}. It only takes the margin of a newspaper to verify the identity $A(f)=A(\left[ f\right] )=A_{LF}(f_{LF})$. This identity does not mean that the $A_{LF}$ generator can be used to describe the local substructure in the bulk. The inversion involves an equivalence class and does not distinguish an individual test function in the bulk; in fact a finitely localized test function $f(x_{+},x_{\perp})$ on LF corresponds to a de-localized subspace in the bulk. Using an intuitive metaphoric language one may say that a strict localization on LF corresponds to a fuzzy localization in the bulk and vice versa.
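The "margin of a newspaper" verification can be sketched as follows (spelled out here as an aid to the reader, in the conventions of (\ref{LF}) and with $d\mu$ denoting the invariant mass-shell measure):
\begin{align*}
A(f)=\int_{H_{m}}\left( \tilde{f}|_{H_{m}}(p)\,a^{\ast}(p)+\overline{\tilde{f}|_{H_{m}}(p)}\,a(p)\right) d\mu(p),
\end{align*}
so the smeared field depends only on the restriction $\tilde{f}|_{H_{m}}$, i.e. $A(f)=A(\left[ f\right] )$. Parametrizing the mass shell by $(\theta,p_{\perp})$ and defining $f_{LF}$ as the function on the lightfront whose Fourier transform in $(x_{+},x_{\perp})$, evaluated at $(p_{-}(\theta),p_{\perp})$, coincides with $\tilde{f}|_{H_{m}}$, the same operator is reproduced by the lightfront generator: $A_{LF}(f_{LF})=A(f)$.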
Hence the pointwise use of the LF generators enforces the LF localization, and the only wedge-localized operators which can be directly obtained as smeared $A_{LF}$ fields have a noncompact extension within a wedge whose causal horizon is on LF. Nevertheless there is equality between the two operator algebras associated to the bulk $W$ and its (upper) horizon $\partial W$
\begin{equation}
\mathcal{A}(W)=\mathcal{A}(H(W))\subset\mathcal{A}(LF)=B(H)
\end{equation}
These operator algebras are the von Neumann closures of the Weyl algebras generated by the smeared fields $A$ and $A_{LF}$, and it is only in the sense of this closure (or by forming the double commutant) that the equality holds. Quantum field theorists are used to dealing with single operators. Therefore the knowledge about the equality of algebras, without being able to say which operators are localized in a subregion, is somewhat unaccustomed. As will be explained later on, the finer localization properties in the algebraic setting can be recovered by taking suitable intersections of wedge algebras, i.e. the structure of the family of all wedge algebras determines whether the local algebras are nontrivial and, in case they are, permits one to compute the local net which contains all information about the particular model. This idea of taking the holographic projection of individual bulk fields can be generalized to composites of free fields (such as the stress-energy tensor).
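Since the equality of wedge and horizon algebras holds only in the sense of von Neumann closure, it may be useful to recall the textbook bicommutant facts being invoked (inserted here for convenience; standard material):
\begin{align*}
\mathcal{M}^{\prime}:=\left\{ B\in B(H):BA=AB\ \text{ for all }A\in\mathcal{M}\right\} ,\qquad \mathcal{M}^{\prime\prime}=\overline{\mathcal{M}}^{\,\text{weak}}
\end{align*}
for any unital *-subalgebra $\mathcal{M}\subset B(H)$ (von Neumann's bicommutant theorem). Hence "von Neumann closure" and "double commutant" may be used interchangeably, and the stated equality is an equality of generated von Neumann algebras, not of sets of individual smeared field operators.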
In order to avoid lengthy discussions about how to interpret logarithmic chiral two-point functions in terms of restricted test functions\footnote{This is a well-understood problem of chiral fields of zero scale dimension which is not directly related to holography.} we restrict our attention to Wick-composites of $\partial_{x_{+}}A_{LF}(x_{+},x_{\perp})$
\begin{equation}
\left[ B_{LF}(x_{+},x_{\perp}),C_{LF}(x_{+}^{\prime},x_{\perp}^{\prime})\right] =\sum_{l=0}^{m}\delta^{l}(x_{\perp}-x_{\perp}^{\prime})\sum_{k(l)=0}^{n(l)}\delta^{k(l)}(x_{+}-x_{+}^{\prime})D_{LF}^{\left( k(l)\right) }(x_{+},x_{\perp})\label{com}
\end{equation}
where the dimensions of the composites $D_{LF}^{(k(l))}$ together with the degrees of the derivatives of the delta functions obey the standard rule of scale dimensional conservation. In the commutator the transverse and the longitudinal part both appear with delta functions and their derivatives, yet there is a very important structural difference which shows up in the correlation functions. To understand this point we look at the second line in (\ref{LF}). The longitudinal (=lightlike) delta function carries the chiral vacuum polarization, whereas the transverse part consists only of products of delta functions, as if it came from a product of correlation functions of nonrelativistic Schroedinger creation/annihilation operators $\psi^{\ast}(x_{\perp}),$ $\psi(x_{\perp})$. In other words the LF-fields which feature in this extended chiral theory are \textit{chimera between QFT and QM}; they have one leg in QFT and n-2 legs in QM, with the "chimeric vacuum" being partially a (transverse) factorizing quantum mechanical state of "nothingness" (the Buddhist nirvana) and partially the longitudinally particle-antiparticle polarized LQP vacuum state of "virtually everything" (the Abrahamic heaven).
Upon lightlike localization of LF to (in the present case) $\partial W$ (or to a longitudinal interval) the vacuum on $\mathcal{A}(\partial W)$ becomes a radiating KMS thermal state with nonvanishing localization-entropy \cite{S1}\cite{S2}. In the case of interacting fields there is no change with respect to the absence of transverse vacuum polarization, but unlike in the free case the global algebra $\mathcal{A}(LF)$ or the semi-global algebra $\mathcal{A}(\partial W)$ is generally bigger than the algebra one obtains from the globalization using compactly localized subalgebras, i.e. $\overline{\cup_{O\subset LF}\mathcal{A}_{LF}(\mathcal{O})}\subset \mathcal{A}(LF),$ $\mathcal{O}\subset LF$. We will return to this point at a more opportune moment. The aforementioned "chimeric" behavior of the vacuum is related in a profound way to the conceptual distinctions between QM and QFT \cite{interface}. Whereas transversely the vacuum is tensor-factorizing with respect to the Born localization and therefore leads to the standard quantum mechanical concepts of entanglement and the related information theoretical (cold) entropy, the restriction of the vacuum to an algebra associated with an interval in the lightray direction yields a thermal KMS state with a genuine thermodynamic entropy. Instead of the standard quantum mechanical dichotomy between pure and entangled restricted states there are simply no pure states at all. All states on sharply localized operator algebras are highly mixed, and the restriction of global particle states (including the vacuum) to the W-horizon algebra $\mathcal{A}(\partial W)$ results in KMS thermal states. This is the result of the different nature of localized algebras in QFT as compared to localized algebras in QM \cite{interface}.
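For orientation we recall the KMS condition in the form used in algebraic QFT (a standard definition, inserted as an aid; sign and strip conventions vary between references). A state $\omega$ is KMS at inverse temperature $\beta$ with respect to an evolution $\alpha_{s}$ if the correlation functions admit an analytic continuation into a strip of width $\beta$ with the boundary relation
\begin{align*}
\omega\left( A\,\alpha_{s+i\beta}(B)\right) =\omega\left( \alpha_{s}(B)\,A\right) ,\qquad A,B\in\mathcal{A}.
\end{align*}
For the wedge algebra $\mathcal{A}(W)$, with $\alpha_{s}=\mathrm{Ad}\,U(\Lambda_{W}(s))$ the $W$-preserving boost, the vacuum is a KMS state at $\beta=2\pi$ (Bisognano-Wichmann); this is the thermal behavior which the restriction to horizon algebras inherits in the text.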
Therefore if one wants to use the terminology "entanglement" in QFT one should be aware that one is dealing with a totally intrinsic, very strong form of entanglement: \textit{all physically distinguished global pure states} (in particular finite energy states, notably the vacuum) \textit{upon restriction to a localized algebra become intrinsically entangled, and unlike in QM there is no local operation which disentangles}. Whereas the cold (information theoretic) entanglement is often linked to the uncertainty relation of QM, the raison d'être behind the "hot" entanglement is the phenomenon of vacuum polarization resulting from localization in quantum theories with a maximal velocity. The transverse tensor factorization restricts the Reeh-Schlieder theorem (also known as the "state-operator relation"). For a longitudinal strip (st) on LF of finite transverse extension the LF algebra tensor-factorizes together with the Hilbert space $H=H_{st}\otimes H_{st\perp}$, and the $H_{st}$-projected form of the Reeh-Schlieder theorem for a subalgebra localized within the strip continues to be valid. This concept of \textit{transverse extended chiral fields} can also be axiomatically formulated for interacting fields, independently of whether those objects result from a bulk theory via holographic projection or whether one wants to study QFT on (non-hyperbolic) null-surfaces. These "lightfront fields" share some important properties with chiral fields. In both cases subalgebras localized on subregions lead to a \textit{geometric modular theory}, whereas in the bulk this property is restricted to wedge algebras.
Furthermore in both cases the symmetry groups are infinite dimensional; in chiral theories the largest possible group is (after compactification) Diff($\dot{R}$), whereas the transverse extended version admits besides these pure lightlike symmetries also $x_{\perp}$-$x_{+}$ mixing ($x_{\perp}$-dependent) symmetry transformations which leave the commutation structure invariant. There is one note of caution: unlike those conformal QFTs which arise as chiral projections from 2-dimensional conformal QFT, the extended chiral models of QFT on the lightfront which result from holography do not come with a stress-energy tensor, and hence the diffeomorphism invariance beyond the Moebius invariance (which one gets from modular theory alone, no energy-momentum tensor needed) is not automatic. This leads to the interesting question of whether there are concepts which permit one to incorporate also the diffeomorphisms beyond the Moebius transformations into a modular setting, a problem which will not be pursued here. We have formulated the algebraic structure of holographically projected fields for bosonic fields, but it should be obvious to the reader that a generalization to Fermi fields is straightforward. Lightfront holography is consistent with the fact that except for d=1+1 there are no operators which "live" on a lightray, since the presence of the quantum mechanical transverse delta function prevents such a possibility, i.e. only after transverse averaging with test functions does one get to (unbounded) operators. It is an interesting question whether a direct "holographic projection" of \textit{interacting} pointlike bulk fields into lightfront fields analogous to (\ref{LF}) can be formulated, thus avoiding the algebraic steps starting with the wedge algebra. The important formula which led to the lightfront generators is the \textit{mass shell representation} of the free field; had we performed the $x_{-}=0$ limit in the two-point function, the result would have diverged.
This suggests that we should start from the so-called Glaser-Lehmann-Zimmermann (GLZ) representation \cite{GLZ}, which is an on-shell representation in terms of an infinite series of integrals involving the incoming particle creation/annihilation operators
\begin{align}
& A(x)=\sum\frac{1}{n!}\int dx_{1}...\int dx_{n}~a(x;x_{1},...x_{n}):A_{in}(x_{1})....A_{in}(x_{n}):\label{mass}\\
& A(x)=\sum\frac{1}{n!}\int_{H_{m}}dp_{1}...\int_{H_{m}}dp_{n}~e^{ix(\sum p_{i})}\tilde{a}(p_{1},...p_{n}):\tilde{A}(p_{1})....\tilde{A}(p_{n}):\nonumber\\
& A(x)_{LF}=A(x)|_{x_{-}=0}\nonumber
\end{align}
in which the coefficient functions $a(x;x_{1},...x_{n})$ are \textit{retarded functions}. The second line shows that only the mass-shell restriction of these functions matters; the momentum space integration goes over the entire mass shell, and the two components of the mass hyperboloid $H_{m}$ are associated with the annihilation/creation parts of the Fourier transform of the incoming field. These mass-shell restrictions of the retarded coefficient functions are related to multi-particle formfactors of the field $A.$ Clearly we can take $x_{-}=0$ in this on-shell representation without apparently creating any problems in addition to the possibly bad convergence properties of such series (with or without the lightfront restriction) which they had from the start. The use of the on-shell representation (\ref{mass}) is essential; doing this directly in the Wightman functions would lead to meaningless divergences, as we already noticed in the free field case. Such GLZ formulas amount to a representation of a local field in terms of other local fields in which the \textit{relation between the two sets} of fields is very \textit{nonlocal}. Hence this procedure is less intuitive than the algebraic method based on relative commutants and intersections of algebras.
The use of a GLZ series also goes in some sense against the spirit of holography, which is to simplify certain aspects\footnote{Those aspects which holography does not simplify include particle and scattering aspects.} in order to facilitate the solution of certain properties of the theory (i.e. to preserve the original aim of the ill-defined lightcone quantization), whereas to arrive at GLZ representations one must already have solved the on-shell aspects of the model (i.e. know all its formfactors) before applying holography. Nevertheless, in those cases where one has explicit knowledge of formfactors, as in the case of the 2-dim. factorizing models mentioned in the previous section, this knowledge can be used to calculate the scaling dimensions of their associated holographic fields $A_{LF}.$ These fields lead to more general plektonic (braid group) commutation relations which replace the bosonic relations of transverse extended chiral observables (\ref{com}). We refer to \cite{Hol}, in which the holographic scaling dimensions for several fields in factorizing models are calculated, including the Ising model for which an exact determination of the scaling dimension of the order field is possible. Although the holographic dimensions agree with those from the short distance analysis (which have been previously calculated in \cite{Ba-Ka}), the conceptual status of holography is quite different from that of critical universality classes. The former is an exact relation between a 2-dim. factorizing model and its holographic projection (a change of the spacetime ordering of a given bulk theory), whereas the latter is a passage to a different QFT in the same universality class. The mentioned exact result in the case of the Ising model strengthens the hope that GLZ representations and the closely related expansions of local fields in terms of wedge algebra generating on-shell operators \cite{Hol} have a better convergence status than perturbative series.
By far the conceptually and mathematically cleanest way to pass from the bulk to the lightfront is in terms of nets of operator algebras via modular theory. This method requires starting from algebras in "standard position", i.e. a pair ($\mathcal{A},\Omega$) such that the operator algebra $\mathcal{A}$ acts cyclically on the state vector $\Omega$, i.e. $\overline{\mathcal{A}\Omega}=H$, and has no annihilators, i.e. $A\Omega=0\curvearrowright A=0.$ According to the Reeh-Schlieder theorem any localized algebra $\mathcal{A}(\mathcal{O})$ forms a standard pair ($\mathcal{A}(\mathcal{O}),\Omega$) with respect to the vacuum $\Omega$, and the best starting point for lightfront holography is a wedge algebra, since the (upper) causal horizon $\partial W$ of the wedge $W$ is already half the lightfront. The crux of the matter is the construction of the local substructure on $\partial W.$ The local resolution in longitudinal (lightray) direction is done as follows. Let $W$ be the $x_{0}-x_{3}$ wedge in Minkowski spacetime which is left invariant by the $x_{0}-x_{3}$ Lorentz-boosts. Consider a family of wedges $W_{a}$ which are obtained by sliding $W$ along the $x_{+}=x_{0}+x_{3}$ lightray by a lightlike translation $a>0$ into itself. The set of spacetime points on $LF$ consisting of those points on $\partial W_{a}$ which are spacelike to the interior of $W_{b}$ for $b>a$ is denoted by $\partial W_{a,b};$ it consists of points $x_{+}\in(a,b)$ with an unlimited transverse part $x_{\perp}\in R^{2}$. These regions are two-sided transverse slabs on $LF$. To get to intersections of finite size one may \textquotedblleft tilt\textquotedblright\ these slabs by the action of certain subgroups in $\mathcal{G}$ which change the transverse directions. Using the 2-parametric subgroup $\mathcal{G}_{2}$ of $\mathcal{G}$ which is the restriction to $LF$ of the two \textquotedblleft translations\textquotedblright\ in the Wigner little group (i.e.
the subgroup fixing the lightray in $LF$), it is easy to see that this is achieved by forming intersections with $\mathcal{G}_{2}$-transformed slabs $\partial W_{a,b}$
\begin{equation}
\partial W_{a,b}\cap g(\partial W_{a,b}),\text{ }g\in\mathcal{G}_{2}
\end{equation}
By continuing with forming intersections and unions, one can get to finite convex regions $\mathcal{O}$ of a quite general shape. The local net on the lightfront is the collection of all local algebras $\mathcal{A(O}),$ $\mathcal{O}\subset LF$, and as usual the weak closure of their union is the global algebra $\mathcal{A}_{LF}$. For interacting systems the global lightfront algebra is generally expected to be smaller than the bulk, in particular one expects
\begin{align}
\mathcal{A}_{LF}(\partial W) & \subset\mathcal{A}(\partial W)=\mathcal{A}(W)\label{contain}\\
\mathcal{A}_{LF}(\partial W) & =\cup_{\mathcal{O}\subset\partial W}\mathcal{A}_{LF}(\mathcal{O}),~\mathcal{A}(W)=\cup_{\mathcal{C}\subset W}\mathcal{A}(\mathcal{C})\nonumber
\end{align}
where the semi-global algebras are formed with the localization concept of their respective nets as indicated in the second line. The smaller left hand side accounts for the fact that the formation of relative commutants such as $\mathcal{A}(\partial W_{a,b})$ may not maintain the standardness of the algebra because $\overline{\cup_{a,b}\mathcal{A}(\partial W_{a,b})\Omega}\subsetneqq H.$ In that case the globalization of the algebraic holography only captures a global (i.e. not localized) subalgebra of the global bulk, and one could ask whether the pointlike procedure using the GLZ representation leads to generating fields which generate a bigger algebra.
The answer is positive, since also (bosonic) fields with anomalous short distance dimensions will pass the projective holography and become anyonic fields on the lightray\footnote{The standard Boson-Fermion statistics refers to spacelike distances; the lightlike statistics resulting from projective holography is determined by the anomalous short distance dimensions of the bulk fields and not by their statistics.}. On the other hand algebraic holography filters out the bosonic fields which define the chiral observables. These chiral observables have a DHR superselection theory. This leads to the obvious conjecture
\begin{equation}
Alg\{proj~hol\}\subseteq Alg\{DHR\}
\end{equation}
Here the left hand side denotes the algebra generated by applying projective holography to the pointlike bulk fields, and the right hand side is the smallest algebra which contains all DHR superselection sectors of the LF observable (extended chiral) algebra which resulted from algebraic holography. It is worthwhile to emphasize that the connection between the operator algebraic and the pointlike prescription is much easier on LF than in the bulk. In the presence of conformal symmetries one has the results of Joerss \cite{Joerss}; looking at his theorems in the chiral setting, an adaptation to the transverse extended chiral theories on LF should be straightforward. For consistency reasons such fields must fulfill (\ref{com}). I hope to come back to this issue in a different context. One motivation for being interested in lightfront holography is that it is expected to be helpful in dividing the complicated problem of classifying and constructing QFTs according to intrinsic principles into several less complicated steps. In the case of d=1+1 factorizing models one does not need this holographic projection onto a chiral theory on the lightray for the mere existence proof. But e.g.
for the determination of the spectrum of the short distance scale dimensions, it is only holography and not the critical limit which permits one to maintain the original Hilbert space setting. It is precisely this property which makes it potentially interesting for structural investigations and actual constructions of higher dimensional QFT. Now we are well-prepared to address the main point of this section: the area law for localization entropy which follows from the absence of transverse vacuum polarization. Since this point does not depend on most of the above technicalities, it may be helpful to the reader to present the conceptual mathematical origin of this unique\footnote{Holography on null-surfaces is the only context in which a quantum mechanical structure enters a field theoretic setting.} tensor-factorization property. The relevant theorem goes back to Borchers \cite{Bo} and can be stated as follows. Let $\mathcal{A}_{i}\subset B(H),$ $i=1,2$ be two operator algebras with $\left[ \mathcal{A}_{1},U(a)\mathcal{A}_{2}U(a)^{\ast}\right] =0$ $\forall a$ and $U(a)$ a translation with \textit{nonnegative} generator which fulfills the cluster factorization property (i.e. asymptotic factorization in correlation functions for infinitely large cluster separations) with respect to a unique $U(a)$-invariant state vector $\Omega$\footnote{Locality in both directions shows that the lightlike translates $\left\langle \Omega\left\vert AU(a)B\right\vert \Omega\right\rangle $ are boundary values of entire functions, and the cluster property together with Liouville's theorem gives the factorization.}. It then follows that the two algebras tensor factorize in the sense $\mathcal{A}_{1}\mathcal{\vee A}_{2}=\mathcal{A}_{1}\mathcal{\otimes A}_{2}$, where the left hand side denotes the joint operator algebra. In the case at hand the tensor factorization follows as soon as the open regions $\mathcal{O}_{i}\subset LF$ in $\mathcal{A}(\mathcal{O}_{i})$, $i=1,2$, have no transverse overlap.
The lightlike cluster factorization is weaker (only a power law) than its better known spacelike counterpart, but as a result of the analytic properties following from the \textit{non-negative generator of lightlike translations} it enforces the asymptotic factorization to be valid at all distances. The resulting transverse factorization implies the transverse additivity of extensive quantities such as energy and entropy, a well-known property for spacelike separations; their behavior in lightray direction can then be calculated in terms of the associated auxiliary chiral theory. This result \cite{S1}\cite{S2} of the transverse factorization may be summarized as follows

\begin{enumerate}
\item The system of $LF$ subalgebras $\left\{ \mathcal{A(O)}\right\} _{\mathcal{O\subset}LF}$ tensor-factorizes transversely, with the vacuum being free of transverse entanglement
\begin{align}
& \mathcal{A(O}_{1}\mathcal{\cup O}_{2}\mathcal{)}=\mathcal{A(O}_{1}\mathcal{)\otimes A(O}_{2}\mathcal{)},\text{ }\mathcal{(O}_{1}\mathcal{)}_{\perp}\cap\mathcal{(O}_{2}\mathcal{)}_{\perp}=\emptyset\label{fac}\\
& \left\langle \Omega\left\vert \mathcal{A(O}_{1}\mathcal{)\otimes A(O}_{2}\mathcal{)}\right\vert \Omega\right\rangle =\left\langle \Omega\left\vert \mathcal{A(O}_{1}\mathcal{)}\right\vert \Omega\right\rangle \left\langle \Omega\left\vert \mathcal{A(O}_{2}\mathcal{)}\right\vert \Omega\right\rangle \nonumber
\end{align}

\item Extensive properties such as entropy and energy on $LF$ are proportional to the extension of the transverse area.

\item The area density of localization-entropy in the vacuum state for a system with sharp localization on $LF$ diverges logarithmically
\begin{equation}
s_{loc}=\lim_{\varepsilon\rightarrow0}\frac{c}{6}\left\vert \ln\varepsilon\right\vert +...
\label{ent}
\end{equation}
where $\varepsilon$ is the size of the interval of \textquotedblleft fuzziness\textquotedblright\ of the boundary in the lightray direction which one has to allow in order for the vacuum polarization cloud to attenuate, and the proportionality constant $c$ is (at least in typical examples) the central extension parameter of the Witt-Virasoro algebra.
\end{enumerate}

The following comments about these results are helpful in order to appreciate some of the physical consequences as well as extensions to more general null-surfaces. Just as the volume divergence of the energy/entropy in a heat bath thermal system results from the thermodynamic limit of a sequence of boxed systems in Gibbs states, the logarithmic divergence in the vacuum polarization attenuation distance $\varepsilon$ plays an analogous role in the approximation of the semiinfinitely extended $\partial W$ by sequences of algebras whose localization regions approach $\partial W$ from the inside. In both cases the limiting algebras are monads, whereas the approximands are type I analogs of the "box quantization" algebras. In fact in the present conformal context the relation between the standard heat bath thermodynamic limit and the limit of vanishing attenuation length for the localization-caused vacuum polarization cloud really goes beyond an analogy and becomes an isomorphism. This surprising result is based on two facts \cite{S1}\cite{S2}. On the one hand conformal theories come with a natural covariant "box" approximation of the thermodynamic limit, since the continuous spectrum translational Hamiltonian can be obtained as a scaled limit of a sequence of discrete spectrum conformal rotational Hamiltonians associated to global type I systems. On the other hand it has been known for some time that a heat bath chiral KMS state can always be re-interpreted as the Unruh restriction applied to a vacuum system in a larger world (a kind of inverse Unruh effect).
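As a purely numerical illustration (not from the paper; the value $c=1/2$ of the Ising model is used only as an example), the following sketch shows how slowly the area density (\ref{ent}) grows as the attenuation length $\varepsilon$ shrinks: each order of magnitude in $\varepsilon$ adds only a fixed amount $(c/6)\ln 10$.

```python
import math

# Sketch of the area density of localization entropy,
# s(eps) ~ (c/6) * |ln eps|, with c = 1/2 (Ising central charge) as example.
def area_density(eps, c=0.5):
    return (c / 6.0) * abs(math.log(eps))

# Shrinking eps by successive orders of magnitude adds only (c/6) * ln 10
# per step -- a logarithmic, not power-like, divergence.
values = [area_density(10.0 ** (-k)) for k in range(1, 7)]
assert all(b > a for a, b in zip(values, values[1:]))
step = values[1] - values[0]
assert abs(step - (0.5 / 6.0) * math.log(10.0)) < 1e-12
```

This contrasts with the volume divergence of heat bath entropy mentioned above, which grows with a power of the box size.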
Both facts together lead to the above formula for the area density of entropy. In fact using the conformal invariance one can write the area density formula in a more suggestive manner by identifying $\varepsilon$ with the conformally invariant cross-ratio of 4 points
\[
\varepsilon^{2}=\frac{\left( a_{2}-a_{1}\right) \left( b_{1}-b_{2}\right) }{\left( b_{1}-a_{1}\right) \left( b_{2}-a_{2}\right) }
\]
where $a_{1}<a_{2}<b_{2}<b_{1}$, so that $\left( a_{1},b_{1}\right) $ corresponds to the larger localization interval and $\left( a_{2},b_{2}\right) $ is the approximand which goes with the interpolating type I algebras. At this point one makes contact with some interesting work on what condensed matter physicists call the "entanglement entropy"\footnote{In \cite{Cardy} the formula for the logarithmically increasing entropy is associated with a field theoretic cutoff, and the role of the vacuum polarization cloud in conjunction with the KMS thermal properties (which are not compatible with a quantum mechanical entanglement interpretation \cite{interface}) is not noticed. Since there is no implementation of the split property, the idea of an attenuation of the vacuum polarization cloud has no conceptual place in a path integral formulation. QM and QFT are not distinguished in the functional integral setting, and even on a metaphorical level there seems to be no possibility to implement the split property.}. One expects that the arguments for the absence of transverse vacuum fluctuations carry over to other null-surfaces, e.g. the upper horizon $\partial\mathcal{D}$ of the double cone $\mathcal{D}$. In the interacting case it is not possible to obtain $\partial\mathcal{D}$ generators through test function restrictions.
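The identification of $\varepsilon$ with a cross-ratio is natural because the cross-ratio is Moebius invariant. A quick numerical check of this invariance (illustration only; the sample points and the fractional linear map are chosen arbitrarily):

```python
def cross_ratio(a1, a2, b2, b1):
    # eps^2 = (a2-a1)(b1-b2) / ((b1-a1)(b2-a2)) for a1 < a2 < b2 < b1
    return ((a2 - a1) * (b1 - b2)) / ((b1 - a1) * (b2 - a2))

def mobius(x, alpha=2.0, beta=1.0, gamma=0.5, delta=3.0):
    # fractional linear map with alpha*delta - beta*gamma = 5.5 != 0
    return (alpha * x + beta) / (gamma * x + delta)

pts = (0.1, 0.4, 1.3, 2.0)  # a1 < a2 < b2 < b1
eps2 = cross_ratio(*pts)
eps2_mapped = cross_ratio(*(mobius(x) for x in pts))
assert abs(eps2 - eps2_mapped) < 1e-9  # cross-ratio is Moebius invariant
```

Invariance holds because each point difference picks up the same multiplicative factors in numerator and denominator under a fractional linear map, so all factors cancel in the ratio.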
For zero mass free fields there is however the possibility to conformally transform the wedge into the double cone and in this way obtain the holographic generators as the conformally transformed generators of $\mathcal{A}(\partial W).$ In order to show that the resulting $\mathcal{A}(\partial\mathcal{D})$ continue to play their role even when the bulk generators cease to be conformal, one would have to prove that certain double-cone affiliated inclusions are modular inclusions. We hope to return to this interesting problem. We have presented the pointlike approach and the algebraic approach next to each other, but apart from the free field we have not really connected them. Although one must leave a detailed discussion of their relation to the future, there are some obvious observations one can make. Since for chiral fields the notion of short-distance dimension and rotational spin (the action of the $L_{0}$ generator) are closely connected, and since the algebraic process of taking relative commutants is bosonic, the lightfront algebras are necessarily bosonic. A field such as the chiral order variable of the Ising model with dimension $\frac{1}{16}$ does not appear in the algebraic holography but, as mentioned above, it is the pointlike projection of the massive order variable of the factorizing Ising model in the bulk. On the other hand integer dimensional fields, such as the stress-energy tensor, are common to both formulations. This suggests that the fields with anomalous dimensions which are missing in the algebraic construction may be recovered via representation theory of the transverse extended chiral observable algebra which arises as the image of the algebraic holography.
Since the original purpose of holography, similar to that of its ill-fated lightcone quantization predecessor, is to achieve a simplified but still rigorous description (for lightcone quantization the main motivation was a better description of certain "short distance aspects" of QFT), the question arises whether one can use holography as a tool in a more ambitious program of classification and construction of QFTs. In this case one must be able to make sense of \textit{inverse holography}, i.e. confront the question whether, knowing the local net on the lightfront, one can obtain at least part of the local substructure of the bulk. It is immediately clear that one can construct that part of the bulk which arises from intersecting the LF-affiliated wedge algebras. The full net is only reconstructible if the action of those remaining Poincar\'{e} transformations outside the 7-parametric LF covariance group is known. The presence of the Moebius group acting on the lightlike direction on null-surfaces in curved spacetime resulting from bifurcate Killing horizons \cite{K-W} has been established in \cite{G-L-W}, thus paving the way for the transfer of the thermal results to QFT in CST. This is an illustration of symmetry enhancement, which is one of holography's "magics". The above interaction-free case with its chiral abelian current algebra structure (\ref{LF}) admits a much larger unitarily implemented symmetry group, namely the diffeomorphism group of the circle. However the unitary implementers (beyond the Moebius group) do not leave the vacuum invariant (and hence are not Wigner symmetries). As a result of the commutation relations (\ref{com}), these Diff(S$^{1}$) symmetries are expected to appear also in the holographic projection of interacting theories. These unitary symmetries act only geometrically on the holographic objects; their action on the bulk (on which they are also well-defined) is fuzzy, i.e. not describable in geometric terms.
This looks like an interesting extension of the new setting of local covariance \cite{B-F-V}. The area proportionality for localization entropy is a structural property of LQP which creates an interesting and hopefully fruitful contrast with Bekenstein's area law \cite{Be} for black hole horizons. Bekenstein's thermal reading of the area behavior of a certain quantity in classical Einstein-Hilbert like field theories has been interpreted as being on the interface of QFT with QG. Now we see that its main support, namely the claim that QFT alone cannot explain an area behavior, is not correct. There remains the question whether Bekenstein's numerical value, which people tried to understand in terms of quantum mechanical level occupation, is a credible candidate for quantum entropy. QFT gives a family of area laws with different vacuum polarization \textit{attenuation parameters} $\varepsilon$, and it is easy to fix this parameter in terms of the Planck length so that the two values coalesce. The problem which I have with such an argument is that I have never seen a situation where a \textit{classical} value remained intact after passing to the quantum theory. This only happens for certain \textit{quasiclassical} values in case the system is integrable.

\section{From holography to correspondence: the AdS-CFT correspondence and a controversy}

The holography onto null-surfaces addresses the very subtle relation between bulk quantum matter and the projection onto its causal/event horizon, as explained in the previous section. A simpler case of holography arises if the bulk and a lower dimensional (timelike) brane\footnote{In general the brane has a lower dimensional symmetry than its associated bulk and usually denotes a d-1 dimensional subspace which contains a timelike direction. Different from null-surfaces, branes have a causal leakage.} boundary share the same maximally possible spacetime (vacuum) symmetry.
The only case where this situation arises between two global Lorentz manifolds of different spacetime dimension is the famous AdS-CFT correspondence. In that case the causality leakage off a brane does not occur. In the following we will use the same terminology for the universal coverings of AdS/CFT as for the spacetimes themselves. Already in the 60s the observation that the 15-parametric conformal symmetry is shared between 3+1-dimensional compactified Minkowski spacetime and the 5-dim. Anti-de-Sitter spacetime (the negative constant curvature brother of the cosmologically important de Sitter spacetime) brought a possible field theoretic relation between these theories into the foreground; in fact Fronsdal \cite{Fron} suspected that QFTs on both spacetimes share more than the spacetime symmetry groups. But the modular localization theory which could convert the shared group symmetry into a relation between two \textit{different spacetime ordering devices} (in the sense of Leibniz) for the \textit{same abstract quantum matter substrate} was not yet in place at that time. Over several decades the main use of the AdS solution has been (similar to Goedel's cosmological model) to show that the Einstein-Hilbert field equations, besides the many desired solutions (such as the Robertson-Walker cosmological models and the closely related de Sitter spacetime), also admit unphysical solutions (leading to timelike selfclosing worldlines, time machines, wormholes etc.) and therefore should be further restricted. The AdS spacetime lost this role of only providing counterexamples and began to play an important role in particle physics when string theorists placed it into the center of a conjecture about a correspondence between a particular maximally supersymmetric massless conformally covariant Yang-Mills model in d=1+3 and a supersymmetric gravitational model. The first paper was by J. Maldacena \cite{Ma}, who started from a particular compactification of 10-dim.
superstring theory, with 5 uncompactified coordinates forming the AdS spacetime. Since the mathematics as well as the conceptual structure of string theory is poorly understood, the string side was identified with one of the supersymmetric gravity models which, in spite of being non-renormalizable, admitted a more manageable Lagrangian formulation and was expected to have a similar particle content. On the side of CFT he placed a maximally supersymmetric gauge theory for which calculations verifying the vanishing of the low order beta function already existed\footnote{A historically interesting case in which the beta function vanishes in every order is the massive Thirring model. In that case the zero mass limit is indeed conformally invariant, but there is no interacting conformal theory for which a perturbation expansion can be formulated directly; it would generate unmanageable infrared divergencies.} (certainly a \textit{necessary} prerequisite for conformal invariance). The arguments involved perturbation theory and additional less controllable approximations. The more than 4,700 follow-up papers on this subject essentially did not change the status of the conjecture. But at least some aspects of the general AdS-CFT correspondence became clearer after Witten \cite{Witten} exemplified the ideas in the field theoretic context of a $\Phi^{4}$ coupling on AdS using a Euclidean functional integral setting. The structural properties of the AdS-CFT correspondence came out clearly in Rehren's \cite{Rehren} \textit{algebraic holography}. The setting of local quantum physics (LQP) is particularly suited for questions in which one theory is assumed as given and one wants to construct its holographic projection or its corresponding model on another spacetime. LQP can solve such problems of isomorphisms between models without being forced to actually construct a model on either side (which functional integration proposes to do, but only in a metaphoric way).
At first sight Rehren's setting rewritten in terms of functional integrals (with all the metaphoric caveats, but done in the best tradition of the functional trade) looked quite different from Witten's functional representation. But a functional identity (explained in the Duetsch-Rehren paper) shows that fixing functional sources on the boundary and forcing the field values to take on prescribed boundary values via a delta function in the functional field space lead to the same result. In this way the apparent disparity disappeared \cite{Du-Re}, and there is only one AdS-CFT correspondence within QFT. There are limits to the rigor and validity of functional integral tools in QFT. Even in QM, where they are rigorous, an attempt to teach a course on QM based on functional integrals would end without having been able to cover the standard material. As an interesting mental exercise just imagine a scenario with Feynman before Heisenberg. Since path integral representations are much closer to the old quasiclassical Bohr-Sommerfeld formulation, the transition would have been much smoother, but it would have taken a longer time to get to the operational core of quantum theory; on the other hand quasiclassical formulas and perturbative corrections thereof would have emerged with elegance and efficiency. Using the measure theoretical functional setting it is well-known that superrenormalizable polynomial couplings can be controlled this way \cite{Gl-Ja}. Realistic models with infinite wave function renormalization constants (all realistic Lagrangian models in more than two spacetime dimensions have a transcanonical short distance behavior) do not fall into this amenable category.
But even in low dimensions, where there exist models with finite wave function renormalization constants and hence the short distance prerequisites are met, the functional setting of the AdS-CFT correspondence has an infrared problem\footnote{Infrared problems of the kind that appear in interacting conformal theories are strictly speaking not susceptible to perturbation theoretical treatment, and they also seem to pose serious (maybe insoluble) problems in functional integral representations. In those cases where one knows the exact form of the massless limit (Thirring model) this knowledge can be used to disentangle the perturbative infrared divergences.} of a nasty unresolved kind \cite{Got}. Despite the lack of an analog of the operator formulation in QM, functional integrals have maintained their dominant role in particle physics on account of their suggestive power, their close relation to classical geometric concepts and their formal elegance, although renormalized perturbation theory is better taken care of in the setting of "causal perturbation" theory. An operator approach which is not only capable of establishing the mathematical existence of models but also permits their explicit construction exists presently only in d=1+1; it is the previously mentioned bootstrap-formfactor or wedge-localization approach for factorizing models. Lagrangian factorizing models only constitute a small fraction thereof. For structural problems such as holography, where one starts from a given theory and wants to construct its intrinsically defined holographic image, the use of metaphorical instruments such as Euclidean functional integral representations is suggestive but not really convincing in any mathematical sense. As in the case of lightfront holography there are two mathematically controllable ways to AdS-CFT holography; either using (Wightman) fields (\textit{projective holography}) or using operator algebras (\textit{algebraic holography}).
The results of all these different methods can be consistently related \cite{Du-Re}\cite{ReLec}. The main gain in lightfront holography is a significant simplification of certain properties as compared to the bulk. Even if some of the original problems of the bulk come back in the process of holographic inversion, they reappear in the more amenable form of several smaller problems rather than one big one. The motivation for field theorists being interested in the AdS-CFT correspondence is similar, apart from the fact that the simplification obtainable through an \textit{algebraic isomorphism} is more limited (less radical) than that of a projection. Nevertheless it is not unreasonable to explore the possibility that some hidden property, e.g. a widely conjectured \textit{partial integrability}\footnote{Global integrability is only possible in d=1+1, but I am not aware of any theorem which rules out the possibility of integrable substructures.}, could become more visible after a spacetime "re-packaging" of the quantum matter substrate from CFT to AdS. Despite many interesting analogies between chiral theories and higher dimensional QFT \cite{To}, little is known about higher-dimensional conformal QFTs. There are Lagrangian candidates, e.g. certain supersymmetric Yang-Mills theories which fulfill (at least in lowest order) a perturbative prerequisite of conformality which consists in a vanishing beta-function. As mentioned before, perturbation theory for conformal QFT cannot be formulated directly as a result of severe infrared problems. The prime example for such a situation is the massive Thirring model, for which there exists an elegant structural argument for $\beta(g)=0$; the knowledge about the non-perturbative massless version can then be used to find the correct perturbative infrared treatment.
As far as I could see (with apologies in case of having overlooked some important work), none of these two steps has been carried out for SUSY-YM, so even the conformal side of the Maldacena conjecture has remained unsafe territory. There is one advantage which null-surface holography has over AdS-CFT type brane holography. The cardinality of degrees of freedom adjusts itself to what is \textit{natural} for null-surfaces (as manifolds in their own right); for lightfront holography this is the operator algebra generated by the extended chiral fields (\ref{com}). On the other hand this "thinning out" in holographic projections is of course the reason why inverse holography becomes more complicated and cannot be done with the QFT on one null-surface alone. In the holography of the AdS-CFT correspondence the bulk degrees of freedom pass to a conformal brane; in contradistinction to the holography on null-surfaces there is \textit{no reduction of degrees of freedom} resulting from a projection. Hence the AdS$-$CFT isomorphism, starting from a "normal" (causally complete, as formally arising from Lagrangians) 5-dimensional AdS theory, leads to a \textit{conformal field theory with too many degrees of freedom}. Since a "thinning out" by hand does not seem to be possible, the "physical health" of such a conformal QFT is somewhat dodgy, to put it mildly. In case one starts with a free Klein-Gordon field on AdS one finds that the generating conformal fields of the CFT are special \textit{generalized free fields}, i.e. a kind of continuous superposition of free fields. They were introduced in the late 50s by O. W. Greenberg, and their useful purpose was (similar to AdS in classical gravity) to \textit{test the physical soundness of axioms of QFT}, in the sense that if a system of axioms allowed such solutions it needed to be further restricted \cite{Ha-Sc} (in that case the so-called causal completion or time-slice property excluded generalized free fields).
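To make the last statement self-contained, recall the textbook definition (standard material, not specific to this paper) of a generalized free field with Källén-Lehmann weight $d\rho(m^{2})$:
\begin{align*}
\left[ A(x),A(y)\right]  & =i\int_{0}^{\infty}\Delta_{m}(x-y)\,d\rho(m^{2})\\
\left\langle \Omega\left\vert A(x)A(y)\right\vert \Omega\right\rangle  & =\int_{0}^{\infty}\Delta_{m}^{(+)}(x-y)\,d\rho(m^{2})
\end{align*}
Such a field is covariant, has positive energy and commutes at spacelike distances, but for a genuinely continuous $\rho$ it obeys no hyperbolic field equation, which is the precise sense in which it violates the causal completion (time-slice) requirement.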
It seems that meanwhile the word "physical" has changed its meaning; it is used for anything which originated from a physicist. In the opposite direction the degrees of freedom of a "normal" CFT become "diluted" on AdS in the inverse correspondence. There are not sufficient degrees of freedom for arriving at nontrivial compactly localized operators; the cardinality of degrees of freedom is only sufficient to furnish noncompact regions as AdS wedges with nontrivial operators, while the compactly localized double cone algebras remain trivial (multiples of the identity). In the setting based on fields this means that the restriction on test function spaces is so severe that pointlike fields $A_{AdS}(x)$ at interior points $x\in \operatorname{int}AdS$ do not exist in the standard sense as operator-valued distributions on Schwartz spaces. They exist on much smaller test function spaces which contain no functions with compact localizations. Both sides of the correspondence have been treated in a mathematically rigorous fashion for free AdS (Klein-Gordon equation) theories and free (wave equation) CFT \cite{Du-Re2}\cite{ReLec}, where the mismatch between degrees of freedom can be explicated, and the structural arguments based on the principles of general QFT show that this mismatch between the transferred and the natural cardinality of degrees of freedom is really there. In terms of the better known Lagrangian formalism the statement would be that if one starts from a Lagrangian theory on one side, the other side cannot be Lagrangian. Of course both sides remain QFTs in the more general sense of fulfilling the required symmetries, having positive energy and being consistent with spacelike commutativity. In the mentioned free field illustration an AdS Klein-Gordon field is evidently Lagrangian, whereas the corresponding \textit{conformal generalized free field} has no Lagrangian and cannot even be characterized in terms of a local hyperbolic field equation. According to the best educated guess, 4-dim. 
maximally supersymmetric Yang-Mills theories (if they exist and are conformal) would be natural conformal QFTs "as we know them" and therefore cannot come from a natural QFT on AdS. Needless to say again that there are severe technical problems in setting up a perturbation theory for conformally invariant interactions; the known perturbative systematics breaks down in the presence of infrared problems\footnote{A well-known problem is the massive Thirring model which leads to $\beta=0$ in all orders. In this case one already knew the conformal limit in closed form and was able to check the correctness of the relation by consistency considerations.}. I belong to a generation for which not everything which is mathematically possible must have a physical realization; in particular I do not adhere to the new credo that every mathematically consistent idea is realized in some parallel world (anthropic principle): no parallel universe for the physical realization of every mathematical belch. Generalized free fields\footnote{It is interesting to note that the Nambu-Goto Lagrangian (which describes a classical relativistic string) yields upon quantization a pointlike localized generalized free field with the well-known infinite tower mass spectrum and the appearance of a Hagedorn limit temperature. As such it is pointlike localized and there is \textit{no intrinsic quantum concept} which permits one to associate it with any stringlike localization.} and their interacting counterparts which arise from natural AdS free or interacting fields remain in my view unphysical, but are of considerable mathematical interest. They do not fit into the standard causal localization setting and they do not allow thermal KMS states without a limiting Hagedorn temperature (both facts are related). Nature did not indicate that it likes to go beyond the usual localizability and thermal behavior. 
If string theory demands such things it is not my concern; let Max Tegmark find another universe where nature complies with string theory. Holography is a technical tool and not a physical principle. It simplifies certain aspects of a QFT at the expense of others (i.e. it cannot achieve miracles). The use of such ideas in intermediate steps may have some technical merits, but I do not see any scientific reason to change my viewpoint about physical admissibility. The question of whether by changing the spacetime encoding one could simplify certain properties (e.g. detect integrable substructures) of complicated theories is of course very interesting, but in order to pursue such a line it is not necessary to physically identify the changed theory. Such attempts, where only one side needs to be physical and the role of holography would consist in exposing certain structural features which remained hidden in the original formulation, sound highly interesting to me. There is however one deeply worrisome aspect of this whole development. Never before have there been more than 4,700 publications on such a rather narrow subject; in fact even nowadays, one decade after this gold-digger's rush about the AdS-CFT correspondence started, there is still a sizable number of papers every month by people looking for nuggets at the same place but without bringing Maldacena's gravity-gauge theory conjecture any closer to a resolution. Even with making all the allowances in comparison with earlier fashions, this phenomenon is too overwhelming to be overlooked. Independent of its significance for particle physics and the way it will end, the understanding of what went on and its covering by the media will be challenging to historians and philosophers of science in the years to come. 
I know that it is contra bonos mores to touch on a sociological aspect in a physics paper, but my age permits me to say that at no time before was the scientific production in particle theory as strongly coupled to the Zeitgeist as during the last two decades; never before had global market forces such a decisive impact on the scientific production. Therefore it is natural to look for an explanation why thousands of articles are written on an interesting (but not clearly formulated) conjecture with hundreds of other interesting problems left aside; where does the magic attraction come from? Is it the Holy Grail of a TOE which sets into motion these big caravans? Did the critical power of past particle physics disappear in favor of acclamation? Why are the few critical but unbiased attempts only mentioned by the labels given to them and not by their scientific content? Since commentaries about the crisis in an area of which one is part run the risk of being misunderstood, let me make perfectly clear that particle physics was a speculative subject and I uphold that it must remain this way. Therefore I have no problem whatsoever with Maldacena's paper; it is in the best tradition of particle physics, which was always a delicate blend of a highly imaginative and innovative contribution from one author with profoundly critical analysis by others. I am worried about the loss of this balance. My criticism is also not directed against the thousands of authors who enter this area in good faith believing that they are working on an epoch-forming paradigmatic problem because their peers gave them this impression. Even if they entered for the more mundane reason of carving out a career, I would not consider this as the cause of the present problem. The real problem is with those who by their scientific qualifications and status are the intellectual leaders and the role models. 
If they abdicate their role as critical mediators by becoming the whips of the TOE monoculture of particle physics then checks and balances will be lost. Would there have been almost 5000 publications on a rather narrow theme (compared with other topics) in the presence of a more critical attitude from leading particle physicists? No way. Would particle theory, once the pride of theoretical physics with a methodological impact on many adjacent areas, have fallen into disrespect and become the object of mockery within the larger physics community? The list of questions of this kind with negative answers can be continued. It is worthwhile to look back at times when the delicate balance between the innovative and speculative on the one hand and the critical on the other was still there. Young researchers found guidance by associating themselves with "schools of thought" which were associated with geographical places and names such as Schwinger, Landau, Bogoliubov, Wheeler, Wightman, Lehmann, Haag... who represented different coexisting schools of thought. Instead of scientific cross fertilization between different schools, the new globalized caravan supports the formation of a gigantic monoculture and the loss of the culture of checks and balances. Not even string theorists can deny that this unfortunate development started with string theory. Every problem string theory addresses takes on a strange metaphoric aspect, an effect which is obviously wanted, as the fondness for the use of the letter M shows. The above mentioned AdS-CFT topic gives an illustration which (with a modest amount of mathematical physics) shows the clear structural QFT theorem as compared to the strange conjecture which even thousands of publications were not able to liberate from the metaphoric twilight. 
But it is a remarkable fact that, whenever string theorists explain their ideas by QFT analogs in the setting of functional integrals, as was done by Witten in \cite{Witten} for the $\varphi^{4}$ coupling, and on the other hand algebraic quantum field theorists present their rigorous structural method for the same model in the same setting \cite{Du-Re}, the two results agree (see also \cite{Got}). This is good news. But now comes the bad news. Despite the agreement the Witten camp, i.e. everybody except a few individuals, claims that there exist two different types of AdS-CFT correspondences, namely theirs and another one which at least some of them refer to as the "German AdS-CFT correspondence". Why is that? I think I know but I will not write it. At this point it becomes clear that it is the abandonment of the critical role of the leaders which is fuelling this unhealthy development. Could a statement "X-Y-Z theory is a gift of the 21st century which by chance fell into the 20th century" have come from Pauli, Schwinger, or Feynman? One would imagine that in those days people had a better awareness that mystifications like this could disturb the delicate critical counterbalance which the speculative nature of particle physics requires. The long range negative effect on particle theory of such a statement is proportional to the prominence and charisma of its author. There have been several books which criticise string theory. Most critics emphasize that the theory has not predicted a single observable effect and that there is no reason to expect that this will change in the future. Although I sympathize with that criticism, especially if it comes from experimentalists and philosophers, I think that a theorist should focus his critique on the conceptual and mathematical structure and not rely on help from Karl Popper or dwell on the non-existent observational support. Surprisingly I could not find any scholarly article in this direction. 
One of the reasons may be that after 4 decades of development of string theory such a task requires rather detailed knowledge about its conceptual and mathematical basis. As a result of this unsatisfactory situation I stopped my critical article \cite{crisis} from going into print and decided to re-write it in such a way that the particle physics part is strengthened at the expense of the sociological sections. The aforementioned situation of ignoring results which shed a critical light on string theory or the string theorists' version of the AdS-CFT correspondence is perhaps best understood in terms of the proverbial \textit{executing of the messenger who brings bad news}; the unwanted message in the case at hand being the \textit{structural} impossibility of having Lagrangian QFTs with causal propagation on both sides of the correspondence. It seems that under the corrosive influence of more than 4 decades of string theory, Feynman's observation about its mode of arguing being based on finding excuses instead of explanations, which two decades ago was meant to be provocative, has become the norm. The quantum gravity-gauge theory conjecture is a good example of how a correct but undesired AdS-CFT correspondence is shifted to the elusive level of string theory and quantum gravity, so that the degrees of freedom aspect becomes pushed underneath the rug of the elusive string theory where it only insignificantly enlarges the already very high number of metaphors. There has been an increasing number of papers with titles such as "QCD and a Holographic Model of Hadrons", "Early Time Dynamics in Heavy Ion Collisions and AdS/CFT Correspondence", "Confinement/Deconfinement Transition in AdS/CFT", "Isospin Diffusion in Thermal AdS/CFT with flavour", "Holographic Mesons in a Thermal Bath", "Viscous Hydrodynamics and AdS/CFT", "Heavy Quark Diffusion from AdS/CFT"... AdS/CFT for everything? 
Is string theory bolstered by AdS-CFT really on the way to becoming a TOE for all of physics, a theory for anything which sacrifices conceptual cohesion to amok-running calculations? Or are we witnessing a desperate attempt to overcome more than four decades of physical disutility? Perhaps it is only a consequence of the "liberating" effect of following prominent leaders who have forgone their duty as critical mediators and preservers of conceptual cohesion. 

\section{Concluding remarks} 

In these notes we revisited one of the oldest and still unsolved conceptual problems in QFT, the existence of interacting models. Besides some new concrete results about the existence of factorizing models (which only exist in d=1+1), it is the new method itself, with its promise to explore new fundamental and fully intrinsic properties of QFT, which merits attention. A particularly promising approach for the classification and construction of QFTs consists in using holographic lightfront projections (and in a later stage working one's way back into the bulk). In this situation the holographic degrees of freedom are thinned out as compared to the bulk, i.e. the extended chiral fields have a smaller number of degrees of freedom. The concept of degrees of freedom used here is a dynamical one. Knowing only a global algebra\footnote{Knowing an operator algebra means knowing its position within the algebra $B(H)$ of all operators. Knowing its net substructure means knowing the relative position of all its subalgebras.} as the wedge algebra, i.e. $\mathcal{A}(W)\subset B(H)$ as an inclusion into the full algebra, one uses fewer degrees of freedom than one needs in order to describe the full local substructure of $\mathcal{A}(W)$, i.e. knowing $\mathcal{A}(W)$ in the sense of a local net. The degrees of freedom emerge always from relations between algebras whereas the single algebra is a structureless monad \cite{Hol}. 
Saying that the net $\mathcal{A}(LF)$ has fewer degrees of freedom than the net associated with the bulk is the same as saying that the knowledge of the $LF$-affiliated wedges does not suffice to reconstruct the local bulk structure. In this sense the notion of degrees of freedom depends on the knowledge one has about a system; refining the net structure of localized subalgebras of a global algebra increases the degrees of freedom. The lightfront holography is a genuine projection with a smaller cardinality of degrees of freedom, i.e. without knowing how other Poincar\'{e} transformations outside the 7-parametric invariance group of the lightfront act, it is not uniquely invertible. On its own, i.e. without added information, the lightfront holography cannot distinguish between massive and massless theories; a transversely extended chiral theory does not know whether the bulk was massive or massless. The knowledge of how the opposite lightray translation $U(a_{-})$ acts on $\mathcal{A}(LF)$ restores uniqueness; but this action is necessarily "fuzzy", i.e. non-geometric, purely algebraic. Only upon returning to the spacetime ordering device in terms of the bulk does it become geometric. The hallmark of null-surface holography is an area law for localization entropy in which the proportionality constant is a product of a holographic matter dependent constant times a logarithmic dependence on the attenuation length for vacuum polarization. By far the more popular holography has been the AdS-CFT correspondence. Here its physical utility is less clear than the mathematical structure. Is there really a relation between a special class of conformal gauge invariant gauge theories and supersymmetric quantum gravity? Not a very probable consequence of a change of a spacetime ordering device for a given matter substrate, which is what holography means. Integrable substructures within such conformal gauge theories which become more overt on the AdS side? 
This appears a bit more realistic, but present indications are still very flimsy. \textbf{Acknowledgements}: I am indebted to B. Fauser, J. Tolksdorf and E. Zeidler for the invitation to participate in a 2007 conference in Leipzig and for hospitality extended to me during my stay.
package org.datanucleus.samples.validation; import javax.jdo.annotations.PersistenceCapable; import javax.validation.constraints.NotNull; import javax.validation.constraints.Size; /** * Sample persistable class with javax.validation annotations. */ @PersistenceCapable public class ValidatedPerson { @NotNull(message="Forename must be specified.") String forename; @NotNull(message="Surname must be specified.") String surname; @Size(min = 0, max = 9) String login; public ValidatedPerson(String firstname, String lastname) { this.forename = firstname; this.surname = lastname; } public void setLogin(String login) { this.login = login; } }
Q: Convert from Blender rotations to Right-Handed Y-Up rotations (Maya, Houdini, …)

Question:

My question is, how can I convert a Blender Z-Up RH rotation to Y-Up RH (Maya, Houdini, etc.) rotation?

(I believe Blender uses a Right-Handed Z-Up orientation... correct me if wrong)

Details:

Position seems pretty basic. I am simply getting the position column of the world matrix like so:

posX = lightObj.matrix_world[0][3]
posY = lightObj.matrix_world[2][3]
posZ = lightObj.matrix_world[1][3]

(swizzling Z and Y)

Rotations on the other hand seem to need more than a simple swizzle. I have figured out how to get the Blender world space rotation in XYZ by doing this:

lightObj.matrix_world.to_euler('XYZ')

Now if only I knew what to do with it... I have searched for conversion formulas online and come up with several incomplete/incorrect implementations.

• I'm not sure if it's correct, but in the past I've gotten correct-looking results by swizzling X > Z, Y > X, Z > Y. – gandalf3 Apr 29 '14 at 22:30
• I think swizzling works in some cases but not all, as in the swizzle is different in different quadrants or something. From what I gather I might need to build a matrix to convert coordinate systems, or make use of quaternions... – PolyMesh Apr 29 '14 at 22:39
• Just rotate all your objects by -90° around the x-axis. The rotation matrix will do the swizzle for you. Just make sure you are rotating the objects relative to the world center, not relative to the object center. Then the relative positions of different objects will also be transformed correctly. – maddin45 Apr 30 '14 at 6:50
• maddin45, I am not able to get this to work. I am getting a x = -90 rotation matrix, multiplying it with the matrix_world of the object, then getting the euler from there, and the results are pretty different. Can you tell me what I am doing wrong with this method? – PolyMesh Apr 30 '14 at 7:39

A: Finally got it! There were a few problems with my earlier attempts. The one that was throwing me off the most was incorrect position (because the rotations never looked consistently right with a wrong position).

It turns out I had the position wrong because the +Y in Blender is actually -Z in a Y-Up RH coordinate system. Or in other words when converting the code is:

posX = lightObj.matrix_world[0][3]
posY = lightObj.matrix_world[2][3]
posZ = -lightObj.matrix_world[1][3] # note the negative

(swizzle YZ and negate Z)

Another problem I encountered... It turns out there is a special case when exporting lights, which I didn't realize earlier. In Blender, the cone of a spotlight with zero rotations aims straight down, where in Maya, Houdini, and my game engine, spotlights face down the -Z. (If Blender were to match this it would aim down the +Y)

So with these in mind, I came to this for rotations:

# for objects
obj = bpy.data.objects['objName']
mm = bpy_extras.io_utils.axis_conversion(from_forward='Y', from_up='Z', to_forward='-Z', to_up='Y')
om = obj.matrix_world.to_3x3()
t = mm * om
v = t.to_euler('XYZ')
print('pos:(%s, %s, %s)' % (obj.matrix_world.translation.x, obj.matrix_world.translation.z, -obj.matrix_world.translation.y))
print('rot:(%s, %s, %s)' % (degrees(v.x) + 90.0, degrees(v.y), degrees(v.z)))
# not totally sure why we need the +90 in X, but assume it has something to do with compensating for the axis_conversion

# for spot lights (they face straight down in Blender)
obj = bpy.data.objects['spotLightName']
mm = bpy_extras.io_utils.axis_conversion(from_forward='Y', from_up='Z', to_forward='-Z', to_up='Y')
om = obj.matrix_world.to_3x3()
t = mm * om
v = t.to_euler('XZY') # not sure why this needs to be XZY for spot lights and XYZ for objects
print('pos:(%s, %s, %s)' % (obj.matrix_world.translation.x, obj.matrix_world.translation.z, -obj.matrix_world.translation.y))
print('rot:(%s, %s, %s)' % (degrees(v.x), degrees(v.y), degrees(v.z)))
# note: there is no need for the axis_conversion rotation compensation due to the spotlight facing down already

As you can see in the comments, there are still things I am not clear on and not sure I am doing right, so please feel free to comment/correct me where wrong.

I have a feeling something might be off with my rotation order stuff, but I don't know what it should be. All I know intuitively is that it makes sense to use XZY since that is what I am swizzling to; that should match the post-swizzled rotation order, so it is XYZ in both systems, right?
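The position part of the conversion above is pure axis bookkeeping and can be checked without Blender. Below is a minimal plain-Python sketch (the function names are made up for illustration) that applies the same swizzle-and-negate rule from the accepted answer, (x, y, z) → (x, z, -y), to an ordinary tuple:

```python
# Convert a point from Blender's right-handed Z-up frame to a
# right-handed Y-up frame (Maya/Houdini convention), following the
# accepted answer: swizzle Y and Z, then negate the new Z.
def convert_position(p):
    x, y, z = p
    return (x, z, -y)

# Round trip: Y-up back to Z-up is (x, -z, y).
def convert_position_back(p):
    x, y, z = p
    return (x, -z, y)

if __name__ == "__main__":
    blender_point = (1.0, 2.0, 3.0)
    yup_point = convert_position(blender_point)
    print(yup_point)                          # (1.0, 3.0, -2.0)
    print(convert_position_back(yup_point))   # (1.0, 2.0, 3.0)
```

Applying the inverse after the forward conversion returns the original point, which is a quick sanity check that the two frames really are related by a single -90° rotation about X.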
is a very insiderly source, run by cutting-edge editors. neural net, through various instantiations, to use English As She Is Spoke. far as I can see pretty well informed with up-to-date AI theory.
This kind of news always excites us. Jumping Back Slash has released the 6th EP in the JBS00-something series and we couldn't be happier. Continuing this excellent series of EPs after beginning the new MM00-something series earlier this year. This is one of an almost incessant (but still exciting) run of independent releases on his Bandcamp (a service for which he is a vocal proponent) over the last few years. The EP as a whole comes across as a considered exploration of celebratory and spacious sounds that whack as much as they woo. It's smoothly comforting as much as it motivates for a blurry, late night hedonistic throw-down. The opener 'Fall in Luv' successfully instructs you to do just that; get settled into something fresh and new from the massively multifaceted producer, with all the butterflies and excitement that falling in 'luv' entails. It's got majestic vocal pads for the lift, crunching kicks to smash you down and jubilant 'Hey!'s and 'Wooh!'s to get you back up every time. If track one is falling in love, then 'Tuesday' is a honeymoon shuffle. The track settles into something more comfortably recognisable yet no less vibrant from the Knysna-based super-producer. That leads into the duplicitously dark-and-bright, sci-fi romp with notes of joyous abandon, 'No Summer'. This is one for the dark, deep, late night/early morning dancefloors. The closer, titled 'Life Pass Me By,' is a lush, lamenting anti-ballad that either abstractly tells a story of mourning the loss of youth, or flippantly brushes aside the passing of time. It's hard to tell but easy to get sucked into, either way. The EP is no doubt a unique offering within his vast and frequently-expanding discography. Not only that, but it's a clear evolution that doesn't force the 'dance' aspect that dominates much of his sound's musical heritage. It bodes well for the upcoming (rumoured) label release coming later this year.
Q: How to create a request to an external web service according to how a user behaves in the Magento front end

I have to build a web store in Magento and integrate it with an external system. This system provides several functionalities:

Update stock of products
Import products from stock

and when a stock or product list has been updated via the front end in Magento, Magento has to send a notification request to the external system about the change. For the first two I know there are API methods written in Magento, but can you please explain to me how I can achieve this task? Thanks in advance!
Q: How to fix "No Module name pyspark" from exe

I have created a program in Pyspark/Python using the Spyder IDE. The program uses the Pyspark library and runs perfectly fine when I run it from the Spyder IDE. I created an exe of the same program using PyInstaller. When I run the exe from the command prompt it gives the error "No module name Pyspark". Please help/suggest. Thank you.

A: Have you installed it with pip?

pip install pyspark

It must be installed globally, not only in your environment, if you have one.
Q: Subtract CGRect from CGRect -- largest piece of one not containing the other

How can I subtract one CGRect from another? I want the result R1 - R2 to be the largest subrectangle of R1 that does not intersect R2.

Example 1:

+----------------------------------+
| +--------+                       |
| |   R2   |                       |
| |        |                       |
| +--------+                   R1  |
|                                  |
|                                  |
+----------------------------------+

R3 = CGRectSubstract(R2,R1);

+----------------------+
|                      |
|                      |
|                      |
|          R3          |
|                      |
|                      |
+----------------------+

Example 2:

+-----------------------+----------+
|                       |          |
|          R2           |          |
|                       |          |
|  R1                   +----------+
|                                  |
|                                  |
+----------------------------------+

R3 = CGRectSubstract(R2,R1);

+-----------------------+
|                       |
|                       |
|                       |
|          R3           |
|                       |
|                       |
+-----------------------+

Example 3:

+----------------------------------+
|                                  |
|                                  |
|                R1                |
|         +---------+              |
|         |         |              |
|         |   R2    |              |
+---------+---------+--------------+

R3 = CGRectSubstract(R2,R1);

+----------------------------------+
|                                  |
|                                  |
|                R3                |
|                                  |
+----------------------------------+

A: Your definition is fairly ambiguous -- what determines whether the subtraction is horizontal or vertical? I recommend using a combination of CGRectIntersection and CGRectDivide, along with specifying a direction to remove ambiguity. (not tested, or even compiled)

CGRect rectSubtract(CGRect r1, CGRect r2, CGRectEdge edge)
{
    // Find how much r1 overlaps r2
    CGRect intersection = CGRectIntersection(r1, r2);

    // If they don't intersect, just return r1. No subtraction to be done
    if (CGRectIsNull(intersection)) {
        return r1;
    }

    // Figure out how much we chop off r1
    float chopAmount = (edge == CGRectMinXEdge || edge == CGRectMaxXEdge)
                           ? intersection.size.width
                           : intersection.size.height;

    CGRect r3, throwaway;
    // Chop
    CGRectDivide(r1, &throwaway, &r3, chopAmount, edge);
    return r3;
}

A: CGRect newRect = CGRectMake(0, 0, rect2.size.width - rect1.size.width, rect2.size.height - rect1.size.height);

In response to your illustration, this code I've given you here will do exactly what you want (assuming you don't care about the origin XY coordinates). I've looked through the docs for CGGeometry functions, and there doesn't seem to be a CGRectDifference or other such method defined. There is, however, CGRectUnion, but that does the opposite of what you are looking for.

A: Would probably go something like this:

CGRect frame = CGRectMake(0, 0, 320, 480);
float aWidth = frame.size.width;   /* say for instance 320 */
float aHeight = frame.size.height; /* say for instance 480 */
int final = aWidth - aHeight;
NSLog(@"Should be -160, your answer: %i", final);
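The ambiguity the first answer points out can also be resolved automatically: try all four strips of R1 that lie to the left of, right of, below, and above R2, and keep the one with the largest area. Here is a language-neutral Python sketch of that idea (rectangles as (x, y, w, h) tuples; this is not the CoreGraphics API, just the geometry):

```python
# Largest axis-aligned subrectangle of r1 that does not intersect r2,
# found by trying the four strips of r1 on each side of r2 and keeping
# the one with the largest area. Rects are (x, y, w, h) tuples.
def rect_subtract(r1, r2):
    x1, y1, w1, h1 = r1
    x2, y2, w2, h2 = r2

    # No overlap: r1 itself already qualifies.
    if x2 >= x1 + w1 or x2 + w2 <= x1 or y2 >= y1 + h1 or y2 + h2 <= y1:
        return r1

    candidates = [
        (x1, y1, max(0, x2 - x1), h1),                     # strip left of r2
        (x2 + w2, y1, max(0, (x1 + w1) - (x2 + w2)), h1),  # strip right of r2
        (x1, y1, w1, max(0, y2 - y1)),                     # strip below r2
        (x1, y2 + h2, w1, max(0, (y1 + h1) - (y2 + h2))),  # strip above r2
    ]
    return max(candidates, key=lambda r: r[2] * r[3])
```

For Example 2, with r1 = (0, 0, 10, 6) and r2 = (4, 0, 6, 6) covering the right side, the left strip (0, 0, 4, 6) wins, matching the R3 the question draws.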
Gilles Peterson, Worldwide FM and Brownswood Recordings have announced a new UK festival running from Thursday 15th – Sunday 18th August 2019. We Out Here takes its name from the 2018 Brownswood compilation that's become a calling card for the recent UK jazz renaissance, so the forthcoming lineup will no doubt be full of its contributors. Billed as a festival for clubbers and families, the programme promises to cover the full breadth of DJ and live talent that orbits Gilles' universe, with the community values of its creators at its core. We Out Here builds on Gilles' annual summer and winter festivals in France, as well as the London awards evening to become his first UK festival adventure and biggest live music project to date. Location and lineup will be announced 6th February. Sign-up for pre-sale tickets at the We Out Here website. Watch Gilles unpack the impact of Lonnie Liston Smith's album Expansions.
# How is division by zero avoided when implementing back-propagation for a neural network with sigmoid at the output neuron?

I am building a neural network for which I am using the sigmoid function as the activation function for the single output neuron at the end. Since the sigmoid function is known to take any number and return a value between 0 and 1, this causes a division-by-zero error in the back-propagation stage because of the derivative of the cross-entropy. I have seen it advised all over the internet to use a sigmoid activation function with a cross-entropy loss function.

So, how is this error solved?

• It's not quite clear what you are asking. How will it cause a division-by-zero error? Can you describe more about the model? (Jun 2, 2018)
• It has been suggested to me that adding a small constant to the denominator will prevent the divide-by-zero error. Contrary to the accepted answer, this does cause issues of giving outputs of infinity rather than zero. (Jun 8, 2020)

## Answer

Cross-entropy loss is given by

$$L = -\left(y_i \log \tilde y_i + (1 - y_i)\log(1 - \tilde y_i)\right),$$

where $\tilde y_i = \operatorname{sigmoid}(z)$ is the prediction and $y_i$ the target.

Now, as we know, the sigmoid function outputs values between 0 and 1, but what you have missed is that it cannot output exactly 0 or exactly 1: for that to happen, $z$ would have to be $+\infty$ or $-\infty$.

Although your compiler gives a divide-by-zero error because very small floating-point numbers are rounded off to 0, it is practically of no importance, since it can happen in only two cases:

1. $\operatorname{sigmoid}(z) = 0$: even though the compiler cannot calculate $\log(0)$ (the first term in the equation), that term is ultimately multiplied by $y_i$, which will be 0, so the final answer is 0.
2. $\operatorname{sigmoid}(z) = 1$: even though the compiler cannot calculate $\log(1-1)$ (the second term in the equation), that term is ultimately multiplied by $1 - y_i$, which will be 0, so the final answer is 0.

There are a few ways to get past this if you don't want the error at all:

• Increase the precision of your arithmetic to float64 or higher, if available.
• Write the program in such a way that anything multiplied by 0 is 0, without evaluating the other factor.
• Write the program to handle such cases specially.

Implementation side note: you cannot bypass a divide-by-zero error with a manual exception handler on most processors (AFAIK), so you have to make sure the error does not occur at all.

NOTE: It is assumed that random weight initialisation ensures that, at the beginning of training, it does not happen that $\tilde y$ or $1-\tilde y$ is 0 while the target is exactly the opposite; and that, as training progresses, the output approaches the target, so the two cases above hold.

Comments on the answer:

• The loss you mentioned is logistic loss, not cross-entropy loss. Logistic loss assumes binary classification, with 0 corresponding to one class and 1 to the other. Cross-entropy is used for the multi-class case, where the predictions sum to 1; the formula is just the negative sum of each label multiplied by the log of each prediction. (Feb 11, 2020)
• I'm not sure of the terminology either. For example, in PyTorch "cross entropy loss" means softmax loss, whereas the logistic/cross-entropy loss is named "binary cross entropy loss". (Feb 11, 2020)
• Also, if the sigmoid returns almost zero, it doesn't mean that the label y is equal to zero; the same holds when the sigmoid returns one. The model can miss, which happens almost every time when training starts. The sigmoid of z is the model's output; y is the ground-truth label from the dataset that the output is compared with. (Feb 17, 2020)
• I have run into the OP's problem with exactly this set-up (binary cross-entropy loss and logistic sigmoid). Mine do not render as simply 0 during back-propagation, as this answer suggests; I get a lot of infinity values when sigmoid(z) saturates. The problem isn't trivial, but the solution is; see the final two lines of Giang Tran's derivation in math.stackexchange question 2503428. (Jun 7, 2020)
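The "small constant" workaround mentioned in the comments is easy to sketch numerically. This snippet is illustrative and not part of the original thread; the epsilon value is an arbitrary choice:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce(y, y_hat, eps=1e-12):
    # Clamp the prediction away from exactly 0 and 1 so that neither
    # log() call can receive 0; this is what keeps the backward pass
    # free of division by zero / -inf.
    y_hat = min(max(y_hat, eps), 1.0 - eps)
    return -(y * math.log(y_hat) + (1.0 - y) * math.log(1.0 - y_hat))

# In float64, sigmoid(40.0) rounds to exactly 1.0, yet the loss stays finite:
loss = bce(1.0, sigmoid(40.0))
```

Without the clamp, `bce(0.0, 1.0)` would try to evaluate `log(0)`; with it, both saturated cases stay finite.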
Lampropeltis extenuata is a species of snake in the family Colubridae.

Distribution
This species is endemic to Florida in the United States.

Description
The holotype of Lampropeltis extenuata measures …, of which … for the tail. This species has a silvery-grey dorsal surface bearing 61 dark brown blotches ringed with black between the head and the vent, and 11 on the tail. Its ventral surface is silvery-grey spotted with black.

Original publication
Brown, 1890: On a new genus of Colubridae from Florida. Proceedings of the Academy of Natural Sciences of Philadelphia (full text).
'use strict';

var Promise = require('bluebird');
var chalk = require('chalk');
var inquirer = require('inquirer');
var fsStat = Promise.promisify(require('fs').stat);
var fsUnlink = Promise.promisify(require('fs').unlink);
var fsRmdir = Promise.promisify(require('rimraf'));

module.exports = function modExports(args) {
  var searchTerms;
  if (!args._.join('')) {
    console.log(chalk.red('You need to search for a specific post before you can remove it. Check `hexo help remove` for usage details.'));
    process.exit();
  } else {
    // every whitespace-separated word in the input search is a case-insensitive regular expression
    searchTerms = args._.map(function mapSearchTerms(arg) {
      return new RegExp(arg, 'i');
    });
  }

  // load database
  this.load().then(function loadDb() {
    var locals = this.locals;

    // load posts and pages
    getArticles(locals).then(function loadArticles(articles) {
      return selectArticle(articles);
    }).then(function processSelected(selected) {
      return confirmRemove(selected);
    }).catch(function failProcessSelected(err) {
      console.log(err.stack ? chalk.red(err.stack) : chalk.gray(err));
      process.exit();
    });

    function getArticles(data) {
      return Promise.resolve(data.get('posts').toArray().concat(locals.get('pages').toArray()));
    }

    function filterOnName(articles, terms) {
      return articles.filter(function filterArticles(article) {
        return terms.every(function checkRE(term) {
          return term.test(article.title) || term.test(article.slug);
        });
      });
    }

    function selectArticle(items) {
      var filtered = filterOnName(items, searchTerms);
      if (filtered.length === 0) {
        return Promise.reject(chalk.red('No posts or pages found using your query.'));
      }
      if (filtered.length === 1) {
        return Promise.resolve(filtered[0]);
      }
      var entries = filtered.map(function mapEntries(article) {
        return [article.title, ' (', chalk.green(article.source), ')'].join('');
      });
      return inquirer.prompt([
        {
          type: 'list',
          name: 'selected',
          message: 'Select the post or page you wish to rename.',
          choices: entries,
        },
      ]).then(function getAnswer(answer) {
        var pos = entries.indexOf(answer.selected);
        return filtered[pos];
      });
    }

    function confirmRemove(post) {
      var message = '\n - Remove ' + chalk.green.underline(post.title) + '?\n' +
        chalk.red.bgBlack('Warning: this action is irreversible!');
      var del = chalk.red('Delete it!');
      var can = chalk.green('Cancel');
      return inquirer.prompt([
        {
          type: 'list',
          message: message,
          name: 'answer',
          choices: [del, can],
        },
      ]).then(function getResponse(response) {
        var ans = response.answer;
        switch (ans) {
          case del:
            remove(post);
            break;
          case can:
            console.log(chalk.gray('OK. See you later.'));
            process.exit();
            break;
          default:
            console.log(chalk.red('This shouldn\'t happen, please file a bug report.'));
            process.exit();
        }
        return chalk.gray('Done.');
      });
    }

    function remove(post) {
      var src = post.full_source;
      // first check if the file exists
      fsStat(src).then(function getStats(stats) {
        if (stats.isFile()) {
          // delete the file
          fsUnlink(src).then(function unlinkFile() {
            var assetDir = src.substr(0, src.lastIndexOf('.'));
            console.log(chalk.red.underline(src + ' deleted.'));
            // check for the asset directory
            fsStat(assetDir).then(function getAssetDirStats(adStats) {
              if (adStats.isDirectory()) {
                // delete the asset dir
                fsRmdir(assetDir).then(function unlinkAssetDir() {
                  console.log(chalk.red.underline(assetDir + ' (asset directory) deleted.'));
                });
              }
            }).catch(function failAssetDir() {
              console.log(chalk.gray('No asset dir found.'));
            });
          });
        }
      }).catch(function failRemove(err) {
        console.log(chalk.red('Problem deleting article :', err));
        process.exit();
      });
    }
  }.bind(this));
};
Own your social media - install Storytlr | Dan Moore!

I guess I'm just not very trusting, because I like to have copies of my data. I host my own blog, rather than use Blogger or wordpress.com. I host my own email (or at least one of my two main accounts). I prefer to document interesting things on my blog, rather than a site like Quora or Stack Overflow (though I do have an account on the latter). Heck, even though I use an OpenID provider, my own domain is the master, and I just delegate to myopenid.com.

It was pretty trivial to install. I ran into this issue with Storytlr not recognizing that PDO was installed, but the fix (hacking the install script) worked, and I didn't run into the Zend error also in that bug post. I also ran into an issue where I chose an admin password of less than six characters on install. Storytlr was happy to let me do that, but then wouldn't let me enter the exact same password when I was logging in for the first time. To fix this, I had to update the password column in the users table with a new MD5 string, created using this tool.

So, what does Storytlr actually give me?

- Access to my data: I set up feeds to be polled regularly (requires access to cron) and can export them to CSV whenever I want. And I keep them as long as I want to.
- One single point of view of all my social content.
- A really easy way to add more feeds if I join a new social network. Here are the sites/networks Storytlr supports right now.

On the downside:

- Technical issues, resolved as documented above.
- No support for Facebook. (Well, there's this experimental support, announced here, but nothing that is part of the project.) This is big, given how bad Facebook is with respect to privacy. I am not sure what my next steps are here.
- Not wanting others to have access to my lifestream. This was easily fixed with an Auth directive.

If you are depending on social media sites, have some technical chops, a server to host it on, and want to ensure a historical archive, you should look at Storytlr.
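The manual password fix described above boils down to computing an MD5 hex digest of the new password and writing it into the users table. A minimal sketch, assuming Storytlr stores the bare, unsalted MD5 hex digest (which is what the post implies):

```python
import hashlib

def md5_hex(password: str) -> str:
    # Storytlr-era PHP apps typically stored md5(password) as a hex string.
    return hashlib.md5(password.encode("utf-8")).hexdigest()

# Use the digest in something like:
#   UPDATE users SET password = '<digest>' WHERE username = '<admin>';
digest = md5_hex("password")  # -> "5f4dcc3b5aa765d61d8327deb882cf99"
```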
It can't happen here: A review of Live Not By Lies
By Francis X. Maier | Catholic World Report

In January 2017, three days before Barack Obama left the White House, the New York Times published an opinion piece entitled "Reading the Classic Novel That Predicted Trump." Written by Beverly Gage, it spoke darkly of parallels between the 1935 Sinclair Lewis fantasy, It Can't Happen Here, and the incoming new president. In the Lewis novel, a populist bully, Berzelius Windrip, sweeps to power in the Great Depression. He attacks blacks and Jews, the "lies" of the press, and the elitism of intellectuals. He promises "every real American family" a cash bonus. Once in office, he locks up Congress and installs a homespun fascism. As Gage fretted in the Times:

At a moment when instability seems to be the only constant in American politics, It Can't Happen Here offers an alluring (if terrifying) certainty: It can happen here, and what comes next will be even ghastlier than you expect . . . If Lewis's postelection vision is what awaits us, there will be little cause for hope, or even civic engagement, in the months ahead. The only viable options will be to get out of the country — or to join an armed underground resistance.

Time has been cruel to the Lewis novel. Fascism never came close to power in the United States, not even under the dreaded Donald. And compared to Orwell's 1984 or Zamyatin's We, It Can't Happen Here is second-rate literature. But the Times article is still instructive. It's the voice of a coastal ruling class freaked by the prospect of troglodytes from the flyover colonies wrecking their lawn party. One needn't be a Donald fan to read the last four years of media loathing and congressional guerrilla warfare for what they are: a slow-motion coup by the nation's "right people," the "best people," against a vulgar — if, alas, constitutionally legitimate — intruder.
Trump clearly earned part of that hostility; but only part. As a think-tank friend likes to quip, nothing is more suggestive of Washington's entrenched government class these days than the 1978 film, Invasion of the Body Snatchers. The aliens running things may look human, but they recoil and shriek at any outsider who's not One of Their Own. Here in the real 2020, the conditions that produced a "Big Man" style dictatorship — a Hitler or Franco or Mussolini — simply don't exist in advanced economies. It Can't Happen Here really can't happen here without a collapse in the U.S. standard of living. But something worse can happen, as Rod Dreher argues persuasively in his latest book, Live Not By Lies: A Manual for Christian Dissidents. As events would have it, we don't need an American Caesar or the theatrics of a Rubicon crossing. Our political institutions and public consciousness can be, and are being, transformed from the inside out, without any melodrama. The result, says Dreher, will be a comfortable servitude, a "soft totalitarianism," run by a technocratic, progressive elite, and supported by Big Data and a compliant capitalism. Everyday life will be far closer to the sunny brain-scrub of Aldous Huxley's Brave New World than the shabbiness and goon-squad brutality of Orwell's Airstrip One. Dreher has been the canary in a cultural coal mine for some time. He wrote compellingly about "post-Christian" America long before many Christians were willing to admit the obvious. His best-selling 2017 book, The Benedict Option, linked modern believers to their monastic past for the tools to thrive in unfriendly times. He has two goals in Live Not By Lies, a fitting sequel to his earlier work. He seeks first to explain what's reshaping American culture and why; and then to suggest the strategies needed today to live and witness Christian hope, despite the changing terrain. 
Dreher has a simple, vigorous, engaging style, backed up by exhaustive research and numerous interviews with survivors of Soviet era repression. His book's title — "Live Not By Lies" — is taken from a 1974 essay by the great Russian dissident, Alexander Solzhenitsyn. And logically so. A survivor of the gulag, Solzhenitsyn committed his life to attacking the mendacity and murderous delusions of Marxist-Leninist ideology. Stalin and his millions of victims were not an "aberration" of the socialist system. They were the inevitable fruit of deceits congenital to Marxist and progressive thought. For Solzhenitsyn, the label "progressive" itself was a misnomer, an example of overweening conceit and skillful self-deception. The materialist view of man was not simply wrong, but a poisonous lie. Dreher borrows this basic insight and applies it to the smiley-face atheism at the heart of modern technocratic thought. The lie that infects the DNA of atheism kills. Whether the killing is quick and brutal, or a slow, soft strangulation of the spirit, the result is the same. Part One of Dreher's book argues, to quote the author, that "despite its superficial permissiveness, liberal democracy is degenerating into something resembling the totalitarianism over which it triumphed in the Cold War." Dreher examines the seemingly implausible, but very real, parallels between our own society and the ones that gave birth to the totalitarianisms of the last century. Part Two of the text outlines the "forms, methods and sources of resistance" we might use to push back against "soft totalitarianism's lies." The chapters in Part One on "Progressivism as Religion" and "Capitalism, Woke and Watchful," are especially strong. Anyone imagining big business as instinctively conservative need only remember the speed with which corporations jumped on the same-sex marriage and "gay rights" bandwagon. 
The lavish business support showered on the "Black Lives Matter" (BLM) movement is also revealing, since — beneath its calls for racial justice — the BLM agenda is toxic to what most Americans believe. The lesson here is simple: Absent a grounding in broadly biblical principles, corporations follow profits, wherever they lead. In Part Two, the chapters on cultural memory, families as resistance cells, and "the gift of suffering," make for essential reading. The excellence of this text flows not just from the richness of its content, or the clarity and passion of its presentation, but also from the providential nature of its timing. We live in a uniquely weird moment of uncertainty: a time of peril from a changing culture, but also of opportunity to witness, with our lives, the power of what we believe. It demands a new kind of missionary work, done family to family, friend to friend, local church to local church. It's a moment when many of our Christian leaders, including Catholic leaders, seem too weak, or confused, or coopted — or dealing with regimes like China, too deluded — to inspire trust. But the work of the Gospel still does need to be done. And that's on us. It's also why a book like Live Not By Lies is so important. This article was originally published by Catholic World Report. It is republished here with permission. Previous articleWhy is devotion to Mary important? Next articleFour principles for Catholics during election season Mary Beth Bonacci
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,661
Well after my weekend I am still hanging in there. I was very upset when my weight went to 155 to 156. It was so hard seeing the scale that high after all my hard work. Then on Sunday I weigh myself and I am 154. I don't know if Saturday or Sunday is right, but I'll take 154. I already eaten a pint of Ben & Jerry's on Saturday so I told myself I have to try and work off some of those calories. So I have worked out every morning this week with my usual 30 minutes. Then on Monday my boss took everyone out to Olive Garden. I had 2 breadsticks, salad, and whole wheat pasta with five cheese marinara. I added in all my calories and lunch was 1,050 calories, so with my light breakfast and light dinner I was still in goal range. Then today I went out with a few co-workers since one of them is leaving and I ate well. I know on Thursday I have a lunch meeting (more food!) and next week I have diner plans two nights. All I keep thinking is if I can eat really good for breakfast and dinner, then try and pick healthy options I will be ok. And the weekend is coming up and I always eat bad, but I am going to concentrate on getting through tomorrow first.
{ "redpajama_set_name": "RedPajamaC4" }
3,889
Dražljáj ali stímulus je v fiziologiji določena količina kake energije, ki vpliva na čutilni receptor ali vzdražno tkivo, na primer bolečinski dražljaj, ki vpliva na receptorje za bolečino, ali vidni dražljaj, ki vpliva na mrežnico. Dražljaji so lahko kemični, električni, mehanični, svetlobni ali toplotni. Čutilni receptorji so zelo selektivni za določeno vrsto dražljaja (zato na primer zvok ne vpliva na čutilne celice v očesu). Po delovanju dražljaja na čutilni receptor se spremeni prevodnost membrane receptorja za enega ali več ionov in posledično se spremeni membranski potencial receptorja; pride lahko do hiperpolarizacije ali depolarizacije membrane. Signal se prenese v osrednje živčevje v obliki akcijskih potencialov, ki se prožijo zaradi sprememb membranskega potenciala. Na osnovi vrste dražljaja tudi delimo čutilne receptorje v organizmu: fotoreceptorji se odzivajo na svetlobne dražljaje mehanoreceptorji se odzivajo na mehanične dražljaje termoreceptorji se odzivajo na toplotne dražljaje kemoreceptorji se odzivajo na kemične dražljaje (recimo kemične snovi, ki povzročajo zaznavo vonja) nociceptorji se odzivajo na bolečinske dražljaje ... Viri Nevrofiziologija Čutila
{ "redpajama_set_name": "RedPajamaWikipedia" }
3,392
{"url":"https:\/\/puzzling.stackexchange.com\/questions\/72810\/palindromic-number-puzzle-make-505-from-20202?noredirect=1","text":"# Palindromic number puzzle - make 505 from 20202\n\nCan you assemble a formula using the numbers $$2$$, $$0$$, $$2$$, $$0$$, and $$2$$ in any order so that the results equals $$505$$. You may use the operations $$x + y$$, $$x - y$$, $$x \\times y$$, $$x \\div y$$, $$x!$$, $$\\sqrt{x}$$, $$\\sqrt[\\leftroot{-2}\\uproot{2}x]{y}$$ and $$x^y$$, as long as all operands are either $$2$$, $$0$$, $$2$$, $$0$$, or $$2$$. Operands may of course also be derived from calculations e.g. $$200*(2+2)$$. You may also use brackets to clarify order of operations, and you may concatenate two or more of the five digits you start with (such as $$2$$ and $$0$$ to make the number $$20$$) if you wish. You may only use each of the starting digits once and you must use all five of them. I'm afraid that concatenation of numbers from calculations is not permitted, but answers with concatenations will get plus one from me.\n\nAny finite number of functions can be used, though ingenious solutions with infinite numbers of functions will get plus one from me.\n\nNote that\ndouble, triple, etc. factorials (n-druple-factorials), such as $$6!! = 6 \\times 4 \\times 2$$ are not allowed, but factorials of factorials are fine, such as $$((2+0!)!)! = 6! = 720$$. 
I will upvote answers with double, triple and n-druple-factorials which get the required answers, but will not mark them as correct - particularly because a general method was developed by @Carl Schildkraut to solve these puzzles provided that you can make a single number greater than $$2$$ two times from the numbers you start with - here it could be $$20+0$$ and $$20$$ to get two $$20$$s, for example.\n\nmany thanks to the authors of the similar questions below for inspiring this question.\n\n$$\\sqrt{\\frac{{((2^{2})!)!}}{20!}+0!}$$\n$$504$$ is equal to $$24\\times21$$ and you can find $$24$$ with two factorials in a row with two $$2$$, but we cannot have 21 without using $$2$$ and $$1$$ at the same time, but we had 2 and 0. So the rest becomes suspicious since only 23 and 22 was missing in between which makes me close to 504 more and we could eliminate less than 20! with 2 and 0 and so on.","date":"2022-08-18 16:33:56","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 36, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8286652565002441, \"perplexity\": 244.38955697740846}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": 
\"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-33\/segments\/1659882573242.55\/warc\/CC-MAIN-20220818154820-20220818184820-00774.warc.gz\"}"}
package com.txsec;

import com.txsec.model.Model;

import org.lwjgl.LWJGLException;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;
import org.lwjgl.opengl.GL11;

public class Main {

    private void start() {
        try {
            Display.setDisplayMode(new DisplayMode(800, 600));
            Display.create();
            Display.setTitle("OpenGL Learning");
            GL11.glViewport(0, 0, 800, 600);
        } catch (LWJGLException e) {
            e.printStackTrace();
            System.exit(0);
        }

        // init OpenGL here

        // CREATE THE VAO
        float[] data = {
            // Left bottom triangle
            -0.5f,  0.5f, 0f,
            -0.5f, -0.5f, 0f,
             0.5f, -0.5f, 0f,
            // Right top triangle
             0.5f,  0.5f, 0f,
        };
        int[] indices = {0, 1, 2, 2, 3, 0};
        Model model = new Model(data, indices);

        while (!Display.isCloseRequested()) {
            // render OpenGL here
            GL11.glClearColor(0, 0, 0, 0);
            GL11.glClear(GL11.GL_COLOR_BUFFER_BIT); // actually clear the frame, not just set the clear colour
            model.render(); // Render our model.
            Display.update();
            Display.sync(60);
        }

        model.cleanUp();
        Display.destroy();
    }

    /**
     * The entry point of the application.
     * @param argv Arguments.
     */
    public static void main(String[] argv) {
        new Main().start();
    }
}
# Small caps in title but not for words like "of"

I want the command \textsc, but not for all words (words like "of" or "to" should be set the normal way, without small caps).

    \documentclass{article}

    \begin{document}
    \textsc{Contribution submission to the conference}
    \end{document}

How can I use the command \textsc and automate this small-caps thing? How does it work if I use \title and \@title to place/create my title?

• Can you elaborate a little? Using \textsc{submission}, "submission" will be set in all-lowercase small-caps font shape, rather than uppercase "S" and lowercase "ubmission". It would seem you are not talking about auto-capitalization of words, but rather about using a roman (upright) font versus a small-caps font. Is that a correct understanding of your question? By the way, welcome to the site. (Steven B. Segletes, Feb 17 '17)
• Which engine do you use, by the way? (TeXnician)
• Which of these two cases are you seeking? \textsc{Contribution submission \textup{to} the conference} or \textsc{Contribution Submission to The Conference}? If the second case, see the titlecaps package. (Steven B. Segletes)
• Note that it will look very strange if you want the \textup{} mixture, as your wording suggests. (cfr)
• So you are looking for titlecaps? (Johannes_B)

## Answer

It is still not quite clear what the OP seeks, but here I show what the titlecaps package can accomplish, which is:

1. capitalizing the first letter of each word;
2. excluding a specified list of words from capitalization, using \Addlcwords.

Here is an example, shown in small-caps and upright shapes.

    \documentclass{article}
    \usepackage{titlecaps}
    \begin{document}
    \titlecap{\textsc{Contribution submission to the conference}}

    \titlecap{Contribution submission to the conference}
    \end{document}

In the end, this seems to be what the OP wanted. I don't know of a way to automate it, but it is not particularly difficult to do manually:

    \documentclass{article}
    \begin{document}
    \textsc{CONTRIBUTION SUBMISSION to the CONFERENCE}
    \end{document}

• I think the OP is after \textsc{Contribution submission} to the \textsc{conference}. (Werner)
• I too saw that as a possibility, but as cfr pointed out, that "will look very strange." (Steven B. Segletes)
• Thanks for the answers. The first one is close to what I'm looking for. All letters should be upper-case ("to" and "the" as well) and exactly the same size, except "to" and "the", which should be upper-case but smaller than the main words; I don't want the first letter of any word taller than the others. Is it clearer? (Mike1993)
• Do you mean something like \textsc{CONTRIBUTION SUBMISSION to the CONFERENCE}? (Steven B. Segletes)
• Yes, this is exactly what I was looking for. It's crazy, but this didn't come to my mind. Perhaps it is too easy. (Mike1993)
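The titlecaps behaviour discussed in the answer (capitalize each word except a list of small words) can be mimicked outside LaTeX. This is an illustrative sketch, not part of the original thread; the small-word list is my own choice:

```python
SMALL_WORDS = {"a", "an", "and", "of", "or", "the", "to"}

def title_case(text: str) -> str:
    out = []
    for i, word in enumerate(text.split()):
        # The first word is always capitalized; later words are lowered
        # if they appear on the small-word list (like \Addlcwords).
        if i > 0 and word.lower() in SMALL_WORDS:
            out.append(word.lower())
        else:
            out.append(word[:1].upper() + word[1:])
    return " ".join(out)

print(title_case("contribution submission to the conference"))
# -> Contribution Submission to the Conference
```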
@interface PodsDummy_Pods_ABToolKit_SwiftyUserDefaults : NSObject
@end

@implementation PodsDummy_Pods_ABToolKit_SwiftyUserDefaults
@end
Q: Power sums and Jack symmetric functions

Let $\Lambda$ be the algebra of symmetric functions in infinitely many variables over $\mathbb{C}$. The $n$-th power sum symmetric function $p_n$ is defined (formally) as
\begin{equation} p_n=\sum_i x_i^n\ . \end{equation}
The set consisting of symmetric functions $p_\mu=p_{\mu_1}\cdots p_{\mu_t}$, for all partitions $\mu=(\mu_1, \ldots, \mu_t)$, is a basis of $\Lambda$. For any partition $\lambda$, let us denote by $J_\lambda^\alpha$ the Jack symmetric function associated with $\alpha$. This is uniquely determined by a triangular expansion with respect to the monomial symmetric functions and by the condition
\begin{equation} \langle J_\lambda^\alpha, J_\mu^\alpha\rangle_\alpha = 0 \mbox{ for } \lambda\neq \mu\ , \end{equation}
where $\langle \cdot , \cdot \rangle_{\alpha}$ is defined over the basis of power sums as
\begin{equation} \langle p_\lambda, p_\mu\rangle_\alpha=\delta_{\lambda,\mu} z_\lambda \alpha^{\ell(\lambda)}\ , \end{equation}
where $\delta_{\lambda,\mu}=\prod_a \delta_{\lambda_a,\mu_a}$, $z_\lambda=\prod_j j^{\, m_j}\, m_j!$ ($m_j=\# \{a\in \mathbb{N}\,\vert\, \lambda_a=j\}$) and $\ell(\lambda)$ is the length of the partition $\lambda$.

Question: is it possible to determine explicitly an expression of $p_1^n$, with $n\geq 1$, in terms of Jack symmetric functions $J_\lambda^\alpha$?

A: To turn Richard's comment into an answer:
$$ J_{1^n} = p_{1^n} = \alpha^n n! \sum_{\lambda \vdash n} \frac{J_\lambda}{j_\lambda} $$
where $j_\lambda = \langle J_\lambda, J_\lambda \rangle$ is an explicit $\alpha$-deformation of two products of hook-lengths in the diagram $\lambda$.
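As a quick sanity check on the expansion in the answer (my own verification, not part of the original exchange): for $n=1$ we have $J_{(1)}^\alpha = p_1$ and $j_{(1)} = \langle p_1, p_1\rangle_\alpha = z_{(1)}\,\alpha = \alpha$, so

```latex
\alpha^{1}\, 1! \sum_{\lambda \vdash 1} \frac{J_\lambda}{j_\lambda}
  = \alpha \,\frac{J_{(1)}}{\alpha}
  = J_{(1)} = p_1 ,
```

in agreement with $p_1^1 = p_1$.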
Volt

The volt (symbol: V) is the SI derived unit of electric potential difference. The number of volts is a measure of the strength of an electrical source in the sense of how much power is produced for a given current level. It is named in honor of the Italian physicist Alessandro Volta (1745–1827), who invented the voltaic pile, the first chemical battery.

Definition

The volt is defined as the potential difference across a conductor when a current of one ampere dissipates one watt of power. Hence, its base SI representation is m^2 · kg · s^-3 · A^-1, which can be equally represented as one joule of energy per coulomb of charge, J/C.

1 V = 1 W/A = 1 m^2 · kg · s^-3 · A^-1

Since 1990 the volt has been maintained internationally for practical measurement using the Josephson effect, where a conventional value is used for the Josephson constant, fixed by the 18th General Conference on Weights and Measures as

K_{J-90} = 0.4835979 GHz/µV.

Explanation

The electrical potential difference can be thought of as the ability to move electrical charge through a resistance. In essence, the volt measures how much kinetic energy each electron carries. The number of electrons is measured by the charge, in coulombs. Thus the volts are multiplied by the current flow, in amperes (one coulomb per second), to yield the total electrical power of the current, in watts. At a time in physics when the word force was used loosely, the potential difference was named the electromotive force or emf - a term which is still used in certain contexts.

Electrical potential difference ("voltage")

Between two points in an electric field, such as exists in an electrical circuit, the potential difference is equal to the difference in their electrical potentials. This difference is proportional to the electrostatic force that tends to push electrons or other charge-carriers from one point to the other. Potential difference, electrical potential and electromotive force are measured in volts, leading to the commonly used term voltage and the symbol V (sometimes $\mathcal{E}$ is used for voltage).

Voltage is additive in the following sense: the voltage between A and C is the sum of the voltage between A and B and the voltage between B and C. Two points in an electric circuit which are connected by an ideal conductor, without resistance and without the presence of a changing magnetic field, have a potential difference of zero. But other pairs of points may also have a potential difference of zero. If two such points are connected with a conductor, no current will flow through the connection. The various voltages in a circuit can be computed using Kirchhoff's circuit laws.

Voltage is a property of an electric field, not of individual electrons. An electron moving across a voltage difference experiences a net change in energy, often measured in electron-volts. This effect is analogous to a mass falling through a given height difference in a gravitational field.

Hydraulic analogy

If one thinks of an electrical circuit in analogy to water circulating in a network of pipes, driven by pumps in the absence of gravity, then the potential difference corresponds to the fluid pressure difference between two points. If there is a pressure difference between two points, then water flowing from the first point to the second will be able to do work, such as driving a turbine.

This hydraulic analogy is a useful method of teaching a range of electrical concepts. In a hydraulic system, the work done to move water is equal to the pressure multiplied by the volume of water moved. Similarly, in an electrical circuit, the work done to move electrons or other charge-carriers is equal to "electrical pressure" (an old term for voltage) multiplied by the quantity of electrical charge moved. Voltage is a convenient way of quantifying the ability to do work.

Technical definition

The electrical potential difference is defined as the amount of work per charge needed to move electric charge from the second point to the first, or equivalently, the amount of work that unit charge flowing from the first point to the second can perform. The potential difference between two points a and b is the line integral of the electric field E:

$V_a - V_b = \int_a^b \mathbf{E}\cdot d\mathbf{l} = \int_a^b E \cos \phi \, dl.$

Useful formulae

DC circuits:

$V = \sqrt{PR}$, $V = P/I$, $V = IR$

where V = voltage, I = current, R = resistance, P = power.

AC circuits:

$V = \frac{P}{I\cos\theta}$, $V = \frac{\sqrt{PZ}}{\sqrt{\cos\theta}}$, $V = \frac{IR}{\cos\theta}$

where V = voltage, I = current, R = resistance, P = true power, Z = impedance, θ = phase angle.

AC conversions:

$V_{avg} = 0.637 V_{pk}$, $V_{rms} = 0.707 V_{pk}$, $V_{rms} = 0.354 V_{ppk}$, $V_{avg} = 0.319 V_{ppk}$, $V_{avg} = 0.9 V_{rms}$, $V_{pk} = 0.5 V_{ppk}$

where V_pk = peak voltage, V_ppk = peak-to-peak voltage, V_avg = average voltage, V_rms = effective (RMS) voltage.

Total voltage

Voltage sources and drops in series:

$V_T = V_1 + V_2 + V_3 + \ldots$

Voltage sources and drops in parallel:

$V_T = V_1 = V_2 = V_3 = \ldots$
\\!\\$\n\nVoltage dropsEdit\n\nAcross a resistor (Resistor n):\n\n$V_n = IR_n \\!\\$\n\nAcross a capacitor (Capacitor n):\n\n$V_n = IX_n \\!\\$\n\nAcross an inductor (Inductor n):\n\n$V_n = IX_n \\!\\$\n\nWhere V=Voltage, I=Current, R=Resistance, X=Reactance\n\nExamplesEdit\n\nVoltage sourcesEdit\n\nCommon sources of emf include:\n\nCommon voltagesEdit\n\nNominal voltages of familiar sources:\n\nMeasuring instruments Edit\n\nInstruments for measuring potential differences include the voltmeter, the potentiometer (measurement device), and the oscilloscope. The voltmeter works by measuring the current through a fixed resistor, which, according to Ohm's Law, is proportional to the potential difference across it. The potentiometer works by balancing the unknown voltage against a known voltage in a bridge circuit. The cathode-ray oscilloscope works by amplifying the potential difference and using it to deflect an electron beam from a straight path, so that the deflection of the beam is proportional to the potential difference.\n\nHistory of the volt Edit\n\nIn 1800, as the result of a professional disagreement over the galvanic response advocated by Luigi Galvani, Alessandro Volta developed the so-called Voltaic pile, a forerunner of the battery, which produced a steady electric current. Volta had determined that the most effective pair of dissimilar metals to produce electricity was zinc[[8]] and silver[[9]]. In the 1880s, the International Electrical Congress, now the International Electrotechnical Commission (IEC)[[10]], approved the volt for electromotive force. The volt was defined as the potential difference across a conductor when a current of one ampere dissipates one watt of power.\n\nPrior to the development of the Josephson junction voltage standard, the volt was maintained in national laboratories using specially constructed batteries called standard cells. 
The United States used a design called the Weston cell[[11]] from 1905 to 1972.","date":"2016-05-28 11:43:26","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 19, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6348220109939575, \"perplexity\": 693.1729845197451}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2016-22\/segments\/1464049277592.65\/warc\/CC-MAIN-20160524002117-00214-ip-10-185-217-139.ec2.internal.warc.gz\"}"}
null
null
/* TEMPLATE GENERATED TESTCASE FILE Filename: CWE134_Uncontrolled_Format_String__char_file_printf_66a.c Label Definition File: CWE134_Uncontrolled_Format_String.label.xml Template File: sources-sinks-66a.tmpl.c */ /* * @description * CWE: 134 Uncontrolled Format String * BadSource: file Read input from a file * GoodSource: Copy a fixed string into data * Sinks: printf * GoodSink: printf with "%s" as the first argument and data as the second * BadSink : printf with only data as an argument * Flow Variant: 66 Data flow: data passed in an array from one function to another in different source files * * */ #include "std_testcase.h" #ifndef _WIN32 #include <wchar.h> #endif #ifdef _WIN32 #define FILENAME "C:\\temp\\file.txt" #else #define FILENAME "/tmp/file.txt" #endif #ifndef OMITBAD /* bad function declaration */ void CWE134_Uncontrolled_Format_String__char_file_printf_66b_badSink(char * dataArray[]); void CWE134_Uncontrolled_Format_String__char_file_printf_66_bad() { char * data; char * dataArray[5]; char dataBuffer[100] = ""; data = dataBuffer; { /* Read input from a file */ size_t dataLen = strlen(data); FILE * pFile; /* if there is room in data, attempt to read the input from a file */ if (100-dataLen > 1) { pFile = fopen(FILENAME, "r"); if (pFile != NULL) { /* POTENTIAL FLAW: Read data from a file */ if (fgets(data+dataLen, (int)(100-dataLen), pFile) == NULL) { printLine("fgets() failed"); /* Restore NUL terminator if fgets fails */ data[dataLen] = '\0'; } fclose(pFile); } } } /* put data in array */ dataArray[2] = data; CWE134_Uncontrolled_Format_String__char_file_printf_66b_badSink(dataArray); } #endif /* OMITBAD */ #ifndef OMITGOOD /* goodG2B uses the GoodSource with the BadSink */ void CWE134_Uncontrolled_Format_String__char_file_printf_66b_goodG2BSink(char * dataArray[]); static void goodG2B() { char * data; char * dataArray[5]; char dataBuffer[100] = ""; data = dataBuffer; /* FIX: Use a fixed string that does not contain a format specifier */ strcpy(data, 
"fixedstringtest"); dataArray[2] = data; CWE134_Uncontrolled_Format_String__char_file_printf_66b_goodG2BSink(dataArray); } /* goodB2G uses the BadSource with the GoodSink */ void CWE134_Uncontrolled_Format_String__char_file_printf_66b_goodB2GSink(char * dataArray[]); static void goodB2G() { char * data; char * dataArray[5]; char dataBuffer[100] = ""; data = dataBuffer; { /* Read input from a file */ size_t dataLen = strlen(data); FILE * pFile; /* if there is room in data, attempt to read the input from a file */ if (100-dataLen > 1) { pFile = fopen(FILENAME, "r"); if (pFile != NULL) { /* POTENTIAL FLAW: Read data from a file */ if (fgets(data+dataLen, (int)(100-dataLen), pFile) == NULL) { printLine("fgets() failed"); /* Restore NUL terminator if fgets fails */ data[dataLen] = '\0'; } fclose(pFile); } } } dataArray[2] = data; CWE134_Uncontrolled_Format_String__char_file_printf_66b_goodB2GSink(dataArray); } void CWE134_Uncontrolled_Format_String__char_file_printf_66_good() { goodG2B(); goodB2G(); } #endif /* OMITGOOD */ /* Below is the main(). It is only used when building this testcase on its own for testing or for building a binary to use in testing binary analysis tools. It is not used when compiling all the testcases as one application, which is how source code analysis tools are tested. */ #ifdef INCLUDEMAIN int main(int argc, char * argv[]) { /* seed randomness */ srand( (unsigned)time(NULL) ); #ifndef OMITGOOD printLine("Calling good()..."); CWE134_Uncontrolled_Format_String__char_file_printf_66_good(); printLine("Finished good()"); #endif /* OMITGOOD */ #ifndef OMITBAD printLine("Calling bad()..."); CWE134_Uncontrolled_Format_String__char_file_printf_66_bad(); printLine("Finished bad()"); #endif /* OMITBAD */ return 0; } #endif
{ "redpajama_set_name": "RedPajamaGithub" }
8,541
Gioia Tauro este o comună de 18.808 locuitori, în regiunea Calabria, în provincia Reggio Calabria, Italia. De la ea își trag numele câmpia care îl înconjură (Piana di Gioia Tauro) și golful unde se află (Golfo di Gioia Tauro). Portul Gioia Tauro este cel mai mare terminal pentru transhipment din Marea Mediterană și al treilea din Europa, după Rotterdam și Hamburg. Demografie Orașe din Italia
{ "redpajama_set_name": "RedPajamaWikipedia" }
4,657
Q: Laravel 4 - Save url by AJAX I have a form to save an 'url'. I want to do it by AJAX, and this is my code: $('#guardar_imagen').click(function(e){ e.preventDefault(); var url = $('#url_imagen_1').serialize(); $.post('uploadImageUrl/'+url, function(data){ alert(data); // This alert is to test }); jQuery.noConflict() $('#nuevaImagen1').modal('hide'); }); This is my route: Route::post('uploadImageUrl/{url}', 'ImagenController@uploadImageUrl'); But I'm having an error, because the system thinks the {url} I'm trying to POST is part of the 'route', and it makes the error. Any idea how can I send the 'url' to my Controller using AJAX? A: You're using post but you're trying to send the data as part of the route. The right way to send data via POST is: $.post('/uploadImageUrl', url, function(data){ alert(data); // This alert is to test }); Then your route would be: Route::post('/uploadImageUrl', 'ImagenController@uploadImageUrl'); And ImagenController@uploadImageUrl would access the data: $imageUrl = Input::get('name-of-parameter');
{ "redpajama_set_name": "RedPajamaStackExchange" }
5,351
How to Create a FedEx Shipping Label By: Bailey Shoemaker Richards How to Mail With FedEx Shipping both nationally and internationally is a crucial part of business and sometimes everyday life. Creating a shipping label can seem like the most confusing part of sending a package via FedEx, since shipping with a specific company requires you to use their label. However, FedEx provides a simple template on their website that will enable you to create and print a shipping label within a few minutes. Visit FedEx.com and go to the New Customer section of the website. There is a link to the New Customer section on the left side of the FedEx homepage. Create an account with FedEx to make shipping the future easier, or click on the link that will allow you to ship one package. Fill out the information on the form page presented. This will include the address to which you are shipping, your return address, details about the package and your billing address. Click the button that says Ship. Print off the shipping label that FedEx gives you and affix it to your package with clear packing tape. How to Create a Shipping Label Bailey Shoemaker Richards is a writer from Ohio. She has contributed to numerous online and print publications, including "The North Central Review." Shoemaker Richards also edits for several independent literary journals and the Pink Fish Press publishing company. She holds a Bachelor of Arts in creative writing from Ohio University.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
1,441
\section{A high-order Spectral Difference method with Constrained Transport}
\label{sec:sd_mhd}
We present in this section a new method within the FE framework that addresses most of the issues we discussed in the previous section. It is both locally and globally divergence free, and it does not require the introduction of a new variable and a new equation, nor free parameters that are sometimes difficult to adjust. This new method is based on the Spectral Difference (SD) method \cite{Liu2006}. In this section, we present the original SD method for the semi-discrete scheme, followed by a description of our time integration strategy. We then focus on the modifications to the original method needed to solve the induction equation. We prove that our method is strictly divergence free at the continuous level, both inside the elements and at the interface between elements, maintaining the strict continuity of the normal component of the field. Using Fourier analysis, we finally show that our new method attains the same stability properties as reported in \citep{abeele2008,jameson2010}.
\subsection{The SD method in a nutshell}
For the sake of simplicity, we present the SD method using a simple scalar problem in one space dimension. The generalisation to multiple space dimensions will be discussed later. Let us denote the numerical solution as $u(x)$. We focus on the description of the solution in one element, where it is given by Lagrange interpolation polynomials $\{\ell^s_{i}(x)\}_{i=0}^n$, built on a set of points $\mathcal{S}^s=\{x^s_i\}_{i=0}^n$, called the solution points (with the superscript $s$). The numerical solution inside an element is given by:
\[ u(x) = \sum_{i=0}^n u(x^s_i)\ell^s_i(x), \]
where $n$ is the polynomial degree of the interpolation Lagrange polynomials.
A numerical approximation of the flux is evaluated by another set of Lagrange interpolation polynomials $\{\ell^f_{i}(x)\}_{i=0}^{n+1}$ built on the flux points. Note that we have $n+1$ solution points and $n+2$ flux points, and that the first and the last flux points coincide with the boundaries of the element ($x^f_0$ and $x^f_{n+1}$). Moreover, at the interfaces between elements, a numerical flux based on a Riemann solver must be used to enforce the continuity of the flux between elements. Let $\hat{f}(\cdot)$ denote this single-valued numerical flux, common to the element and its direct neighbour. The approximation for the flux is given by:
\begin{equation} \label{eq:numerical_flux} f(x) = \hat{f}(u(x^f_0)) \ell^f_0(x) + \sum_{i=1}^{n} f(u(x^f_i)) \ell^f_i(x) + \hat{f}(u(x^f_{n+1})) \ell^f_{n+1}(x), \end{equation}
where we wrote separately the two extreme flux points with their corresponding numerical flux. The final update of the solution is obtained using the exact derivative of the flux evaluated at the solution points, so that the semi-discrete scheme reads:
\begin{align*} \frac{{\rm d}}{{\rm d}t} u(x^s_j) = - \hat{f}(u(x^f_0)) \ell^{f\prime}_0(x^s_j) -\sum_{i=1}^{n} f(u(x^f_i)) \ell^{f\prime}_i(x^s_j) - \hat{f}(u(x^f_{n+1})) \ell^{f\prime}_{n+1}(x^s_j), \end{align*}
where the primes stand for the derivatives of the Lagrange polynomials. A straightforward extension to multiple space dimensions can be achieved by the tensor product of the one-dimensional sets of solution points and flux points. The left panel of Fig.~\ref{fig:sd_representation} shows in blue the solution points and in red (and salmon) colour the flux points for a classical SD scheme in two space dimensions, as well as the subcells (denoted by the black lines), which we call \textit{control volumes}.
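To make the one-dimensional update concrete, the interior reconstruction and flux-derivative steps can be sketched in a few lines of Python. This is a sketch under stated assumptions: a single element on $[0,1]$, linear flux $f(u)=u$, and no Riemann solver (in a multi-element run the two end flux values would be replaced by $\hat{f}$):

```python
import numpy as np

def lagrange_polys(nodes):
    # Lagrange basis polynomials for the given 1D nodes, as np.poly1d objects
    polys = []
    for j, xj in enumerate(nodes):
        others = np.delete(nodes, j)
        polys.append(np.poly1d(np.poly(others) / np.prod(xj - others)))
    return polys

n = 2                                                # polynomial degree
k = np.arange(n + 1)
xs = 0.5 - 0.5*np.cos((2*k + 1)*np.pi/(2*(n + 1)))   # n+1 Chebyshev solution points in [0,1]
gl, _ = np.polynomial.legendre.leggauss(n)
xf = np.concatenate(([0.0], 0.5*(gl + 1.0), [1.0]))  # n+2 flux points: cell ends + Gauss points

ell_s = lagrange_polys(xs)                           # degree-n basis on solution points
ell_f = lagrange_polys(xf)                           # degree-(n+1) basis on flux points

u = xs**2                                            # degree-n test data, flux f(u) = u
# interpolate the solution from the solution points to the flux points
u_f = np.array([sum(u[i]*ell_s[i](x) for i in range(n + 1)) for x in xf])
# df/dx at the solution points, exact for polynomial data of degree <= n
dfdx = np.array([sum(u_f[i]*ell_f[i].deriv()(x) for i in range(n + 2)) for x in xs])
assert np.allclose(dfdx, 2*xs)                       # exact derivative of x^2
```

The exactness check at the end illustrates why the scheme retains order $n+1$: for polynomial data of degree at most $n$, both interpolation steps and the final differentiation are exact.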
The stability of the SD method in one dimension has been shown in \cite{jameson2010} at all orders of accuracy, while the stability of the SD scheme in two dimensions, for both Cartesian meshes and unstructured meshes, has been demonstrated in \cite{abeele2008}. As shown in \cite{jameson2010}, the stability of the standard SD method depends on the proper choice of the flux points and not on the position of the solution points. The only important requirement is that the solution points must lie within the (inner) control volumes delimited by the flux points. With this in mind, we use Gauss-Legendre quadrature points for the inner flux points and the zeros of the Chebyshev polynomials for the solution points, and we show in section \ref{sec:stability} that this general result indeed also holds for the induction equation.
\subsection{High-order time integration using ADER}
We decided not to use the same SSP Runge-Kutta method as for the RKDG scheme. Instead, we chose to explore the modern version of the ADER method \cite{Dumbser2008,balsara2009,mhv2020}. Indeed, we believe this method is well suited to compute solutions to arbitrarily high order in time. We exploit this nice property in our numerical experiments shown in section \ref{sec:numerics}. Consider again the scalar, one-dimensional conservation law given in Eq.~\eqref{eq:conslaw},
\begin{equation} \begin{cases} \partial_t u + \partial_x f( u ) = 0 \quad {\rm in~} \Omega \times [0,\infty) \\ u(t=0) = u_0 \\ u_{\partial \Omega} = g, \end{cases} \end{equation}
with suitable initial and boundary conditions. For simplicity, we only describe the update of the solution $u(x^s_i,t)$ at a single solution point $x^s_i$. Modern ADER schemes are based on a Galerkin projection in time.
We multiply the previous conservation law by an arbitrary test function $\psi(t)$ and integrate in time over $\Delta t$:
\[\int^{\Delta t}_0 \psi(t)\partial_t u {\rm d}t + \int^{\Delta t}_0 \psi(t)\partial_x f(u) {\rm d}t = 0.\]
Integrating by parts (in time) yields:
\begin{equation} \label{eq:ADER_ibp} \psi(\Delta t) u(\Delta t) - \psi(0) u(0) - \int^{\Delta t}_0 \partial_t \psi(t) u(t) {\rm d}t + \int^{\Delta t}_0 \psi(t) \partial_x f(u(t)) {\rm d}t = 0. \end{equation}
Note that here we do not show the spatial dependency to simplify the notation. We now represent our solution using Lagrange polynomials {\it in time} $\ell_i(t)$ defined on $n+1$ Legendre quadrature points $\lbrace t_i \rbrace_{i=0}^n \in [0,\Delta t]$, which together with the quadrature weights $\lbrace w_i \rbrace_{i=0}^n$ can be used to perform integrals at the correct order in time. We are aiming at a solution with the same order of accuracy in time as in space, so $n$ is taken here equal to the polynomial degree of the spatial discretisation. We can write:
\[ u(t) = \sum_{i=0}^n u_i \ell_i(t),\]
and replace the integrals in Eq.~\eqref{eq:ADER_ibp} by the respective quadratures. We now replace the arbitrary test function $\psi(t)$ by the set of Lagrange polynomials $\{\ell_j(t)\}_{j=0}^n$ and obtain:
\begin{equation}\label{eq:System} \ell_j(\Delta t)\left(\sum_{i=0}^{n} u_i \ell_i(\Delta t)\right) - \ell_j(0)u(0) - \Delta t \sum_{i=0}^{n} w_i \ell^\prime_j(t_i) u_i + \Delta t \sum_{i=0}^{n} w_i \ell_j(t_i) \partial_x f(u_i) = 0. \end{equation}
To derive the previous equation, we used the interpolation property of the Lagrange polynomials with $u(t_i) = u_i$. Note that $u(0)$ corresponds to the solution at the beginning of the time step.
The previous system can be rewritten in matrix form, defining a mass matrix $M \in \mathbb{R}^{(n+1)\times(n+1)}$ and a right-hand side vector $r$ as:
\begin{equation}\label{eq:MassmatrixAder} M_{ji} = \ell_j(\Delta t)\ell_i(\Delta t)- \Delta t w_i \ell^\prime_j(t_i)~~~{\rm and}~~~r_j = \ell_j(0)u(0) - \Delta t \sum_{i=0}^{n} w_i \ell_j(t_i) \partial_x f(u_i). \end{equation}
The previous implicit non-linear equation with unknowns $\lbrace u_i \rbrace_{i=0}^n$ is now written as:
\begin{equation}\label{fix:point} M_{ji} u_i = r_j(u_0,...,u_n), \end{equation}
which can be solved with a fixed-point iteration method. We use a uniform initial guess with $\lbrace u^0_i=u(0)\rbrace_{i=0}^n$ and perform a standard Picard iterative scheme as follows:
\begin{equation} \label{eq:fixpoint_iteration} u_i^{k+1}=M^{-1}_{ij} r_j (u_0^k,...,u_n^k), \end{equation}
where the index $k$ stands for the iteration count. Finally, we use the predicted states $\lbrace u^{n}_i \rbrace_{i=0}^n$ at the quadrature points and update the final solution as:
\begin{equation} u(\Delta t) = u(0) - \Delta t \sum_{i=0}^n w_i \partial_x f(u^{n}_i). \end{equation}
Because we always have this final update, we only need $n$ internal corrections to the solution (iterations) to obtain a solution that is accurate up to order $n+1$ in time \citep{dumbser_ader_2013}. The first-order scheme with $n=0$ does not require any iteration, as it uses only the initial guess to compute the final update, corresponding exactly to the first-order forward Euler scheme. Note that in this flavour of the ADER scheme, we need to estimate the derivative of the flux for each time slice according to the SD method, including the Riemann solvers at element boundaries. This makes it different from the traditional ADER-DG framework presented in \cite{dumbser_ader_2013}, which remains local until the final update, and more similar to \cite{mhv2020}.
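On a model problem, the Picard loop above is easy to test directly. The Python sketch below transposes the scheme to the ODE $\mathrm{d}u/\mathrm{d}t=-u$, i.e. it replaces $-\partial_x f(u_i)$ by $g(u_i)=-u_i$; everything else follows Eqs.~\eqref{eq:MassmatrixAder}--\eqref{eq:fixpoint_iteration}. As an assumption for the test, we iterate the map well past convergence rather than stopping after the $n$ iterations the scheme actually needs:

```python
import numpy as np

def lagrange_polys(nodes):
    # Lagrange basis polynomials for the given 1D nodes, as np.poly1d objects
    polys = []
    for j, xj in enumerate(nodes):
        others = np.delete(nodes, j)
        polys.append(np.poly1d(np.poly(others) / np.prod(xj - others)))
    return polys

n, dt, u0 = 3, 0.1, 1.0
g = lambda u: -u                                  # stand-in for -df/dx: model ODE du/dt = -u

gl, gw = np.polynomial.legendre.leggauss(n + 1)   # Gauss-Legendre nodes/weights on [-1, 1]
t = 0.5*dt*(gl + 1.0)                             # n+1 quadrature points in [0, dt]
w = 0.5*dt*gw                                     # weights rescaled to absorb the dt factor
ell = lagrange_polys(t)
l_dt = np.array([p(dt) for p in ell])             # l_j(dt)
l_0 = np.array([p(0.0) for p in ell])             # l_j(0)
dl = np.array([[ell[j].deriv()(ti) for ti in t] for j in range(n + 1)])

M = np.outer(l_dt, l_dt) - dl*w                   # M_ji = l_j(dt) l_i(dt) - dt w_i l'_j(t_i)
Minv = np.linalg.inv(M)

u = np.full(n + 1, u0)                            # uniform initial guess
for _ in range(20):                               # Picard map (the scheme needs only n sweeps)
    u = Minv @ (l_0*u0 + w*g(u))                  # l_j(t_i) = delta_ij on the nodes
u_new = u0 + np.sum(w*g(u))                       # final update
err = abs(u_new - np.exp(-dt))
assert err < 1e-6                                 # high-order accurate for a single step
```

For $n=0$ the same code collapses to a single node at $\Delta t/2$ with $M=1$, which makes the structure of the fixed point easy to inspect by hand.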
Precisely because we include the Riemann solver at the element boundaries, we maintain the continuity requirement on the normal component, needed for the appropriate evolution of the induction equation. We use a Courant stability condition adapted to the SD scheme, as explained in \cite{vanharen2017}, and compute the time step as:
\[ \Delta t = \frac{C}{n+1} \frac{\Delta x}{|v_{\rm max}|}, \]
where again $C=0.8$ and $n$ is the polynomial degree of our discretisation in space. We justify this choice by a careful time stability analysis in the following section.
\subsection{A modified SD scheme for the induction equation}
The traditional SD method is particularly well suited for the Euler sub-system, with conservation laws based on the divergence operator. In the left panel of Fig.~\ref{fig:sd_representation}, we show the traditional discretisation of one element using SD, with the control volume boundaries shown as black solid lines, the solution points in blue inside the control volumes, and the flux points in red on the faces of each control volume. The strict conservation property of SD can be explained using, for example, the density. Defining the corner points of the control volumes as $(x_i, y_j)$, we can compute the total mass within a rectangle defined by the four points $(0, 0)$, $(x_i, 0)$, $(0, y_j)$ and $(x_i, y_j)$ as $M(x_i,y_j)$. Note that the corner points are defined as the intersection of the lines where the flux points are defined.
\begin{figure} \centering \includegraphics[width=1.0\textwidth]{chapters/img/sd_rep.pdf} \caption{Left panel: position of the solution points in blue, and of the flux points in red, for a traditional SD method with $n=2$. Right panel: position of the solution points for $B_x$ and $B_y$ in blue and of the flux points for the electric field $E_z$ and the vector potential $A_z$ in red, for our new SD method for the induction equation with $n=2$.
\label{fig:sd_representation}} \end{figure}
We now represent this cumulative mass everywhere inside the element using Lagrange polynomials defined on the flux points as
\begin{equation} M(x,y) = \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} M(x_i,y_j)\ell_i(x) \ell_j(y), \end{equation}
where we dropped the superscript $f$ for simplicity. The density field is obtained by taking the mixed second derivative of the cumulative mass as:
\begin{equation} \rho(x,y) = \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} M(x_i,y_j)\ell^{\prime}_i(x) \ell^{\prime}_j(y), \end{equation}
where the primes stand for the spatial derivatives. This exact procedure can be used to initialise the value of $\rho$ at the solution points, as well as to prove that the SD method as described in the previous section is strictly conservative \cite{Liu2004,Liu2006}. The induction equation, however, is a conservation law for the magnetic flux through a surface, as it does not feature a divergence operator but a curl operator. We therefore propose a small modification of the classical SD scheme, similar to that of \citep{chandrashekar_2020,praveen_2019}, with different collocation points for the magnetic field components $B_x$ and $B_y$. In the two-dimensional case, we start with a vector potential $\vec{A} = (0,0,A_z)$. We approximate the $z$-component using the red square collocation points shown in the right panel of Fig.~\ref{fig:sd_representation}. The approximation takes the form
\[ A_z(x,y) = \sum_{i=0}^{n+1}\sum_{j=0}^{n+1} A_z(x_i,y_j)\ell_i(x)\ell_j(y). \]
In two dimensions, the magnetic field is then obtained as:
\[ B_x(x,y) = \sum_{i=0}^{n+1}\sum_{j=0}^{n+1} A_z(x_i,y_j)\ell_i(x)\ell^{'}_j(y), \quad B_y(x,y) = -\sum_{i=0}^{n+1}\sum_{j=0}^{n+1} A_z(x_i,y_j)\ell^{'}_i(x)\ell_j(y). \]
Because the magnetic field $\vec{B} = (B_x, B_y)$ is initialised in this way, it is divergence free by definition.
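This divergence-free property of the potential-based reconstruction can be checked numerically. The Python sketch below assumes flux points given by the two cell ends plus the interior Gauss-Legendre points on $[0,1]$, and random nodal values for $A_z$; the divergence, evaluated by central finite differences at arbitrary points, vanishes to the accuracy of the difference stencil:

```python
import numpy as np

def lagrange_polys(nodes):
    # Lagrange basis polynomials for the given 1D nodes, as np.poly1d objects
    polys = []
    for j, xj in enumerate(nodes):
        others = np.delete(nodes, j)
        polys.append(np.poly1d(np.poly(others) / np.prod(xj - others)))
    return polys

n = 2
gl, _ = np.polynomial.legendre.leggauss(n)
xf = np.concatenate(([0.0], 0.5*(gl + 1.0), [1.0]))        # n+2 flux points in [0, 1]
ell = lagrange_polys(xf)
dell = [p.deriv() for p in ell]

rng = np.random.default_rng(0)
A = rng.standard_normal((n + 2, n + 2))                    # arbitrary nodal values A_z(x_i, y_j)

def Bx(x, y):   # B_x = dA_z/dy
    return sum(A[i, j]*ell[i](x)*dell[j](y) for i in range(n + 2) for j in range(n + 2))

def By(x, y):   # B_y = -dA_z/dx
    return -sum(A[i, j]*dell[i](x)*ell[j](y) for i in range(n + 2) for j in range(n + 2))

h = 1e-5
pts = rng.uniform(0.1, 0.9, size=(20, 2))
div = [(Bx(x + h, y) - Bx(x - h, y))/(2*h) + (By(x, y + h) - By(x, y - h))/(2*h)
       for x, y in pts]
assert np.max(np.abs(div)) < 1e-6                          # zero up to finite-difference error
```

The analytic cancellation is exact (both mixed derivatives equal $\partial_x\partial_y A_z$), so the only residual comes from the $\mathcal{O}(h^2)$ truncation of the check itself.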
Then, we define the magnetic flux $\phi_x$ through the surface defined by the two corner points $(x_i,0)$ and $(x_i,y_j)$ as:
\begin{equation} \label{eq:mfx} \phi_x(x_i,y_j) = \int_0^{y_j} B_x(x_i,y) {\rm d} y. \end{equation}
Similarly, we define the magnetic flux $\phi_y$ through the surface defined by the two corner points $(0,y_j)$ and $(x_i,y_j)$ as:
\begin{equation} \label{eq:mfy} \phi_y(x_i,y_j) = \int_0^{x_i} B_y(x,y_j) {\rm d}x. \end{equation}
We see that $\phi_x$ and $\phi_y$ are both defined over the set of corner points $(x_i,y_j)$. We can now represent the numerical approximation $\phi_x$ (resp. $\phi_y$) using Lagrange polynomials defined on the flux points as:
\[\phi_x(x,y) = \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} \phi_x(x_i,y_j) \ell_i(x) \ell_j(y). \]
Then, we deduce the numerical approximation of $B_x$ as:
\begin{equation} \label{eq:sd_bx} B_{x}(x,y) = \partial_y \phi_x = \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} \phi_x(x_i,y_j) \ell_i(x) \ell^{\prime}_j(y), \end{equation}
and the numerical approximation of $B_y$ as:
\begin{equation} \label{eq:sd_by} B_{y}(x,y) = \partial_x \phi_y = \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} \phi_y(x_i,y_j) \ell^{\prime}_i(x) \ell_j(y). \end{equation}
The key difference between this configuration and the traditional SD method is that for $B_x$, the $x$ direction has an extra degree of freedom and a higher polynomial degree (similarly for $B_y$ in the $y$ direction). This also means that the corresponding solution points for $B_x$ and $B_y$ are staggered with respect to the traditional SD method. In the right panel of Fig.~\ref{fig:sd_representation}, we show the position of these new solution points for $B_x$ and $B_y$ (in blue) and of the new flux points (in red) where the electric field will be defined, as explained below.
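The edge fluxes of Eqs.~\eqref{eq:mfx} and \eqref{eq:mfy} can be computed by Gauss quadrature, which is exact here because the edge integrands are polynomials of degree $n+1$. The Python sketch below (same assumptions as before: flux points on $[0,1]$, random nodal values of $A_z$) verifies that the net flux around every control volume then vanishes to machine precision:

```python
import numpy as np

def lagrange_polys(nodes):
    # Lagrange basis polynomials for the given 1D nodes, as np.poly1d objects
    polys = []
    for j, xj in enumerate(nodes):
        others = np.delete(nodes, j)
        polys.append(np.poly1d(np.poly(others) / np.prod(xj - others)))
    return polys

n = 2
gl, _ = np.polynomial.legendre.leggauss(n)
xf = np.concatenate(([0.0], 0.5*(gl + 1.0), [1.0]))      # n+2 flux points in [0, 1]
ell = lagrange_polys(xf)
dell = [p.deriv() for p in ell]

rng = np.random.default_rng(1)
A = rng.standard_normal((n + 2, n + 2))                  # nodal values A_z(x_i, y_j)
Bx = lambda x, y: sum(A[i, j]*ell[i](x)*dell[j](y) for i in range(n + 2) for j in range(n + 2))
By = lambda x, y: -sum(A[i, j]*dell[i](x)*ell[j](y) for i in range(n + 2) for j in range(n + 2))

gq, gw = np.polynomial.legendre.leggauss(n + 2)          # exact up to degree 2n + 3

def quad(f, b):                                          # Gauss quadrature of f over [0, b]
    return 0.5*b*sum(wk*f(0.5*b*(qk + 1.0)) for qk, wk in zip(gq, gw))

phix = lambda x, y: quad(lambda s: Bx(x, s), y)          # edge flux in x
phiy = lambda x, y: quad(lambda s: By(s, y), x)          # edge flux in y

worst = max(abs(phix(xi, yj) + phiy(xi, yj) - phix(0.0, yj) - phiy(xi, 0.0))
            for xi in xf for yj in xf)
assert worst < 1e-10                                     # zero net flux around every rectangle
```

The cancellation follows from the potential: each edge integral telescopes to differences of $A_z$ at the rectangle corners, which sum to zero.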
Note that if the initial magnetic field is divergence free, applying the divergence theorem to the rectangle defined by the same four corner points as before leads to the constraint:
\begin{equation} \label{eq:discrete_circulation} \phi_x(x_i,y_j) + \phi_y(x_i,y_j) - \phi_x(0,y_j) - \phi_y(x_i,0) = 0, \quad \forall i, j. \end{equation}
\begin{proposition} \label{proposition:eq_25} Equation \eqref{eq:discrete_circulation} holds if the fluxes are integrated exactly, for example when starting from a vector potential $\vec{A}$. \end{proposition}
\begin{proof} Using the numerical approximation of $A_z$ (and consequently of $\vec{B}$), we can write the fields $\phi_x(x,y)$ and $\phi_y(x,y)$ for any control volume $K = [0,x_m]\times [0,y_m]$ as
\begin{align*} \phi_x(x_m,y_m) &= \int_0^{y_m} \sum_{i=0}^{n+1}\sum_{j=0}^{n+1} A_z(x_i,y_j)\ell_i(x_m)\ell^{'}_j(y) dy \\ &= \sum_{i=0}^{n+1}\sum_{j=0}^{n+1} A_z(x_i,y_j)\ell_i(x_m) \int_0^{y_m} \ell^{'}_j(y) dy \\ &= \sum_{i=0}^{n+1}\sum_{j=0}^{n+1} A_z(x_i,y_j)\ell_i(x_m) \left[\ell_j(y_m) - \ell_j(0) \right] \end{align*}
\begin{align*} \phi_y(x_m,y_m) &= -\int_0^{x_m} \sum_{i=0}^{n+1}\sum_{j=0}^{n+1} A_z(x_i,y_j)\ell_j(y_m)\ell^{'}_i(x) dx \\ &= - \sum_{i=0}^{n+1}\sum_{j=0}^{n+1} A_z(x_i,y_j)\ell_j(y_m) \int_0^{x_m}\ell^{'}_i(x) dx \\ &= - \sum_{i=0}^{n+1}\sum_{j=0}^{n+1} A_z(x_i,y_j)\ell_j(y_m) \left[\ell_i(x_m) - \ell_i(0) \right] \end{align*}
The integration is exact given a quadrature rule of sufficient order. Summing the four flux contributions of Eq.~\eqref{eq:discrete_circulation}, all terms cancel, so the constraint holds.
\end{proof}
\begin{proposition} \label{proposition:pointwise_div_free} The proposed numerical representation of $\vec{B}$ is pointwise (or locally) strictly divergence free.
\end{proposition} \begin{proof} We now evaluate the divergence of the numerical approximation $\vec{B}(x,y) = [B_x,B_y]$: \begingroup \allowdisplaybreaks \begin{align*} \partial_x B_x + \partial_y B_y &= \partial_x \left(\sum_{i=0}^{n+1} \sum_{j=0}^{n+1} \phi_x(x_i,y_j) \ell_i(x) \ell^\prime_j(y)\right) + \partial_y \left(\sum_{i=0}^{n+1} \sum_{j=0}^{n+1} \phi_y(x_i,y_j) \ell^\prime_i(x) \ell_j(y)\right)\\ &= \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} \left( \phi_x(x_i,y_j) + \phi_y(x_i,y_j) \right)\ell_i'(x) \ell_j'(y)\\ &= \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} \left( \phi_x(0,y_j) + \phi_y(x_i,0) \right)\ell_i'(x) \ell_j'(y), \end{align*} \endgroup where we used the property that the total magnetic flux through the rectangle vanishes (see Eq.~\eqref{eq:discrete_circulation}). We can now separate and factor out the $i$ and $j$ sums as: \begingroup \allowdisplaybreaks \begin{align*} \partial_x B_x + \partial_y B_y &= \sum_{i=0}^{n+1} \sum_{j=0}^{n+1}\phi_x(0,y_j)\ell_i'(x) \ell_j'(y) + \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} \phi_y(x_i,0) \ell_i'(x) \ell_j'(y)\\ &= \left( \sum_{j=0}^{n+1}\phi_x(0,y_j) \ell_j'(y)\right) \left(\sum_{i=0}^{n+1}\ell_i'(x)\right) + \left(\sum_{i=0}^{n+1} \phi_y(x_i,0) \ell_i'(x)\right)\left(\sum_{j=0}^{n+1} \ell_j'(y)\right)\\ &= 0, \end{align*} \endgroup where we used the property of the Lagrange polynomials that $\sum_{i=0}^{n+1} \ell_i (x)=1$ so that the corresponding derivative vanishes uniformly. \end{proof} \begin{proposition} \label{proposition:globally_div_free} The proposed numerical representation of $\vec{B}$ is globally divergence free. \end{proposition} \begin{proof} If the initial magnetic field is divergence free, $B_x$ is continuous across the left and right boundaries of each element. Similarly, $B_y$ is continuous across the bottom and top boundaries of the element. It follows that $\phi_x$ (resp. $\phi_y$) is initially identical on the left (resp. bottom) edge of the right (resp. top) element and on the right (resp. 
top) edge of the left (resp. bottom) element. Because the adopted Lagrange polynomial basis is an interpolatory basis, and because the solution points of $B_x$ and $B_y$ are collocated on the element boundaries, the continuity of the magnetic field in the component normal to the element face is enforced by construction and at all orders. Note that the case $n=0$ corresponds exactly to the Constrained Transport method, as implemented in popular FV codes. The proposed discretisation is a generalisation of CT to arbitrarily high order. \end{proof} We now describe the SD update for the induction equation. We define the electric field $\vec{E}= - \vec{v}\times\vec{B}$ and write the induction equation as \[\partial_t \vec{B} = - \nabla\times \vec{E} . \] Once we know the prescribed velocity field and the polynomial representation of the magnetic field throughout the element, as in Eq.~\eqref{eq:sd_bx} and Eq.~\eqref{eq:sd_by}, we can compute the electric field at the control volume corner points $(x_i,y_j)$. These are the equivalent of the flux points in the traditional SD method. Since the electric field must be single-valued across element boundaries, we need to use a 1D Riemann solver for flux points inside the element edges, and a 2D Riemann solver at corner points between elements. This step is crucial, as it maintains the global divergence-free property. We see for example that the electric field on an element face will be identical to the electric field on the same face of a neighbouring element. 2D Riemann solvers at element corners are also important to maintain this global property, and in the case of the induction equation, we just need to determine the 2D upwind direction using both $v_x$ and $v_y$.
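For the induction equation with a prescribed velocity, the 2D corner Riemann problem reduces to pure upwinding on the sign of $v_x$ and $v_y$. The following Python sketch is an illustrative helper (the function name and argument layout are assumptions, not the paper's implementation) that selects the corner electric field from the upwind element among the four elements meeting at a corner:

```python
def upwind_corner_Ez(E_sw, E_se, E_nw, E_ne, vx, vy):
    """Select E_z at an element corner from the upwind element.

    E_sw, E_se, E_nw, E_ne are the (generally multi-valued) electric
    fields computed in the south-west, south-east, north-west and
    north-east elements sharing the corner; (vx, vy) is the advection
    velocity at that corner."""
    if vx >= 0.0:
        # information travels towards +x: take the western states
        return E_sw if vy >= 0.0 else E_nw
    # information travels towards -x: take the eastern states
    return E_se if vy >= 0.0 else E_ne

# velocity pointing up and to the right: the south-west state is upwind
assert upwind_corner_Ez(1.0, 2.0, 3.0, 4.0, 0.5, 0.5) == 1.0
# velocity pointing down and to the left: the north-east state is upwind
assert upwind_corner_Ez(1.0, 2.0, 3.0, 4.0, -0.5, -0.5) == 4.0
```

Applying the same single value of $E_z$ in all four neighbouring elements is what preserves the global divergence-free property across element boundaries.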
After we have enforced a single value for the electric field on element edges, we can interpolate the electric field inside the element, using flux points and the corresponding Lagrange polynomials, as before: \begin{equation} E_z(x,y) = \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} E_z(x_i,y_j) \ell_i(x) \ell_j(y), \end{equation} and update the magnetic field directly using the pointwise update: \begin{equation} \label{eq:induction_update} \partial_t B_{x} = - \partial_y E_z,~~~{\rm and}~~~ \partial_t B_{y} = \partial_x E_z. \end{equation} It follows directly that: \begin{equation} \partial_t \left( \partial_x B_{x} + \partial_y B_{y} \right) = 0, \end{equation} so that the divergence of the field, if zero initially, will remain zero at all times. The continuity of $E_z$ at element boundaries also implies that the continuity of the normal components of the magnetic field is preserved by the update. Note that at the beginning of the time step, we only need to know the values of $B_x$ and $B_y$ at their corresponding solution points to obtain the same polynomial interpolation as the one we derived using the magnetic fluxes. This follows from the uniqueness of the representation of the solution by polynomials of degree $n$. Similarly, the time update we just described can be performed only for the magnetic solution points (see Fig.~\ref{fig:sd_representation}) to fully specify our zero divergence field for the next time step.
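As a concrete sanity check of Proposition~\ref{proposition:pointwise_div_free} and of the update of Eq.~\eqref{eq:induction_update}, the following self-contained Python sketch (illustrative only, using the $n=1$ flux points $\{0, 1/2, 1\}$ on the unit element) builds nodal fluxes from arbitrary nodal values of a potential $A_z$, so that the circulation constraint of Eq.~\eqref{eq:discrete_circulation} holds by construction, and then verifies at random points that both the interpolated divergence and its rate of change under the update vanish:

```python
import random

nodes = [0.0, 0.5, 1.0]   # flux points for n = 1 on the unit element
N = len(nodes)            # n + 2 points per direction

def dlag(i, x):
    """Derivative of the Lagrange basis polynomial l_i at x."""
    s = 0.0
    for m in range(N):
        if m == i:
            continue
        p = 1.0 / (nodes[i] - nodes[m])
        for k in range(N):
            if k != i and k != m:
                p *= (x - nodes[k]) / (nodes[i] - nodes[k])
        s += p
    return s

random.seed(0)
# arbitrary potential values A_z at the corner points yield fluxes that
# satisfy the discrete circulation constraint by construction
Az = [[random.random() for _ in range(N)] for _ in range(N)]
phix = [[Az[i][j] - Az[i][0] for j in range(N)] for i in range(N)]
phiy = [[Az[0][j] - Az[i][j] for j in range(N)] for i in range(N)]

def divB(x, y):
    """d/dx B_x + d/dy B_y of the flux-based interpolant at (x, y)."""
    return sum((phix[i][j] + phiy[i][j]) * dlag(i, x) * dlag(j, y)
               for i in range(N) for j in range(N))

# arbitrary nodal electric field E_z at the flux points
Ez = [[random.random() for _ in range(N)] for _ in range(N)]

def dt_divB(x, y):
    """Rate of change of the divergence under the pointwise update
    dB_x/dt = -dE_z/dy, dB_y/dt = +dE_z/dx: the two sums cancel."""
    ddx = -sum(Ez[i][j] * dlag(i, x) * dlag(j, y)
               for i in range(N) for j in range(N))
    ddy = sum(Ez[i][j] * dlag(i, x) * dlag(j, y)
              for i in range(N) for j in range(N))
    return ddx + ddy

for _ in range(10):
    x, y = random.random(), random.random()
    assert abs(divB(x, y)) < 1e-12     # pointwise divergence free
    assert abs(dt_divB(x, y)) < 1e-12  # the update preserves the divergence
```

The check is independent of the node placement: any set of distinct flux points including the element boundaries gives the same cancellation.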
\begin{algorithm}[H] \KwData{ $A_z$ at $t=0$} \KwResult{$\vec{B}$ at $t=T$ } compute the numerical representation of $A_z$\; build $\phi_x$ and $\phi_y$ by integrating $B_x$ and $B_y$ (which are given by differentiating $A_z$)\; get $B_x$ and $B_y$ by differentiating $\phi_x$ and $\phi_y$\; \While{t < T}{ perform ADER-SD update on nodal values of $B_x$ and $B_y$ through \eqref{eq:induction_update}\; } \caption{SD-ADER algorithm compatible with the induction equation.} \end{algorithm} \begin{algorithm}[H] \KwData{$B_x$ and $B_y$ at $t=t^n$} \KwResult{$B_x$ and $B_y$ at $t=t^{n+1}$} \While{iteration < total iterations}{ compute $E_z$ field on flux points (refer to Fig. \ref{fig:sd_representation}) for all time-substeps\; compute unique value of $E_z$ using a 1-dimensional Riemann solver at cell faces and a 2-dimensional Riemann solver at cell corner points\; build $E_z$ flux in space-time\; perform ADER sub-timestep update on degrees of freedom of $B_x$ and $B_y$ \; } \caption{ADER-SD update} \end{algorithm} \begin{remark} Approximating the magnetic field $\vec{B}$ through tensor product polynomials while keeping $\vec{B}\cdot\vec{n}$ continuous across cell faces is a well-known idea, realised for example through the use of Raviart-Thomas (RT) elements \cite{Brezzi_1991} in the finite element context. In fact, this approach has been used to treat the induction equation \cite{Balsara_weno_2009,praveen_2019}. The main difference between our method and RT is that we do not have to explicitly build the RT approximation basis. In particular, the continuity of $\vec{B}\cdot\vec{n}$ across cells is guaranteed by exact interpolation of nodal values collocated appropriately, as well as a Constrained Transport-like update using a unique electric field $E_z$.
\end{remark} \begin{proposition} The previous scheme is equivalent to a simple evolution equation for the vector potential with a continuous SD scheme, for which both the solution points and the flux points are defined on the corner points of the control volumes. \end{proposition} \begin{proof} The magnetic fluxes introduced in \eqref{eq:mfx} and \eqref{eq:mfy} are analogous to a magnetic potential in two space dimensions. Indeed, in this case, one can compute directly the magnetic vector potential $\vec{A} = (0,0,A_z)$ at the corner points, using \begin{equation} A_z(x_i,y_i) = \phi_x(x_i,y_i) - \phi_y(x_i,0) = \phi_x(0,y_i) - \phi_y(x_i,y_i). \end{equation} We then interpolate the vector potential within the elements using Lagrange polynomials as: \begin{equation} A_z(x,y) = \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} A_z(x_i,y_j) \ell_i(x) \ell_j(y) \end{equation} and compute the magnetic field components as: \begin{equation} B_{x}(x,y) = \partial_y A_z~~~{\rm and}~~~ B_{y}(x,y) = - \partial_x A_z. \end{equation} This definition is equivalent to the previous one. For the SD update, we compute the electric field at each corner point, using again a Riemann solver for multi-valued flux points. The vector potential is then updated directly at the corner point using \begin{equation} \label{eq:HJ} \partial_t A_z = - E_z = v_x B_y - v_y B_x = -v_x \partial_x A_z - v_y \partial_y A_z. \end{equation} We see that this last equation yields an evolution equation for $A_z$, where all terms are evaluated at the corner points. This corresponds to a variant of the SD method, for which the solution points are not placed at the centre of the control volumes, but migrated for example to their upper-right corner, and for which the flux points are not placed at the centre of the faces, but migrated to the same upper-right corner (thus overlapping). 
Note however an important difference from the traditional SD method: the vector potential $A_z$ is a continuous function of both $x$ and $y$ so that we have $n+2$ solution points instead of $n+1$. In other words, each element face shares the same values for $A_z$ and $E_z$ with the corresponding neighbouring element face. We have therefore a strict equivalence between the induction equation solved using our SD scheme for the magnetic field and the evolution equation solved using this particular SD variant for the vector potential. \end{proof} \subsection{Stability for the linear induction equation} \label{sec:stability} \begin{figure} \centering \includegraphics[width=.48\textwidth]{chapters/img/real_part.pdf} \includegraphics[width=.48\textwidth]{chapters/img/im_part.pdf} \caption{Real (left panel) and imaginary (right panel) parts of $\omega$ for different polynomial degrees $n$. In each panel, the grey line represents the exact dispersion (left) and diffusion (right) relation for the waves. The wave number is expressed here in units of $(n+1)/\Delta x$, while the frequency $\omega$ is shown in units of $(n+1) v /\Delta x$. \label{fig:stability}} \end{figure} We will now demonstrate that the proposed SD scheme for the induction equation is stable. We will also analyse the dispersion properties of the scheme at various orders. To achieve this goal, we will first exploit the strict equivalence between the magnetic field update and the vector potential update, as explained previously. We will prove the stability of the scheme in one space dimension, and use the tensor product rule to extend these results to higher dimensions. We write our vector potential equation in 1D, assuming here, without loss of generality, a positive velocity field $v>0$: \begin{equation} \label{eq:vector_potential_pde} \partial_t A + v \partial_x A = 0. \end{equation} Space is discretised using $N$ equal-size elements of width $\Delta x = L/N$, labelled by a superscript $p=1,\dots,N$.
We have $n+2$ solution points for the vector potential, which coincide with the flux points of the SD method, here labelled as $x_i^p$ with $i=0,\dots,n+1$. As explained earlier, we use for the inner $n$ flux points the Gauss-Legendre quadrature points, while the leftmost one, $x_0^p$, is aligned with the left element boundary, and the rightmost one, $x_{n+1}^p$, is aligned with the right element boundary. We see that we have redundant information in our solution vector $A_i^p$ as $x_{0}^p = x_{n+1}^{p-1}$, so that $A_0^p = A_{n+1}^{p-1}$ and $A_{n+1}^p=A_0^{p+1}$. This redundancy is a fundamental difference from the traditional SD method and ensures that the vector potential is continuous across element boundaries. In our present derivation, we need to avoid this duplication and define the solution vector $A_i^p$ in each element only for $i=1,\dots,n+1$, dropping the leftmost point and assigning it to the left element. This choice is arbitrary, but it corresponds here to the upwind solution of the Riemann solver at the left boundary, as we have $v>0$. Our vector potential solution points now resemble the classical SD method solution points shifted to the right of their closest flux points. The vector potential is interpolated within element $p$ using the $n+2$ flux points as: \begin{equation} A^p(x) = A_{n+1}^{p-1} \ell_0(x) + \sum_{j=1}^{n+1} A_j^p \ell_j(x). \end{equation} We can write the corresponding SD update as \begin{equation} \partial_t A_i^p = -v \left( A_{n+1}^{p-1} \ell^\prime_0(x_i) + \sum_{j=1}^{n+1} A_j^p \ell^\prime_j(x_i) \right). \end{equation} For the stability analysis, we follow the methodology presented in \cite{hu1999, abeele2008} and study the response of the scheme to a plane wave solution of the form: \begin{equation} A(x,t) = \tilde A \exp(i ( k x -\omega t )), \end{equation} using periodic boundary conditions. The stability of a plane wave solution will depend on the imaginary part of $\omega$.
Indeed, the amplitude of the wave will not increase if $\Im(\omega)$ remains non-positive. The flux point coordinates are split between the element's leftmost coordinate and the relative flux point coordinate as $x_i^p = (p-1) \Delta x + x_i$, so that we have: \begin{equation} A_i^p = \tilde A \exp(-i \omega t)\exp(i k (p-1) \Delta x) \exp(i k x_i). \end{equation} The update now reads \begin{equation} -i \omega \tilde A \exp(i k x_i) = -v \left( \tilde A \exp(i k x_{n+1}) \ell^\prime_0(x_i) \exp(-i k \Delta x) + \sum_{j=1}^{n+1} \tilde A \exp(i k x_{j}) \ell^\prime_j(x_i) \right) . \end{equation} We define the solution vector for the plane wave as $u_i = \tilde A \exp(i k x_i)$. The previous equation can be written in matrix form as follows: \begin{equation} \left( -i \frac{\omega \Delta x}{v} \mathbb{I} + \mathbb{M} \right) \vec{u} = 0, \end{equation} where, for the sake of simplicity, we have considered a normalized coordinate system inside each element so that $x_0=0$ and $x_{n+1}=1$. This explains why the factor $\Delta x$ has been factored out. The matrix $\mathbb{M}$ is defined as: \begin{equation} \mathbb{M} = \begin{bmatrix} \ell^\prime_1(x_1) & \ldots & \ell^\prime_n(x_1) & \ell^\prime_{n+1}(x_1) + \ell^\prime_0(x_1) \exp(-i k \Delta x) \\ \ldots & \ldots & \ldots & \ldots\\ \ell^\prime_1(x_{n+1}) & \ldots & \ell^\prime_n(x_{n+1}) & \ell^\prime_{n+1}(x_{n+1}) + \ell^\prime_0(x_{n+1}) \exp(-i k \Delta x) \end{bmatrix}. \end{equation} The dispersion relation of the waves is obtained by requiring \begin{equation} \det \left( -i \frac{\omega \Delta x}{v} \mathbb{I} + \mathbb{M} \right) = 0 , \end{equation} which amounts to finding the $n+1$ complex eigenvalues of matrix $-i\mathbb{M}$. We can then represent the dispersion relation of the scheme with $\Re{(\omega)}$ and the diffusion relation with $\Im{(\omega)}$.
More importantly, the wave amplitude will be damped if $\Im{(\omega)}<0$, corresponding to a stable numerical scheme, and will be amplified exponentially if $\Im{(\omega)}>0$, corresponding to an unstable numerical scheme. The maximum wave number is set by the grid size $\Delta x$ and the polynomial degree $n$ so that: \begin{equation} k_{\rm max} = \left( n+1 \right) \frac{\pi}{\Delta x} = \left( n+1 \right) N \frac{\pi}{L}. \end{equation} We see that the maximum wave number depends on the product $(n+1)\times N$, which corresponds to the number of degrees of freedom of the SD method. The previous dispersion relation generates $n+1$ eigenvalues in the $k$-interval $\left[ -\pi/\Delta x, \pi/\Delta x\right]$, owing to the periodicity of the function $\exp(-i k \Delta x)$. In order to derive the dispersion relation in the entire range of wave number $\left[ -(n+1)\pi/\Delta x, (n+1)\pi/\Delta x\right]$, the eigenvalues have to be shifted by an integer multiple of $2\pi/\Delta x$ to generate a single branch in the dispersion relation. We show in Fig.~\ref{fig:stability} the real and imaginary part of $\omega$ for a set of SD schemes that have exactly the same number of degrees of freedom $(n+1) \times N$, with $n$ ranging from 0 to 9. We note that, although our scheme is different from the classical SD scheme, the dispersion relations at these various orders are identical to the corresponding dispersion relation found by \cite{abeele2008} for the classical SD method. This strict equivalence is true only for a constant velocity field. We see also in Fig.~\ref{fig:stability} that, although all these schemes have exactly the same number of degrees of freedom, the higher the polynomial degree, the closer the scheme gets to the true solution, namely $\Im{(\omega)}=0$ and $\Re{(\omega)}=v k$, shown as a grey line in Fig.~\ref{fig:stability}. We conclude from this analysis that the SD spatial operator is stable, because $\Im{(\omega)}\leq 0$ everywhere.
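The stability statement can be reproduced numerically at low order. For $n=1$, the matrix $\mathbb{M}$ is $2\times 2$ and its eigenvalues are available in closed form. The Python sketch below (illustrative only; flux points $\{0, 1/2, 1\}$ on the normalised element, the single interior point being the one-point Gauss-Legendre node) sweeps the wave number and checks that all eigenvalues $\mu$ of $\mathbb{M}$ satisfy $\Re(\mu) \ge 0$, which, through $\omega = -i \mu v/\Delta x$, is equivalent to $\Im(\omega) \le 0$:

```python
import cmath
import math

nodes = [0.0, 0.5, 1.0]  # flux points for n = 1: x_0 = 0, Gauss point 1/2, x_2 = 1

def dl(i, x):
    """Derivative of the Lagrange basis polynomial l_i at x."""
    s = 0.0
    for m in range(3):
        if m == i:
            continue
        p = 1.0 / (nodes[i] - nodes[m])
        for k in range(3):
            if k != i and k != m:
                p *= (x - nodes[k]) / (nodes[i] - nodes[k])
        s += p
    return s

def eigenvalues_2x2(a, b, c, d):
    """Closed-form eigenvalues of the complex matrix [[a, b], [c, d]]."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

min_re = float("inf")
for m in range(1, 800):
    theta = -math.pi + 2 * math.pi * m / 800       # theta = k * dx
    e = cmath.exp(-1j * theta)
    # rows of M for the solution points x_1 = 1/2 and x_2 = 1
    rows = [[dl(1, x), dl(2, x) + dl(0, x) * e] for x in (nodes[1], nodes[2])]
    for mu in eigenvalues_2x2(rows[0][0], rows[0][1], rows[1][0], rows[1][1]):
        min_re = min(min_re, mu.real)

# Re(mu) >= 0 for all wave numbers <=> Im(omega) <= 0: the scheme is stable
assert min_re > -1e-9
```

Repeating the sweep at higher $n$ requires a general complex eigensolver, but the conclusion is the same, in agreement with Fig.~\ref{fig:stability}.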
To explicitly connect these results to \cite{abeele2008}, one can see that the Fourier footprint $\Omega$ can be obtained from the relation $\Omega = -i\omega$. With this nomenclature, our scheme has $\Re(\Omega) \leq 0$. The SD semi-discretisation of the PDE \eqref{eq:vector_potential_pde} leads to a system of first order ordinary differential equations in time: \begin{equation} \begin{cases} U'(t) &= F(U) \\ U(0) &= U_0. \end{cases} \end{equation} We denote by $DOF$ the total number of degrees of freedom of the semi-discrete spatial SD operator, and by $U(t):\mathbb{R}\to\mathbb{R}^{DOF}$ and $F:\mathbb{R}^{DOF}\to\mathbb{R}^{DOF}$, respectively, the vector of unknowns and the discrete operator in space acting on all the degrees of freedom. We now show that using the ADER time stepping strategy, we obtain a stable high-order numerical scheme that is fully discrete in space and time. We can investigate the full system of semi-discretised equations by isolating a single mode. Taking an eigenvalue $\Omega$ of the spatial discretisation operator, we consider the canonical ODE: \[ \frac{d}{dt} u = \Omega u.\] We can write a general time integration method as \[u^{n+1} = P(\Omega \Delta t) \cdot u^n,\] where the operator $P$, called the numerical amplification factor, depends on the single parameter $\Omega \Delta t$. If we designate the eigenvalues of $P$ as $z_P$, the necessary stability condition is that all the eigenvalues $z_P$ should be of modulus lower than, or equal to, one \cite{Hirsch1988NumericalCO}. Similarly to \cite{mhv2020}, we perform a numerical stability study of the ADER scheme presented in this paper. In Fig.~\ref{fig:ader_stability}, we show the stability domains of the SD scheme, together with ADER in the $\Omega\Delta t-$plane. We can note that from $n=2$ onwards, the CFL should be reduced to a value slightly smaller than unity.
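To illustrate the modulus condition on $z_P$, the following Python sketch models the ADER amplification factor for the scalar mode $u' = \Omega u$ as the Taylor expansion of $\exp(\Omega\Delta t)$ truncated at the order of the scheme (an assumption consistent with the equivalence between ADER corrections and the truncated exponential noted in the text; the sample points are illustrative):

```python
from math import factorial

def amplification(z, n):
    """Truncated Taylor expansion of exp(z): P_n(z) = sum_{k=0}^{n} z^k / k!."""
    return sum(z ** k / factorial(k) for k in range(n + 1))

# a sample Omega * dt inside the stability region: the mode is damped
z_stable = complex(-0.5, 0.5)
for n in (1, 2, 3, 4):
    assert abs(amplification(z_stable, n)) < 1.0

# Re(Omega * dt) > 0: the exact mode grows, and so does the numerical one
z_unstable = complex(0.5, 0.0)
assert abs(amplification(z_unstable, 2)) > 1.0
```

Tracing the level set $|P_n(z)| = 1$ of this polynomial reproduces the boundaries of the stability domains of Fig.~\ref{fig:ader_stability}.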
We note that the amplification factor that we obtain matches the exact amplification factor $\exp(\Omega \Delta t)$ up to order $n$. This is no surprise as all methods with $n$ stages (in our case, corrections) and order $n$ have the same domain of stability \cite{Hirsch1988NumericalCO}. Then, by choosing an appropriate CFL condition, we are able to guarantee that the spatial eigenvalues $\Omega \Delta t$ remain inside the ADER stability region, so that $|z_P(\Omega \Delta t)| \leq 1$. \begin{figure} \centering \includegraphics[width=.55\textwidth]{chapters/img/stability.pdf} \caption{Stability limits for ADER methods in the complex $\Omega \Delta t$-plane (continuous lines), from 0 to 9 corrections (ADER0 to ADER9), together with the stability domains of the SD space discretisation (symbols) for CFL = 1.0. Note that the stability region of the ADER scheme matches that of the exact amplification factor $\exp(\Omega \Delta t)$ truncated at the order of the scheme. See text for details. \label{fig:ader_stability}} \end{figure} In the future, we would like to study the stability of our method in more detail, similarly to \cite{glaubitz2018application}, and, given the similarities between our work and the one presented in \cite{balsara_kappeli_2018}, a more detailed numerical study of the stability of this scheme is of high interest as well. \section{Acknowledgments} We gratefully thank R. Abgrall (University of Zurich) and S. Mishra (ETH Zurich) for the fruitful discussions and insights regarding this work. This research was supported in part through computational resources provided by ARC-TS (University of Michigan) and CSCS, the Swiss National Supercomputing Centre.
\section{Discussion} \label{sec:mhd-discussion} \subsection{Comparing SD to RKDG for the induction equation} In this section, we compare in detail the different methods presented in this paper, namely our reference scheme, the traditional RKDG, a locally divergence-free basis variant of the scheme, called LDF, another variant of RKDG with divergence cleaning, called DivClean RKDG, and finally a novel Spectral Difference (SD) scheme specially designed for the induction equation, with the ADER time discretisation. The strong similarities with the Constrained Transport method would justify calling our new scheme by the longer acronym CT-SD-ADER. From a theoretical point of view, since the traditional RKDG scheme does not have any mechanism to deal with $\nabla\cdot\vec{B} \neq 0$, it is not so surprising to see this scheme perform relatively poorly. What is puzzling is why going to higher orders is so detrimental. Although the global contribution to the divergence error decreases with increasing order, the local divergence errors seem to increase with increasing order. As truncation errors decrease, the global divergence error decreases owing to smaller discontinuities at element boundaries, but the local divergence increases because of high-frequency and high-amplitude oscillations that damage the solution. Considering a locally divergence-free polynomial basis for the magnetic field, as an explicit way to control the local divergence of the solution, seems like an obvious improvement of the scheme. However, we see that in this case the surface term, which measures the global divergence errors, becomes larger. We attribute this adverse effect to the fact that there are significantly fewer degrees of freedom available in the polynomial representation of the magnetic field, compared with the traditional RKDG scheme at the same order.
Furthermore, as there is still no explicit mechanism to control global divergence errors, it is usually required to use the LDF basis in conjunction with an additional divergence cleaning mechanism to deal with the surface term. Indeed, we have shown that the divergence cleaning method (DivClean) provides an explicit, albeit non-exact, control on both the surface and the volume terms of the divergence errors, provided the two parameters, the hyperbolic cleaning speed $c_h$ and the diffusive coefficient $c_p^2$, are chosen appropriately. With these considerations in mind, we designed a new numerical method based on the SD scheme, for which both the volume term and the surface term of the divergence errors vanish exactly. This new scheme satisfies an exact conservation law for the magnetic flux through a surface. We argue this is the natural way to interpret and solve the induction equation. This approach, traditionally referred to as the Constrained Transport method, leads to a natural way to maintain zero divergence of $\vec{B}$ both locally and globally, as proved in Proposition \ref{proposition:pointwise_div_free} and Proposition \ref{proposition:globally_div_free}. We compared these four different methods by analysing their performance when solving the advection of a discontinuous magnetic loop. The first (resp. second) panel of Fig.~\ref{fig:all-div-energy} shows the local (resp. global) divergence error of the schemes at different orders of accuracy. We note that for the SD scheme, we have zero contribution in both the volume and the surface terms. On the third panel of Fig.~\ref{fig:all-div-energy}, we show the magnetic energy evolution over time for the different methods. The traditional RKDG method is the only one to exhibit a spurious dynamo at third and fourth orders. The SD scheme appears slightly more diffusive than the other methods at second order, but its performance becomes comparable to LDF and DivClean at higher orders.
Note that the extension to orders higher than $4$ for our new SD method is straightforward, as shown in the previous section, while the extension of the LDF method to orders higher than $4$ is quite cumbersome \citep[see for example][]{Guillet2019}. In Fig.~\ref{fig:disc-adv-allcomparison}, we show the maps of the magnetic energy density for the different schemes at fourth order and at $t=2$. First, we note that the magnetic energy distribution is well behaved for all the schemes, except RKDG, for which strong high-frequency oscillations are generated. We also see that the solution computed using LDF retains some artifacts, which appear to be aligned with the velocity field. The solution computed with DivClean appears more symmetric and overall seems to have fewer artifacts, although some oscillations near the discontinuous boundary are still present, similarly to the solution computed with SD. To obtain the DivClean solution, some tuning of the parameters $c_h$ and $c_p$ is required. In particular, if $c_h$ is reduced from twice the advection velocity, as used here, to exactly the advection velocity, the same artifacts that are seen in the solution computed with LDF appear in the solution using DivClean. It is also worth stressing again that the DivClean method comes at a price: a new equation and a new variable, whose physical interpretations are unclear. A comparison of the methods with respect to their computational complexity is beyond the scope of this paper. In particular, the codes used to produce the numerical results have been developed with different programming languages and architectures. However, we can briefly comment on key similarities and differences between the DG-based methods presented and our SD-ADER method. We note that SD can be interpreted as a nodal, quadrature-free DG scheme \cite{May2011}, thus making the proposed SD method not so different from a nodal DG one in terms of its computational complexity.
Another key difference is the time-integration schemes used: for the DG-based schemes, we used SSP-RK time-integration whereas for the SD scheme we have used the ADER time-integration scheme. We note that to reach an $(n+1)$-order approximation in time, the ADER algorithm requires $n+1$ flux evaluations per time slice \cite{Jackson2017}, yielding an overall complexity of $(n+1)^2$ in time. It then becomes computationally more expensive than an explicit RK scheme, as the number of stages needed to reach an $(n+1)$-order approximation is typically well below $(n+1)^2$. However, as noted in \cite{Dumbser2018}, the ADER procedure can be formulated as a completely local predictor step suited for vectorisation, thus reducing the complexity to $n+1$, whereas the RK scheme requires communication with its neighbours at every stage. \begin{figure} \centering \includegraphics[width=0.94\textwidth]{chapters/img/all_div_energy.pdf} \caption{Local and global divergence errors and magnetic energy evolution of the four different methods discussed in this paper for the discontinuous magnetic field loop advection test and for polynomial degrees $n=1,~2,~3$. \label{fig:all-div-energy} } \end{figure} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{chapters/img/all_maps.pdf} \caption{Maps of the magnetic energy density for the four different methods discussed in this paper for the discontinuous magnetic field loop advection test and for polynomial degree $n=3$. \label{fig:disc-adv-allcomparison} } \end{figure} \subsection{SD method for non-trivial velocity fields} In subsection \ref{subsection: rotating-hump}, we consider the problem of a rotating velocity field. We show the ability of our method to solve problems with non-trivial velocity fields, as well as Dirichlet boundary conditions. For approximation polynomial degree of $n=1$, we obtain similar qualitative results to those of \cite{Torrilhon2004} (given that the initial $B_0$ is different).
As we increase the approximation order, we can observe that the numerical solution converges to the analytical one. \subsection{Extension of our new method to three space dimensions} In this section, we speculate about a possible straightforward extension of our scheme to three dimensions in space. It is beyond the scope of this paper to present a detailed implementation of this algorithm; however, we want to stress that this extension is not only possible, but also relatively easy and consistent with the present work. There are however a few key differences with respect to the 2D case. The first difference comes from the definition of the magnetic flux and from the resulting magnetic field. We now define the magnetic flux of $B_x$ across a rectangle sitting in the plane $x=x_i$ and defined by the four points $(0,0)$, $(0,z_k)$, $(y_j,0)$ and $(y_j,z_k)$ as: \begin{equation} \phi_x(x_i,y_j,z_k) = \int_0^{y_j} \int_0^{z_k} B_x(x_i,y,z) {\rm d}y {\rm d}z, \end{equation} where the coordinates $y_j$ and $z_k$ correspond to the flux points in each direction, or in other words, to the corner points of each control volume inside the element. The magnetic flux is then interpolated everywhere inside the element using Lagrange polynomials defined using the flux points. \[\phi_x(x,y,z) = \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} \sum_{k=0}^{n+1}\phi_x(x_i,y_j,z_k) \ell_i(x) \ell_j(y) \ell_k(z). \] The magnetic field inside the element is obtained through a mixed second-order derivative as follows: \begin{equation} B_{x}(x,y,z) = \partial^2_{yz} \phi_x = \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} \sum_{k=0}^{n+1} \phi_x(x_i,y_j,z_k) \ell_i(x) \ell^{\prime}_j(y) \ell^{\prime}_k(z). \end{equation} Note that this formulation is equivalent to the alternative approach we describe below using the vector potential. It is however important to understand that this interpolation is used only at initialisation, to make sure the corresponding magnetic field is strictly divergence free.
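To illustrate this construction, the following Python sketch (again for $n=1$ on the unit element, with an illustrative low-degree test field) computes the nodal fluxes $\phi_x$ by exact integration of a simple field $B_x = 6xy$ and checks that the mixed derivative of the tensor-product interpolant recovers $B_x$ at an arbitrary point:

```python
nodes = [0.0, 0.5, 1.0]  # flux points for n = 1 in each direction
N = len(nodes)

def lag(i, x):
    """Lagrange basis polynomial l_i evaluated at x."""
    p = 1.0
    for m in range(N):
        if m != i:
            p *= (x - nodes[m]) / (nodes[i] - nodes[m])
    return p

def dlag(i, x):
    """Derivative of the Lagrange basis polynomial l_i at x."""
    s = 0.0
    for m in range(N):
        if m == i:
            continue
        p = 1.0 / (nodes[i] - nodes[m])
        for k in range(N):
            if k != i and k != m:
                p *= (x - nodes[k]) / (nodes[i] - nodes[k])
        s += p
    return s

def Bx_exact(x, y, z):
    return 6.0 * x * y   # low-degree test field, so interpolation is exact

def phi_x(xi, yj, zk):
    """Exact surface integral of Bx_exact over [0, yj] x [0, zk] at x = xi."""
    return 3.0 * xi * yj ** 2 * zk

def Bx_num(x, y, z):
    """B_x = d^2 phi_x / (dy dz) of the tensor-product interpolant."""
    return sum(phi_x(nodes[i], nodes[j], nodes[k])
               * lag(i, x) * dlag(j, y) * dlag(k, z)
               for i in range(N) for j in range(N) for k in range(N))

assert abs(Bx_num(0.3, 0.7, 0.2) - Bx_exact(0.3, 0.7, 0.2)) < 1e-12
```

The recovery is exact here because the test field has polynomial degree at most $n+1$ in each direction; for general fields the construction agrees with $B_x$ to the order of the interpolation.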
The next step is to evaluate the magnetic field at the solution points, which, in the 3D case, are now located at the centre of the face in the staggered direction: $B_x(x^f_i,y^s_j,z^s_k)$, $B_y(x^s_i,y^f_j,z^s_k)$ and $B_z(x^s_i,y^s_j,z^f_k)$, where the $f$ and $s$ superscripts correspond again to flux and solution points respectively. Once the field has been initialised on the solution points, we then interpolate the field within each face of the control volumes using Lagrange polynomials, which are defined using the solution points as in the traditional SD method. Using these definitions, it is straightforward to generalise Proposition~\ref{proposition:pointwise_div_free} to the 3D case, and prove that $\nabla\cdot\vec{B}=0$. The components of the electric field are defined at the centre of the edges between control volumes, located at flux points in the directions orthogonal to the component, and at solution points along the component's direction: $E_x(x^s_i,y^f_j,z^f_k)$, $E_y(x^f_i,y^s_j,z^f_k)$ and $E_z(x^f_i,y^f_j,z^s_k)$. The electric field is again defined as $\vec{E}= - \vec{v}\times\vec{B}$, therefore this method requires knowing the orthogonal velocities at those same edges, and solving a 1D Riemann problem at element faces and a 2D Riemann problem at element edges. As in Proposition~\ref{proposition:globally_div_free}, the SD update of the magnetic field is obtained directly using a pointwise update at the magnetic field solution points: \begin{equation} \partial_t B_{x} = \partial_z E_y - \partial_y E_z,~~~ \partial_t B_{y} = \partial_x E_z - \partial_z E_x~~~{\rm and}~~~ \partial_t B_{z} = \partial_y E_x - \partial_x E_y. \end{equation} It follows trivially, like in the 2D case, that \begin{equation} \partial_t \left( \partial_x B_{x} + \partial_y B_{y} + \partial_z B_{z}\right) = 0. \end{equation} We have here again an equivalence between the SD method applied to the magnetic field and a similar SD method applied to the vector potential.
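The 3D cancellation above is the familiar identity $\nabla\cdot(\nabla\times\vec{E}) = 0$, and it can be checked mechanically. The Python sketch below (illustrative analytic components and central finite differences; not part of the scheme itself) forms the right-hand sides of the update, $\partial_t\vec{B} = -\nabla\times\vec{E}$, and verifies that their divergence vanishes at a sample point up to finite-difference round-off:

```python
from math import sin, cos

# illustrative smooth electric field components
def Ex(x, y, z): return sin(y * z)
def Ey(x, y, z): return x * x + cos(z)
def Ez(x, y, z): return x * y * z

H = 1e-4  # finite-difference step

def d(f, p, axis, h=H):
    """Central finite difference of f along one coordinate axis at point p."""
    q = list(p); q[axis] += h; fp = f(*q)
    q[axis] -= 2 * h; fm = f(*q)
    return (fp - fm) / (2 * h)

def dtB(p):
    """Right-hand sides dB/dt = -curl E, component by component."""
    return (d(Ey, p, 2) - d(Ez, p, 1),   # dBx/dt = dEy/dz - dEz/dy
            d(Ez, p, 0) - d(Ex, p, 2),   # dBy/dt = dEz/dx - dEx/dz
            d(Ex, p, 1) - d(Ey, p, 0))   # dBz/dt = dEx/dy - dEy/dx

def div_dtB(p, h=H):
    """Divergence of the update, computed with nested central differences."""
    s = 0.0
    for axis in range(3):
        q1 = list(p); q1[axis] += h
        q2 = list(p); q2[axis] -= h
        s += (dtB(q1)[axis] - dtB(q2)[axis]) / (2 * h)
    return s

# the mixed partial derivatives cancel in pairs, so the divergence of the
# update vanishes (up to round-off amplified by the nested differences)
assert abs(div_dtB([0.3, 0.8, 0.5])) < 1e-5
```

In the SD scheme the same cancellation happens exactly, at the polynomial level, because each mixed derivative of the interpolated electric field appears twice with opposite signs.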
It is however more difficult in 3D to compute the vector potential from the magnetic field. It requires a complex inversion and the choice of a gauge, using for example the Coulomb gauge, for which $\nabla \cdot \vec{A}=0$. Assuming we know the vector potential, we define for each component the line integral over the component's direction, as shown here for the $z$-direction: \begin{equation} \alpha_z(x_i,y_j,z_k) =\int_0^{z_k} A_z(x_i,y_j,z) {\rm d}z. \end{equation} As for the magnetic flux, this quantity is defined at the corner points of the control volumes using flux points in each direction. We can then use the Lagrange polynomials defined using the flux points to compute the vector potential everywhere as: \begin{equation} A_{z}(x,y,z) = \partial_z \alpha_z = \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} \sum_{k=0}^{n+1} \alpha_z(x_i,y_j,z_k) \ell_i(x) \ell_j(y) \ell^{\prime}_k(z). \end{equation} We can now evaluate the vector potential at the corresponding solution points, which are, as for $\vec{E}$, defined at the edges of the control volumes: $A_x(x^s_i,y^f_j,z^f_k)$, $A_y(x^f_i,y^s_j,z^f_k)$ and $A_z(x^f_i,y^f_j,z^s_k)$. Once we know the polynomial representation of the vector potential, the magnetic field can be derived using pointwise derivatives and $\vec{B} = \nabla \times \vec{A}$. The vector potential can finally be updated directly at its solution points using (shown here only for $A_z$): \begin{equation} \partial_t A_z = -v_x \partial_x A_z - v_y \partial_y A_z + v_x \partial_z A_x + v_y \partial_z A_y. \end{equation} This is again the vector potential equation, although in a more complex form than in the 2D case. It can however be solved using our SD scheme, exactly like in 2D. \subsection{Extension of our new method to ideal MHD } The natural progression of this work is to extend the proposed SD method to the full magneto-hydrodynamics equations. The first difficulty is to solve 2D Riemann problems at element edges. 
Fortunately, 2D Riemann solvers in the context of ideal MHD have already been developed in recent years in multiple implementations of Constrained Transport for the FV Godunov method \cite{Londrillo2004,Teyssier2007,balsara2010,balsara2012,Balsara2014,balsara2015a,balsara2017}. As for the time stepping, the ADER methodology is trivially extended to 3D and nonlinear problems \cite[see e.g.][]{dumbser_ader_2013}. Our proposed version of ADER only differs in the fact that we do not remain local during the iterative process, as we require Riemann solvers as part of the SD space discretisation. This means that in the MHD case, we ought to use an appropriate Riemann solver as described above. The second difficulty comes from finding the appropriate shock capturing techniques for the SD method, which traditionally has been achieved through artificial viscosity \cite{Premasuthan2014}. Finding a way to both enforce positivity and avoid clipping smooth extrema, while degrading the performance of the method as little as possible, is of utmost importance. Recent advances in shock capturing methods, such as \cite{Vilar2019}, provide a natural way of performing sub-cell limiting in a nodal discontinuous Galerkin method based on the \textit{a posteriori} subcell limiting strategy (MOOD) \cite{Dumbser2016} that can guarantee positivity while clipping smooth extrema as little as possible. This methodology seems promising for our SD method in the context of the ideal MHD equations when used in combination with a robust finite volume scheme that preserves the divergence-free nature of the solution.
\section{Various high-order Discontinuous Galerkin methods for the induction equation} \label{sec:overview} \begin{figure} \includegraphics[width=0.94\textwidth]{chapters/img/rkdg_div_energy.pdf} \begin{center} \includegraphics[width=1.0\textwidth]{chapters/img/rkdg_maps.pdf} \caption{Performance of the traditional RKDG scheme for the magnetic loop test defined in Eq.~\eqref{eq:magloop-ics}. In the top row, the first two panels show the divergence contribution of the volume term and of the surface term, respectively. The third panel shows the magnetic energy of the solution over time. In the bottom row, maps of the magnetic energy density are shown at $t = 2$. The three runs correspond to increasing polynomial degree ($n=1$, $n=2$ and $n=3$) and a fixed number of cells ($N = 128$ per dimension). \label{fig:rkdg-loopadvection-div-energy}} \end{center} \end{figure} The Discontinuous Galerkin (DG) method is a very popular FE scheme in the context of fluid dynamics. It is based on a weak formulation of the conservative form of the MHD equations using a Legendre polynomial basis \cite{cockburn1998}. In this context, the induction equation has to be written as \begin{equation} \partial_t \vec{B} + \nabla \cdot (\vec{v}\otimes \vec{B} - \vec{B}\otimes \vec{v}) = 0, \label{eq:mhd-induction-eq-div} \end{equation} in a conservative form compatible with the Euler sub-system. Indeed, this equation is now based on the divergence operator, which forces us to deal with the magnetic field through volume integrals. In this section, we describe three different implementations of the DG method for the induction equation. The first one is the classical DG method with Runge-Kutta time integration (RKDG), for which nothing particular is done to preserve the divergence-free character of the solution. The second method, presented for example in \cite{Guillet2019,Cockburn2004}, uses a modified polynomial basis for the magnetic field, so that it is locally exactly divergence free.
The third one allows the divergence to explicitly deviate from zero, but tries to damp the divergence errors using an additional scalar field and its corresponding equation \citep{munz2000}. We will evaluate the performance of these three classical methods using a proper measurement of the divergence error, as well as the conservation of the magnetic energy, using the famous magnetic loop advection test. \subsection{A traditional RKDG for the induction equation} In this section we describe the classical modal RKDG method using a simple scalar problem in one space dimension: \begin{equation} \label{eq:conslaw} \begin{cases} \partial_t u + \partial_x f( u ) = 0 \quad \in \Omega \times [0,\infty]\\ u(t=0) = u_0 \\ u_{\partial \Omega} = g. \end{cases} \end{equation} The generalisation to multiple space dimensions for structured Cartesian grids can be achieved through tensor products. Let $\Omega \subset \mathbb{R}$ be a regular domain which is discretised by $N$ elements $K_p = [x_{p-1/2},x_{p+1/2}]$ for $p=1,...,N$. Consider the local space $\mathcal{V}$ given by the set $\{\phi_i\}_{i=0}^{n}$ of one-dimensional Legendre polynomials of degree at most $n$ in $x$. For each element $K_{p}$, the numerical solution is written as: \[u(x,t) = \sum_{i=0}^{n} \hat{u}_i(t) \phi_i(x),\] where the modal coefficient $\hat{u}_i(t)$ is obtained by the $L^2$ projection of the solution $u(x)$ on the $i$-th Legendre basis polynomial. The DG method is based on a weak form of Eq.~\eqref{eq:conslaw}, projecting it on the polynomial basis, followed by an integration by parts. We obtain the following semi-discrete formulation of the DG method: \begin{align*} \label{eq:dg} \frac{d \hat{u}_i}{dt} + \left[ \hat{f}(u(x,t))\phi_i(x)\right]_{x_{p-1/2}}^{x_{p+1/2}} - \int_{K_p} f(u(x,t)) \partial_x \phi_i(x) {\rm d}x = 0,\quad i=0,...,n, \end{align*} where we exploited the fact that Legendre polynomials form an orthonormal basis.
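As an illustration of the semi-discrete formulation above, the following Python sketch implements a modal DG discretisation of linear advection ($f(u)=u$, periodic boundaries) with an upwind numerical flux and SSP-RK3 time stepping; the degree, resolution, quadrature order and basis scaling are illustrative choices, not those used in the paper:

```python
import numpy as np
from numpy.polynomial import legendre as leg

# Modal DG sketch for u_t + u_x = 0 on [0, 1], periodic boundaries.
# Orthonormal Legendre basis on [-1, 1], upwind flux, SSP-RK3 in time.
n, N = 2, 16                                   # polynomial degree, elements
dx = 1.0 / N
xi_q, w_q = leg.leggauss(n + 2)                # Gauss quadrature on [-1, 1]

def basis(i, xi):
    c = np.zeros(i + 1); c[i] = 1.0
    return leg.legval(xi, c) * np.sqrt(i + 0.5)   # orthonormal scaling

def dbasis(i, xi):
    c = np.zeros(i + 1); c[i] = 1.0
    return leg.legval(xi, leg.legder(c)) * np.sqrt(i + 0.5)

P = np.array([[basis(i, q) for q in xi_q] for i in range(n + 1)])
dP = np.array([[dbasis(i, q) for q in xi_q] for i in range(n + 1)])
phiL = np.array([basis(i, -1.0) for i in range(n + 1)])
phiR = np.array([basis(i, 1.0) for i in range(n + 1)])

def project(f):
    """Element-wise L2 projection onto the broken Legendre space."""
    centers = (np.arange(N) + 0.5) * dx
    xg = centers[:, None] + 0.5 * dx * xi_q[None, :]
    return f(xg) @ (w_q[None, :] * P).T        # modal coefficients, (N, n+1)

def rhs(uhat):
    """Semi-discrete DG operator: upwind surface term plus volume term."""
    fhat = uhat @ phiR                          # upwind flux: trace from the left
    vol = ((uhat @ P) * w_q) @ dP.T             # quadrature of f(u) * phi_i'
    surf = np.outer(fhat, phiR) - np.outer(np.roll(fhat, 1), phiL)
    return (vol - surf) * (2.0 / dx)            # map back to the physical element

uhat = project(lambda x: np.sin(2 * np.pi * x))
u_exact = uhat.copy()
dt = 0.4 * dx / (2 * n + 1)                    # CFL-limited time step
t = 0.0
while t < 1.0 - 1e-12:                         # one full advection period
    h = min(dt, 1.0 - t)
    u1 = uhat + h * rhs(uhat)                  # SSP-RK3 (Shu-Osher form)
    u2 = 0.75 * uhat + 0.25 * (u1 + h * rhs(u1))
    uhat = uhat / 3.0 + 2.0 / 3.0 * (u2 + h * rhs(u2))
    t += h
err = np.max(np.abs(uhat - u_exact))
```

Advecting the sine wave over one period and comparing the modal coefficients to their initial values yields a small error, consistent with the high-order nature of the discretisation.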
Note that the surface term in the previous equation needs a Riemann solver to compute a continuous numerical flux at element boundaries, noted here $\hat{f}$. Once the spatial component has been discretised, we are left with an ordinary differential equation of the form: \[ \frac{d}{dt} u = \mathcal{L}(u), \] where $\mathcal{L}$ denotes the DG discretisation operator. Integration in time is performed using a Strong Stability Preserving (SSP) RK method \cite{Gottlieb2005, Kubatko2014}. The time step has to fulfill a Courant--Friedrichs--Lewy (CFL) condition to achieve numerical stability, which for the RKDG scheme reads \cite{cockburn1998}: \[ \Delta t = \frac{C}{2n + 1} \frac{\Delta x}{\left| v_{\rm max}\right| }, \] where $n$ is the polynomial degree and $C$ is a constant usually set to $C=0.8$. \subsection{Quantifying divergence errors} It is highly non-trivial to estimate the error in the divergence of the magnetic field for high-order methods in general, and for FE schemes in particular. Indeed, the numerical approximation of the solution is defined in a local sense, with polynomials of degree at most $n$ inside each element, but also in a global sense by considering the solution given by the union of all the elements. A suitable measurement for $\nabla\cdot \vec{B}$ has been proposed by \cite{Cockburn2004} as \begin{equation} \label{eq:divmeas} \@ifstar{\oldnorm}{\oldnorm*}{\nabla \cdot \vec{B}} = \sum_{e\in \mathcal{E}} \int_e \@ifstar{\oldabs}{\oldabs*}{ \jp{ \vec{B}\cdot \vec{n} } } {\rm d}s + \sum_{K\in \mathcal{K}} \int_K \@ifstar{\oldabs}{\oldabs*}{\nabla \cdot \vec{B}} {\rm d}\vec{x}, \end{equation} where $\jp{ \vec{B}\cdot\vec{n}_x } = B_x^{int(K)}-B_x^{ext(K)} $ (for example) denotes the jump operator and $B_x^{int(K)},~ B_x^{ext(K)}$, are the limits of $B_x$ at interface $e$ from the interior and exterior of $K$ respectively. We assume $\vec{B}$ is smooth within each element $K$.
However, in the DG framework, $\vec{B}$ can be discontinuous across element boundaries (noted here $e$). In the previous formula, $\mathcal{E}$ denotes the set of element interfaces and $\mathcal{K}$ the set of element volumes. Note that, for a piecewise-smooth function that is divergence free inside each element, it is globally divergence free if and only if the normal component of the vector field across each interface $e$ is continuous, hence the consideration of the jump in the normal component of the magnetic field across the interfaces $e$, given by the first term in Eq.~\eqref{eq:divmeas}. This divergence error measurement has been derived by exploiting the properties of the space $H(div)$ \cite{nedelec1980} or by using a functional approach \cite{Cockburn2004}. In what follows, we call the first contribution the surface term, and the second contribution the volume term. \subsection{Magnetic energy conservation} The other metric used in this paper to characterise different numerical methods is the evolution of the magnetic energy. This is particularly important in the context of magnetic dynamos \citep{Roberts2000, Brandenburg2005}, as one wishes to avoid having spurious magnetic dynamos triggered by numerical errors. Using \eqref{eq:induction-eq-ext} and considering again a non-zero divergence, the magnetic energy equation can be written as: \begin{equation} \label{eq:induction-energy-eq} \partial_t \left( \frac{B^2}{2} \right) + \left( \vec{v}\cdot \nabla \right) \left( \frac{B^2}{2} \right) = - B^2(\nabla\cdot\vec{v}) + \vec{B}\cdot(\vec{B}\cdot\nabla)\vec{v} + (\vec{B}\cdot\vec{v})(\nabla\cdot\vec{B}), \end{equation} where the last term is here again spurious. For example, in the simple case of pure advection where $\vec{v}$ is constant, one can observe that the first two terms on the right hand side vanish, while the third term vanishes only if $\nabla\cdot\vec{B}=0$. 
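The identity behind Eq.~\eqref{eq:induction-energy-eq}, obtained by dotting the induction equation with $\vec{B}$, can be verified symbolically for generic smooth fields; a Python/sympy sketch:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
v = [sp.Function(f'v{c}')(x, y, z) for c in 'xyz']   # generic velocity field
B = [sp.Function(f'B{c}')(x, y, z) for c in 'xyz']   # generic magnetic field

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def curl(F):
    return [sp.diff(F[2], y) - sp.diff(F[1], z),
            sp.diff(F[0], z) - sp.diff(F[2], x),
            sp.diff(F[1], x) - sp.diff(F[0], y)]

def div(F):
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

def grad(f):
    return [sp.diff(f, x), sp.diff(f, y), sp.diff(f, z)]

# left-hand side: d/dt (B^2/2) = B . curl(v x B), via the induction equation
lhs = dot(B, curl(cross(v, B)))
# right-hand side of the energy equation, including the spurious last term
B2 = dot(B, B)
rhs = (-dot(v, grad(B2 / 2)) - B2 * div(v)
       + dot(B, [dot(B, grad(vc)) for vc in v]) + dot(B, v) * div(B))

residual = sp.expand(lhs - rhs)
assert residual == 0
```

The check is purely algebraic: both sides expand to the same combination of first derivatives, confirming that the $(\vec{B}\cdot\vec{v})(\nabla\cdot\vec{B})$ term is exactly the contribution that survives when the divergence constraint is violated.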
On the other hand, if $\nabla\cdot\vec{B} \ne 0$, depending on the solution properties, one could observe a spurious increase of the magnetic energy over time, and interpret it wrongly as a dynamo. In the advection case, the magnetic energy is expected to remain constant, although we expect the numerical solution of the magnetic energy to decay, owing to the numerical dissipation associated with the numerical method. It should however never increase. \subsection{The field loop advection test} The advection of a magnetic loop is a well-known numerical experiment introduced for example in \cite{Gardiner2005} to test the quality of the numerical solution, with respect to both divergence errors and magnetic energy conservation. The test is defined using the following {\it discontinuous} initial magnetic field, \begin{equation} \label{eq:magloop-ics} \vec{B}_0 = \begin{pmatrix} B_{x,0} \\ B_{y,0} \end{pmatrix} = \begin{pmatrix} -A_0(y-y_c)/r \\ A_0(x-x_c)/r \end{pmatrix} \quad {\rm ~for~}r < r_0, \end{equation} and $\vec{B}_0=0$ otherwise, advected with a constant velocity field $\vec{v}=(1,1)$. We use here $A_0 = 0.001$, $r_0=0.25$ and $(x_c,y_c)=(0.5, 0.5)$. We consider a square box $[0,1]\times[0,1]$ and the final time $t = 2$. This allows the loop to cross the box twice before returning to its initial position. In Fig.~\ref{fig:rkdg-loopadvection-div-energy} we show the performance of our traditional RKDG scheme at different approximation orders. When measuring the divergence errors of the numerical solution, we observe that the surface term (measuring global divergence errors) seems to decrease with the approximation order (middle panel), as expected. On the contrary, the volume term (measuring local divergence errors) does not decrease at all. In fact, local errors increase with increasing polynomial degree. Furthermore, the magnetic energy evolution is clearly incorrect.
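For reference, the initial condition of Eq.~\eqref{eq:magloop-ics} is straightforward to sample numerically; a minimal numpy sketch (cell-centred sampling on a uniform grid, with an illustrative resolution):

```python
import numpy as np

def field_loop(N, A0=1e-3, r0=0.25, center=(0.5, 0.5)):
    """Cell-centred samples of the discontinuous field-loop initial condition."""
    xc = (np.arange(N) + 0.5) / N
    X, Y = np.meshgrid(xc, xc, indexing='ij')
    dx_, dy_ = X - center[0], Y - center[1]
    r = np.hypot(dx_, dy_)
    inside = r < r0
    # B = A0 * (-(y - yc), (x - xc)) / r inside the loop, zero outside
    with np.errstate(invalid='ignore', divide='ignore'):
        Bx = np.where(inside, -A0 * dy_ / r, 0.0)
        By = np.where(inside, A0 * dx_ / r, 0.0)
    return np.nan_to_num(Bx), np.nan_to_num(By)

Bx, By = field_loop(128)
E_mag = 0.5 * np.mean(Bx**2 + By**2)   # volume-averaged magnetic energy
```

Note that $|\vec{B}_0| = A_0$ everywhere inside the loop, so the exact magnetic energy is constant under pure advection; any drift in `E_mag` over time is a numerical artifact.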
Namely, at $3^{rd}$ and $4^{th}$ orders (corresponding to a maximal polynomial degree of $n=2$ and $n=3$, respectively), an initial increase in the magnetic energy is observed. In the bottom panel (Fig.~\ref{fig:rkdg-loopadvection-div-energy}), maps of the magnetic energy density $B^2/2$ (normalised to the maximum value in the initial condition) are shown at $t = 2$ and at different orders. We see spurious stripes with high-frequency oscillations, aligned with the direction of the velocity field. Our results are similar to the numerical experiments performed in \cite{nunez2018} and consistent with Eq.~\eqref{eq:induction-energy-eq}. We clearly have to give up our initial hopes that going to very high order would solve the divergence-free problem. \subsection{RKDG with a locally divergence-free basis (LDF)} \label{sec:ldfrkdg} The locally divergence-free (LDF) method was first introduced by \cite{Cockburn2004} with the aim of controlling the local contribution of the divergence. Indeed, we have seen in the last sub-section that this term dominates the error budget in our simple numerical experiment. This method has been recently revisited in \cite{Guillet2019} in conjunction with several divergence cleaning schemes. LDF is built upon the previous RKDG scheme with the key difference that the approximation space considered for the magnetic field $\vec{B}$ is given by: \[ \vec{\mathcal{V}}^n = \{ \vec{v} \in [L^1]^d: \vec{v}\restrict{K} \in[\mathbb{P}^n]^d, \nabla \cdot \vec{v}\restrict{K} = 0 \}. \] The trial space considered contains only functions which are divergence free inside each element $K$ and belong to the $d$-dimensional vector space $[\mathbb{P}^n]^d$, where each component is a polynomial of degree at most $n$. One key difference between this method and the traditional RKDG is that the modal coefficients of the solution are now shared between $B_x, B_y$ and $B_z$ due to this new carefully designed vector basis.
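The curl construction that generates such locally divergence-free bases can be checked symbolically; a Python/sympy sketch for $d=2$, $n=1$ (signs and normalisation of the resulting vectors are a matter of convention):

```python
import sympy as sp

x, y = sp.symbols('x y')
monomials = [1, x, y, x*y, y**2, x**2]           # monomial basis of P^2(x, y)

basis = []
for v in monomials:
    b = (sp.diff(v, y), -sp.diff(v, x))          # 2D curl of (0, 0, v)
    if b != (0, 0):                              # v = 1 gives the zero vector
        basis.append(b)

# five basis vectors, each locally divergence free by construction
divergences = [sp.simplify(sp.diff(bx, x) + sp.diff(by, y)) for bx, by in basis]
assert len(basis) == 5
assert all(d == 0 for d in divergences)
```

The constant generator drops out, so the dimension of the LDF space is one less than that of the scalar generator space, which is the reduction in degrees of freedom mentioned below.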
Here we show only the example for $n=1$ in two space dimensions. For more details on the implementation, please refer to Appendix~\ref{ap:LDF}. \begin{example} {\textbf{d = 2, n = 1:}} Consider the monomial basis $\{1,x,y,xy,y^2,x^2\}$ of $\mathbb{P}^{2}(x,y)$. For each basis element $v_i$, form the vector $\vec{b}_i = (0, 0, v_i)^T$ and take its curl. This set of vectors spans a subspace of $[\mathbb{P}^{1}(x,y)]^2$. \[ \vec{\mathcal{V}}^1 = {\rm span}\left(\{(0,-1),(1,0),(-x,y),(2y,0),(0,2x)\}\right) \subset [\mathbb{P}^1(x,y)]^2. \] \end{example} In Fig.~\ref{fig:ldf-loopadvection-div-energy} we show the performance of the LDF scheme at different approximation orders. When measuring the divergence of the numerical solution, we observe that the local contribution of the divergence is zero (as expected). The global contribution (middle panel), while decreasing with the order, is considerably larger than for the traditional RKDG scheme. We believe this is due to the reduced number of degrees of freedom in the LDF basis. For the measured magnetic energy, we no longer see a spurious dynamo, only a decay due to numerical dissipation. We also observe less numerical dissipation when increasing the order of the method, a desirable property. In the bottom panel of the same figure, the magnetic energy density maps show some residual oscillations at $t = 2$, although much less than for the original RKDG scheme. In order to reduce these oscillations further, one traditionally uses the LDF scheme in conjunction with a divergence cleaning strategy \cite{Cockburn2004,Guillet2019,klingenberg2017}, similar to the one presented in the next section. \begin{figure} \includegraphics[width=0.94\textwidth]{chapters/img/ldf_div_energy.pdf} \begin{center} \includegraphics[width=1.0\textwidth]{chapters/img/ldf_maps.pdf} \caption{ Same as Fig.~\ref{fig:rkdg-loopadvection-div-energy} but now for the LDF scheme (solid lines).
For comparison, the results of the traditional RKDG scheme are shown as dotted lines. Note that the LDF scheme has no local divergence errors by construction (no solid lines in the top left panel). \label{fig:ldf-loopadvection-div-energy}} \end{center} \end{figure} \subsection{RKDG with hyperbolic and parabolic divergence cleaning (DivClean)} Divergence cleaning is a general strategy aiming at actively reducing divergence errors by modifying the solution at every time step. Among many possibilities that can be found in the literature \citep[see][for example]{toth1996}, we have adopted here a robust technique based on the addition of a new variable that can be used to control and dissipate the divergence of the magnetic field $\vec{B}$. Following \cite{dedner2002}, we briefly describe this method that performs what is called parabolic and hyperbolic \textit{divergence cleaning}. The idea is to introduce an additional scalar field $\psi$ and couple it to the induction equation. This method is also known as the Generalised Lagrangian Multiplier (GLM) approach \cite{munz1999, munz2000}. The induction equation in its divergence form in Eq.~\eqref{eq:mhd-induction-eq-div} is modified as \begin{equation} \label{eq:glm-induction-eq} \begin{split} \partial_t \vec{B} + \nabla \cdot (\vec{v}\otimes \vec{B} - \vec{B} \otimes \vec{v}) + \nabla \psi &= 0,\\ \mathcal{D}(\psi) + \nabla \cdot \vec{B} &= 0, \end{split} \end{equation} where $\mathcal{D}(\cdot)$ is a linear differential operator. There are different ways to choose $\mathcal{D}(\cdot)$ \cite{munz1999,munz2000}. In this work, we choose a \textit{mixed} type of correction, defining $\mathcal{D}(\cdot)$ as \[ \mathcal{D}(\psi):= \frac{1}{c_h^2}\partial_t \psi + \frac{1}{c_p^2}\psi. 
\] The new scalar variable $\psi$ is coupled to the non-vanishing divergence of the magnetic field and evolves according to a new additional partial differential equation: \[\partial_t \psi + \frac{c_h^2}{c_p^2}\psi + c_h^2\nabla\cdot\vec{B} = 0.\] Both $c_h$ and $c_p$ are free parameters tuned for each particular problem at hand. The hyperbolic parameter $c_h$ corresponds to the velocity of the waves that are carrying the divergence away from regions where errors are created. The parabolic parameter $c_p^2$ corresponds to the diffusion coefficient of the parabolic diffusion operator that damps divergence errors. There are different strategies to choose $c_h$ and $c_p$ that could lead to a robust scheme. Different methods have been proposed in the literature \cite{dedner2002,Guillet2019,Mignone2010}, and these choices boil down to setting the speed $c_h$ to be a small multiple of the maximum of the velocity field $\left| v_{\rm max} \right|$ and the magnitude of the diffusion coefficient $c_p^2$ as a small multiple of $c_h \Delta x$. In Fig.~\ref{fig:divc-loopadvection-div-energy} we show the performance of the RKDG scheme with both hyperbolic and parabolic divergence cleaning, called here DivClean, at different approximation orders. For implementation details, please refer to Appendix~\ref{ap:DivClean}. In this numerical experiment, we set $c_h$ and $c_p^2$ according to \cite{Mignone2010}, namely, we choose $c_h = 2 \left| v_{\rm max}\right| $ such that the overall time step is almost not affected, and $c_p^2 = 0.8 c_h \Delta x$. We see that both surface and volume terms of the divergence error norm are small, and they both decrease with increasing orders. The magnetic energy density maps look very smooth and symmetrical, with very small residual features close to the discontinuity. It is worth stressing that none of the tests performed here make use of TVD slope limiters, so that some residual oscillations due to the Runge phenomenon are expected. 
\begin{figure} \includegraphics[width=0.94\textwidth]{chapters/img/divclean_div_energy.pdf} \begin{center} \includegraphics[width=1.0\textwidth]{chapters/img/divclean_maps.pdf} \caption{Same as Fig.~\ref{fig:rkdg-loopadvection-div-energy} but now for the DivClean scheme (solid lines). For comparison, the results of the traditional RKDG scheme are shown as dotted lines. \label{fig:divc-loopadvection-div-energy}} \end{center} \end{figure} \section{Conclusions} \label{sec:conclusion} In this work, we have analysed in detail several variants of the high-order DG method with RK time integration for the induction equation, while attempting to preserve the constraint $\nabla\cdot\vec{B}=0$ with various degrees of success. We have then presented a novel, arbitrarily high-order numerical scheme based on a modification of the Spectral Difference (SD) method with ADER time integration for the induction equation. This new scheme preserves $\nabla\cdot\vec{B}=0$ exactly by construction. It is a natural extension of the Constrained Transport scheme to the SD method. We have proved that both the volume term and the surface term in the norm definition of the divergence vanish. We have also reformulated our scheme in terms of the vector potential, which provides a direct connection with a classical SD method for the evolution of the vector potential, allowing us to analyse its stability and dispersion properties, with results similar to \cite{abeele2008}. Furthermore, we show that the combination of ADER and SD results in a stable method when choosing an appropriate CFL condition. We have shown with various numerical experiments that our method converges at the expected order, namely $\Delta x^{n+1}$, where $n$ is the polynomial degree of the adopted interpolation Lagrange polynomials and $\Delta x$ the element size.
We have also considered the discontinuous field loop advection test case \cite{Gardiner2005}, a problem known to reveal artifacts caused by not preserving $\nabla\cdot\vec{B}=0$. We have shown again that our new method behaves well, up to remarkably high orders (polynomial degree $n=39$), conserving the magnetic energy almost exactly by drastically reducing advection errors, provided the order is high enough. Furthermore, we have also tested our method using a non-trivial velocity field and shown that it leads to the correct solution, with results qualitatively similar to those of \cite{Torrilhon2004}. We have then compared our novel method with the high-order DG variants presented in the first part of the paper. The magnetic energy evolution and the solution maps of the SD-ADER scheme are qualitatively similar to, and overall as good as, those of the Divergence Cleaning method applied to RKDG, but without the need for an additional equation and an extra variable to help control the divergence errors. We have finally discussed our future plans to extend this work to three dimensions and to fully non-linear ideal MHD. \section{Numerical results} \label{sec:numerics} \begin{figure} \centering \includegraphics[width=.48\textwidth]{chapters/img/mhd-smooth.pdf} \includegraphics[width=.5\textwidth]{chapters/img/Bx-L1error.pdf} \caption{Left panel: map of the magnetic energy density for the case $n=4$ and $N=32$. Right panel: $L^1$ convergence of the SD method for the smooth magnetic potential from Eq.~\eqref{eq:smoothloop} at different orders and spatial resolutions. \label{fig:conv-rate-smooth-pot}} \end{figure} In this section, we test our new SD-ADER scheme for the induction equation, first using a smooth initial condition, ensuring that our method is truly high order, and then using more demanding tests, namely the advection of a discontinuous field loop under constant and rotating velocity fields.
Finally, we compare our new SD-ADER scheme's performance to the RKDG variants on the discontinuous field loop advection problem. \subsection{Continuous magnetic field loop} In order to check that we are indeed solving the induction equation at the expected order of accuracy, we consider the advection of a smooth and periodic magnetic field given by the following initial conditions: \begin{equation} \label{eq:smoothloop} \vec{B} = \left( \cos(2\pi y), -\cos(2\pi x), 0\right), \end{equation} with a constant velocity field $\vec{v} = (1,1)$. We estimate the convergence rate of the proposed SD method by computing the $L^1$ error of each magnetic field component, averaged over the control volumes within each element. The $L^1$ error is defined as the $L^1$ norm of the difference in the mean value for each control volume between the numerical solution $u(t)$ at $t=1$ and the initial numerical approximation $u(0)$: \begin{equation} \label{eq:L1} L^1 = ||u(t)-u(0)||_1 = \sum_{K\in \mathcal{K}} \int_K |u(t)-u(0)|{\rm d} x{\rm d} y. \end{equation} In Fig.~\ref{fig:conv-rate-smooth-pot}, we present the $L^1$ convergence rates for $B_x$ only. We omit the results for $B_y$ as these are identical to those of $B_x$ due to the symmetry of the initial conditions. We can observe that the convergence rate of the method scales as $\Delta x^{n+1}$ (where $n$ is the polynomial degree of the interpolation Lagrange polynomials), as expected of a high-order method, and as observed in other high-order method implementations \cite{Schaal2015,Guillet2019,Derigs2018}. As introduced in the previous section, the product $(n+1) \times N$ gives the number of control volumes per spatial direction, and corresponds to the number of degrees of freedom of the method. We conclude from the observed error rates that a high-order approximation reaches machine precision with a drastically reduced number of degrees of freedom.
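The measurement of Eq.~\eqref{eq:L1} and the extraction of an observed convergence order can be sketched as follows; since the SD solver itself is not reproduced here, a simple first-order upwind update stands in for the scheme, so the observed order should be close to one (grid sizes and CFL number are illustrative):

```python
import numpy as np

def solve_upwind(N, T=1.0):
    """First-order upwind advection of Bx = cos(2*pi*y) with v = (1, 1)."""
    dx = 1.0 / N
    xg = (np.arange(N) + 0.5) * dx
    _, Y = np.meshgrid(xg, xg, indexing='ij')
    B0 = np.cos(2 * np.pi * Y)          # Bx component of the smooth loop
    B = B0.copy()
    dt = 0.4 * dx                        # CFL-limited step for |v| = (1, 1)
    t = 0.0
    while t < T - 1e-12:
        h = min(dt, T - t)
        B = (B - h / dx * (B - np.roll(B, 1, axis=0))
               - h / dx * (B - np.roll(B, 1, axis=1)))
        t += h
    # after t = 1 the exact solution returns to B0: the residual is pure error
    return float(np.sum(np.abs(B - B0)) * dx * dx)

e_coarse, e_fine = solve_upwind(32), solve_upwind(64)
order = np.log2(e_coarse / e_fine)       # observed convergence order
```

Replacing `solve_upwind` by a high-order solver and a sequence of resolutions reproduces the kind of convergence plot shown in Fig.~\ref{fig:conv-rate-smooth-pot}.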
For example, we see that the $7^{th}$-order method is able to reach machine precision for a cell size as large as $\Delta x = L/32$. \subsection{Discontinuous magnetic field loop} In this section we consider the initial conditions given by the discontinuous magnetic field loop test case, as introduced in Section~\ref{sec:overview}. We start by presenting in Fig.~\ref{fig:sd-loop-order} the solution maps computed at $t=1$ with the SD method while increasing the polynomial degree $n$, specifically for $n=0,1,2,3,6$ and $9$, and for $N=32$ cells per side. As we can see, increasing the order considerably improves the quality of the solution; furthermore, even for a number of cells as small as $32$ per side, both the seventh- and tenth-order simulations ($n=6$ and $9$ respectively) show remarkable results, preserving the shape of these discontinuous initial conditions. \begin{figure} \centering \includegraphics[width=\textwidth]{chapters/img/mhd-loop.pdf} \caption{Discontinuous magnetic field loop advection test with increasing polynomial degree and $32$ cells on a side. \label{fig:sd-loop-order} } \end{figure} In Fig.~\ref{fig:sd-loop-dof} we go a step further, testing the ``arbitrary high-order'' character of our numerical implementation. In this figure we present again the solution maps at $t=1$, showing in black the mesh for the cells and in grey the mesh for the inner control volumes. Keeping the number of degrees of freedom per dimension constant at $(n+1)\times N=40$, we show increasingly better results as the order of the scheme is increased and the number of cells is decreased. In the most extreme case, of little practical interest, we go as far as testing a $40^{th}$-order method with one cell (as shown in the bottom-left panel of Fig.~\ref{fig:sd-loop-dof}). Surprisingly for us, this one-cell simulation is able to preserve the initial conditions better than all the other cases.
Indeed, in this extreme case, the flux points are ``squeezed'' towards the boundaries of the element, which results in an apparent loss of resolution of the control volumes at the centre of the element. The increased order of accuracy easily compensates for this effect. \begin{figure} \centering \includegraphics[width=\textwidth]{chapters/img/mhd-loop-dof.pdf} \caption{Discontinuous magnetic field loop advection test with increasing polynomial degree while maintaining the number of degrees of freedom equal to $40$. \label{fig:sd-loop-dof} } \end{figure} We now present the performance of the method in preserving the magnetic energy. We show the normalised magnetic energy as a function of time for the simulations presented in Fig.~\ref{fig:sd-loop-order} (resp. Fig.~\ref{fig:sd-loop-dof}) on the left panel (resp. right panel) of Fig.~\ref{fig:sd-loop-EB}. We see that going to higher order at fixed element resolution significantly improves the conservation property of the scheme. Our simulation with 32 elements and order 10 shows virtually no advection error anymore within the simulated time interval, at the expense of increasing the number of degrees of freedom significantly. The second experiment, with a fixed number of degrees of freedom $(n+1)\times N = 40$, still shows a significant improvement in the energy conservation as the order of the method is increased. Our extreme case with only one element and a polynomial degree $n=39$ also shows no visible advection errors in the magnetic energy evolution. Note however that the computational cost of the method increases significantly with the order of accuracy, even when keeping the number of degrees of freedom constant. Regarding the conservation of the magnetic energy, for a given target accuracy, it is more efficient to go to higher order than to use more computational elements.
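The squeezing of the flux points can be quantified directly. A Python sketch, assuming flux points built from interior Gauss-Legendre nodes plus the two element end points (a common SD choice; the precise set used in our implementation may differ):

```python
import numpy as np

def flux_points(n):
    """Interior Gauss-Legendre nodes plus the two end points of [-1, 1]."""
    interior, _ = np.polynomial.legendre.leggauss(n)
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

def spacing_ratio(n):
    """Smallest gap (near the boundary) over largest gap (at the centre)."""
    gaps = np.diff(flux_points(n))
    return float(gaps.min() / gaps.max())

# the relative boundary spacing shrinks as the polynomial degree grows
ratios = {n: spacing_ratio(n) for n in (3, 9, 39)}
```

The ratio decreases monotonically with $n$: for Gauss-type node sets the boundary spacing scales roughly like $1/n^2$ while the central spacing scales like $1/n$, which is exactly the apparent central loss of resolution discussed above.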
\begin{figure} \centering \includegraphics[width=\textwidth]{chapters/img/EB-dof.pdf} \caption{Normalised magnetic energy as a function of time for the discontinuous magnetic field loop advection test. The left panel shows the results for a fixed number of elements $N=32$ and an increasing order of accuracy, corresponding to Fig.~\ref{fig:sd-loop-order}. The right panel shows the results for a fixed number of degrees of freedom $(n+1)\times N = 40$, corresponding to Fig.~\ref{fig:sd-loop-dof}. \label{fig:sd-loop-EB} } \end{figure} In order to compare our new SD scheme with the RKDG variants we have presented in Section~\ref{sec:overview}, we show in Fig.~\ref{fig:sd-loopadvection-div-energy} the exact same field loop advection test for the SD implementation with $N=128$ elements per side. The reader is kindly asked to compare to Fig.~\ref{fig:rkdg-loopadvection-div-energy} through Fig.~\ref{fig:divc-loopadvection-div-energy}. The top left panel shows our results for the divergence errors of the numerical solution, compared to the traditional RKDG scheme, for both the volume and surface terms. This comparison is admittedly one-sided: both terms are identically zero for the SD scheme, so only the traditional RKDG results are visible. We confirm that the SD method preserves $\nabla\cdot\vec{B} = 0$ to machine precision, both in a global and in a local sense. The right top panel shows again the magnetic energy evolution of the SD method, but this time with the same number of elements and order of accuracy as the experiments performed in Section~\ref{sec:overview}. We see that the SD method shows no spurious dynamo. In the bottom panel of Fig.~\ref{fig:sd-loopadvection-div-energy}, we show the solution maps for the magnetic energy density at $t=2$, in order to compare with the maps of Fig.~\ref{fig:rkdg-loopadvection-div-energy}, Fig.~\ref{fig:ldf-loopadvection-div-energy} and Fig.~\ref{fig:divc-loopadvection-div-energy}.
We note that the solution features a slight upwind asymmetry, as opposed to the solution of the DivClean RKDG method, especially for $n\leq3$. This upwind bias seems to disappear when moving to higher order. A detailed comparison of the various schemes is presented in the next section. \begin{figure} \includegraphics[width=0.94\textwidth]{chapters/img/sd_div_energy.pdf} \begin{center} \includegraphics[width=1.0\textwidth]{chapters/img/sd_maps.pdf} \caption{Same as Fig.~\ref{fig:rkdg-loopadvection-div-energy} but now for the SD scheme (solid lines). For comparison, the results of the traditional RKDG scheme are shown as dotted lines. Note that the SD scheme has no local and no global divergence errors by construction (no solid lines in the top left and top middle panels). \label{fig:sd-loopadvection-div-energy}} \end{center} \end{figure} \subsection{Rotating discontinuous magnetic field loop} \label{subsection: rotating-hump} In this section, we consider the rotation of a discontinuous magnetic field loop. This test uses a linear velocity field $\vec{v} = (-y,x)^T$ acting on the magnetic field, resulting in a rotation around the origin. In this work, we use the following initial condition for the magnetic field $\vec{B_0}$: \begin{equation} \label{eq:magloop-rot-ics} \vec{B}_0 = \begin{pmatrix} B_{x,0} \\ B_{y,0} \end{pmatrix} = \begin{pmatrix} -A_0(y-y_c)/r \\ A_0(x-x_c)/r \end{pmatrix} \quad {\rm ~for~}r < r_0, \end{equation} and $\vec{B}_0=0$ otherwise. We use here $A_0 = 0.001$, $r_0=\sfrac{1}{8}$ and $(x_c,y_c)=(\sfrac{3}{4}, \sfrac{1}{2})$. The exact solution at time $t$ is then given by: \begin{equation} \vec{B}(\vec{x},t) = R(t)\,\vec{B}_0(R(t)^{-1}\vec{x}), \end{equation} where $R(t)$ is an orthogonal matrix which rotates a vector by the angle $t$, \begin{equation} R(t) = \begin{pmatrix} \cos(t) & -\sin(t) \\ \sin(t) & \cos(t) \end{pmatrix}.
\end{equation} Lastly, the computational domain is the box $[0,1]^2$, and at the boundaries the exact solution is prescribed in the ghost cells. In Fig.~\ref{fig:sd-rotatinghump-order}, we show the solution computed by our proposed SD-ADER method for varying polynomial degrees. The solution is shown at $t=\pi$, corresponding to half a rotation. We observe that, as in the previous case, the method is able to preserve the discontinuous magnetic loop for $n \geq 1$. When comparing to Fig.~\ref{fig:sd-loop-order}, we have to highlight that the magnetic loop is now evolved up to a time $\pi$ times longer. Even then, the results remain similar, that is, increasingly better at higher order, thus showcasing the low numerical advection error that the method can reach. Furthermore, the magnetic energy is shown in Fig.~\ref{fig:sd-energy-rotation}. Once again, we expect the magnetic energy to remain constant over time, and indeed we observe improved conservation of the magnetic energy as the order is increased. In particular, for $n=6$ and $9$ the loss in magnetic energy remains below $1\%$. \begin{figure} \centering \includegraphics[width=\textwidth]{chapters/img/rotation-mhd-loop.pdf} \caption{Rotating field loop test with increasing polynomial degree and $32$ cells on a side. \label{fig:sd-rotatinghump-order} } \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{chapters/img/E-rotation-mhd-loop.pdf} \caption{Normalised magnetic energy as a function of time for the rotating discontinuous field loop test. We show the results for a fixed number of elements $N=32$ and an increasing order of accuracy, corresponding to Fig.~\ref{fig:sd-rotatinghump-order}.
} \label{fig:sd-energy-rotation} \end{figure} \section{Introduction} \label{sec:introduction} Developing numerical algorithms for the equations of ideal magneto-hydrodynamics (MHD) is of great interest in many fields of science, such as plasma physics, geophysics and astrophysics. Magnetic fields play an important role in a large variety of phenomena in nature, from the early Universe to the interstellar and intergalactic medium, and to the environments and interiors of stars and planets \cite{Brandenburg2005}. The ideal MHD equations describe conservation laws for mass, momentum and total energy on the one hand, and for magnetic flux on the other hand. The first three conservation laws form the so-called Euler sub-system, while the fourth one is called the induction sub-system. In this paper, we focus on the latter, usually called the induction equation: \begin{equation} \label{eq:mhd-induction-eq} \partial_t \vec{B} = \nabla\times(\vec{v}\times \vec{B}) + \eta \nabla^2 \vec{B}.\end{equation} This partial differential equation describes the evolution of a magnetic field $\vec{B}$ under the effect of the velocity $\vec{v}$ of an electrically conductive fluid. The coefficient $\eta$ denotes the magnetic diffusivity. In the ideal MHD case, the fluid has infinite electric conductivity, so that $\eta \to 0$ and the diffusive term can be ignored. By taking the divergence of Eq.~\eqref{eq:mhd-induction-eq}, we note that the time derivative of the divergence of $\vec{B}$ is zero at all times, meaning that the initial divergence of $\vec{B}$ is preserved: \begin{equation} \label{eq:divbevol} \partial_t (\nabla \cdot \vec{B}) = \nabla \cdot \left( \nabla\times(\vec{v}\times \vec{B}) \right) = 0, \end{equation} as the divergence of the curl of a vector is always zero. Physically, the fact that magnetic fields have no monopoles and that magnetic field lines form closed loops is translated into the initial condition \begin{equation} \label{eq:divfree} \nabla \cdot \vec{B} = 0.
\end{equation} Considering Eq.~\eqref{eq:divbevol} and Eq.~\eqref{eq:divfree} together means that the divergence of $\vec{B}$ must be zero at all times. To clearly see the erroneous evolution of our system if $\nabla\cdot\vec{B}$ happens to be nonzero, we can re-formulate Eq.~\eqref{eq:mhd-induction-eq} as \begin{equation} \label{eq:induction-eq-ext} \partial_t \vec{B} + (\vec{v}\cdot\nabla)\vec{B} = -\vec{B}(\nabla\cdot \vec{v}) + (\vec{B}\cdot\nabla)\vec{v} + \vec{v}(\nabla\cdot \vec{B}). \end{equation} Note that the second term on the left-hand side, $(\vec{v}\cdot\nabla)\vec{B}$, corresponds to the advection of $\vec{B}$ by the fluid, while the first term on the right-hand side models the \textit{compression} of the magnetic field lines and the second term is due to the \textit{stretching} of the field lines. This interpretation follows from an analogy with the vorticity equation \citep[see][for example]{davidson2001}. The last term, proportional to $\nabla\cdot \vec{B}$, is also proportional to the velocity of the flow $\vec{v}$, and vanishes only if the magnetic field is divergence free. When applying common discretisation schemes, for example the popular Finite Volume (FV) method, the divergence-free constraint in Eq.~\eqref{eq:divfree} is not necessarily fulfilled in the discrete sense. Indeed, in this case, the numerical representation of the field is based on a volume integral of the magnetic field and the magnetic flux is no longer conserved. This is a serious issue in the numerical evolution of the MHD equations, as demonstrated for example in the seminal studies of \cite{BRACKBILL1980,toth1996}, which show that a non-physical force parallel to the velocity $\vec{v}$ and proportional to $\nabla \cdot \vec{B}$ appears in the discretised conservative form of the momentum equation in the Euler sub-system. There have been many proposed methods to guarantee a divergence-free description of $\vec{B}$.
For example, the non-solenoidal component of $\vec{B}$ is removed through a Hodge-Helmholtz projection at every time step (e.g. \cite{BRACKBILL1980,zachary1994}), or the system in Eq.~\eqref{eq:induction-eq-ext} is written in a non-conservative formulation where the non-solenoidal component of $\vec{B}$ is damped and advected by the flow (e.g. \cite{powell1999,dedner2002,munz1999}). Another approach operates at the discretisation level, where the numerical approximation of the magnetic field is defined as a surface integral and collocated at face centres, while the electric field used to update the magnetic field is collocated at edge centres, in a staggered fashion \cite{yee1966, brecht1981, Evans1988, devore1989}. This method, called Constrained Transport (CT), was later adapted to the FV framework applied to the MHD equations \citep[see e.g.][]{Dai1998, Ryu_1998, Balsara_mhd_1999, Balsara_2004, Fromang2006}. The CT method is obviously closer to the original conservation law, as the magnetic flux is explicitly conserved through the cell faces. A comprehensive review of these methods in astrophysics can be found in \cite{Teyssier2019} and references therein. In addition, finite element methods can be naturally used to solve the induction and MHD type equations. In particular, when the magnetic field is approximated by an $H(div)$ vector function space (where elements of this space have square integrable divergence), it leads to continuous normal components of the approximation across element faces, while when the electric field is approximated by an $H(curl)$ vector function space (elements of this space have square integrable curl), it leads to continuous tangential components across cell faces \cite{Brezzi_1991}. For example, Raviart-Thomas/N\'ed\'elec basis functions are conforming with the aforementioned vector function spaces and have been used successfully to solve the induction equation \cite{balsara_kappeli_2018, chandrashekar_2020, praveen_2019}.
With the increased availability of high-order methods, one could ask whether a high-order approximation of the magnetic field $\vec{B}$ alone could be sufficient to control the non-vanishing divergence problem of the magnetic field. Indeed, very high-order methods have been developed in many fields of science and engineering, both in the context of the FV method \cite{Jiang1999} and in the context of the Finite Element (FE) method \cite{Li2005,Mocz2013,Fu2018,Guillet2019}. These very high-order methods have proven successful in minimising advection errors in the case of very long time integrations \cite{Gassner2013,Sengupta2006,Velasco2018}. Very high-order methods have already been developed specifically for the ideal MHD equations \citep{Nordlund1990,Jiang1999,Balsara_2004,Balsara_weno_2009,Felker2018,balsara_kappeli_2018}. It turns out, as we also show in this paper, that a very high-order scheme does not by itself solve the problem of non-zero divergence, and specific schemes have to be developed to control the associated spurious effects \citep{munz1999,Li2012,Fu2018,Guillet2019}. In this paper, we present a new, arbitrary high-order method that can perform simulations of the induction equation, based on the Spectral Difference method developed in \cite{Kopriva1998, Liu2006} and on the ADER timestepping scheme \cite{dumbser_ader_2013, balsara_ader_2018}. We show that this technique is by construction strictly divergence free, both in a local and in a global sense. While there are similarities between this work and the work presented in \cite{balsara_kappeli_2018}, there are some key differences: our scheme includes internal nodal values which are evolved according to a standard SD scheme, similar to \cite{praveen_2019}. Furthermore, there is no need for an explicit divergence-free reconstruction step, which makes achieving arbitrarily high order simpler.
In particular, Propositions \ref{proposition:pointwise_div_free} and \ref{proposition:globally_div_free} (see below) prove that our method is divergence-free by construction and arbitrarily high-order. The paper is organised as follows: we start in section \ref{sec:overview} with a detailed description of several well-known high-order methods used to model the induction equation, discussing the challenges of efficiently controlling the magnitude of the divergence of the magnetic field. Then, in section \ref{sec:sd_mhd}, we present our new Spectral Difference method for the induction equation, highlighting our new solution points for the magnetic field components and the need for two-dimensional Riemann solvers. In section \ref{sec:numerics} we evaluate the performance of the new SD-ADER method numerically through different test cases. In section \ref{sec:mhd-discussion}, we compare our new method to other very high-order schemes using different numerical experiments. Finally, in section \ref{sec:conclusion}, we present our conclusions and outlook. \section{Appendix} \subsection{Locally divergence-free basis} \label{ap:LDF} In order to design a locally divergence-free basis to represent $\vec{B}$ up to order $n+1$, the vector basis elements are computed as the curl of the elements of the polynomial space $\mathbb{P}^{n+1}$, which contains all polynomials in $x$ and $y$ up to degree $n+1$. Furthermore, as noted in \cite{klingenberg2017}, an orthogonal basis yields better conditioned mass matrices, so we apply the Gram-Schmidt orthogonalisation algorithm with the inner product: \[\eta(\vec{b}_i, \vec{b}_j) = \int_{[-1,1]^2} \vec{b}_i \cdot \vec{b}_j {\rm d} x {\rm d}y.\] The orthogonal and normalised basis vectors (up to $4^{th}$ order of approximation) are given below. These were obtained through the symbolic computation package {\ttfamily{sympy}} \cite{sympy}.
\begin{gather*} \vec{\mathcal{V}}^1 = {\rm span}\left(\left\{ \begin{pmatrix} 1.0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1.0 \end{pmatrix}, \begin{pmatrix} \sqrt{3}y \\ 0 \end{pmatrix}, \begin{pmatrix} 0\\ \sqrt{3}x \end{pmatrix}, \begin{pmatrix} \sqrt{\frac{3}{2}}x \\ -\sqrt{\frac{3}{2}}y \end{pmatrix}\right\}\right) \end{gather*} \begin{equation*} \begin{split} \vec{\mathcal{V}}^2 = \vec{\mathcal{V}}^1 \cup {\rm span}\left(\Bigg\{ \sqrt{30}\begin{pmatrix} \frac{3x^2-1}{12} \\ -\frac{xy}{2} \end{pmatrix}, \sqrt{30}\begin{pmatrix} -\frac{xy}{2} \\ \frac{3y^2-1}{12} \end{pmatrix}, \sqrt{5} \begin{pmatrix} \frac{3y^2 - 1}{2} \\ 0\end{pmatrix}, \right. \left. \sqrt{5} \begin{pmatrix} 0 \\ \frac{3x^2-1}{2}\end{pmatrix} \Bigg\}\right) \end{split} \end{equation*} \begin{equation*} \begin{split} \vec{\mathcal{V}}^3 = \vec{\mathcal{V}}^2 \cup {\rm span}\left(\Bigg\{ \frac{\sqrt{42}\sqrt{83}}{166}\begin{pmatrix} 5x^3-4x \\ -15yx^2+4y \end{pmatrix}, \frac{\sqrt{30}}{4}\begin{pmatrix}3x^2y-y \\ -3y^2x + x \end{pmatrix}, \frac{\sqrt{7}}{2}\begin{pmatrix} 5y^3-3y \\ 0 \end{pmatrix}, \frac{\sqrt{7}}{2}\begin{pmatrix}0\\ 5x^3-3x \end{pmatrix}, \right.\\ \left. \frac{\sqrt{165585}}{1824}\begin{pmatrix} -\frac{56x^3}{83}-2x(12y^2-1)+\frac{410x}{83} \\ 8y^3 + \frac{14y(12x^2-1.0)}{83} - \frac{562y}{83} \end{pmatrix}\Bigg\}\right). \end{split} \end{equation*} A detailed discussion on the approximation properties of this polynomial vector space can be found in \cite{Cockburn2004}. \subsection{Divergence cleaning} \label{ap:DivClean} We evolve the system defined by Eq.~\eqref{eq:glm-induction-eq} as described in \cite{klingenberg2017} and using the following steps: \begin{enumerate} \item We apply SSP-RK to the DG discretisation of the induction equation in its divergence form as in Eq.~\eqref{eq:mhd-induction-eq-div}. 
\item{ We then apply in an operator split fashion SSP-RK to the DG discretisation of the system \begin{equation*} \begin{split} \partial_t \vec{B} + \nabla \psi &= 0,\\ \partial_t \psi + c_h^2\nabla\cdot\vec{B} &= 0. \end{split} \end{equation*} } \item{We finally apply operator splitting to the parabolic source term \[ \psi^{n+1} :=\exp\left(-\frac{c_h^2}{c_p^2}\Delta t\right) \psi^{n+1/2}. \] } \end{enumerate} \section{A high-order Spectral Difference method with Constrained Transport} \label{sec:sd_mhd} We present in this section a new method within the FE framework that addresses most of the issues we discussed in the previous section. It is both locally and globally divergence free, and it requires neither the introduction of a new variable and a new equation, nor free parameters that are sometimes difficult to adjust. This new method is based on the Spectral Difference (SD) method \cite{Liu2006}. In this section, we present the original SD method for the semi-discrete scheme, followed by a description of our time integration strategy. We then focus on the modifications to the original method to solve the induction equation. We prove that our method is strictly divergence free, both at the continuous level inside the elements and at the interface between elements, maintaining the strict continuity of the normal component of the field. Using Fourier analysis, we finally show that our new method attains the same stability properties as reported in \citep{abeele2008,jameson2010}. \subsection{The SD method in a nutshell} For the sake of simplicity, we present the SD method using a simple scalar problem in one space dimension. The generalisation to multiple space dimensions will be discussed later. Let us denote the numerical solution as $u(x)$.
We focus on the description of the solution in one element, which is given by Lagrange interpolation polynomials $\{\ell^s_{i}(x)\}_{i=0}^n$, built on a set of points $\mathcal{S}^s=\{x^s_i\}_{i=0}^n$, called the solution points (with the superscript $s$). The numerical solution inside an element is given by: \[ u(x) = \sum_{i=0}^n u(x^s_i)\ell^s_i(x), \] where $n$ is the polynomial degree of the interpolation Lagrange polynomials. The SD method features a second set of nodes $\mathcal{S}^f=\{x^f_i\}_{i=0}^{n+1}$, called the flux points (with the superscript $f$). A numerical approximation of the flux is evaluated by another set of Lagrange interpolation polynomials $\{\ell^f_{i}(x)\}_{i=0}^{n+1}$ built on the flux points. Note that we have $n+1$ solution points and $n+2$ flux points, and that the first and the last flux points coincide with the boundary of the elements ($x^f_0$ and $x^f_{n+1}$). Moreover, at the interfaces between elements, a numerical flux based on a Riemann solver must be used to enforce the continuity of the flux across the interface. Let $\hat{f}(\cdot)$ denote this single-valued numerical flux, common to the element and its direct neighbour. The approximation for the flux is given by: \begin{equation} \label{eq:numerical_flux} f(x) = \hat{f}(u(x^f_0)) \ell^f_0(x) + \sum_{i=1}^{n} f(u(x^f_i)) \ell^f_i(x) + \hat{f}(u(x^f_{n+1})) \ell^f_{n+1}(x), \end{equation} where we wrote separately the two extreme flux points with their corresponding numerical flux. The final update of the solution is obtained using the exact derivative of the flux evaluated at the solution points, so that the semi-discrete scheme reads: \begin{align*} \frac{{\rm d}}{{\rm d}t} u(x^s_j) = - \hat{f}(u(x^f_0)) \ell^{f\prime}_0(x^s_j) -\sum_{i=1}^{n} f(u(x^f_i)) \ell^{f\prime}_i(x^s_j) - \hat{f}(u(x^f_{n+1})) \ell^{f\prime}_{n+1}(x^s_j), \end{align*} where the primes stand for the derivative of the Lagrange polynomials.
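The semi-discrete update above can be illustrated in a few lines of code. The Python sketch below is our own minimal illustration, not the implementation used in this work: it builds $n+1$ Chebyshev solution points and $n+2$ flux points on a single element mapped to $[0,1]$ (the point sets adopted later in this section), interpolates a degree-$n$ solution to the flux points, and evaluates the flux derivative at the solution points. With the linear flux $f(u)=u$ and exact boundary values standing in for the Riemann solver, the SD derivative reproduces the exact derivative for polynomial data.

```python
import numpy as np

def lagrange_basis(nodes):
    """Lagrange polynomials ell_j with ell_j(nodes[k]) = delta_jk."""
    basis = []
    for j, xj in enumerate(nodes):
        p = np.poly1d(np.delete(nodes, j), True)   # monic, roots at the other nodes
        basis.append(np.poly1d(p.coeffs / p(xj)))  # normalise so that ell_j(x_j) = 1
    return basis

n = 3
# n+1 solution points: Chebyshev zeros mapped to [0, 1]
k = np.arange(n + 1)
xs = np.sort(0.5 * (1.0 - np.cos((2 * k + 1) * np.pi / (2 * (n + 1)))))
# n+2 flux points: the element boundaries plus n interior Gauss-Legendre points
gl, _ = np.polynomial.legendre.leggauss(n)
xf = np.concatenate(([0.0], 0.5 * (gl + 1.0), [1.0]))

ls, lf = lagrange_basis(xs), lagrange_basis(xf)

# a degree-n test solution and the linear flux f(u) = u (advection with v = 1)
u_s = xs**n
# interpolate the solution to the flux points (exact for degree <= n)
u_f = np.array([sum(u_s[j] * ls[j](x) for j in range(n + 1)) for x in xf])
# the exact boundary values stand in for the single-valued Riemann flux
f_f = u_f
# semi-discrete update: du/dt(x^s_j) = -sum_i f_i d(ell^f_i)/dx evaluated at x^s_j
dudt = -np.array([sum(f_f[i] * lf[i].deriv()(x) for i in range(n + 2)) for x in xs])
```

Since the flux polynomial is built on $n+2$ nodes, the derivative of any degree-$(n+1)$ flux is recovered exactly, which is the mechanism behind the order $n+1$ accuracy of the scheme.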
A straightforward extension to more space dimensions can be achieved by the tensor product between the sets of one dimensional solution points and flux points. The left panel of Fig.~\ref{fig:sd_representation} shows in blue the solution points and in red (and salmon) colour the flux points for a classical SD scheme in two space dimensions, as well as the subcells (denoted by the black lines) which we call \textit{control volumes}. The stability of the SD method in one dimension has been shown in \cite{jameson2010} at all orders of accuracy, while the stability of the SD scheme in two dimensions, for both Cartesian meshes and unstructured meshes, has been demonstrated in \cite{abeele2008}. As shown in \cite{jameson2010}, the stability of the standard SD method depends on the proper choice of the flux points and not on the position of the solution points. The only important requirement is that each solution point must lie within the (inner) control volume delimited by the flux points. With this in mind, we use Gauss-Legendre quadrature points for the inner flux points and the zeros of the Chebyshev polynomials for the solution points, and we show in section \ref{sec:stability} that indeed this general result also holds for the induction equation. \subsection{High-order time integration using ADER} Instead of using the same SSP Runge-Kutta method as for the RKDG scheme, we decided to explore the modern version of the ADER method \cite{Dumbser2008,balsara2009,mhv2020}. Indeed, we believe this method is well suited to computing solutions to arbitrarily high order in time. We exploit this property in our numerical experiments shown in section \ref{sec:numerics}.
Consider again the scalar, one-dimensional conservation law given in Eq.~\eqref{eq:conslaw}, \begin{equation} \begin{cases} \partial_t u + \partial_x f( u ) = 0 \quad \in \Omega \times [0,\infty)\\ u(t=0) = u_0 \\ u_{\partial \Omega} = g, \end{cases} \end{equation} with suitable initial conditions and boundary conditions. For simplicity, we consider the update of the solution $u(x^s_i,t)$ for a single solution point $x^s_i$. Modern ADER schemes are based on a Galerkin projection in time. We multiply the previous conservation law by an arbitrary test function $\psi(t)$ and integrate in time over $\Delta t$: \[\int^{\Delta t}_0 \psi(t)\partial_t u {\rm d}t + \int^{\Delta t}_0 \psi(t)\partial_x f(u) {\rm d}t = 0.\] Integrating by parts (in time) yields: \begin{equation} \label{eq:ADER_ibp} \psi(\Delta t) u(\Delta t) - \psi(0) u(0) - \int^{\Delta t}_0 \partial_t \psi(t) u(t) {\rm d}t + \int^{\Delta t}_0 \psi(t) \partial_x f(u(t)) {\rm d}t = 0. \end{equation} Note that here we do not show the spatial dependency to simplify the notations. We now represent our solution using Lagrange polynomials {\it in time} $\ell_i(t)$ defined on $n+1$ Legendre quadrature points $\lbrace t_i \rbrace_{i=0}^n \in [0,\Delta t]$, which together with the quadrature weights $\lbrace w_i \rbrace_{i=0}^n$ can be used to perform integrals at the correct order in time. We are aiming at a solution with the same order of accuracy in time as in space, so $n$ is taken here equal to the polynomial degree of the spatial discretisation. We can write: \[ u(t) = \sum_{i=0}^n u_i \ell_i(t),\] and replace the integrals in Eq.~\eqref{eq:ADER_ibp} by the respective quadratures.
We now replace the arbitrary test function $\psi(t)$ by the set of Lagrange polynomials $\{\ell_j(t)\}_{j=0}^n$ and obtain: \begin{equation}\label{eq:System} \ell_j(\Delta t)\left(\sum_{i=0}^{n} u_i \ell_i(\Delta t)\right) - \ell_j(0)u(0) - \Delta t \sum_{i=0}^{n} w_i \ell^\prime_j(t_i) u_i + \Delta t \sum_{i=0}^{n} w_i \ell_j(t_i) \partial_x f(u_i) =0 . \end{equation} To derive the previous equation, we used the interpolation property of the Lagrange polynomials with $u(t_i) = u_i$. Note that $u(0)$ corresponds to the solution at the beginning of the time step. The previous system can be rewritten in matrix form, defining a mass matrix $M \in \mathbb{R}^{(n+1)\times(n+1)}$ and a right-hand side vector $r$ as: \begin{equation}\label{eq:MassmatrixAder} M_{ji} = \ell_j(\Delta t)\ell_i(\Delta t)- \Delta t w_i \ell^\prime_j(t_i)~~~{\rm and}~~~r_j = \ell_j(0)u(0) - \Delta t \sum_{i=0}^{n} w_i \ell_j(t_i) \partial_x f(u_i). \end{equation} The previous implicit non-linear equation with unknowns $\lbrace u_i \rbrace_{i=0}^n$ is now written as: \begin{equation}\label{fix:point} M_{ji} u_i = r_j(u_0,...,u_n), \end{equation} which can be solved with a fixed-point iteration method. We use a uniform initial guess with $\lbrace u^0_i=u(0)\rbrace_{i=0}^n$ and perform a standard Picard iterative scheme as follows \begin{equation} \label{eq:fixpoint_iteration} u_i^{k+1}=M^{-1}_{ij} r_j (u_0^k,...,u_n^k), \end{equation} where the index $k$ stands for the iteration count. Finally, we use these final predicted states $\lbrace u^{n}_i \rbrace_{i=0}^n$ at our quadrature points and update the final solution as: \begin{equation} u(\Delta t) = u(0) - \Delta t \sum_{i=0}^n w_i \partial_x f(u^{n}_i). \end{equation} Because we always have this final update, we only need $n$ internal corrections to the solution (iterations) to obtain a solution that is accurate up to order $n+1$ in time \citep{dumbser_ader_2013}.
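The Picard iteration of Eq.~\eqref{eq:fixpoint_iteration} is easy to reproduce in isolation. The Python sketch below is a toy version of our own, applied to the scalar ODE $u'=g(u)$, with $g$ standing in for the spatial flux derivative $-\partial_x f$: it builds the mass matrix of Eq.~\eqref{eq:MassmatrixAder}, iterates, and performs the final update. Note that $\ell_j(t_i)=\delta_{ij}$ at the quadrature nodes, which simplifies the right-hand side.

```python
import numpy as np

def lagrange_basis(nodes):
    """Lagrange polynomials in time, with ell_j(nodes[k]) = delta_jk."""
    basis = []
    for j, tj in enumerate(nodes):
        p = np.poly1d(np.delete(nodes, j), True)
        basis.append(np.poly1d(p.coeffs / p(tj)))
    return basis

def ader_step(g, u0, dt, n, iters):
    """One ADER step for the scalar ODE u' = g(u); g stands in for -d_x f(u)."""
    x, w = np.polynomial.legendre.leggauss(n + 1)  # nodes/weights on [-1, 1]
    t, w = 0.5 * (x + 1.0) * dt, 0.5 * w           # mapped: int_0^dt ~ dt * sum_i w_i
    ell = lagrange_basis(t)
    lT = np.array([l(dt) for l in ell])            # ell_j(dt)
    l0 = np.array([l(0.0) for l in ell])           # ell_j(0)
    dl = np.array([[ell[j].deriv()(ti) for ti in t] for j in range(n + 1)])
    # mass matrix M_ji = ell_j(dt) ell_i(dt) - dt w_i ell'_j(t_i)
    M = np.outer(lT, lT) - dt * dl * w[None, :]
    u = np.full(n + 1, float(u0))                  # uniform initial guess
    for _ in range(iters):
        # r_j = ell_j(0) u(0) + dt w_j g(u_j), using ell_j(t_i) = delta_ij
        u = np.linalg.solve(M, l0 * u0 + dt * w * g(u))
    return u0 + dt * np.sum(w * g(u))              # final (conservative) update

# exponential decay u' = -u over one step: the result should match exp(-dt)
u1 = ader_step(lambda u: -u, u0=1.0, dt=0.02, n=3, iters=12)
```

We iterate more than the strictly necessary $n$ times here purely to show the converged fixed point; in practice $n$ iterations already deliver order $n+1$ accuracy.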
The first-order scheme with $n=0$ does not require any iteration, as it uses only the initial guess to compute the final update, corresponding exactly to the first-order forward Euler scheme. Note that in this flavour of the ADER scheme, we need to estimate the derivative of the flux for each time slice according to the SD method, including the Riemann solvers at element boundaries. This makes it different from the traditional ADER-DG framework presented in \cite{dumbser_ader_2013}, which remains local until the final update, and more similar to \cite{mhv2020}. Precisely because we include the Riemann solver at the element boundaries, we maintain the continuity requirement on the normal component, needed for the appropriate evolution of the induction equation. We use a Courant stability condition adapted to the SD scheme, as explained in \cite{vanharen2017}, and compute the time step as: \[ \Delta t = \frac{C}{n+1} \frac{\Delta x}{|v_{\rm max}|}, \] where again $C=0.8$ and $n$ is the polynomial degree of our discretisation in space. We justify this choice by a careful time stability analysis in the following section. \subsection{A modified SD scheme for the induction equation} The traditional SD method is particularly well suited for the Euler sub-system with conservation laws based on the divergence operator. In Fig.~\ref{fig:sd_representation}, we show on the left panel the traditional discretisation of one element using SD, with the control volume boundaries shown as black solid lines, the solution points in blue inside the control volumes, and the flux points in red on the faces of each control volume. The strict conservation property of SD can be explained using for example the density. Defining the corner points of the control volumes as $( x_i, y_j)$, we can compute the total mass within a rectangle defined by the four points $( 0, 0)$, $( x_i, 0)$, $( 0, y_j)$ and $( x_i, y_j)$ as $M(x_i,y_j)$.
Note that the corner points are defined as the intersection of the lines where the flux points are defined. \begin{figure} \centering \includegraphics[width=1.0\textwidth]{chapters/img/sd_rep.pdf} \caption{Left panel: position of the solution points in blue, and of the flux points in red, for a traditional SD method with $n=2$. Right panel: position of the solution points for $B_x$ and $B_y$ in blue and of the flux points for the electric field $E_z$ and the vector potential $A_z$ in red, for our new SD method for the induction equation with $n=2$. \label{fig:sd_representation}} \end{figure} We now represent this cumulative mass everywhere inside the element using Lagrange polynomials defined on the flux points as \begin{equation} M(x,y) = \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} M(x_i,y_j)\ell_i(x) \ell_j(y), \end{equation} where we dropped the superscript $f$ for simplicity. The density field is obtained by taking the derivative of the cumulative mass as: \begin{equation} \rho(x,y) = \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} M(x_i,y_j)\ell^{\prime}_i(x) \ell^{\prime}_j(y), \end{equation} where the prime stands for the spatial derivative. This exact procedure can be used to initialise the value of $\rho$ at the solution points, as well as to prove that the SD method as described in the previous section is strictly conservative \cite{Liu2004,Liu2006}. The induction equation, however, is a conservation law for the magnetic flux through a surface, as it does not feature a divergence operator but a curl operator. We therefore propose a small modification of the classical SD scheme, similar to that of \citep{chandrashekar_2020,praveen_2019}, with different collocation points for the magnetic field components $B_x$ and $B_y$. In the two-dimensional case, we start with a vector potential $\vec{A} = (0,0,A_z)$. We approximate the z-component using the red square collocation points, as denoted in the right panel of Fig. \ref{fig:sd_representation}. 
The approximation takes the form \[ A_z(x,y) = \sum_{i=0}^{n+1}\sum_{j=0}^{n+1} A_z(x_i,y_j)\ell_i(x)\ell_j(y). \] Then, in two dimensions, the magnetic field is obtained as: \[ B_x(x,y) = \sum_{i=0}^{n+1}\sum_{j=0}^{n+1} A_z(x_i,y_j)\ell_i(x)\ell^{'}_j(y), \quad B_y(x,y) = -\sum_{i=0}^{n+1}\sum_{j=0}^{n+1} A_z(x_i,y_j)\ell^{'}_i(x)\ell_j(y). \] With this initialisation, the magnetic field $\vec{B} = (B_x, B_y)$ is divergence-free by construction. Then, we define the magnetic flux $\phi_x$ through the surface defined by the two corner points $(x_i,0)$ and $(x_i,y_j)$ as: \begin{equation} \label{eq:mfx} \phi_x(x_i,y_j) = \int_0^{y_j} B_x(x_i,y) {\rm d} y. \end{equation} Similarly, we define the magnetic flux $\phi_y$ through the surface defined by the two corner points $(0,y_j)$ and $(x_i,y_j)$ as: \begin{equation} \label{eq:mfy} \phi_y(x_i,y_j) = \int_0^{x_i} B_y(x,y_j) {\rm d}x. \end{equation} We see that $\phi_x$ and $\phi_y$ are both defined over the set of corner points $(x_i,y_j)$. We can now represent the numerical approximation $\phi_x$ (resp. $\phi_y$) using Lagrange polynomials defined on the flux points as: \[\phi_x(x,y) = \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} \phi_x(x_i,y_j) \ell_i(x) \ell_j(y). \] Then, we deduce the numerical approximation of $B_x$ as: \begin{equation} \label{eq:sd_bx} B_{x}(x,y) = \partial_y \phi_x = \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} \phi_x(x_i,y_j) \ell_i(x) \ell^{\prime}_j(y), \end{equation} and the numerical approximation of $B_y$ as: \begin{equation} \label{eq:sd_by} B_{y}(x,y) = \partial_x \phi_y = \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} \phi_y(x_i,y_j) \ell^{\prime}_i(x) \ell_j(y). \end{equation} The key difference between this configuration and the traditional SD method is that for $B_x$, the $x$ direction has an extra degree of freedom and a higher polynomial degree (similarly for $B_y$ in the $y$ direction).
This also means that the corresponding solution points for $B_x$ and $B_y$ are staggered with respect to the traditional SD method. In the right panel of Fig~\ref{fig:sd_representation}, we show the position of these new solution points for $B_x$, $B_y$ (in blue) and new flux points (in red) where the electric field will be defined, as explained below. Note that if the initial magnetic field is divergence free, applying the divergence theorem to the rectangle defined by the same four corner points as before leads to the constraint: \begin{equation} \label{eq:discrete_circulation} \phi_x(x_i,y_j) + \phi_y(x_i,y_j) - \phi_x(0,y_j) - \phi_y(x_i,0) = 0, \quad \forall i, j. \end{equation} \begin{proposition} \label{proposition:eq_25} Equation \eqref{eq:discrete_circulation} holds if we can integrate $\vec{B}$ exactly or by starting from a vector potential $\vec{A}$.
\end{proposition} \begin{proof} Using the numerical approximation of $A_z$ (and consequently of $\vec{B}$), we can write the fields $\phi_x(x,y)$ and $\phi_y(x,y)$ for any control volume $K = [0,x_m]\times [0,y_m]$ as \begin{align*} \phi_x(x_m,y_m) &= \int_0^{y_m} \sum_{i=0}^{n+1}\sum_{j=0}^{n+1} A_z(x_i,y_j)\ell_i(x_m)\ell^{'}_j(y) dy \\ &= \sum_{i=0}^{n+1}\sum_{j=0}^{n+1} A_z(x_i,y_j)\ell_i(x_m) \int_0^{y_m} \ell^{'}_j(y) dy \\ &= \sum_{i=0}^{n+1}\sum_{j=0}^{n+1} A_z(x_i,y_j)\ell_i(x_m) \left[\ell_j(y_m) - \ell_j(0) \right] \end{align*} and \begin{align*} \phi_y(x_m,y_m) &= -\int_0^{x_m} \sum_{i=0}^{n+1}\sum_{j=0}^{n+1} A_z(x_i,y_j)\ell_j(y_m)\ell^{'}_i(x) dx \\ &= - \sum_{i=0}^{n+1}\sum_{j=0}^{n+1} A_z(x_i,y_j)\ell_j(y_m) \int_0^{x_m}\ell^{'}_i(x) dx \\ &= - \sum_{i=0}^{n+1}\sum_{j=0}^{n+1} A_z(x_i,y_j)\ell_j(y_m) \left[\ell_i(x_m) - \ell_i(0) \right]. \end{align*} These integrations are exact, since the integrands are polynomials. Summing the two expressions, the terms in $\ell_i(x_m)\ell_j(y_m)$ cancel and we obtain \[ \phi_x(x_m,y_m) + \phi_y(x_m,y_m) = \sum_{i=0}^{n+1}\sum_{j=0}^{n+1} A_z(x_i,y_j)\left[\ell_i(0)\ell_j(y_m) - \ell_i(x_m)\ell_j(0)\right] = \phi_x(0,y_m) + \phi_y(x_m,0), \] so that \eqref{eq:discrete_circulation} holds. \end{proof} \begin{proposition} \label{proposition:pointwise_div_free} The proposed numerical representation of $\vec{B}$ is pointwise (or locally) strictly divergence free. \end{proposition} \begin{proof} We now evaluate the divergence of the numerical approximation $\vec{B}(x,y) = [B_x,B_y]$: \begingroup \allowdisplaybreaks \begin{align*} \partial_x B_x + \partial_y B_y &= \partial_x \left(\sum_{i=0}^{n+1} \sum_{j=0}^{n+1} \phi_x(x_i,y_j) \ell_i(x) \ell^\prime_j(y)\right) + \partial_y \left(\sum_{i=0}^{n+1} \sum_{j=0}^{n+1} \phi_y(x_i,y_j) \ell^\prime_i(x) \ell_j(y)\right)\\ &= \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} \left( \phi_x(x_i,y_j) + \phi_y(x_i,y_j) \right)\ell_i'(x) \ell_j'(y)\\ &= \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} \left( \phi_x(0,y_j) + \phi_y(x_i,0) \right)\ell_i'(x) \ell_j'(y), \end{align*} \endgroup where we used the property that the total magnetic flux through the rectangle vanishes (see Eq.~\eqref{eq:discrete_circulation}).
We can now separate and factor out the $i$ and $j$ sums as: \begingroup \allowdisplaybreaks \begin{align*} \partial_x B_x + \partial_y B_y &= \sum_{i=0}^{n+1} \sum_{j=0}^{n+1}\phi_x(0,y_j)\ell_i'(x) \ell_j'(y) + \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} \phi_y(x_i,0) \ell_i'(x) \ell_j'(y)\\ &= \left( \sum_{j=0}^{n+1}\phi_x(0,y_j) \ell_j'(y)\right) \left(\sum_{i=0}^{n+1}\ell_i'(x)\right) + \left(\sum_{i=0}^{n+1} \phi_y(x_i,0) \ell_i'(x)\right)\left(\sum_{j=0}^{n+1} \ell_j'(y)\right)\\ &= 0, \end{align*} \endgroup where we used the property of the Lagrange polynomials that $\sum_{i=0}^{n+1} \ell_i (x)=1$ so that the corresponding derivative vanishes uniformly. \end{proof} \begin{proposition} \label{proposition:globally_div_free} The proposed numerical representation of $\vec{B}$ is globally divergence free. \end{proposition} \begin{proof} If the initial magnetic field is divergence free, $B_x$ is continuous across the left and right boundaries of each element. Similarly, $B_y$ is continuous across the bottom and top boundaries of the element. It follows that $\phi_x$ (resp. $\phi_y$) is initially identical on the left (resp. bottom) edge of the right (resp. top) element and on the right (resp. top) edge of the left (resp. bottom) element. Because the adopted Lagrange polynomial basis is an interpolatory basis, and because the solution points of $B_x$ and $B_y$ are collocated on the element boundaries, the continuity of the magnetic field in the component normal to the element face is enforced by construction and at all orders. Note that the case $n=0$ corresponds exactly to the Constrained Transport method, as implemented in popular FV codes. The proposed discretisation is a generalisation of CT to arbitrary high order. \end{proof} We now describe the SD update for the induction equation. We define the electric field $\vec{E}= - \vec{v}\times\vec{B}$ and write the induction equation as \[\partial_t \vec{B} = - \nabla\times \vec{E} . 
\] Once we know the prescribed velocity field and the polynomial representation of the magnetic field throughout the element, as in Eq.~\eqref{eq:sd_bx} and Eq.~\eqref{eq:sd_by}, we can compute the electric field at the control volume corner points $(x_i,y_j)$. These are the equivalent of the flux points in the traditional SD method. Since the electric field must be single valued across element boundaries, we need to use a 1D Riemann solver for flux points inside the element edges, and a 2D Riemann solver at corner points between elements. This step is crucial as it maintains the global divergence-free property. We see for example that the electric field on an element face will be identical to the electric field on the same face of a neighbouring element. 2D Riemann solvers at element corners are also important to maintain this global property, and in the case of the induction equation, we just need to determine the 2D upwind direction using both $v_x$ and $v_y$. After we have enforced a single value for the electric field on element edges, we can interpolate the electric field inside the element, using flux points and the corresponding Lagrange polynomials, as before: \begin{equation} E_z(x,y) = \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} E_z(x_i,y_j) \ell_i(x) \ell_j(y), \end{equation} and update the magnetic field directly using the pointwise update: \begin{equation} \label{eq:induction_update} \partial_t B_{x} = - \partial_y E_z,~~~{\rm and}~~~ \partial_t B_{y} = \partial_x E_z. \end{equation} It follows directly that: \begin{equation} \partial_t \left( \partial_x B_{x} + \partial_y B_{y} \right) = 0, \end{equation} so that the divergence of the field, if zero initially, will remain zero at all times. The continuity of $E_z$ at element boundaries also implies that the continuity of the normal components of the magnetic field will be preserved after the update.
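As a concrete illustration, the divergence-free property of this representation can be checked numerically. The following minimal numpy sketch assumes $n=2$, with flux points given by the two element endpoints plus the interior Gauss-Legendre points, and random nodal values for $A_z$ (all names and values are illustrative). It builds $\phi_x$ and $\phi_y$ from $A_z$ as in the proof of Proposition~\ref{proposition:eq_25}, verifies the discrete circulation constraint, and evaluates $\partial_x B_x + \partial_y B_y$ at random points inside the element:

```python
import numpy as np

# Flux points on [0,1] for n = 2: endpoints plus the interior
# Gauss-Legendre points (roots of P_2 mapped to the unit interval).
xs = np.array([0.0, 0.5 - 0.5/np.sqrt(3.0), 0.5 + 0.5/np.sqrt(3.0), 1.0])
npts = len(xs)  # n + 2 points

def lag(j, x):
    """Lagrange basis polynomial ell_j evaluated at x."""
    return np.prod([(x - xs[k]) / (xs[j] - xs[k])
                    for k in range(npts) if k != j])

def dlag(j, x):
    """Derivative ell_j'(x), safe to evaluate at the nodes themselves."""
    s = 0.0
    for m in range(npts):
        if m == j:
            continue
        p = 1.0 / (xs[j] - xs[m])
        for k in range(npts):
            if k != j and k != m:
                p *= (x - xs[k]) / (xs[j] - xs[k])
        s += p
    return s

rng = np.random.default_rng(42)
Az = rng.standard_normal((npts, npts))  # random nodal vector potential

# Magnetic fluxes obtained by exact integration of B = curl(A).
def phi_x(xm, ym):
    return sum(Az[i, j] * lag(i, xm) * (lag(j, ym) - lag(j, 0.0))
               for i in range(npts) for j in range(npts))

def phi_y(xm, ym):
    return -sum(Az[i, j] * lag(j, ym) * (lag(i, xm) - lag(i, 0.0))
                for i in range(npts) for j in range(npts))

# Discrete circulation constraint at all corner points.
circ = max(abs(phi_x(xs[i], xs[j]) + phi_y(xs[i], xs[j])
               - phi_x(0.0, xs[j]) - phi_y(xs[i], 0.0))
           for i in range(npts) for j in range(npts))

# Pointwise divergence of the interpolated magnetic field.
px = np.array([[phi_x(xs[i], xs[j]) for j in range(npts)] for i in range(npts)])
py = np.array([[phi_y(xs[i], xs[j]) for j in range(npts)] for i in range(npts)])

def div_B(x, y):
    return sum((px[i, j] + py[i, j]) * dlag(i, x) * dlag(j, y)
               for i in range(npts) for j in range(npts))

div_max = max(abs(div_B(x, y)) for x, y in rng.random((20, 2)))
print(circ, div_max)  # both at round-off level
```

Both quantities vanish to machine precision, in agreement with the two propositions above.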
Note that at the beginning of the time step, we only need to know the values of $B_x$ and $B_y$ at their corresponding solution points to obtain the same polynomial interpolation as the one we derived using the magnetic fluxes. This follows from the uniqueness of the representation of the solution by polynomials of degree $n$. Similarly, the time update we just described can be performed only for the magnetic solution points (see Fig.~\ref{fig:sd_representation}) to fully specify our zero-divergence field for the next time step. \begin{algorithm}[H] \KwData{$A_z$ at $t=0$} \KwResult{$\vec{B}$ at $t=T$ } compute the numerical representation $A_z$\; build $\phi_x$ and $\phi_y$ by integrating $B_x$ and $B_y$ (which are given by differentiating $A_z$)\; get $B_x$ and $B_y$ by differentiating $\phi_x$ and $\phi_y$\; \While{t < T}{ perform ADER-SD update on nodal values of $B_x$ and $B_y$ through \eqref{eq:induction_update}\; } \caption{SD-ADER algorithm compatible with the induction equation.} \end{algorithm} \begin{algorithm}[H] \KwData{$B_x$ and $B_y$ at $t=t^n$} \KwResult{$B_x$ and $B_y$ at $t=t^{n+1}$} \While{iteration < total iterations}{ compute $E_z$ field on flux points (refer to Fig. \ref{fig:sd_representation}) for all time-substeps\; compute unique value of $E_z$ using a 1-dimensional Riemann solver at cell faces and a 2-dimensional Riemann solver at cell corner points\; build $E_z$ flux in space-time\; perform ADER sub-timestep update on degrees of freedom of $B_x$ and $B_y$ \; } \caption{ADER-SD update} \end{algorithm} \begin{remark} Approximating the magnetic field $\vec{B}$ through tensor product polynomials while keeping $\vec{B}\cdot\vec{n}$ continuous across cell faces is a well-known idea, encountered for example through the use of Raviart-Thomas (RT) elements \cite{Brezzi_1991} in the finite element context. In fact, this approach has been used to treat the induction equation \cite{Balsara_weno_2009,praveen_2019}.
The main difference between our method and RT is that we do not explicitly have to build the RT approximation basis. In particular, the continuity of $\vec{B}\cdot\vec{n}$ across cells is guaranteed by exact interpolation of nodal values collocated appropriately, as well as a Constrained Transport-like update using a unique electric field $E_z$. \end{remark} \begin{proposition} The previous scheme is equivalent to a simple evolution equation for the vector potential with a continuous SD scheme, for which both the solution points and the flux points are defined on the corner points of the control volumes. \end{proposition} \begin{proof} The magnetic fluxes introduced in \eqref{eq:mfx} and \eqref{eq:mfy} are analogous to a magnetic potential in two space dimensions. Indeed, in this case, one can compute directly the magnetic vector potential $\vec{A} = (0,0,A_z)$ at the corner points, using \begin{equation} A_z(x_i,y_j) = \phi_x(x_i,y_j) - \phi_y(x_i,0) = \phi_x(0,y_j) - \phi_y(x_i,y_j). \end{equation} We then interpolate the vector potential within the elements using Lagrange polynomials as: \begin{equation} A_z(x,y) = \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} A_z(x_i,y_j) \ell_i(x) \ell_j(y) \end{equation} and compute the magnetic field components as: \begin{equation} B_{x}(x,y) = \partial_y A_z~~~{\rm and}~~~ B_{y}(x,y) = - \partial_x A_z. \end{equation} This definition is equivalent to the previous one. For the SD update, we compute the electric field at each corner point, using again a Riemann solver for multi-valued flux points. The vector potential is then updated directly at the corner point using \begin{equation} \label{eq:HJ} \partial_t A_z = - E_z = v_x B_y - v_y B_x = -v_x \partial_x A_z - v_y \partial_y A_z. \end{equation} We see that this last equation yields an evolution equation for $A_z$, where all terms are evaluated at the corner points.
This corresponds to a variant of the SD method, for which the solution points are not placed at the centre of the control volumes, but migrated for example to their upper-right corner, and for which the flux points are not placed at the centre of the faces, but migrated to the same upper-right corner (thus overlapping). Note however an important difference with the traditional SD method: the vector potential $A_z$ is a continuous function of both $x$ and $y$ so that we have $n+2$ solution points instead of $n+1$. In other words, each element face shares the same values for $A_z$ and $E_z$ with the corresponding neighbouring element face. We have therefore a strict equivalence between the induction equation solved using our SD scheme for the magnetic field and the evolution equation solved using this particular SD variant for the vector potential. \end{proof} \subsection{Stability for the linear induction equation} \label{sec:stability} \begin{figure} \centering \includegraphics[width=.48\textwidth]{chapters/img/real_part.pdf} \includegraphics[width=.48\textwidth]{chapters/img/im_part.pdf} \caption{Real (left panel) and imaginary (right panel) parts of $\omega$ for different polynomial degrees $n$. In each panel, the grey line represents the exact dispersion (left) and diffusion (right) relation for the waves. The wave number is expressed here in units of $(n+1)/\Delta x$, while the frequency $\omega$ is shown in units of $(n+1) v /\Delta x$. \label{fig:stability}} \end{figure} We will now demonstrate that the proposed SD scheme for the induction equation is stable. We will also analyze the dispersion properties of the scheme at various orders. To achieve this goal, we will first exploit the strict equivalence between the magnetic field update and the vector potential update, as explained previously. We will prove the stability of the scheme in one space dimension, and use the tensor product rule to extend these results to higher dimensions. 
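To make this equivalence concrete before turning to the Fourier analysis, the following minimal numpy sketch advects a smooth vector potential in one dimension with the continuous SD variant just described ($n=2$, interior Gauss-Legendre flux points, upwind coupling for $v>0$). The classical RK4 integrator is used here purely for illustration in place of ADER, and all parameter values are illustrative:

```python
import numpy as np

# Flux/solution points on the reference element [0,1] for n = 2.
xs = np.array([0.0, 0.5 - 0.5/np.sqrt(3.0), 0.5 + 0.5/np.sqrt(3.0), 1.0])
nsol = len(xs) - 1  # n + 1 unknowns per element (leftmost point owned by neighbour)

def dlag(j, x):
    """Derivative of the Lagrange basis ell_j at x."""
    s = 0.0
    for m in range(len(xs)):
        if m == j:
            continue
        p = 1.0 / (xs[j] - xs[m])
        for k in range(len(xs)):
            if k != j and k != m:
                p *= (x - xs[k]) / (xs[j] - xs[k])
        s += p
    return s

# D[i, j] = ell_j'(x_{i+1}): derivatives of all n+2 basis functions
# evaluated at the n+1 owned solution points.
D = np.array([[dlag(j, xs[i]) for j in range(len(xs))]
              for i in range(1, len(xs))])

N, v, L, T = 16, 1.0, 1.0, 1.0        # elements, velocity, domain size, final time
dx = L / N
xg = np.array([[p*dx + xs[i+1]*dx for i in range(nsol)] for p in range(N)])
A = np.sin(2*np.pi*xg)                # initial vector potential

def rhs(A):
    left = np.roll(A[:, -1], 1)       # upwind value A_{n+1}^{p-1} (periodic)
    return -(v/dx) * (left[:, None]*D[:, 0][None, :] + A @ D[:, 1:].T)

dt = 0.2 * dx / nsol**2               # conservative time step for RK4
t = 0.0
while t < T - 1e-12:
    h = min(dt, T - t)
    k1 = rhs(A); k2 = rhs(A + 0.5*h*k1)
    k3 = rhs(A + 0.5*h*k2); k4 = rhs(A + h*k3)
    A = A + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)
    t += h

# After one full advection period the solution should return to its initial state.
err = np.max(np.abs(A - np.sin(2*np.pi*xg)))
print(err)
```

The error after one period is small and decreases with the expected order as the resolution or the polynomial degree is increased.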
We write our vector potential equation in 1D, assuming here, without loss of generality, a positive velocity field $v>0$: \begin{equation} \label{eq:vector_potential_pde} \partial_t A + v \partial_x A = 0. \end{equation} Space is discretised using $N$ equal-size elements with width $\Delta x = L/N$, labelled by a superscript $p=1,\dots,N$. We have $n+2$ solution points for the vector potential, which coincide with the flux points of the SD method, here labelled as $x_i^p$ with $i=0,\dots,n+1$. As explained earlier, we use for the inner $n$ flux points the Gauss-Legendre quadrature points, while the leftmost one, $x_0^p$, is aligned with the left element boundary, and the rightmost one, $x_{n+1}^p$, is aligned with the right element boundary. We see that we have redundant information in our solution vector $A_i^p$ as $x_{0}^p = x_{n+1}^{p-1}$, so that $A_0^p = A_{n+1}^{p-1}$ and $A_{n+1}^p=A_0^{p+1}$. This redundancy is a fundamental difference with the traditional SD method and ensures that the vector potential is continuous across element boundaries. In our present derivation, we need to avoid this duplication and define the solution vector $A_i^p$ in each element only for $i=1,\dots,n+1$, dropping the leftmost point and assigning it to the left element. This choice is arbitrary, but it corresponds here to the upwind solution of the Riemann solver at the left boundary, as we have $v>0$. Our vector potential solution points now resemble the classical SD method solution points shifted to the right of their closest flux points. The vector potential is interpolated within element $p$ using the $n+2$ flux points as: \begin{equation} A^p(x) = A_{n+1}^{p-1} \ell_0(x) + \sum_{j=1}^{n+1} A_j^p \ell_j(x). \end{equation} We can write the corresponding SD update as \begin{equation} \partial_t A_i^p = -v \left( A_{n+1}^{p-1} \ell^\prime_0(x_i) + \sum_{j=1}^{n+1} A_j^p \ell^\prime_j(x_i) \right).
\end{equation} For the stability analysis, we follow the methodology presented in \cite{hu1999, abeele2008} and study the response of the scheme to a planar wave solution of the form: \begin{equation} A(x) = \tilde A \exp(i ( k x -\omega t )), \end{equation} using periodic boundary conditions. The stability of a planar wave solution will depend on the imaginary part of $\omega$. Indeed, the amplitude of the wave will not increase if $\Im(\omega)$ remains smaller than or equal to zero. The flux point coordinates are split between the element leftmost coordinates and the relative flux point coordinates as $x_i^p = (p-1) \Delta x + x_i$, so that we have: \begin{equation} A_i^p = \tilde A \exp(-i \omega t)\exp(i k (p-1) \Delta x) \exp(i k x_i). \end{equation} The update now reads \begin{equation} -i \omega \tilde A \exp(i k x_i) = -v \left( \tilde A \exp(i k x_{n+1}) \ell^\prime_0(x_i) \exp(-i k \Delta x) + \sum_{j=1}^{n+1} \tilde A \exp(i k x_{j}) \ell^\prime_j(x_i) \right) . \end{equation} We define the solution vector for the planar wave as $u_i = \tilde A \exp(i k x_i)$. The previous equation can be written in matrix form as follows: \begin{equation} \left( -i \frac{\omega \Delta x}{v} \mathbb{I} + \mathbb{M} \right) \vec{u} = 0, \end{equation} where, for the sake of simplicity, we have considered a normalized coordinate system inside each element so that $x_0=0$ and $x_{n+1}=1$. This explains why the factor $\Delta x$ has been factored out. The matrix $\mathbb{M}$ is defined as: \begin{equation} \mathbb{M} = \begin{bmatrix} \ell^\prime_1(x_1) & \ldots & \ell^\prime_n(x_1) & \ell^\prime_{n+1}(x_1) + \ell^\prime_0(x_1) \exp(-i k \Delta x) \\ \ldots & \ldots & \ldots & \ldots\\ \ell^\prime_1(x_{n+1}) & \ldots & \ell^\prime_n(x_{n+1}) & \ell^\prime_{n+1}(x_{n+1}) + \ell^\prime_0(x_{n+1}) \exp(-i k \Delta x) \end{bmatrix}.
\end{equation} The dispersion relation of the waves is obtained by requiring \begin{equation} \det \left( -i \frac{\omega \Delta x}{v} \mathbb{I} + \mathbb{M} \right) = 0 , \end{equation} which amounts to finding the $n+1$ complex eigenvalues of the matrix $-i\mathbb{M}$. We can then represent the dispersion relation of the scheme with $\Re{(\omega)}$ and the diffusion relation with $\Im{(\omega)}$. More importantly, the wave amplitude will be damped if $\Im{(\omega)}<0$, corresponding to a stable numerical scheme, and will be amplified exponentially if $\Im{(\omega)}>0$, corresponding to an unstable numerical scheme. The maximum wave number is set by the grid size $\Delta x$ and the polynomial degree $n$ so that: \begin{equation} k_{\rm max} = \left( n+1 \right) \frac{\pi}{\Delta x} = \left( n+1 \right) N \frac{\pi}{L}. \end{equation} We see that the maximum wave number depends on the product $(n+1)\times N$ which corresponds to the number of degrees of freedom of the SD method. The previous dispersion relation generates $n+1$ eigenvalues in the $k$-interval $\left[ -\pi/\Delta x, \pi/\Delta x\right]$, owing to the periodicity of the function $\exp(-i k \Delta x)$. In order to derive the dispersion relation in the entire range of wave numbers $\left[ -(n+1)\pi/\Delta x, (n+1)\pi/\Delta x\right]$, the eigenvalues have to be shifted by an integer multiple of $2\pi/\Delta x$ to generate a single branch in the dispersion relation. We show in Fig.~\ref{fig:stability} the real and imaginary part of $\omega$ for a set of SD schemes that have exactly the same number of degrees of freedom $(n+1) \times N$, with $n$ ranging from 0 to 9. We note that, although our scheme is different from the classical SD scheme, the dispersion relations at these various orders are identical to the corresponding dispersion relation found by \cite{abeele2008} for the classical SD method. This strict equivalence is true only for a constant velocity field.
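The eigenvalue computation described above is easy to reproduce numerically. The following short numpy sketch (for $n=2$, in units where $v=\Delta x=1$, with the node choices described earlier; all names are illustrative) assembles $\mathbb{M}$ over the resolved wave numbers and checks that $\Im(\omega)\le 0$:

```python
import numpy as np

# Flux points on [0,1] for n = 2: endpoints plus interior Gauss-Legendre points.
xs = np.array([0.0, 0.5 - 0.5/np.sqrt(3.0), 0.5 + 0.5/np.sqrt(3.0), 1.0])
n1 = len(xs) - 1  # n + 1

def dlag(j, x):
    """Derivative of the Lagrange basis ell_j at x."""
    s = 0.0
    for m in range(len(xs)):
        if m == j:
            continue
        p = 1.0 / (xs[j] - xs[m])
        for k in range(len(xs)):
            if k != j and k != m:
                p *= (x - xs[k]) / (xs[j] - xs[k])
        s += p
    return s

def M_of_k(kdx):
    """Assemble the (n+1) x (n+1) matrix M for a given k * dx."""
    M = np.zeros((n1, n1), dtype=complex)
    for r, i in enumerate(range(1, n1 + 1)):       # rows: points x_1 .. x_{n+1}
        for c, j in enumerate(range(1, n1 + 1)):
            M[r, c] = dlag(j, xs[i])
        M[r, -1] += dlag(0, xs[i]) * np.exp(-1j * kdx)  # upwind neighbour term
    return M

im_max = -np.inf
for kdx in np.linspace(-np.pi, np.pi, 201):
    omega = np.linalg.eigvals(-1j * M_of_k(kdx))   # omega in units of v / dx
    im_max = max(im_max, omega.imag.max())
print(im_max)  # non-positive up to round-off: the semi-discrete scheme is stable
```

The largest imaginary part stays at round-off level (the undamped $k=0$ constant mode), confirming the stability claim for this polynomial degree.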
We see also in Fig.~\ref{fig:stability} that, although all these schemes have exactly the same number of degrees of freedom, the higher the polynomial degree, the closer the scheme gets to the true solution, namely $\Im{(\omega)}=0$ and $\Re{(\omega)}=v k$, shown as a grey line in Fig.~\ref{fig:stability}. We conclude from this analysis that the SD spatial operator is stable, because $\Im{(\omega)}\le 0$ everywhere. To explicitly connect these results to \cite{abeele2008}, one can see that the Fourier footprint $\Omega$ can be obtained from the relation $\Omega = -i\omega$. With this nomenclature, our scheme has $\Re{(\Omega)} \le 0$. The SD semi-discretisation of the PDE \eqref{eq:vector_potential_pde} leads to a system of first order ordinary differential equations in time: \begin{equation} \begin{cases} U'(t) &= F(U) \\ U(0) &= U_0. \end{cases} \end{equation} We denote by $DOF$ the total number of degrees of freedom of the semi-discrete spatial SD operator, and by $U(t):\mathbb{R}\to\mathbb{R}^{DOF}$ and $F:\mathbb{R}^{DOF}\to\mathbb{R}^{DOF}$, respectively, the vector of unknowns and the discrete operator in space for all the degrees of freedom. We now show that using the ADER time stepping strategy, we obtain a stable, fully discrete in space and time, high-order numerical scheme. We can investigate the full system of semi-discretised equations by isolating a single mode. Taking an eigenvalue $\Omega$ of the spatial discretisation operator, we consider the canonical ODE: \[ \frac{d}{dt} u = \Omega u.\] We can write a general time integration method as \[u^{n+1} = P(\Omega \Delta t) \cdot u^n,\] where the operator $P$, called the numerical amplification factor, depends on the single parameter $\Omega \Delta t$. If we designate the eigenvalues of $P$ as $z_P$, the necessary stability condition is that all the eigenvalues $z_P$ should be of modulus lower than, or equal to, one \cite{Hirsch1988NumericalCO}.
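As an illustration of this criterion, one can compute the real-axis extent of the stability region under the assumption, used above, that the amplification factor of an order-$s$ scheme coincides with the degree-$s$ Taylor truncation of $\exp(\Omega\Delta t)$; for $s\le 4$ this reproduces the classical values known for explicit RK methods. A minimal sketch:

```python
from math import factorial

def P(z, s):
    """Assumed amplification factor: degree-s Taylor truncation of exp(z)."""
    return sum(z**k / factorial(k) for k in range(s + 1))

def real_axis_limit(s, zmax=6.0, step=1e-3):
    """Largest x such that |P(-x', s)| <= 1 for all 0 <= x' <= x."""
    x = 0.0
    while x + step <= zmax and abs(P(-(x + step), s)) <= 1.0:
        x += step
    return x

for s in (1, 2, 3, 4):
    print(s, real_axis_limit(s))
# Real-axis stability limits match the classical explicit RK values:
# s=1 -> 2.0, s=2 -> 2.0, s=3 -> ~2.51, s=4 -> ~2.79
```

These intervals are the real-axis cross-sections of the stability domains plotted in the complex $\Omega\Delta t$-plane below.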
Similarly to \cite{mhv2020}, we perform a numerical stability study of the ADER scheme presented in this paper. In Fig.~\ref{fig:ader_stability}, we show the stability domains of the SD scheme, together with ADER in the $\Omega\Delta t-$plane. We can note that from $n=2$ onwards, the CFL should be reduced to a value slightly smaller than unity. We note that the stability region that we obtain is the same as the exact amplification factor $\exp(\Omega \Delta t)$ up to order $n$. This is no surprise as all methods with $n$ stages (in our case, corrections) and order $n$ have the same domain of stability \cite{Hirsch1988NumericalCO}. Then, by choosing an appropriate CFL condition, we are able to guarantee that $z_P(\Omega \Delta t)$ remain inside the ADER stability region. \begin{figure} \centering \includegraphics[width=.55\textwidth]{chapters/img/stability.pdf} \caption{Stability limits for ADER methods in the complex $\Omega \Delta t$-plane (continuous lines), from 0 to 9 corrections (ADER0 to ADER9), together with the stability domains of the SD space discretisation (symbols) for CFL = 1.0. Note that the stability region of the ADER scheme is identical to the exact amplification factor of $\exp(\Omega \Delta t)$. See text for details. \label{fig:ader_stability}} \end{figure} In the future, we would like to study the stability of our method in more detail, similarly to \cite{glaubitz2018application}, and, given the similarities between our work and the one presented in \cite{balsara_kappeli_2018}, a more detailed numerical study of the stability of this scheme is of high interest as well. \section{Acknowledgments} We gratefully thank R. Abgrall (University of Zurich) and S. Mishra (ETH Zurich) for the fruitful discussions and insights regarding this work. This research was supported in part through computational resources provided by ARC-TS (University of Michigan) and CSCS, the Swiss National Supercomputing Centre. 
\section{Discussion} \label{sec:mhd-discussion} \subsection{Comparing SD to RKDG for the induction equation} In this section, we compare in detail the different methods presented in this paper, namely our reference scheme, the traditional RKDG, a locally divergence-free basis variant of the scheme, called LDF, another variant of RKDG with divergence cleaning, called DivClean RKDG, and finally a novel Spectral Difference (SD) scheme specially designed for the induction equation, with the ADER time discretisation. The strong similarities with the Constrained Transport method would justify calling our new scheme by the long acronym CT-SD-ADER. From a theoretical point of view, since the traditional RKDG scheme does not have any mechanism to deal with $\nabla\cdot\vec{B} \neq 0$, it is not so surprising to see this scheme perform relatively poorly. What is puzzling is why going to higher orders is so detrimental. Although the global contribution to the divergence error decreases with increasing order, the local divergence errors seem to increase with increasing order. As truncation errors decrease, the global divergence error decreases owing to smaller discontinuities at element boundaries, but the local divergence increases because of high-frequency and high-amplitude oscillations that damage the solution. Considering a locally divergence-free polynomial basis for the magnetic field, as an explicit way to control the local divergence of the solution, seems like an obvious improvement of the scheme. However, we see that in this case the surface term, which measures the global divergence errors, becomes larger. We attribute this adverse effect to the fact that there are significantly fewer degrees of freedom available in the polynomial representation of the magnetic field, when compared to the traditional RKDG scheme at the same order.
Furthermore, as there is still no explicit mechanism to control global divergence errors, it is usually required to use the LDF basis in conjunction with an additional divergence cleaning mechanism to deal with the surface term. Indeed, we have shown that the divergence cleaning method (DivClean) provides an explicit, albeit non-exact, control on both the surface and the volume terms of the divergence errors, provided the two parameters, the hyperbolic cleaning speed $c_h$ and the diffusive coefficient $c_p^2$, are chosen appropriately. With these considerations in mind, we designed a new numerical method based on the SD scheme, for which both the volume term and the surface term of the divergence errors vanish exactly. This new scheme satisfies an exact conservation law for the magnetic flux through a surface. We argue this is the natural way to interpret and solve the induction equation. This approach, traditionally referred to as the Constrained Transport method, leads to a natural way to maintain zero divergence of $\vec{B}$ both locally and globally, as proved in Proposition \ref{proposition:pointwise_div_free} and Proposition \ref{proposition:globally_div_free}. We compared these four different methods by analyzing their performance when solving the advection of a discontinuous magnetic loop. The first (resp. second) panel of Fig.~\ref{fig:all-div-energy} shows the local (resp. global) divergence error of the schemes at different orders of accuracy. We note that for the SD scheme, we have zero contribution in both the volume and the surface terms. On the third panel of Fig.~\ref{fig:all-div-energy}, we show the magnetic energy evolution over time for the different methods. The traditional RKDG method is the only one to exhibit a spurious dynamo at third and fourth orders. The SD scheme appears slightly more diffusive than the other methods at second order, but its performance becomes comparable to LDF and DivClean at higher orders.
Note that the extension to orders higher than $4$ for our new SD method is straightforward, as shown in the previous section, while the extension of the LDF method to orders higher than $4$ is quite cumbersome \citep[see for example][]{Guillet2019}. In Fig.~\ref{fig:disc-adv-allcomparison}, we show the maps of the magnetic energy density for the different schemes at fourth order and at $t=2$. First, we note that the magnetic energy distribution is well behaved for all the schemes, except RKDG, for which strong high-frequency oscillations are generated. We also see that the solution computed using LDF retains some artifacts, which appear to be aligned with the velocity field. The solution computed with DivClean appears more symmetric and overall seems to have fewer artifacts, although some oscillations near the discontinuous boundary are still present, similarly to the solution computed with SD. To obtain the DivClean solution, some tuning of the parameters $c_h$ and $c_p$ is required. In particular, if $c_h$ is reduced from twice the advection velocity, as used here, to exactly the advection velocity, the same artifacts that are seen in the solution computed with LDF appear in the solution using DivClean. It is also worth stressing again that the DivClean method comes at a price: a new equation and a new variable, whose physical interpretations are unclear. A comparison of the methods with respect to their computational complexity is beyond the scope of this paper. In particular, the codes used to produce the numerical results have been developed with different programming languages and architectures. However, we can briefly comment on key similarities and differences between the DG-based methods presented and our SD-ADER method. We note that SD can be interpreted as a nodal, quadrature-free DG scheme \cite{May2011}, thus making the proposed SD method not so different from a nodal DG one in terms of its computational complexity.
Another key difference is the time-integration schemes used: for the DG-based schemes, we used SSP-RK time-integration whereas for the SD scheme we have used the ADER time-integration scheme. We note that to reach an $(n+1)$-order approximation in time, the ADER algorithm requires $n+1$ flux evaluations per time slice \cite{Jackson2017}, yielding an overall complexity of $(n+1)^2$ in time. It then becomes computationally more expensive than an explicit RK scheme, as the number of stages needed to reach an $(n+1)$-order approximation is typically well below $(n+1)^2$. However, as noted in \cite{Dumbser2018}, the ADER procedure can be formulated as a completely local predictor step suited for vectorisation, thereby reducing the complexity to $n+1$, whereas the RK scheme requires communication with its neighbours at every stage. \begin{figure} \centering \includegraphics[width=0.94\textwidth]{chapters/img/all_div_energy.pdf} \caption{Local and global divergence errors and magnetic energy evolution of the four different methods discussed in this paper for the discontinuous magnetic field loop advection test and for polynomial degrees $n=1,~2,~3$. \label{fig:all-div-energy} } \end{figure} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{chapters/img/all_maps.pdf} \caption{Maps of the magnetic energy density for the four different methods discussed in this paper for the discontinuous magnetic field loop advection test and for polynomial degree $n=3$. \label{fig:disc-adv-allcomparison} } \end{figure} \subsection{SD method for non-trivial velocity fields} In subsection \ref{subsection: rotating-hump}, we consider the problem of a rotating velocity field. We show the ability of our method to solve problems with non-trivial velocity fields, as well as Dirichlet boundary conditions. For an approximation polynomial degree of $n=1$, we obtain similar qualitative results to those of \cite{Torrilhon2004} (given that the initial $B_0$ is different).
As we increase the approximation order, we can observe that the numerical solution converges to the analytical one. \subsection{Extension of our new method to three space dimensions} In this section, we speculate about a possible straightforward extension of our scheme to three dimensions in space. It is beyond the scope of this paper to present a detailed implementation of this algorithm, but we want to stress that this extension is not only possible, but also relatively easy and consistent with the present work. There are, however, a few key differences with respect to the 2D case. The first difference comes from the definition of the magnetic flux and from the resulting magnetic field. We now define the magnetic flux of $B_x$ across a rectangle sitting in the plane $x=x_i$ and defined by the four points $(0,0)$, $(0,z_k)$, $(y_j,0)$ and $(y_j,z_k)$ as: \begin{equation} \phi_x(x_i,y_j,z_k) = \int_0^{y_j} \int_0^{z_k} B_x(x_i,y,z) {\rm d}y {\rm d}z, \end{equation} where the coordinates $y_j$ and $z_k$ correspond to the flux points in each direction, or in other words, to the corner points of each control volume inside the element. The magnetic flux is then interpolated everywhere inside the element using Lagrange polynomials defined using the flux points. \[\phi_x(x,y,z) = \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} \sum_{k=0}^{n+1}\phi_x(x_i,y_j,z_k) \ell_i(x) \ell_j(y) \ell_k(z). \] The magnetic field inside the element is obtained through a second-order derivative as follows: \begin{equation} B_{x}(x,y,z) = \partial^2_{yz} \phi_x = \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} \sum_{k=0}^{n+1} \phi_x(x_i,y_j,z_k) \ell_i(x) \ell^{\prime}_j(y) \ell^{\prime}_k(z). \end{equation} Note that this formulation is equivalent to the alternative approach we describe below using the vector potential. It is however important to understand that this interpolation is used only at initialisation, to make sure the corresponding magnetic field is strictly divergence free.
The next step is to evaluate the magnetic field at the solution points, which, in the 3D case, are now located at the centre of the face in the staggered direction: $B_x(x^f_i,y^s_j,z^s_k)$, $B_y(x^s_i,y^f_j,z^s_k)$ and $B_z(x^s_i,y^s_j,z^f_k)$, where the $f$ and $s$ superscripts correspond again to flux and solution points respectively. Once the field has been initialised on the solution points, we then interpolate the field within each face of the control volumes using Lagrange polynomials, which are defined using the solution points as in the traditional SD method. Using these definitions, it is straightforward to generalise Proposition~\ref{proposition:pointwise_div_free} to the 3D case, and prove that $\nabla\cdot\vec{B}=0$. The components of the electric field are defined at the centre of the edges between control volumes, located at flux points in the directions orthogonal to the component, and at solution points along the component's direction: $E_x(x^s_i,y^f_j,z^f_k)$, $E_y(x^f_i,y^s_j,z^f_k)$ and $E_z(x^f_i,y^f_j,z^s_k)$. The electric field is again defined as $\vec{E}= - \vec{v}\times\vec{B}$; this method therefore requires knowing the orthogonal velocities at those same edges, and solving a 1D Riemann problem at element faces and a 2D Riemann problem at element edges. As in Proposition~\ref{proposition:globally_div_free}, the SD update of the magnetic field is obtained directly using a pointwise update at the magnetic field solution points: \begin{equation} \partial_t B_{x} = \partial_z E_y - \partial_y E_z,~~~ \partial_t B_{y} = \partial_x E_z - \partial_z E_x~~~{\rm and}~~~ \partial_t B_{z} = \partial_y E_x - \partial_x E_y. \end{equation} It follows trivially, like in the 2D case, that \begin{equation} \partial_t \left( \partial_x B_{x} + \partial_y B_{y} + \partial_z B_{z}\right) = 0. \end{equation} We have here again an equivalence between the SD method applied to the magnetic field and a similar SD method applied to the vector potential.
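The pointwise cancellation above is just the identity $\nabla\cdot(\nabla\times\vec{E})=0$ applied to the interpolated electric field, and it rests only on the equality of mixed partial derivatives. A quick symbolic check with sympy, for arbitrary smooth components, confirms it:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
Ex, Ey, Ez = (sp.Function(name)(x, y, z) for name in ('E_x', 'E_y', 'E_z'))

# Pointwise update of the magnetic field, dB/dt = -curl(E).
dBx = sp.diff(Ey, z) - sp.diff(Ez, y)
dBy = sp.diff(Ez, x) - sp.diff(Ex, z)
dBz = sp.diff(Ex, y) - sp.diff(Ey, x)

# Divergence of the update vanishes identically (mixed partials commute).
residual = sp.simplify(sp.diff(dBx, x) + sp.diff(dBy, y) + sp.diff(dBz, z))
print(residual)  # 0
```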
It is however more difficult in 3D to compute the vector potential from the magnetic field. It requires a complex inversion and the choice of a gauge, using for example the Coulomb gauge, for which $\nabla \cdot \vec{A}=0$. Assuming we know the vector potential, we define for each component the line integral over the component's direction, as shown here for the $z$-direction: \begin{equation} \alpha_z(x_i,y_j,z_k) =\int_0^{z_k} A_z(x_i,y_j,z) {\rm d}z. \end{equation} As for the magnetic flux, this quantity is defined at the corner points of the control volumes using flux points in each direction. We can then use the Lagrange polynomials defined using the flux points to compute the vector potential everywhere as: \begin{equation} A_{z}(x,y,z) = \partial_z \alpha_z = \sum_{i=0}^{n+1} \sum_{j=0}^{n+1} \sum_{k=0}^{n+1} \alpha_z(x_i,y_j,z_k) \ell_i(x) \ell_j(y) \ell^{\prime}_k(z). \end{equation} We can now evaluate the vector potential at the corresponding solution points, which are, as for $\vec{E}$, defined at the edges of the control volumes: $A_x(x^s_i,y^f_j,z^f_k)$, $A_y(x^f_i,y^s_j,z^f_k)$ and $A_z(x^f_i,y^f_j,z^s_k)$. Once we know the polynomial representation of the vector potential, the magnetic field can be derived using pointwise derivatives and $\vec{B} = \nabla \times \vec{A}$. The vector potential can finally be updated directly at its solution points using (shown here only for $A_z$): \begin{equation} \partial_t A_z = -v_x \partial_x A_z - v_y \partial_y A_z + v_x \partial_z A_x + v_y \partial_z A_y. \end{equation} This is again the vector potential equation, although in a more complex form than in the 2D case. It can however be solved using our SD scheme, exactly like in 2D. \subsection{Extension of our new method to ideal MHD } The natural progression of this work is to extend the proposed SD method to the full magneto-hydrodynamics equations. The first difficulty is to solve 2D Riemann problems at element edges. 
Fortunately, 2D Riemann solvers in the context of ideal MHD have already been developed in recent years in multiple implementations of Constrained Transport for the FV Godunov method \cite{Londrillo2004,Teyssier2007,balsara2010,balsara2012,Balsara2014,balsara2015a,balsara2017}. As for the time stepping, the ADER methodology is trivially extended to 3D and nonlinear problems \cite[see e.g.][]{dumbser_ader_2013}. Our proposed version of ADER only differs in the fact that we do not remain local during the iterative process, as we require Riemann solvers as part of the SD space discretization. This means that in the MHD case, we ought to use an appropriate Riemann solver as described above. The second difficulty comes from finding the appropriate shock capturing techniques for the SD method, which traditionally has been achieved through artificial viscosity \cite{Premasuthan2014}. Finding a way to both enforce positivity and avoid clipping smooth extrema, while degrading the performance of the method as little as possible, is of utmost importance. Recent advances in shock capturing methods, such as \cite{Vilar2019}, provide a natural way of performing sub-cell limiting in a nodal discontinuous Galerkin method based on the \textit{a posteriori} subcell limiting strategy (MOOD) \cite{Dumbser2016} that can guarantee positivity while clipping smooth extrema as little as possible. This methodology seems a promising candidate for our SD method in the context of the ideal MHD equations, when used in combination with a robust finite volume scheme that preserves the divergence-free nature of the solution.
\section{Various high-order Discontinuous Galerkin methods for the induction equation} \label{sec:overview} \begin{figure} \includegraphics[width=0.94\textwidth]{chapters/img/rkdg_div_energy.pdf} \begin{center} \includegraphics[width=1.0\textwidth]{chapters/img/rkdg_maps.pdf} \caption{Performance of the traditional RKDG scheme for the magnetic loop test defined in Eq.~\eqref{eq:magloop-ics}. In the top row, the first two panels show the divergence contribution of the volume term and of the surface term, respectively. The third panel shows the magnetic energy of the solution over time. In the bottom row, maps of the magnetic energy density are shown at $t = 2$. The three runs correspond to increasing polynomial degrees ($n=1$, $n=2$ and $n=3$) and a fixed number of cells ($N = 128$ per dimension). \label{fig:rkdg-loopadvection-div-energy}} \end{center} \end{figure} The Discontinuous Galerkin (DG) method is a very popular FE scheme in the context of fluid dynamics. It is based on a weak formulation of the conservative form of the MHD equations using a Legendre polynomial basis \cite{cockburn1998}. In this context, the induction equation has to be written as \begin{equation} \partial_t \vec{B} + \nabla \cdot (\vec{v}\otimes \vec{B} - \vec{B}\otimes \vec{v}) = 0, \label{eq:mhd-induction-eq-div} \end{equation} in a conservative form compatible with the Euler sub-system. Indeed, this equation is now based on the divergence operator, which forces us to deal with the magnetic field through volume integrals. In this section, we describe three different implementations of the DG method for the induction equation. The first one is the classical DG method with Runge-Kutta time integration (RKDG), for which nothing particular is done to preserve the divergence-free character of the solution. The second method, presented for example in \cite{Guillet2019,Cockburn2004}, uses a modified polynomial basis for the magnetic field, so that it is locally exactly divergence free.
The third one allows the divergence to explicitly deviate from zero, but tries to damp the divergence errors using an additional scalar field and its corresponding equation \citep{munz2000}. We will evaluate the performance of these three classical methods using a proper measurement of the divergence error, as well as the conservation of the magnetic energy, using the famous magnetic loop advection test. \subsection{A traditional RKDG for the induction equation} In this section we describe the classical modal RKDG method using a simple scalar problem in one space dimension: \begin{equation} \label{eq:conslaw} \begin{cases} \partial_t u + \partial_x f( u ) = 0 \quad \in \Omega \times [0,\infty]\\ u(t=0) = u_0 \\ u_{\partial \Omega} = g. \end{cases} \end{equation} The generalisation to multiple space dimensions for structured Cartesian grids can be achieved through tensor products. Let $\Omega \subset \mathbb{R}$ be a regular domain which is discretised by $N$ elements $K_p = [x_{p-1/2},x_{p+1/2}]$ for $p=1,...,N$. Consider the local space $\mathcal{V}$ given by the set $\{\phi_i\}_{i=0}^{n}$ of one-dimensional Legendre polynomials of degree at most $n$ in $x$. For each element $K_{p}$, the numerical solution is written as: \[u(x,t) = \sum_{i=0}^{n} \hat{u}_i(t) \phi_i(x),\] where the modal coefficient $\hat{u}_i(t)$ is obtained by the $L^2$ projection of the solution $u(x)$ on the $i$-th Legendre basis polynomial. The DG method is based on a weak form of Eq.~\eqref{eq:conslaw}, projecting it on the polynomial basis, followed by an integration by parts. We obtain the following semi-discrete formulation of the DG method: \begin{align*} \label{eq:dg} \frac{d \hat{u}_i}{dt} + \left[ \hat{f}(u(x,t))\phi_i(x)\right]_{x_{p-1/2}}^{x_{p+1/2}} - \int_{K_p} f(u(x,t)) \partial_x \phi_i(x) {\rm d}x = 0,\quad i=0,...,n, \end{align*} where we exploited the fact that Legendre polynomials form an orthonormal basis.
Note that the surface term in the previous equation needs a Riemann solver to compute a continuous numerical flux at element boundaries, denoted here $\hat{f}$. Once the spatial component has been discretised, we are left with an ordinary differential equation of the form: \[ \frac{d}{dt} u = \mathcal{L}(u), \] where $\mathcal{L}$ denotes the DG discretisation operator. Integration in time is performed using a Strong Stability Preserving (SSP) RK method \cite{Gottlieb2005, Kubatko2014}. The time step has to fulfill a Courant-Friedrichs-Lewy (CFL) condition to achieve numerical stability, which for the RKDG scheme reads \cite{cockburn1998}: \[ \Delta t = \frac{C}{2n + 1} \frac{\Delta x}{\left| v_{\rm max}\right| }, \] where $n$ is the polynomial degree and $C$ is a constant usually set to $C=0.8$. \subsection{Quantifying divergence errors} It is highly non-trivial to estimate the error in the divergence of the magnetic field for high-order methods in general, and for FE schemes in particular. Indeed, the numerical approximation of the solution is defined in a local sense, with polynomials of degree at most $n$ inside each element, but also in a global sense by considering the solution given by the union of all the elements. A suitable measurement for $\nabla\cdot \vec{B}$ has been proposed by \cite{Cockburn2004} as \begin{equation} \label{eq:divmeas} \@ifstar{\oldnorm}{\oldnorm*}{\nabla \cdot \vec{B}} = \sum_{e\in \mathcal{E}} \int_e \@ifstar{\oldabs}{\oldabs*}{ \jp{ \vec{B}\cdot \vec{n} } } {\rm d}s + \sum_{K\in \mathcal{K}} \int_K \@ifstar{\oldabs}{\oldabs*}{\nabla \cdot \vec{B}} {\rm d}\vec{x}, \end{equation} where $\jp{ \vec{B}\cdot\vec{n}_x } = B_x^{int(K)}-B_x^{ext(K)} $ (for example) denotes the jump operator and $B_x^{int(K)},~ B_x^{ext(K)}$, are the limits of $B_x$ at interface $e$ from the interior and exterior of $K$ respectively. We assume $\vec{B}$ is smooth within each element $K \in \Omega$.
However, in the DG framework, $\vec{B}$ can be discontinuous across element boundaries (denoted here $e$). In the previous formula, $\mathcal{E}$ denotes the set of element interfaces and $\mathcal{K}$ the set of element volumes. Note that a piecewise-smooth function that is divergence free inside each element is globally divergence free if and only if the normal component of the vector field across each interface $e$ is continuous, hence the consideration of the jump in the normal component of the magnetic field across the interfaces $e$, given by the first term in Eq.~\eqref{eq:divmeas}. This divergence error measurement has been derived by exploiting the properties of the space $H({\rm div})$ \cite{nedelec1980} or by using a functional approach \cite{Cockburn2004}. In what follows, we call the first contribution the surface term, and the second contribution the volume term. \subsection{Magnetic energy conservation} The other metric used in this paper to characterise different numerical methods is the evolution of the magnetic energy. This is particularly important in the context of magnetic dynamos \citep{Roberts2000, Brandenburg2005}, as one wishes to avoid having spurious magnetic dynamos triggered by numerical errors. Using Eq.~\eqref{eq:induction-eq-ext} and considering again a non-zero divergence, the magnetic energy equation can be written as: \begin{equation} \label{eq:induction-energy-eq} \partial_t \left( \frac{B^2}{2} \right) + \left( \vec{v}\cdot \nabla \right) \left( \frac{B^2}{2} \right) = - B^2(\nabla\cdot\vec{v}) + \vec{B}\cdot(\vec{B}\cdot\nabla)\vec{v} + (\vec{B}\cdot\vec{v})(\nabla\cdot\vec{B}), \end{equation} where the last term is here again spurious. For example, in the simple case of pure advection where $\vec{v}$ is constant, one can observe that the first two terms on the right-hand side vanish, while the third term vanishes only if $\nabla\cdot\vec{B}=0$.
On the other hand, if $\nabla\cdot\vec{B} \ne 0$, depending on the solution properties, one could observe a spurious increase of the magnetic energy over time, and interpret it wrongly as a dynamo. In the advection case, the magnetic energy is expected to remain constant, although we expect the numerical solution of the magnetic energy to decay, owing to the numerical dissipation associated with the numerical method. It should however never increase. \subsection{The field loop advection test} The advection of a magnetic loop is a well-known numerical experiment introduced for example in \cite{Gardiner2005} to test the quality of the numerical solution, with respect to both divergence errors and magnetic energy conservation. The test is defined using the following {\it discontinuous} initial magnetic field, \begin{equation} \label{eq:magloop-ics} \vec{B}_0 = \begin{pmatrix} B_{x,0} \\ B_{y,0} \end{pmatrix} = \begin{pmatrix} -A_0(y-y_c)/r \\ A_0(x-x_c)/r \end{pmatrix} \quad {\rm ~for~}r < r_0, \end{equation} and $\vec{B}_0=0$ otherwise, advected with a constant velocity field $\vec{v}=(1,1)$. We use here $A_0 = 0.001$, $r_0=0.25$ and $(x_c,y_c)=(0.5, 0.5)$. We consider a square box $[0,1]\times[0,1]$ and the final time $t = 2$. This allows the loop to cross the box twice before returning to its initial position. In Fig.~\ref{fig:rkdg-loopadvection-div-energy} we show the performance of our traditional RKDG scheme at different approximation orders. When measuring the divergence errors of the numerical solution, we observe that the surface term (measuring global divergence errors) seems to decrease with the approximation order (middle panel) as expected. On the contrary, the volume term (measuring local divergence errors) does not decrease at all. In fact, local errors increase with increasing polynomial degree. Furthermore, the magnetic energy evolution is clearly incorrect.
Namely, at $3^{rd}$ and $4^{th}$ orders (corresponding to a maximal polynomial degree of $n=2$ and $n=3$, respectively), an initial increase in the magnetic energy is observed. In the bottom panel (Fig.~\ref{fig:rkdg-loopadvection-div-energy}), maps of the magnetic energy density $B^2/2$ (normalised to the maximum value in the initial condition) are shown at $t = 2$ and at different orders. We see spurious stripes with high-frequency oscillations, aligned with the direction of the velocity field. Our results are similar to the numerical experiments performed in \cite{nunez2018} and consistent with Eq.~\eqref{eq:induction-energy-eq}. We clearly have to give up our initial hopes that going to very high order would solve the divergence-free problem. \subsection{RKDG with a locally divergence-free basis (LDF)} \label{sec:ldfrkdg} The locally divergence-free (LDF) method was first introduced by \cite{Cockburn2004} with the intention of controlling the local contribution of the divergence. Indeed, we have seen in the last sub-section that this term dominates the error budget in our simple numerical experiment. This method has been recently revisited in \cite{Guillet2019} in conjunction with several divergence cleaning schemes. LDF is built upon the previous RKDG scheme with the key difference that the approximation space considered for the magnetic field $\vec{B}$ is given by: \[ \vec{\mathcal{V}}^n = \{ \vec{v} \in [L^1]^d: \vec{v}\restrict{K} \in[\mathbb{P}^n]^d, \nabla \cdot \vec{v}\restrict{K} = 0 \}. \] The trial space considered contains only functions which are divergence free inside each element $K$ and belong to the $d$-dimensional vector space $[\mathbb{P}^n]^d$, where each component is a polynomial of degree at most $n$. One key difference between this method and the traditional RKDG is that the modal coefficients of the solution are now shared between $B_x, B_y$ and $B_z$ due to this new carefully designed vector basis.
We show in this paragraph only the example for $n=1$ in two space dimensions. For more details on the implementation, please refer to Appendix~\ref{ap:LDF}. \begin{example} {\textbf{d = 2, n = 1:}} Consider the basis elements of $\mathbb{P}^{2}(x,y) = \mbox{span}(\{1,x,y,xy,y^2,x^2\})$. Form the vectors $\vec{b}_i = (0, 0, v_i)^T$ for $v_i \in {\rm basis}(\mathbb{P}^{2}(x,y))$ and take the curl of each of them. The resulting set of vectors spans a subspace of $[\mathbb{P}^{1}(x,y)]^2$. \[ \vec{\mathcal{V}}^1 = {\rm span}\left(\{(0,-1),(1,0),(-x,y),(2y,0),(0,2x)\}\right) \subset [\mathbb{P}^1(x,y)]^2. \] \end{example} In Fig.~\ref{fig:ldf-loopadvection-div-energy} we show the performance of the LDF scheme at different approximation orders. When measuring the divergence of the numerical solution, we observe that the local contribution of the divergence is zero (as expected). The global contribution (middle panel), while decreasing with the order, is considerably larger than for the traditional RKDG scheme. We believe this is due to the reduced number of degrees of freedom in the LDF basis. For the measured magnetic energy, we no longer see a spurious dynamo, only a decay due to numerical dissipation. We also observe less numerical dissipation when increasing the order of the method, a desirable property. In the bottom panel of the same figure, the magnetic energy density maps show some residual oscillations at $t = 2$, although much less than for the original RKDG scheme. In order to reduce these oscillations even more, one traditionally uses the LDF scheme in conjunction with a divergence cleaning strategy \cite{Cockburn2004,Guillet2019,klingenberg2017}, similar to the one presented in the next section. \begin{figure} \includegraphics[width=0.94\textwidth]{chapters/img/ldf_div_energy.pdf} \begin{center} \includegraphics[width=1.0\textwidth]{chapters/img/ldf_maps.pdf} \caption{ Same as Fig.~\ref{fig:rkdg-loopadvection-div-energy} but now for the LDF scheme (solid lines).
For comparison, the results of the traditional RKDG scheme are shown as dotted lines. Note that the LDF scheme has no local divergence errors by construction (no solid lines in the top left panel). \label{fig:ldf-loopadvection-div-energy}} \end{center} \end{figure} \subsection{RKDG with hyperbolic and parabolic divergence cleaning (DivClean)} Divergence cleaning is a general strategy aiming at actively reducing divergence errors by modifying the solution at every time step. Among many possibilities that can be found in the literature \citep[see][for example]{toth1996}, we have adopted here a robust technique based on the addition of a new variable that can be used to control and dissipate the divergence of the magnetic field $\vec{B}$. Following \cite{dedner2002}, we briefly describe this method that performs what is called parabolic and hyperbolic \textit{divergence cleaning}. The idea is to introduce an additional scalar field $\psi$ and couple it to the induction equation. This method is also known as the Generalised Lagrangian Multiplier (GLM) approach \cite{munz1999, munz2000}. The induction equation in its divergence form in Eq.~\eqref{eq:mhd-induction-eq-div} is modified as \begin{equation} \label{eq:glm-induction-eq} \begin{split} \partial_t \vec{B} + \nabla \cdot (\vec{v}\otimes \vec{B} - \vec{B} \otimes \vec{v}) + \nabla \psi &= 0,\\ \mathcal{D}(\psi) + \nabla \cdot \vec{B} &= 0, \end{split} \end{equation} where $\mathcal{D}(\cdot)$ is a linear differential operator. There are different ways to choose $\mathcal{D}(\cdot)$ \cite{munz1999,munz2000}. In this work, we choose a \textit{mixed} type of correction, defining $\mathcal{D}(\cdot)$ as \[ \mathcal{D}(\psi):= \frac{1}{c_h^2}\partial_t \psi + \frac{1}{c_p^2}\psi. 
\] The new scalar variable $\psi$ is coupled to the non-vanishing divergence of the magnetic field and evolves according to a new additional partial differential equation: \[\partial_t \psi + \frac{c_h^2}{c_p^2}\psi + c_h^2\nabla\cdot\vec{B} = 0.\] Both $c_h$ and $c_p$ are free parameters tuned for each particular problem at hand. The hyperbolic parameter $c_h$ corresponds to the velocity of the waves that are carrying the divergence away from regions where errors are created. The parabolic parameter $c_p^2$ corresponds to the diffusion coefficient of the parabolic diffusion operator that damps divergence errors. There are different strategies to choose $c_h$ and $c_p$ that could lead to a robust scheme. Different methods have been proposed in the literature \cite{dedner2002,Guillet2019,Mignone2010}, and these choices boil down to setting the speed $c_h$ to be a small multiple of the maximum of the velocity field $\left| v_{\rm max} \right|$ and the magnitude of the diffusion coefficient $c_p^2$ as a small multiple of $c_h \Delta x$. In Fig.~\ref{fig:divc-loopadvection-div-energy} we show the performance of the RKDG scheme with both hyperbolic and parabolic divergence cleaning, called here DivClean, at different approximation orders. For implementation details, please refer to Appendix~\ref{ap:DivClean}. In this numerical experiment, we set $c_h$ and $c_p^2$ according to \cite{Mignone2010}, namely, we choose $c_h = 2 \left| v_{\rm max}\right| $ such that the overall time step is almost not affected, and $c_p^2 = 0.8 c_h \Delta x$. We see that both surface and volume terms of the divergence error norm are small, and they both decrease with increasing orders. The magnetic energy density maps look very smooth and symmetrical, with very small residual features close to the discontinuity. It is worth stressing that none of the tests performed here make use of TVD slope limiters, so that some residual oscillations due to the Runge phenomenon are expected. 
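To make the roles of $c_h$ and $c_p$ concrete, the sketch below integrates a minimal 1D model of the mixed GLM system, taking $\vec{v}=0$, periodic boundaries, central differences and an RK4 step. The parameter values are purely illustrative (in particular, $c_p^2$ is chosen larger than the $0.8\,c_h\Delta x$ used in our runs so that the damping is clearly visible over a short integration time), and the divergence error is modelled by $\partial_x B$; this is a toy discretisation, not the RKDG scheme of this section.

```python
import numpy as np

# Minimal 1D model of mixed (hyperbolic/parabolic) GLM divergence
# cleaning with v = 0:  dB/dt   = -dpsi/dx,
#                       dpsi/dt = -ch^2 dB/dx - (ch^2/cp2) psi.
# Central differences + RK4 on a periodic grid; illustrative parameters.
N, L = 256, 1.0
dx = L / N
x = np.arange(N) * dx
ch = 1.0
cp2 = 0.1  # damping time scale cp2/ch^2 (larger than 0.8*ch*dx here)

B = np.exp(-100.0 * (x - 0.5) ** 2)  # carries a nonzero "divergence" dB/dx
psi = np.zeros(N)

def ddx(a):
    """Periodic second-order central difference."""
    return (np.roll(a, -1) - np.roll(a, 1)) / (2.0 * dx)

def rhs(B, psi):
    return -ddx(psi), -ch ** 2 * ddx(B) - (ch ** 2 / cp2) * psi

def div_norm(B):
    return np.sqrt(np.mean(ddx(B) ** 2))

d0 = div_norm(B)
dt = 0.4 * dx / ch
for _ in range(2000):  # standard RK4 stepping
    k1B, k1p = rhs(B, psi)
    k2B, k2p = rhs(B + 0.5 * dt * k1B, psi + 0.5 * dt * k1p)
    k3B, k3p = rhs(B + 0.5 * dt * k2B, psi + 0.5 * dt * k2p)
    k4B, k4p = rhs(B + dt * k3B, psi + dt * k3p)
    B = B + dt / 6.0 * (k1B + 2 * k2B + 2 * k3B + k4B)
    psi = psi + dt / 6.0 * (k1p + 2 * k2p + 2 * k3p + k4p)

print(div_norm(B) / d0)  # far below 1: the error has been damped away
```

The wave part radiates the error away at speed $c_h$, while the source term $-(c_h^2/c_p^2)\psi$ damps it, so the printed ratio ends up several orders of magnitude below unity.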
\begin{figure} \includegraphics[width=0.94\textwidth]{chapters/img/divclean_div_energy.pdf} \begin{center} \includegraphics[width=1.0\textwidth]{chapters/img/divclean_maps.pdf} \caption{Same as Fig.~\ref{fig:rkdg-loopadvection-div-energy} but now for the DivClean scheme (solid lines). For comparison, the results of the traditional RKDG scheme are shown as dotted lines. \label{fig:divc-loopadvection-div-energy}} \end{center} \end{figure} \section{Conclusions} \label{sec:conclusion} In this work, we have analysed in detail several variants of the high-order DG method with RK time integration for the induction equation, while attempting to preserve the constraint $\nabla\cdot\vec{B}=0$ with various degrees of success. We have then presented a novel, arbitrary high-order numerical scheme based on a modification of the Spectral Difference (SD) method with ADER time integration for the induction equation. This new scheme preserves $\nabla\cdot\vec{B}=0$ exactly by construction. It is a natural extension of the Constrained Transport scheme to the SD method. We have proved that both the volume term and the surface term in the norm definition of the divergence vanish. We have also reformulated our scheme in terms of the vector potential, which allows a direct connection with a classical SD method for the evolution of the vector potential, allowing us to analyse its stability and dispersion properties, with results similar to \cite{abeele2008}. Furthermore, we show that the combination of ADER and SD results in a stable method when choosing the appropriate CFL condition. We have shown with various numerical experiments that our method converges at the expected order, namely $\Delta x^{n+1}$, where $n$ is the polynomial degree of the adopted interpolation Lagrange polynomials and $\Delta x$ the element size.
We have also considered the discontinuous field loop advection test case \cite{Gardiner2005}, a problem known to reveal artifacts caused by not preserving $\nabla\cdot\vec{B}=0$. We have shown again that our new method behaves well, up to incredibly high orders (polynomial degree $n=39$), conserving the magnetic energy almost exactly by drastically reducing advection errors, provided the order is high enough. Furthermore, we have also tested our method using a non-trivial velocity field and shown that it leads to the correct solution, with results qualitatively similar to those in \cite{Torrilhon2004}. We have then compared our novel method with the high-order DG variants presented in the first part of the paper. The magnetic energy evolution and the solution maps of the SD-ADER scheme all show qualitatively similar and overall good performances when compared to the Divergence Cleaning method applied to RKDG, but without the need for an additional equation and an extra variable to help control the divergence errors. We have finally discussed our future plans to extend this work to three dimensions and to fully non-linear ideal MHD. \section{Numerical results} \label{sec:numerics} \begin{figure} \centering \includegraphics[width=.48\textwidth]{chapters/img/mhd-smooth.pdf} \includegraphics[width=.5\textwidth]{chapters/img/Bx-L1error.pdf} \caption{Left panel: map of the magnetic energy density for the case $n=4$ and $N=32$. Right panel: $L^1$ convergence of the SD method for the smooth magnetic potential from Eq.~\eqref{eq:smoothloop} at different orders and spatial resolutions. \label{fig:conv-rate-smooth-pot}} \end{figure} In this section, we test our new SD-ADER scheme for the induction equation using first a smooth initial condition, ensuring that our method is truly high order, and then using more demanding tests, namely the advection of a discontinuous field loop under a constant velocity field and under a rotating velocity field.
Finally, we compare our new SD-ADER scheme's performance to that of the various RKDG variants on the advection of a discontinuous field loop problem. \subsection{Continuous magnetic field loop} In order to check that we are indeed solving the induction equation at the expected order of accuracy, we consider the advection of a smooth and periodic magnetic field given by the following initial conditions: \begin{equation} \label{eq:smoothloop} \vec{B} = \left( \cos(2\pi y), -\cos(2\pi x), 0\right), \end{equation} with a constant velocity field $\vec{v} = (1,1)$. We estimate the convergence rate of the proposed SD method by computing the $L^1$ error of each magnetic field component, averaged over the control volumes within each element. The $L^1$ error is defined as the $L^1$ norm of the difference in the mean value for each control volume between the numerical solution $u(t)$ at $t=1$ and the initial numerical approximation $u(0)$: \begin{equation} \label{eq:L1} L^1 = ||u(t)-u(0)||_1 = \sum_{K\in \mathcal{K}} \int_K |u(t)-u(0)|{\rm d} x{\rm d} y. \end{equation} In Fig.~\ref{fig:conv-rate-smooth-pot}, we present the $L^1$ convergence rates for $B_x$ only. We omit the results for $B_y$ as these are identical to those of $B_x$ due to the symmetry of the initial conditions. We can observe that the error of the method scales as $\Delta x^{n+1}$ (where $n$ is the polynomial degree of the interpolation Lagrange polynomial), as expected of a high-order method, and as observed in other high-order method implementations \cite{Schaal2015,Guillet2019,Derigs2018}. As introduced in the previous section, the product $(n+1) \times N$ gives the number of control volumes per spatial direction, and corresponds to the number of degrees of freedom of the method. We conclude from the observed error rates that a high-order approximation reaches machine precision with a drastically reduced number of degrees of freedom.
For example, we see that the $7^{th}$-order method is able to reach machine precision for a cell size as large as $\Delta x = L/32$. \subsection{Discontinuous magnetic field loop} In this section we consider the initial conditions given by the discontinuous magnetic field loop test case, as introduced in section~\ref{sec:overview}. We start by presenting in Fig.~\ref{fig:sd-loop-order} the solution maps computed at $t=1$ with the SD method while increasing the polynomial degree $n$, specifically for $n=0,1,2,3,6$ and $9$, and for $N=32$ cells per side. As we can see, increasing the order considerably improves the quality of the solution, and furthermore, even for a number of cells as small as $32$ per side, both the seventh- and tenth-order simulations ($n=6$ and $9$ respectively) show remarkable results preserving the shape of these discontinuous initial conditions. \begin{figure} \centering \includegraphics[width=\textwidth]{chapters/img/mhd-loop.pdf} \caption{Discontinuous magnetic field loop advection test with increasing polynomial degree and $32$ cells on a side. \label{fig:sd-loop-order} } \end{figure} In Fig.~\ref{fig:sd-loop-dof} we go a step further, testing the ``arbitrary high-order'' character of our numerical implementation. In this figure we present again the solution maps at $t=1$, showing in black the mesh for the cells and in grey the mesh for the inner control volumes. While keeping the number of degrees of freedom per dimension constant at $(n+1)\times N=40$, we show increasingly better results as the order of the scheme is increased and the number of cells is decreased. In the most extreme case, of little practical interest, we go as far as testing a $40^{th}$-order method with one cell (as shown in the bottom-left panel of Fig.~\ref{fig:sd-loop-dof}). Surprisingly for us, this one-cell simulation is able to preserve the initial conditions better than all the other cases.
Indeed, in this extreme case, the flux points are ``squeezed'' towards the boundaries of the element, which results in an apparent loss of resolution of the control volumes at the centre of the element. The increased order of accuracy easily compensates for this effect. \begin{figure} \centering \includegraphics[width=\textwidth]{chapters/img/mhd-loop-dof.pdf} \caption{Discontinuous magnetic field loop advection test with increasing polynomial degree while maintaining the number of degrees of freedom equal to $40$. \label{fig:sd-loop-dof} } \end{figure} We now present the performance of the method in preserving the magnetic energy. We show the normalised magnetic energy as a function of time for the simulations presented in Fig.~\ref{fig:sd-loop-order} (resp. Fig.~\ref{fig:sd-loop-dof}) on the left panel (resp. right panel) of Fig.~\ref{fig:sd-loop-EB}. We see that going to higher order at fixed element resolution significantly improves the conservation property of the scheme. Our simulation with 32 elements and order 10 shows virtually no advection error anymore within the simulated time interval, at the expense of increasing the number of degrees of freedom significantly. The second experiment, with a fixed number of degrees of freedom $(n+1)\times N = 40$, still shows a significant improvement in the energy conservation as the order of the method is increased. Our extreme case with only one element and a polynomial degree $n=39$ also shows no visible advection errors in the magnetic energy evolution. Note however that the computational cost of the method increases significantly with the order of accuracy, even when keeping the number of degrees of freedom constant. Regarding the conservation of the magnetic energy, for a given target accuracy, it is more efficient to go to higher order than to use more computational elements.
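For reference, the normalisation used in these energy plots can be reproduced directly from the initial condition. The sketch below samples the discontinuous loop of Eq.~\eqref{eq:magloop-ics} on a uniform grid (the $512^2$ resolution is an arbitrary choice for this illustration) and compares the total magnetic energy at $t=0$ with its analytic value: since $|\vec{B}| = A_0$ everywhere inside the loop, $E_B = \tfrac{1}{2} A_0^2 \pi r_0^2$.

```python
import numpy as np

# Sample the discontinuous field loop on a uniform grid and compare
# its total magnetic energy with the analytic value: |B| = A0 inside
# the loop, so E_B = 0.5 * A0^2 * pi * r0^2.
A0, r0 = 1e-3, 0.25
xc = yc = 0.5
N = 512
h = 1.0 / N
s = (np.arange(N) + 0.5) * h            # cell-centre coordinates
X, Y = np.meshgrid(s, s, indexing="ij")
r = np.hypot(X - xc, Y - yc)
inside = r < r0
rsafe = np.maximum(r, 1e-30)            # guard against division by zero
Bx = np.where(inside, -A0 * (Y - yc) / rsafe, 0.0)
By = np.where(inside,  A0 * (X - xc) / rsafe, 0.0)

E_num = 0.5 * np.sum(Bx ** 2 + By ** 2) * h * h
E_exact = 0.5 * A0 ** 2 * np.pi * r0 ** 2
print(E_num / E_exact)  # close to 1; plots normalise by this t=0 value
```

The only discretisation error here comes from cells straddling the loop boundary, so the ratio converges to one as the sampling grid is refined.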
\begin{figure} \centering \includegraphics[width=\textwidth]{chapters/img/EB-dof.pdf} \caption{Normalised magnetic energy as a function of time for the discontinuous magnetic field loop advection test. The left panel shows the results for a fixed number of elements $N=32$ and an increasing order of accuracy, corresponding to Fig.~\ref{fig:sd-loop-order}. The right panel shows the results for a fixed number of degrees of freedom $(n+1)\times N = 40$, corresponding to Fig.~\ref{fig:sd-loop-dof}. \label{fig:sd-loop-EB} } \end{figure} In order to compare our new SD scheme with the RKDG variants we have presented in section~\ref{sec:overview}, we show in Fig.~\ref{fig:sd-loopadvection-div-energy} the exact same field loop advection test for the SD implementation with $N=128$ elements per side. The reader is kindly asked to compare with Fig.~\ref{fig:rkdg-loopadvection-div-energy} through Fig.~\ref{fig:divc-loopadvection-div-energy}. The two top left panels show our results for the divergence errors of the numerical solution (volume and surface terms), compared to the traditional RKDG scheme. This comparison is of course trivial: both terms are identically zero for the SD scheme, so that only the traditional RKDG results are visible. We confirm that the SD method preserves $\nabla\cdot\vec{B} = 0$ to machine precision, both in a global and in a local sense. The top right panel shows again the magnetic energy evolution of the SD method, but this time with the same number of elements and order of accuracy as the experiments performed in section~\ref{sec:overview}. We see that the SD method shows no spurious dynamo. In the bottom panel of Fig.~\ref{fig:sd-loopadvection-div-energy}, we show the solution maps for magnetic energy density at $t=2$, in order to compare with the maps of Fig.~\ref{fig:rkdg-loopadvection-div-energy}, Fig.~\ref{fig:ldf-loopadvection-div-energy} and Fig.~\ref{fig:divc-loopadvection-div-energy}.
We note that the solution features a slight upwind asymmetry, as opposed to the solution of the DivClean RKDG method, especially for $n\leq3$. This upwind bias seems to disappear when moving to higher order. A detailed comparison of the various schemes is presented in the next section. \begin{figure} \includegraphics[width=0.94\textwidth]{chapters/img/sd_div_energy.pdf} \begin{center} \includegraphics[width=1.0\textwidth]{chapters/img/sd_maps.pdf} \caption{Same as Fig.~\ref{fig:rkdg-loopadvection-div-energy} but now for the SD scheme (solid lines). For comparison, the results of the traditional RKDG scheme are shown as dotted lines. Note that the SD scheme has no local and no global divergence errors by construction (no solid lines in the top left and top middle panels). \label{fig:sd-loopadvection-div-energy}} \end{center} \end{figure} \subsection{Rotating discontinuous magnetic field loop} \label{subsection: rotating-hump} In this section, we consider the rotation of a discontinuous magnetic field loop. This test uses the linear velocity field $\vec{v} = (-y,x)^T$ acting on the magnetic field, resulting in a rotation around the origin. In this work, we use the following initial condition for the magnetic field $\vec{B}_0$: \begin{equation} \label{eq:magloop-ics-rot} \vec{B}_0 = \begin{pmatrix} B_{x,0} \\ B_{y,0} \end{pmatrix} = \begin{pmatrix} -A_0(y-y_c)/r \\ A_0(x-x_c)/r \end{pmatrix} \quad {\rm ~for~}r < r_0, \end{equation} and $\vec{B}_0=0$ otherwise. We use here $A_0 = 0.001$, $r_0=\sfrac{1}{8}$ and $(x_c,y_c)=(\sfrac{3}{4}, \sfrac{1}{2})$. Then, the exact solution at time $t$ is given by: \begin{equation} \vec{B}(\vec{x},t) = R(t)^{-1}B_0(R(t)\vec{x}), \end{equation} where $R(t)$ is an orthogonal matrix that rotates a vector by the angle $t$, \begin{equation} R(t) = \begin{pmatrix} \cos(t) & -\sin(t) \\ \sin(t) & \cos(t) \end{pmatrix}.
\end{equation}
Lastly, the computational domain is the box $[0,1]^2$, and at the boundary the exact solution is prescribed in the ghost cells.
In Fig.~\ref{fig:sd-rotatinghump-order}, we show the solution computed by our proposed SD-ADER method for varying polynomial degrees. The solution is shown at $t=\pi$, corresponding to half a rotation. We observe that, as in the previous case, the method is able to preserve the discontinuous magnetic loop for $n$ greater than or equal to 1. When comparing to Fig.~\ref{fig:sd-loop-order}, we have to highlight that the magnetic loop is being evolved up to a time $\pi$ times larger. Even then, the results remain similar, that is, increasingly better for higher order, thus showcasing the low numerical advection error that the method can reach. Furthermore, in Fig.~\ref{fig:sd-energy-rotation}, the magnetic energy is shown. Once again, we expect the magnetic energy to remain constant over time, and indeed, we observe improvement in the conservation of the magnetic energy as the order is increased. Concretely, for $n=6$ and $9$ we observe a loss in magnetic energy below $1\%$.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{chapters/img/rotation-mhd-loop.pdf}
\caption{Rotating field loop test with increasing polynomial degree and $32$ cells on a side.
\label{fig:sd-rotatinghump-order}
}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{chapters/img/E-rotation-mhd-loop.pdf}
\caption{Normalised magnetic energy as a function of time for the rotating discontinuous field loop test. We show the results for a fixed number of elements $N=32$ and an increasing order of accuracy, corresponding to Fig.~\ref{fig:sd-rotatinghump-order}.
}
\label{fig:sd-energy-rotation}
\end{figure}
\section{Introduction}
\label{sec:introduction}
Developing numerical algorithms for the equations of ideal magneto-hydrodynamics (MHD) is of great interest in many fields of science, such as plasma physics, geophysics and astrophysics. Magnetic fields play an important role in a large variety of phenomena in nature, from the early universe, to the interstellar and intergalactic medium, to the environments and interiors of stars and planets \cite{Brandenburg2005}. The ideal MHD equations describe conservation laws for mass, momentum and total energy on the one hand, and for magnetic flux on the other hand. The first three conservation laws form what is called the Euler sub-system, while the fourth one is called the induction sub-system. In this paper, we focus on the latter, usually called the induction equation:
\begin{equation}
\label{eq:mhd-induction-eq}
\partial_t \vec{B} = \nabla\times(\vec{v}\times \vec{B}) + \eta \nabla^2 \vec{B}.
\end{equation}
This partial differential equation describes the evolution of a magnetic field $\vec{B}$ under the effect of the velocity $\vec{v}$ of an electrically conductive fluid. The coefficient $\eta$ denotes the magnetic diffusivity. In the ideal MHD case, the fluid has infinite electric conductivity, so that $\eta \to 0$ and the diffusive term can be ignored. By taking the divergence of Eq.~\eqref{eq:mhd-induction-eq}, we note that the time derivative of the divergence of $\vec{B}$ vanishes, meaning that the initial divergence of $\vec{B}$ is preserved:
\begin{equation}
\label{eq:divbevol}
\partial_t (\nabla \cdot \vec{B}) = \nabla \cdot \left( \nabla\times(\vec{v}\times \vec{B}) \right) = 0,
\end{equation}
as the divergence of the curl of a vector is always zero. Physically, the fact that magnetic fields have no monopoles and that magnetic field lines form closed loops is translated into the initial condition
\begin{equation}
\label{eq:divfree}
\nabla \cdot \vec{B} = 0.
\end{equation}
Considering Eq.~\eqref{eq:divbevol} and Eq.~\eqref{eq:divfree} together means that the divergence of $\vec{B}$ must be zero at all times. To clearly see the erroneous evolution of our system if $\nabla\cdot\vec{B}$ happens to be nonzero, we can re-formulate Eq.~\eqref{eq:mhd-induction-eq} as
\begin{equation}
\label{eq:induction-eq-ext}
\partial_t \vec{B} + (\vec{v}\cdot\nabla)\vec{B} = -\vec{B}(\nabla\cdot \vec{v}) + (\vec{B}\cdot\nabla)\vec{v} + \vec{v}(\nabla\cdot \vec{B}).
\end{equation}
Note that the second term on the left-hand side, $(\vec{v}\cdot\nabla)\vec{B}$, corresponds to the advection of $\vec{B}$ by the fluid, the first term on the right-hand side models the \textit{compression} of the magnetic field lines, and the second term is due to the \textit{stretching} of the field lines. This interpretation follows from an analogy with the vorticity equation \citep[see][for example]{davidson2001}. The last term, proportional to $\nabla\cdot \vec{B}$, is also proportional to the velocity of the flow $\vec{v}$, and vanishes only if the magnetic field is divergence free.
When applying common discretisation schemes, for example the popular Finite Volume (FV) method, the divergence-free constraint in Eq.~\eqref{eq:divfree} is not necessarily fulfilled in the discrete sense. Indeed, in this case, the numerical representation of the field is based on a volume integral of the magnetic field, and the magnetic flux is not conserved anymore. This is a major issue in the numerical evolution of the MHD equations, as demonstrated in the seminal studies of \cite{BRACKBILL1980,toth1996}, which show that a non-physical force parallel to the velocity $\vec{v}$ and proportional to $\nabla \cdot \vec{B}$ appears in the discretised conservative form of the momentum equation in the Euler sub-system. There have been many proposed methods to guarantee a divergence-free description of $\vec{B}$.
For example, the non-solenoidal component of $\vec{B}$ can be removed through a Hodge-Helmholtz projection at every time step (e.g. \cite{BRACKBILL1980,zachary1994}), or the system in Eq.~\eqref{eq:induction-eq-ext} can be written in a non-conservative formulation where the non-solenoidal component of $\vec{B}$ is damped and advected by the flow (e.g. \cite{powell1999,dedner2002,munz1999}). Another approach works at the discretisation level, where the numerical approximation of the magnetic field is defined as a surface integral and collocated at face centres, while the electric field used to update the magnetic field is collocated at edge centres, in a staggered fashion \cite{yee1966, brecht1981, Evans1988, devore1989}. This method, called Constrained Transport (CT), was later adapted to the FV framework applied to the MHD equations \citep[see e.g.][]{Dai1998, Ryu_1998, Balsara_mhd_1999, Balsara_2004, Fromang2006}. The CT method is obviously closer to the original conservation law, as the magnetic flux is explicitly conserved through the cell faces. A comprehensive review of these methods in astrophysics can be found in \cite{Teyssier2019} and references therein.
In addition, finite element methods can be naturally used to solve the induction and MHD-type equations. In particular, when the magnetic field is approximated by an $H({\rm div})$ vector function space (elements of this space have square integrable divergence), the normal components of the approximation are continuous across element faces, while when the electric field is approximated by an $H({\rm curl})$ vector function space (elements of this space have square integrable curl), the tangential components are continuous across cell faces \cite{Brezzi_1991}. For example, Raviart-Thomas/N\'ed\'elec basis functions are conforming with the aforementioned vector function spaces and have been used successfully to solve the induction equation \cite{balsara_kappeli_2018, chandrashekar_2020, praveen_2019}.
With the increased availability of high-order methods, one may ask whether a high-order approximation of the magnetic field $\vec{B}$ alone is sufficient to control the non-vanishing divergence problem of the magnetic field. Indeed, very high-order methods have been developed in many fields of science and engineering, both in the context of the FV method \cite{Jiang1999} and in the context of the Finite Element (FE) method \cite{Li2005,Mocz2013,Fu2018,Guillet2019}. These very high-order methods have proven successful in minimising advection errors in the case of very long time integration \cite{Gassner2013,Sengupta2006,Velasco2018}. Very high-order methods have already been developed specifically for the ideal MHD equations \citep{Nordlund1990,Jiang1999,Balsara_2004,Balsara_weno_2009,Felker2018,balsara_kappeli_2018}. It turns out, as we also show in this paper, that a very high-order scheme does not by itself solve the problem of non-zero divergence, and specific schemes have to be developed to control the associated spurious effects \citep{munz1999,Li2012,Fu2018,Guillet2019}.
In this paper, we present a new, arbitrary high-order method for simulations of the induction equation, based on the Spectral Difference method developed in \cite{Kopriva1998, Liu2006} and on the ADER timestepping scheme \cite{dumbser_ader_2013, balsara_ader_2018}. We show that this technique is by construction strictly divergence free, both in a local and in a global sense. While there are similarities between this work and the work presented in \cite{balsara_kappeli_2018}, there are some key differences: our scheme includes internal nodal values which are evolved according to a standard SD scheme, similar to \cite{praveen_2019}. Furthermore, there is no need for an explicit divergence-free reconstruction step, which means achieving arbitrarily high order is simpler.
In particular, Propositions \ref{proposition:pointwise_div_free} and \ref{proposition:globally_div_free} (see below) prove that our method is divergence-free by construction and arbitrarily high-order.
The paper is organised as follows: we start in section \ref{sec:overview} with a detailed description of several well-known high-order methods used to model the induction equation, discussing the challenges of efficiently controlling the magnitude of the divergence of the magnetic field. Then, in section \ref{sec:sd_mhd}, we present our new Spectral Difference method for the induction equation, highlighting our new solution points for the magnetic field components and the need for two-dimensional Riemann solvers. In section \ref{sec:numerics}, we evaluate the performance of the new SD-ADER method numerically through different test cases. In section \ref{sec:mhd-discussion}, we compare our new method to other very high-order schemes using different numerical experiments. Finally, in section \ref{sec:conclusion}, we present our conclusions and outlook.
\section{Appendix}
\subsection{Locally divergence-free basis}
\label{ap:LDF}
In order to design a locally divergence-free basis to represent $\vec{B}$ up to order $n+1$, the vector basis elements are computed as the curl of the elements of the polynomial space $\mathbb{P}^{n+1}$, which contains all polynomials in $x$ and $y$ up to degree $n+1$. Furthermore, as noted in \cite{klingenberg2017}, an orthogonal basis yields better-conditioned mass matrices, so we apply the Gram-Schmidt orthogonalisation algorithm with the inner product:
\[\eta(\vec{b}_i, \vec{b}_j) = \int_{[-1,1]^2} \vec{b}_i \cdot \vec{b}_j \,{\rm d}x \,{\rm d}y.\]
The orthogonal and normalised basis vectors (up to $4^{th}$ order of approximation) are given below. These were obtained through the symbolic computation package {\ttfamily{sympy}} \cite{sympy}.
\begin{gather*}
\vec{\mathcal{V}}^1 = {\rm span}\left(\left\{ \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \begin{pmatrix} \sqrt{3}y \\ 0 \end{pmatrix}, \begin{pmatrix} 0\\ \sqrt{3}x \end{pmatrix}, \begin{pmatrix} \sqrt{\frac{3}{2}}x \\ -\sqrt{\frac{3}{2}}y \end{pmatrix}\right\}\right)
\end{gather*}
\begin{equation*}
\begin{split}
\vec{\mathcal{V}}^2 = \vec{\mathcal{V}}^1 \cup {\rm span}\left(\Bigg\{ \sqrt{30}\begin{pmatrix} \frac{3x^2-1}{12} \\ -\frac{xy}{2} \end{pmatrix}, \sqrt{30}\begin{pmatrix} -\frac{xy}{2} \\ \frac{3y^2-1}{12} \end{pmatrix}, \sqrt{5} \begin{pmatrix} \frac{3y^2 - 1}{2} \\ 0\end{pmatrix}, \right. \left. \sqrt{5} \begin{pmatrix} 0 \\ \frac{3x^2-1}{2}\end{pmatrix} \Bigg\}\right)
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
\vec{\mathcal{V}}^3 = \vec{\mathcal{V}}^2 \cup {\rm span}\left(\Bigg\{ \frac{\sqrt{42}\sqrt{83}}{166}\begin{pmatrix} 5x^3-4x \\ -15yx^2+4y \end{pmatrix}, \frac{\sqrt{30}}{4}\begin{pmatrix}3x^2y-y \\ -3y^2x + x \end{pmatrix}, \frac{\sqrt{7}}{2}\begin{pmatrix} 5y^3-3y \\ 0 \end{pmatrix}, \frac{\sqrt{7}}{2}\begin{pmatrix}0\\ 5x^3-3x \end{pmatrix}, \right.\\
\left. \frac{\sqrt{165585}}{1824}\begin{pmatrix} -\frac{56x^3}{83}-2x(12y^2-1)+\frac{410x}{83} \\ 8y^3 + \frac{14y(12x^2-1)}{83} - \frac{562y}{83} \end{pmatrix}\Bigg\}\right).
\end{split}
\end{equation*}
A detailed discussion on the approximation properties of this polynomial vector space can be found in \cite{Cockburn2004}.
\subsection{Divergence cleaning}
\label{ap:DivClean}
We evolve the system defined by Eq.~\eqref{eq:glm-induction-eq} as described in \cite{klingenberg2017} and using the following steps:
\begin{enumerate}
\item We apply SSP-RK to the DG discretisation of the induction equation in its divergence form as in Eq.~\eqref{eq:mhd-induction-eq-div}.
\item{
We then apply, in an operator-split fashion, SSP-RK to the DG discretisation of the system
\begin{equation*}
\begin{split}
\partial_t \vec{B} + \nabla \psi &= 0,\\
\partial_t \psi + c_h^2\nabla\cdot\vec{B} &= 0.
\end{split}
\end{equation*}
}
\item{
We finally apply operator splitting to the parabolic source term, which admits the exact update
\[ \psi^{n+1} := \exp\left(-\frac{c_h^2}{c_p^2}\Delta t\right) \psi^{n+1/2}. \]
}
\end{enumerate}
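For intuition, note that eliminating $\vec{B}$ from the system in step 2 shows that $\psi$ satisfies a linear wave equation,
\begin{equation*}
\partial_t^2 \psi = -c_h^2\,\nabla\cdot\partial_t\vec{B} = c_h^2\,\nabla^2 \psi,
\end{equation*}
so local divergence errors are transported away at the cleaning speed $c_h$, while the source term in step 3 damps $\psi$ exponentially on the timescale $c_p^2/c_h^2$ \citep[see][]{dedner2002}.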
The White House director of strategic communications says President Trump proposed a plan to Democrats to fund a southern border wall and resolve the partial government shutdown, but they "refused to negotiate." A Wednesday meeting hasn't yet led to a solution. Mercedes Schlapp speaks to Judy Woodruff about the funding number the president will accept and why a "physical barrier" is necessary now.
// Copyright 2012 The Closure Library Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//      http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS-IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

/**
 * @fileoverview An interface for a listenable JavaScript object.
 *
 * WARNING(chrishenry): DO NOT USE! SUPPORT NOT FULLY IMPLEMENTED.
 */

goog.provide('goog.events.Listenable');
goog.provide('goog.events.ListenableKey');

goog.require('goog.events.EventLike');


/**
 * A listenable interface. Also see goog.events.EventTarget.
 * @interface
 */
goog.events.Listenable = function() {};


/**
 * Whether to use the new listenable interface and mechanism in
 * goog.events and goog.events.EventTarget.
 *
 * TODO(user): Remove this once launched and stable.
 *
 * @type {boolean}
 */
goog.events.Listenable.USE_LISTENABLE_INTERFACE = false;


/**
 * Adds an event listener. A listener can only be added once to an
 * object and if it is added again the key for the listener is
 * returned. Note that if the existing listener is a one-off listener
 * (registered via listenOnce), it will no longer be a one-off
 * listener after a call to listen().
 *
 * @param {string} type Event type or array of event types.
 * @param {!Function} listener Callback method, or an object
 *     with a handleEvent function.
 * @param {boolean=} opt_useCapture Whether to fire in capture phase
 *     (defaults to false).
 * @param {Object=} opt_listenerScope Object in whose scope to call the
 *     listener.
 * @return {goog.events.ListenableKey} Unique key for the listener.
 */
goog.events.Listenable.prototype.listen;


/**
 * Adds an event listener that is removed automatically after the
 * listener fired once.
 *
 * If an existing listener already exists, listenOnce will do
 * nothing. In particular, if the listener was previously registered
 * via listen(), listenOnce() will not turn the listener into a
 * one-off listener. Similarly, if there is already an existing
 * one-off listener, listenOnce does not modify the listeners (it is
 * still a once listener).
 *
 * @param {string} type Event type or array of event types.
 * @param {!Function} listener Callback method, or an object
 *     with a handleEvent function.
 * @param {boolean=} opt_useCapture Whether to fire in capture phase
 *     (defaults to false).
 * @param {Object=} opt_listenerScope Object in whose scope to call the
 *     listener.
 * @return {goog.events.ListenableKey} Unique key for the listener.
 */
goog.events.Listenable.prototype.listenOnce;


/**
 * Removes an event listener which was added with listen() or listenOnce().
 *
 * @param {string} type Event type or array of event types.
 * @param {!Function} listener Callback method, or an object
 *     with a handleEvent function. TODO(user): Consider whether
 *     we can remove Object.
 * @param {boolean=} opt_useCapture Whether to fire in capture phase
 *     (defaults to false).
 * @param {Object=} opt_listenerScope Object in whose scope to call
 *     the listener.
 * @return {boolean} Whether any listener was removed.
 */
goog.events.Listenable.prototype.unlisten;


/**
 * Removes an event listener which was added with listen() by the key
 * returned by listen().
 *
 * @param {goog.events.ListenableKey} key The key returned by
 *     listen() or listenOnce().
 * @return {boolean} Whether any listener was removed.
 */
goog.events.Listenable.prototype.unlistenByKey;


/**
 * Dispatches an event (or event like object) and calls all listeners
 * listening for events of this type. The type of the event is decided by the
 * type property on the event object.
 *
 * If any of the listeners returns false OR calls preventDefault then this
 * function will return false. If one of the capture listeners calls
 * stopPropagation, then the bubble listeners won't fire.
 *
 * @param {goog.events.EventLike} e Event object.
 * @return {boolean} If anyone called preventDefault on the event object (or
 *     if any of the listeners returns false) this will also return false.
 */
goog.events.Listenable.prototype.dispatchEvent;


/**
 * Removes all listeners from this listenable. If type is specified,
 * it will only remove listeners of the particular type. Otherwise all
 * registered listeners will be removed.
 *
 * @param {string=} opt_type Type of event to remove, default is to
 *     remove all types.
 * @return {number} Number of listeners removed.
 */
goog.events.Listenable.prototype.removeAllListeners;


/**
 * Fires all registered listeners in this listenable for the given
 * type and capture mode, passing them the given eventObject. This
 * does not perform actual capture/bubble. Only implementors of the
 * interface should be using this.
 *
 * @param {string} type The type of the listeners to fire.
 * @param {boolean} capture The capture mode of the listeners to fire.
 * @param {goog.events.Event} eventObject The event object to fire.
 * @return {boolean} Whether all listeners succeeded without
 *     attempting to prevent default behavior. If any listener returns
 *     false or called goog.events.Event#preventDefault, this returns
 *     false.
 */
goog.events.Listenable.prototype.fireListeners;


/**
 * Gets all listeners in this listenable for the given type and
 * capture mode.
 *
 * @param {string} type The type of the listeners to fire.
 * @param {boolean} capture The capture mode of the listeners to fire.
 * @return {!Array.<goog.events.ListenableKey>} An array of registered
 *     listeners.
 */
goog.events.Listenable.prototype.getListeners;


/**
 * An interface that describes a single registered listener.
 * @interface
 */
goog.events.ListenableKey = function() {};


/**
 * The source event target.
 * @type {!Object}
 */
goog.events.ListenableKey.prototype.src;


/**
 * The event type the listener is listening to.
 * @type {string}
 */
goog.events.ListenableKey.prototype.type;


/**
 * The listener function.
 * TODO(user): Narrow the type if possible.
 * @type {Function|Object}
 */
goog.events.ListenableKey.prototype.listener;


/**
 * Whether the listener works on capture phase.
 * @type {boolean}
 */
goog.events.ListenableKey.prototype.capture;


/**
 * The 'this' object for the listener function's scope.
 * @type {Object}
 */
goog.events.ListenableKey.prototype.handler;
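As a quick illustration of the contract documented above, here is a minimal plain-JavaScript sketch. This is NOT Closure's implementation (that lives in goog.events.EventTarget), and it omits capture/bubble phases, listener scopes, and preventDefault; it only illustrates the documented semantics of listen(), unlistenByKey() and dispatchEvent().

```javascript
// Minimal sketch of the Listenable contract: listen() returns a key,
// registering the same listener twice returns the same key, and
// dispatchEvent() returns false when any listener returns false.
function SimpleListenable() {
  // Each entry is a key object: {src, type, listener, capture}.
  this.listeners_ = [];
}

SimpleListenable.prototype.listen = function(type, listener, opt_useCapture) {
  var capture = !!opt_useCapture;
  // "A listener can only be added once to an object and if it is added
  // again the key for the listener is returned."
  for (var i = 0; i < this.listeners_.length; i++) {
    var k = this.listeners_[i];
    if (k.type === type && k.listener === listener && k.capture === capture) {
      return k;
    }
  }
  var key = {src: this, type: type, listener: listener, capture: capture};
  this.listeners_.push(key);
  return key;
};

SimpleListenable.prototype.unlistenByKey = function(key) {
  // Returns whether any listener was removed.
  var idx = this.listeners_.indexOf(key);
  if (idx < 0) return false;
  this.listeners_.splice(idx, 1);
  return true;
};

SimpleListenable.prototype.dispatchEvent = function(e) {
  var ok = true;
  // Fire listeners registered for this event's type (no capture/bubble).
  this.listeners_.slice().forEach(function(k) {
    if (k.type === e.type && k.listener.call(null, e) === false) {
      ok = false;
    }
  });
  return ok;
};

// Usage sketch:
var target = new SimpleListenable();
target.listen('load', function(e) { return true; });
console.log(target.dispatchEvent({type: 'load'})); // true
```

A real implementation would additionally store the listener scope (the handler field of ListenableKey) and honour capture mode when firing.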
"Casper, Wyoming" by Phillip Stewart , CC BY-SA 2.0 Off Road Vehicles - Wyoming This trail system holds forty-six miles of marked, groomed trails with international signage, including extensive ungroomed play areas. Complete facilities and services available in Casper. Snow depths can range from 1 foot to 3 feet. Elevations: 7,000 feet to 7,800 feet. Season: December 15 through March 15 - WEATHER PERMITTING Season temperatures: +40° F to -20° F Wyoming Pocket Maps Casper Mountain - Snowmobile Trails 2023 Map of Casper Mountain Snowmobile Trails near Casper, Wyoming. Published by Wyoming State Parks, Historic Sites, & Trails (WYSP). Wyoming Public Land - Casper Map of Seasonal and Year-Round BLM Public Land User Limitations in the BLM Casper Field Office area in Wyoming. Published by the Bureau of Land Management (BLM). Wyoming State - Wyoming State Map Casper ORV https://wyoparks.wyo.gov/index.php/orv-trails/orv-maps https://en.wikipedia.org/wiki/Casper,_Wyoming This trail system holds forty-six miles of marked, groomed trails with international signage, including extensive ungroomed play areas. Complete facilities and services available in Casper. Snow depths can range from 1 foot to 3 feet. Elevations: 7,000 feet to 7,800 feet. Season: December 15 through March 15 - WEATHER PERMITTING Season temperatures: +40° F to -20° F Wyoming State Parks
Q: Java Socket Programming - 301 Error with HTTP 1.1

I just started learning socket programming with Java and I have already encountered some unusual behavior. Here's the code snippet:

writer.println("GET " + path + " " + protocol);
//writer.println();
writer.println("Host: " + hostname);
writer.println();
writer.flush();

This gives me a "301 Moved Permanently" code with both HTTP 1.1 and 1.0. If I uncomment the empty line between the request line and the host name:

writer.println("GET " + path + " " + protocol);
writer.println();
writer.println("Host: " + hostname);
writer.println();
writer.flush();

it gives me "HTTP/1.1 400 Bad Request" for HTTP 1.1 and "HTTP/1.1 200 OK" for HTTP 1.0. Why does it have such behavior? Does this happen because we have the request in HTTP 1.0 and the response is in HTTP 1.1? Thanks.

A:

"This will give me "301 Moved Permanently" code with both HTTP 1.1 and 1.0."

HTTP status code 301 is a redirect to a new URL:

The requested resource has been assigned a new permanent URI and any future references to this resource SHOULD use one of the returned URIs. Clients with link editing capabilities ought to automatically re-link references to the Request-URI to one or more of the new references returned by the server, where possible. This response is cacheable unless indicated otherwise.

The new permanent URI SHOULD be given by the Location field in the response. Unless the request method was HEAD, the entity of the response SHOULD contain a short hypertext note with a hyperlink to the new URI(s).

If the 301 status code is received in response to a request other than GET or HEAD, the user agent MUST NOT automatically redirect the request unless it can be confirmed by the user, since this might change the conditions under which the request was issued.

Note: When automatically redirecting a POST request after receiving a 301 status code, some existing HTTP/1.0 user agents will erroneously change it into a GET request.
The server is telling you that the URL you sent your GET request to is no longer valid. You need to extract the value of the Location header from the server's response and then repeat the same request to the URL it specifies.

"It would give me "HTTP/1.1 400 Bad Request" for HTTP 1.1 and "HTTP/1.1 200 OK" for HTTP 1.0. Why does it have such behavior? Does this happen because we have the request in HTTP 1.0 and the response is in HTTP 1.1?"

The Host header is optional in HTTP 1.0 but is required in HTTP 1.1:

A client MUST include a Host header field in all HTTP/1.1 request messages. If the requested URI does not include an Internet host name for the service being requested, then the Host header field MUST be given with an empty value. An HTTP/1.1 proxy MUST ensure that any request message it forwards does contain an appropriate Host header field that identifies the service being requested by the proxy. All Internet-based HTTP/1.1 servers MUST respond with a 400 (Bad Request) status code to any HTTP/1.1 request message which lacks a Host header field.

So, when you do not insert the extra blank line, you end up sending one of these well-formed requests:

GET /path HTTP/1.0
Host: hostname

GET /path HTTP/1.1
Host: hostname

Both are valid. But when you insert the extra blank line, you are actually sending two separate "requests" at one time:

GET /path HTTP/1.x

Host: hostname

The request headers and request body are separated by a blank line, and a GET request does not have a request body, so the first blank line ends the request. So, in this case, the first request is valid only for HTTP 1.0 and is invalid for HTTP 1.1, because the Host header is missing. The second "request" (the stray Host line) is just plain invalid in either version.
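To make the framing issue concrete, here is a small self-contained Java sketch (my own illustration, not code from the question; the class and method names are invented) that builds both variants as strings and checks where the first blank line lands. Note also that HTTP requires CRLF ("\r\n") line terminators, which println does not guarantee on every platform.

```java
public class HttpRequestFraming {

    // A well-formed GET request: the single blank line comes AFTER the
    // Host header and terminates the header section.
    static String wellFormed(String path, String host, String version) {
        return "GET " + path + " " + version + "\r\n"
             + "Host: " + host + "\r\n"
             + "\r\n";
    }

    // The broken variant from the question: the premature blank line ends
    // the header section right after the request line, so the Host header
    // is no longer part of the request. An HTTP/1.1 server answers
    // 400 Bad Request, and the stray "Host:" line becomes garbage input.
    static String broken(String path, String host, String version) {
        return "GET " + path + " " + version + "\r\n"
             + "\r\n"
             + "Host: " + host + "\r\n"
             + "\r\n";
    }

    public static void main(String[] args) {
        String good = wellFormed("/index.html", "example.com", "HTTP/1.1");
        String bad = broken("/index.html", "example.com", "HTTP/1.1");
        // The Host header only counts if it appears before the first
        // blank line (the CRLFCRLF sequence).
        System.out.println(good.indexOf("Host:") < good.indexOf("\r\n\r\n")); // true
        System.out.println(bad.indexOf("Host:") < bad.indexOf("\r\n\r\n"));   // false
    }
}
```

Writing the resulting string to the socket's OutputStream byte-for-byte avoids any surprises from println's platform-dependent line separator.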
We are pest controllers locally based to serve Havering and surrounding areas in Essex, and we strive to provide you with an excellent level of service that you would happily recommend. Much of our work comes from referrals and word-of-mouth recommendations from our customers, including home owners, tenants, local councils, estate agents, letting agents and landlords. We deal with domestic and commercial pest control, we are registered with the NPTA (National Pest Technicians Association), all of our technicians are fully qualified, and we are Trading Standards approved.
Q: Failed to start Hadoop services: localhost: nice: cannot set niceness: Permission denied

I'm trying to install Hadoop on Ubuntu running as a Windows subsystem (WSL). The installation finished, but I'm getting an error while starting the Hadoop services.

$ start-dfs.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-DESKTOP-HD9EL6C.out
localhost: nice: cannot set niceness: Permission denied
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-DESKTOP-HD9EL6C.out
localhost: nice: cannot set niceness: Permission denied
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-DESKTOP-HD9EL6C.out
0.0.0.0: nice: cannot set niceness: Permission denied

I tried many solutions from different platforms, but nothing seems to work. Any solution?
using System;
using System.IO;
using System.Text;

using NUnit.Framework;

using MimeKit;
using MimeKit.Utils;

namespace UnitTests {
	[TestFixture]
	public class MimeParserTests
	{
		static FormatOptions UnixFormatOptions;

		[SetUp]
		public void Setup ()
		{
			UnixFormatOptions = FormatOptions.Default.Clone ();
			UnixFormatOptions.NewLineFormat = NewLineFormat.Unix;
		}

		[Test]
		public void TestSimpleMbox ()
		{
			using (var stream = File.OpenRead ("../../TestData/mbox/simple.mbox.txt")) {
				var parser = new MimeParser (stream, MimeFormat.Mbox);

				while (!parser.IsEndOfStream) {
					var message = parser.ParseMessage ();
					Multipart multipart;
					MimeEntity entity;

					Assert.IsInstanceOfType (typeof (Multipart), message.Body);
					multipart = (Multipart) message.Body;
					Assert.AreEqual (1, multipart.Count);
					entity = multipart[0];

					Assert.IsInstanceOfType (typeof (Multipart), entity);
					multipart = (Multipart) entity;
					Assert.AreEqual (1, multipart.Count);
					entity = multipart[0];

					Assert.IsInstanceOfType (typeof (Multipart), entity);
					multipart = (Multipart) entity;
					Assert.AreEqual (1, multipart.Count);
					entity = multipart[0];

					Assert.IsInstanceOfType (typeof (TextPart), entity);

					using (var memory = new MemoryStream ()) {
						entity.WriteTo (UnixFormatOptions, memory);

						var text = Encoding.ASCII.GetString (memory.ToArray ());
						Assert.IsTrue (text.StartsWith ("Content-Type: text/plain\n\n", StringComparison.Ordinal), "Headers are not properly terminated.");
					}
				}
			}
		}

		static void DumpMimeTree (StringBuilder builder, MimeEntity entity, int depth)
		{
			if (depth > 0)
				builder.Append (new string (' ', depth * 3));

			builder.AppendFormat ("Content-Type: {0}/{1}\n", entity.ContentType.MediaType, entity.ContentType.MediaSubtype);

			if (entity is Multipart) {
				var multipart = (Multipart) entity;

				foreach (var part in multipart)
					DumpMimeTree (builder, part, depth + 1);
			} else if (entity is MessagePart) {
				DumpMimeTree (builder, ((MessagePart) entity).Message.Body, depth + 1);
			}
		}

		[Test]
		public void TestEmptyMultipartAlternative ()
		{
			string expected = @"Content-Type: multipart/mixed
   Content-Type: multipart/alternative
   Content-Type: text/plain
";

			using (var stream = File.OpenRead ("../../TestData/messages/empty-multipart.txt")) {
				var parser = new MimeParser (stream, MimeFormat.Entity);
				var message = parser.ParseMessage ();
				var builder = new StringBuilder ();

				DumpMimeTree (builder, message.Body, 0);

				Assert.AreEqual (expected, builder.ToString (), "Unexpected MIME tree structure.");
			}
		}

		[Test]
		public void TestJwzMbox ()
		{
			var summary = File.ReadAllText ("../../TestData/mbox/jwz-summary.txt");
			var builder = new StringBuilder ();

			using (var stream = File.OpenRead ("../../TestData/mbox/jwz.mbox.txt")) {
				var parser = new MimeParser (stream, MimeFormat.Mbox);

				while (!parser.IsEndOfStream) {
					var message = parser.ParseMessage ();

					builder.AppendFormat ("{0}\n", parser.MboxMarker);

					if (message.From.Count > 0)
						builder.AppendFormat ("From: {0}\n", message.From);

					if (message.To.Count > 0)
						builder.AppendFormat ("To: {0}\n", message.To);

					builder.AppendFormat ("Subject: {0}\n", message.Subject);
					builder.AppendFormat ("Date: {0}\n", DateUtils.FormatDate (message.Date));

					DumpMimeTree (builder, message.Body, 0);
					builder.Append ("\n");
				}
			}

			string actual = builder.ToString ();

			// WORKAROUND: Mono's iso-2022-jp decoder breaks on this input in versions <= 3.2.3 but is fixed in 3.2.4+
			string iso2022jp = Encoding.GetEncoding ("iso-2022-jp").GetString (Convert.FromBase64String ("GyRAOjRGI0stGyhK"));
			if (iso2022jp != "佐藤豊")
				actual = actual.Replace (iso2022jp, "佐藤豊");

			Assert.AreEqual (summary, actual, "Summaries do not match for jwz.mbox");
		}

		[Test]
		public void TestJwzPersistentMbox ()
		{
			var summary = File.ReadAllText ("../../TestData/mbox/jwz-summary.txt");
			var builder = new StringBuilder ();

			using (var stream = File.OpenRead ("../../TestData/mbox/jwz.mbox.txt")) {
				var parser = new MimeParser (stream, MimeFormat.Mbox, true);

				while (!parser.IsEndOfStream) {
					var message = parser.ParseMessage ();

					builder.AppendFormat ("{0}\n", parser.MboxMarker);

					if (message.From.Count > 0)
						builder.AppendFormat ("From: {0}\n", message.From);

					if (message.To.Count > 0)
						builder.AppendFormat ("To: {0}\n", message.To);

					builder.AppendFormat ("Subject: {0}\n", message.Subject);
					builder.AppendFormat ("Date: {0}\n", DateUtils.FormatDate (message.Date));

					DumpMimeTree (builder, message.Body, 0);
					builder.Append ("\n");

					// Force the various MimePart objects to write their content streams.
					// The idea is that by forcing the MimeParts to seek in their content,
					// we will test to make sure that parser correctly deals with it.
					message.WriteTo (Stream.Null);
				}
			}

			string actual = builder.ToString ();

			// WORKAROUND: Mono's iso-2022-jp decoder breaks on this input in versions <= 3.2.3 but is fixed in 3.2.4+
			string iso2022jp = Encoding.GetEncoding ("iso-2022-jp").GetString (Convert.FromBase64String ("GyRAOjRGI0stGyhK"));
			if (iso2022jp != "佐藤豊")
				actual = actual.Replace (iso2022jp, "佐藤豊");

			Assert.AreEqual (summary, actual, "Summaries do not match for jwz.mbox");
		}
	}
}
{"url":"https:\/\/stats.stackexchange.com\/questions\/409922\/gam-discrete-time-survival-prediction-in-r","text":"# GAM discrete time survival prediction in R [closed]\n\nSo I've been trying to perform a discrete time survival analysis in R.\n\nI have been using the discSurv package to generate the augmented data matrix for the full dataset and performed a stratified train-test split.\n\nThis has just been an initial approach, but in order to test more complex procedures, I want to fully understand what is happening at each step.\n\nAfter I have generated the augmented dataset (using the dataLong function), I proceed to train a GAM model.\n\nHowever, here is where I run into trouble. First of all, I don't know how to select which variables to model via a spline function, nor the parameters that should accompany them.\n\nMoreover, which are the appropriate metrics to analyze the model with? How could I predict on the test dataset, and how could I visualize these results within R?\n\nI have worked previously with other machine learning algorithms, but I'm very new to GAM and I can't seem to find any examples related to survival prediction analysis.\n\n## closed as unclear what you're asking by Michael Chernick, mdewey, mkt, Siong Thye Goh, kjetil b halvorsen May 27 at 12:38\n\nPlease clarify your specific problem or add additional details to highlight exactly what you need. As it's currently written, it\u2019s hard to tell exactly what you're asking. See the How to Ask page for help clarifying this question. If this question can be reworded to fit the rules in the help center, please edit the question.\n\nThere are a lot of questions rolled into one, each one potentially very difficult. Here are some suggestions:\n\n1.) 
How to select which variables to fit with a spline:\n\n\u2022 Each continuous variable should be considered to potentially have non-linear effects and modeled by splines, unless you have prior knowledge that informs the functional shape of the effect. Because GAMs fit penalized splines, covariates with linear effects will be estimated as such. The select argument in ?mgcv::gam in combination with the gamma parameter can help to keep the models sparse (See also my answer here; you can think of the gamma as a potential tuning parameter in machine learning lingo).\n\n\u2022 GAMs (for survival) can become arbitrarily complex, e.g., you could have different baseline hazards in different groups (of a categorical variable), bivariate smooth effects, time-varying effects, etc... (see this and this for some examples).\n\n\u2022 When the model gets very complex, i.e., a lot of (non-linear) effects, you could consider boosting the GAM, e.g. using the mboost package, which performs variable and smoothness selection simultaneously.\n\n2.) Prediction on the test data set: Prediction in theory is relatively straightforward. Your test data has to be formatted the same way as the training data (i.e. long format). Then you can call predict(model, newdata, type = \"response\") to get predictions (of the hazard). In practice, you have to decide at which time-points you want to make a prediction. To predict survival probability at $$t = 10$$, you also have to have all data-splits for $$t < 10$$, etc.\n\n3.) For evaluation of your predictions the most popular measures are the C-Index (similar to AUC in binary classification) or the Prediction Error Curve (Brier Score evaluated at multiple time-points). Unlike most machine learning evaluations, the evaluation of the model is performed at different time points of the follow-up (different models can potentially be better\/worse at the beginning\/the end of the follow-up). You can also calculate an overall measure though, e.g. Integrated PEC. 
There is a package pec that does all of this as described in this tutorial. There are also examples for visualization and interpretation. You can adapt it directly to your GAM\/boosted GAM, by writing a custom predictSurvProb function if it doesn't exist already (see examples in the tutorial). The discSurv package also provides functionality for this, see ?brierScore and ?concorIndex and examples therein.\n\n\u2022 Thank you very much, this is exactly what i needed! I greatly appreciate your time and effort to answer my questions. Wish you the best! \u2013\u00a0baseking May 27 at 7:58","date":"2019-06-17 01:31:06","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 2, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5931405425071716, \"perplexity\": 1023.2883797777006}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-26\/segments\/1560627998339.2\/warc\/CC-MAIN-20190617002911-20190617024911-00448.warc.gz\"}"}
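The data-expansion step this Q&A revolves around (discSurv's dataLong, i.e. person-period format) can be sketched without any R at all. The following plain-Python toy uses made-up data and a simple life-table hazard estimate instead of a GAM, just to show what the long format and the discrete-time hazard look like:

```python
def to_person_period(subjects):
    """Expand (id, time, status) records into person-period rows.
    event is 1 only in the last period of a subject that failed (status == 1)."""
    rows = []
    for sid, time, status in subjects:
        for t in range(1, time + 1):
            rows.append({"id": sid, "period": t, "event": int(t == time and status == 1)})
    return rows

def hazard(rows, t):
    """Discrete-time hazard at period t: events / subjects at risk."""
    at_risk = [r for r in rows if r["period"] == t]
    return sum(r["event"] for r in at_risk) / len(at_risk)

# Toy data: subject 1 fails at t=2, subject 2 is censored at t=3.
subjects = [(1, 2, 1), (2, 3, 0)]
rows = to_person_period(subjects)
print(len(rows), hazard(rows, 2))  # 5 rows; hazard at t=2 is 0.5
```

Fitting a logistic regression (or GAM) of event on period and the covariates over these rows is exactly the discrete-time hazard model discussed in the answer.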
On 22 May 2001, LDC announced its merger with the leaders of the Huttepain Group, based at La Chapelle d'Aubin (Sarthe). This alliance enabled the LDC Group to increase its production capacity to meet the growing demands of consumers, but also to acquire expertise in live poultry production, egg production and feed production. The Group wanted to bring producers and consumers together through its brand Le Gaulois and to guarantee product traceability. The Upstream division and its breeding activity secure a local supply for the Group's production facilities. Our breeders are located near our production areas. To get a sense of what this division represents, we support more than 1,400 farmers all around the country. All of their poultry buildings together cover nearly 2 million square meters, the equivalent of 280 football fields. As for our feed factories, they provide more than 600,000 tons of feed every year for the whole poultry flock. Huttepain Aliments, Le Mans (Sarthe), organization of poultry production specialized in chickens, turkeys and guinea fowl, linked to its feed factory. Alimab, Sablé sur Sarthe (Sarthe), organization of poultry production specialized in chickens, turkeys and ducks, linked to its feed factory. Bellavol, Nueil les Aubiers (Deux Sèvres), organization of poultry production specialized in chickens, turkeys, guinea fowl and ducks, linked to its feed factory. Société Bressane de production, Louhans (Saône et Loire), organization of poultry production specialized in chickens, turkeys, guinea fowl/hens and ducks, linked to its feed factory named Huttepain Soreal Aliments. Avipro, Bessé sur Braye (Sarthe), organization of poultry production specialized in chickens and turkeys, linked to its feed factory named Richard Aliments. Ardévol, Felines (Ardèche), organization of poultry production specialized in free-range chickens "Label Rouge d'Ardèche". SYVOR, La Chapelle d'Andaine (Orne), organization of poultry production specialized in free-range chickens "Label Rouge de Normandie".
Volailles de Bretagne, Ploufragan (Côtes d'Armor), organization of poultry production specialized in free-range chickens "Label Rouge de Bretagne". Our hatchery Betina, located at Elven (Morbihan), produces about 142,000 turkey poults every year. As the last link in our division, this activity strengthens our chain: we now control every step of it, from the birth of the poultry to the consumer. Our upstream division also breeds pigs, cattle and rabbits in partnership with our feed plants Richard Aliments, Huttepain Bouix and Alimab. Jeusselin was created in 1968 by Mr Jules Jeusselin and developed in the north of Sarthe and the Perche. Once collected and stored, cereals are transported to our feed factories, where they are used as raw material. Our egg activity combines the SOVOPA layer farm of 200,000 hens and one of the most efficient packing centers in France. Our packing center sorts, packages and sells the entire production of "Label Rouge de Loué" farm eggs and of our laying farm SOVOPA. Our layer farm and partnerships with farmers allow us to sell more than 500 million eggs every year. Our Upstream division strengthens the values of our French chain Le Gaulois by increasing the quality guarantees provided to our clients. We believe in the sustainability of our breeding farms and of our French production. Since 2002, LDC has applied a French chain logo to each Le Gaulois product from our production sites. LDC provides a traceability portal on the website www.legaulois.fr which gives complete information on the farmer and farm that produced the poultry.
Alexey Vasilyevich Rybalka (1921–1943) was a participant in the Great Patriotic War, commander of a battalion of the 1033rd Rifle Regiment of the 280th Rifle Division of the 60th Army of the Central Front, and a captain. Hero of the Soviet Union. Biography He was born on 30 March 1921 in the village of Verkhne-Zundovo, Don Oblast, RSFSR (now the khutor Verkhnezundov, Orlovsky District, Rostov Oblast), into a peasant family. Russian. He finished secondary school. In 1939 he was called up to the Red Army. He fought in the Great Patriotic War from June 1941. In 1942 he completed the "Vystrel" courses. A member of the VKP(b) from 1942. He fought on the Southern, Bryansk and Central Fronts. On 22 June 1941, commanding a rifle company, A. V. Rybalka courageously repelled the attacks of the Germans, then led the company out from under fire and brought it from the border with Romania to a safe zone. For three months the soldiers of the company held back the pressure of the advancing enemy. Rybalka received a severe wound, ended up in hospital, and only many months later returned to the ranks, now on the Bryansk Front as a rifle company commander. In 1943 Alexey Rybalka commanded the 3rd Rifle Battalion at the Kursk Bulge, and then fought his way all the way to Ukraine. The battalion of Captain A. V. Rybalka reached the Dnieper River at dawn on 25 September 1943 and began preparing to cross it southwest of the village of Okuninovo in the Kozelets Raion of Chernihiv Oblast. In the pre-dawn fog of the following day, a group of 25 volunteers from the battalion, led by Captain Rybalka, crossed the Dnieper on improvised floating craft. During the crossing the craft was shelled by German artillery, leaving only five men alive. After a brief engagement the group seized a small strip of land on the right bank, but only Captain A. V. Rybalka and Junior Sergeant A. D. Yudin remained alive, and for a whole hour the two of them held off superior enemy forces on their own. When a second group reached the entrenched men, the battalion commander led an assault on the first line of enemy trenches.
Advancing further to the southwest, Captain Rybalka's battalion received a combat mission: to cross the Teterev River and attack a German strongpoint, the village of Pilyava. At night Rybalka forded the river together with his battalion and took the village by storm. On 13 October 1943 the enemy tried to retake the fortified position but was beaten back. While repelling the German counterattack, Captain A. V. Rybalka was mortally wounded. He was buried in a mass grave in the town of Oster, Kozelets Raion, Chernihiv Oblast (Ukraine). According to other sources, he was buried in the village of Rotychi, Chernobyl Raion. Awards By decree of the Presidium of the Supreme Soviet of the USSR of 17 October 1943, for the courage and heroism shown in crossing the Dnieper and holding a bridgehead on its right bank, Captain Alexey Vasilyevich Rybalka was awarded the title of Hero of the Soviet Union (posthumously). He was awarded the Order of Lenin (1943), the Order of the Red Banner (1941) and the Order of the Red Star. Memory A street in the town of Oster bears the Hero's name. External links Heroes of the Soviet Union born on the Don. Centralized Library System of the city of Rostov-on-Don. Born in Orlovsky District (Rostov Oblast) Members of the CPSU Battalion commanders in the Great Patriotic War Buried in Kozelets Raion
Now on Sale in EDA Store! Including essays by EDA's Jonathan Simon, Bruce O'Dell, Dave Griscom, Nancy Tobi and Paul Lehto, as well as Robert Kennedy, Jr., Bob Fitrakis, Brad Friedman, and others. With a preface by editor Mark Crispin Miller. Click here to watch the video of the publication event! Honest Elections: Priceless....but defending them isn't Please Invest in Election Integrity. Contribute to EDA. EDA Coordinators are highly-motivated volunteers defending free elections for all. But we do need some full-time staff to keep EDA on call at all times -- and they need a living wage to do it. Your donations are needed for: EDA staff support Website and phone costs Media messaging Please Click Here to Contribute Make a tax-deductible premium donation of $50 or more and receive a copy of Was The 2004 Presidential Election Stolen? by Steven Freeman and Joel Bleifuss. Events: January 08, 2008 - February 07, 2008 Tell Pima Supervisors: Full Disclosure of Voting Data Start: 10:00 am End: 3:00 pm Bookmark for daily updates on Pima Election Integrity Trial: http://arizona.typepad.com/blog/pima.html The County Attorney has filed an apparently unauthorized notice of appeal, presumably purely on the word of County Administrator Chuck Huckelberry, or on the initiative of the County Attorney's office itself. Only the Board of Supervisors has the authority to decide whether to appeal the court's ruling, and they haven't yet met to make any decision on the matter.
Tuesday morning we will present affidavits and letters from renowned computer scientists and election integrity activists stating that releasing these databases cannot harm national security as Chuck Huckelberry and his lawyers claim. Pima County Administration Building 130 W. Congress Street First Floor Meeting Room Election Trial item early on agenda Please arrive by 9:00 a.m. There are some 800 databases going back to May of 1998. Making them public could prove our votes were NOT counted inaccurately. If there is no problem, what is Pima County afraid of? Could these databases and audit logs prove the County committed election fraud? The Democrats have also taken issue with several of Judge Miller's findings of fact as not supported by the testimony, and some of his conclusions of law as erroneous. See Democrats Motion to Amend for more details. Bill Risner makes a powerful case, which Judge Miller will be hard-pressed to ignore, that the Judge quite simply got several points wrong in his under-advisement opinion. Discussion of the lawsuit will be one of the very first items on the agenda, so try to be at the Board's Chambers by 9 am. [R]Election Defense Radio Streams Tuesdays on the Internet Tuesdays 8 to 9 pm Eastern * 7pm to 8pm Central * 5 to 6 pm Pacific Streaming live over the Internet at http://www.toginet.net/ To Tune In: Go to the Toginet page, and click a stream launch button in the upper right to connect via Windows Media or Real Player. EDA Radio shows are replayed Sundays, 1 - 2 pm Eastern at the same location on your Internet dial Download Podcast Archives of earlier Black Box America Live shows (hosted by Bev Harris) and the most recent shows produced by Election Defense Alliance hosted online at: EDA Radio Archives Voice of the Voters: NY HAVA Plan, Bo Lipari hosts New York State, US District Court, and the DOJ "Voice of the Voters!"
Radio/Internet Wednesday, January 9, 8 PM Eastern, 5 PM Pacific Bo Lipari, Barbara Bartoletti and Doug Kellner Bo Lipari, Executive Director of New Yorkers for Verified Voting, leads a key discussion/analysis on the Department of Justice lawsuit, the plan submitted last week by the State Board of Elections, and what it means for the future of voting in New York State. New York's leading election integrity advocate Bo Lipari will guest host tonight's program focusing on the Department of Justice lawsuit and the recent Court hearing. He'll report on the December 20 hearing, Friend of the Court briefs submitted by citizen groups and election commissioners, Judge Sharpe's caustic comments, and the compliance plan submitted by the State Board of Elections. Bo's guests include: Barbara Bartoletti, Legislative Director of the League of Women Voters of New York State Barbara Bartoletti is well known in New York State representing the New York League in the Legislature, the Governor's office, and media all around the state. She's an experienced, dynamic advocate for open government and has 25 years of experience fighting for honest and accurate elections in New York. Doug Kellner, Co-Chair of the New York State Board of Elections Douglas A. Kellner has served as Co-Chair of the New York State Board of Elections since 2005. Prior to serving on the New York State Board, Doug served as a commissioner on the New York City Board of Elections from 1993 to 2005. Commissioner Kellner has been an outspoken advocate for improving the voting process in New York while insisting on transparency, verifiability, accuracy and uniformity in voting procedures. As always, the incomparable John Gideon of Voters Unite will be on hand to share up-to-the-minute voting news and analysis you won't hear anywhere else. [R]Coordinating Council Conference Call Biweekly conference calls on Wednesday evenings starting at 6:30 Pacific Time.
Unless otherwise announced, plan on regular scheduled conference calls for EDA Coordinators and the 3 Members at Large biweekly on Wednesday evenings. Please mark on your home or business calendars and plan ahead. Phone number and pass code will be circulated by e-mail to the Coordinators at intervals several days before each meeting. You're encouraged to send in ideas for agenda items to the Coordinators mail list at any time. Include the word AGENDA in your subject line. VoteRescue Radio: Focus on New Hampshire This Sunday, January 13, 2 - 4pm Central ( 4 - 5 Pacific) we will "Focus on New Hampshire" The New Hampshire primary on Tuesday, January 8th, has created a reverberating series of stories pointing to possible, even likely election fraud, and both major parties may have been affected. We will have on as our returning guests Jim Condit Jr. and Walter Reddy, who launched the Primary Monitoring website, www.libertybroadcastnetwork.org just a couple of days before the Iowa Caucus and who were monitoring the New Hampshire primary. Walter was "on the ground" in New Hampshire, and they each have interesting data to contribute. Also calling in will be VoteRescuer Gregory Gory who was a citizen monitor in New Hampshire, videotaping events of interest. We will be welcoming callers in between or at the end of interviews. Anyone who was in New Hampshire as a citizen monitor or a voter, who has a New Hampshire story of interest, is especially welcome to call in. Items related to the New Hampshire primary that we'll be discussing: * Bev Harris's discovery from Sutton County Town Clerk about the "OOPS! Ron Paul actually got 31 votes, not zero!" 
and her research turning up the criminal history of one of the principals of LHS, the company that programmed all the Diebold machines in New Hampshire (which counted 81% of the votes) * Statistical analysis by Election Defense Alliance's Bruce O'Dell, and Theron Horton, of percentages won by Obama and Hillary - the curious comparison of the numbers between the hand-counted "towns" and the Diebold opscanned counted "towns" * Discussion of YouTube videos - possible manipulation of vote total reporting by media * "How could the Exit Polls be so wrong?" Discussion of how mainstream "spinmeisters" are creating convoluted hypotheses to justify Hillary trouncing Obama counter to eight exit polls' predictions "UNCOUNTED" West Coast Premiere Jan. 15 in Sacramento End: 11:30 pm "Uncounted" West Coast Premiere Jan. 15 at Sacramento Film & Music Festival "Uncounted" exposes how Americans were cheated during the 2004 and 2006 elections – and how "enraged" voters have turned their anger into citizen activism – to safeguard the vote. Show times are 5:30 and 8:00 p.m. Crest Theatre Tickets for the Sacramento screenings can be purchased online at http://www.tickets.com/browseother.cgi?minpid=6145879 and at the Crest Theatre box office. A question and answer session featuring filmmaker David Earnhardt, along with Brad Friedman of Bradblog.com and other special guests will follow both screenings. David Earnhardt is a producer/director of 30 years and a former broadcast executive. He is a winner of both Emmy and Iris awards. A national documentary on children's rights, a biographical documentary about jazz legend Helen Humes, and a comedy special featuring an up-and-coming Jay Leno are among Earnhardt's many credits. 
PRESS KIT: http://UncountedTheMovie.com/press.html PROMOTIONAL TRAILER: http://UncountedTheMovie.com/trailer.html CONTACT FILMMAKER: David[at]ep-video[dot]com For the full press release, see http://www.electiondefensealliance.org/uncounted_jan_15_sacramento For more information about the Sacramento Film and Music Festival go to http://www.sacfilm.com To learn more about the Peter B. Collins Show, visit http://www.PeterBCollins.com Voice of the Voters -- Holt, Freeman, O'Dell Rep. Rush Holt (NJ-12), Dr. Steve Freeman, and Bruce O'Dell * Will Emergency Bill Help Avert 2008 Election Disaster? * What Happened in the New Hampshire Primary? on "Voice of the Voters!" Wednesday, January 16, 8 pm Eastern Heard on 1360 AM Greater Philadelphia and on the Internet at http://www.voiceofthevoters.org Part I: Rep. Holt describes how an emergency bill can help avert 2008 election disaster U.S. Congressman Holt this week will introduce an emergency bill (see fundamental provisions below) that he says will address the lack of hand-marked paper ballots in certain states including Pennsylvania. As it stands, in disputed elections in some areas, the only figures available are what comes out of the machines. Will this bill avoid a repeat of the nightmare scenarios of Ohio and Florida in Pennsylvania and many other paperless states? According to Rep. Holt, the bill gives counties the option to replace paperless DREs with hand-marked paper ballots and gives them the funding to do so. Officials and activists who support the bill fear it may be weakened (radically rewritten) in committee, while others believe the measure doesn't go far enough or goes too far. Will it be changed significantly in committee? Can it pass in time? Who supports/opposes... why? What about DREs with paper? Part 2: Was the 2008 New Hampshire Primary Stolen? Dr. Steve Freeman, author of "Was the 2004 Election Stolen?" will discuss the controversies surrounding the January 4 New Hampshire primary. Dr. 
Freeman, a University of Pennsylvania professor, will provide an up-to-date analysis of post-election data, and be joined in discussion by Bruce O'Dell on current status, next steps, and potential impact. "UNCOUNTED" Premieres at Grand Lake Theater, Oakland FOR IMMEDIATE RELEASE Jan. 14 2008 Election Irregularities Exposed: Bay Area Premiere of "UNCOUNTED" Thursday Jan. 17, 7:00 p.m. Grand Lake Theater, Oakland 3200 Grand Ave. Oakland Doors open at 6:00 p.m. Tickets: $10.00 Appearances by David Earnhardt, Brad Friedman, Ian Hoffman CONTACT: Glenna Johnson, 615.327.0600, glenna@ep-video.com NASHVILLE, TN (January 4, 2008)—The Bay Area theatrical premiere of the new film the nation is talking about, UNCOUNTED: The New Math of American Elections, occurs January 17 in Oakland. UNCOUNTED is a feature-length documentary that exposes a threat to the core of our democracy – our right to vote. The national theatrical premiere of UNCOUNTED in Nashville drew a sellout crowd in November. In addition, the film was recently selected to participate in the prestigious 2008 Durango Film Festival in Colorado. UNCOUNTED is written, produced, and directed by multiple Emmy award-winner David Earnhardt. The film examines how the fraud that changed the outcome of the 2004 election led to even greater election fraud in 2006 and now looms as an unbridled threat in 2008. In factual and logical ways, a variety of computer, statistical, and election experts show how the unethical and illegal manipulation of the vote can change the outcome of every election – whether on a local, state, or national level. Streaming live over the Internet at http://www.toginet.net To Tune In: Go to the Toginet page, and click a stream launch button in the upper right to connect via Windows Media or Flash Player. 
Download Podcast Archives of earlier Black Box America Live shows (hosted by Bev Harris) and the most recent shows produced by Election Defense Alliance hosted online at: EDA Radio Archives Vote Rescue Radio: What Happened in New Hampshire Vote Rescue Radio Sunday Jan. 27 2:00-4:00 pm CST On radio in the Austin area at 90.1 FM Streaming live on http://www.wtprn.com ("We the People Radio Network") Dial up on the telephone at 512-485-9010 This is a CALL-IN show. Call-in numbers are: 1-888-202-1984 Austin local area: 512-646-1984. Tomorrow, January 27th, from 2-4 CST we will cover the events this past week at the New Hampshire recount! I just returned after spending four days there, part of the time with Bev Harris and some other fine election reform activists. I was one of the drivers "chasing" the van (driven by "Butch" and "Hoppy") and the police car that were picking up the boxes of ballots from various New Hampshire towns for the recount at the State Archive building. I witnessed some fascinating events and met some courageous, committed people, especially the Republican candidate who filed for the recount, Albert Howard. Also there were Kathy Greenwell, chief source for the Bullitt County, Kentucky section of Bev's "Moonshine Elections" series; and Jeannie Dean, videographer extraordinaire from Sarasota, Florida (home of the 18,000 undervotes in 2006!) as well as Walter Reddy of Connecticut, creator of the election monitoring website "Liberty Broadcast Network" (.org), and Bob Schulz of "We the People Foundation", who filed the "National Clean Election Lawsuit" (NCEL) against all 50 Secretaries of State or Boards of Elections for allowing the use of electronic voting machines, in violation of the Constititution. We may get some of the players to call in tomorrow to help bring the coverage to life. Subscribe to EDA Newslist View Our Archived Powerful Alternative To Proposed Audits! 
UBS Verification Protocol Could End "Faith-Based" Voting 10,000 Election Simulation proves effectiveness of UBS ballot count validation system. See the Press Release Now on sale in the EDA Store Witness To A Crime "Irrefutable Evidence' that Bush/Cheney stole Ohio in '04 " -- Robert F. Kennedy, Jr Receive this book FREE with a $75 tax-deductible donation to EDA. Was 2004 Stolen? and receive the book "Was the 2004 Presidential Election Stolen?" signed by Steve Freeman or Joel Bleifuss. Click to Order. Recent EDA Blog Posts with EDA TV See how citizens in Riverside, CA are saving their votes. All content on this site © 2006 by each individual author, All Rights Reserved. Fair Use Policy | Original Site created by Jenni Simonis Who links to EDA?
{"url":"https:\/\/courses.grainger.illinois.edu\/ece417\/fa2021\/hw.html","text":"# Written Homework\u00b6\n\n## How to Submit Homework\u00b6\n\nThe homework will be distributed as web pages, linked from the list shown above.\n\nWrite your answers in pencil, photograph the pages, and submit them to Gradescope. If you prefer, you can submit answers written in LaTeX, Word, RST, or whatever format you like.\n\nNote: grading is for completeness only, not for correctness. However, in order for us to see that your submission is complete, we need to be able to read it (at least a little).\n\nSolutions will be posted ten days after the submission deadline.\n\nYou are allowed to submit any homework late, with a penalty: your homework grade will be multiplied by $$\\max(0.5,1-t\/28800)$$, where $$t$$ is the lateness of your submission, in minutes. For example, if you\u2019re late by one day, you can earn 95% credit; if you\u2019re late by any number of days greater than 10, you can still earn up to 50% credit.","date":"2022-06-25 22:34:16","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.44679197669029236, \"perplexity\": 1271.1195590266923}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, 
\"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-27\/segments\/1656103036176.7\/warc\/CC-MAIN-20220625220543-20220626010543-00706.warc.gz\"}"}
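The late-penalty formula quoted in the record above, max(0.5, 1 - t/28800), is easy to sanity-check numerically; a small sketch (the function name is mine):

```python
def late_penalty(minutes_late: float) -> float:
    """Grade multiplier for a submission that is `minutes_late` late:
    max(0.5, 1 - t/28800), i.e. 5% lost per day, floored at 50%."""
    return max(0.5, 1.0 - minutes_late / 28800.0)

print(late_penalty(0))          # 1.0  (on time)
print(late_penalty(1440))       # 0.95 (one day late -> 95% credit)
print(late_penalty(20 * 1440))  # 0.5  (more than ten days late -> 50% floor)
```

One day is 1440 minutes, and 1440/28800 = 0.05, which matches the "95% credit" example in the text; at 28800 minutes (20 days) the raw term reaches zero, so the 0.5 floor applies.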
\section{Introduction}
Motion planning for autonomous vehicle functions is a vast and long-researched area using a wide variety of approaches such as different optimization techniques, modern control methods, artificial intelligence, and machine learning. This article presents the achievements of the field from recent years, focusing on the Deep Reinforcement Learning (DRL) approach. DRL combines classic reinforcement learning with deep neural networks, and gained popularity after the breakthrough articles from DeepMind \cite{Mnih2013PlayingLearning, Mnih2015Human-levelLearning}. The number of research papers about autonomous vehicles and DRL has increased in the last few years (see Fig. \ref{Fig:wos}), and because of the complexity of the different motion planning problems, it is natural to evaluate the applicability of DRL to these problems.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{Figures/WOSSTAT.pdf}
\caption{Web of Science topic search for "Deep Reinforcement Learning" and "Autonomous Vehicles" (2020.01.17.)}
\label{Fig:wos}
\end{figure}
\subsection{The Hierarchical Classification of Motion Planning for Autonomous Driving}
Using deep neural networks for self-driving cars makes it possible to develop "end-to-end" solutions in which the system operates like a human driver: its inputs are the travel destination, the knowledge of the road network, and various sensor information, while its outputs are the direct vehicle control commands, e.g., steering, torque, and brake. However, realizing such a scheme is quite complicated on the one hand, since it needs to handle all layers of the driving task; on the other hand, the system itself behaves like a black box, which raises design and validation problems. Examining the recent advances in the field shows that most research focuses on solving some sub-tasks of the hierarchical motion planning problem.
The decision-making system of autonomous driving can be decomposed into at least four layers, as stated in \cite{Paden2016} (see Fig. \ref{fig:motion_planning}). Route planning, as the highest level, defines the way-points of the journey based on the map of the road network, with the possibility of using real-time traffic data. Though optimal route choice is of high interest among the research community, papers dealing with this level do not employ reinforcement learning. A comprehensive study on the subject can be found in \cite{Bast2016RouteNetworks}. \begin{figure*}[tbhp] \centering \includegraphics[width=\textwidth]{Figures/motion_planning.pdf} \caption{Layers of motion planning} \label{fig:motion_planning} \end{figure*} The behavioral layer is the strategic level of autonomous driving. With the given way-points, the agent decides on the short-term policy by taking into consideration the local road topology, the traffic rules, and the perceived state of other traffic participants. Having a finite set of available actions for the driving context, the realization of this layer is usually a finite state-machine having basic strategies in its states (i.e., car following, lane changing, etc.) with well-defined transitions between them based on the change of the environment. However, even with full knowledge of the current state of the traffic, the future intentions of the surrounding drivers are unknown, making the problem partially observable \cite{Brechtel2014ProbabilisticPOMDPs}. Hence the future state not only depends on the behavior of the ego vehicle but also relies on unknown processes; this problem forms a Partially Observable Markov Decision Process (POMDP).
Different techniques exist to mitigate these effects by predicting the possible trajectories of other road users, like in \cite{Wiest2012ProbabilisticModels}, where the authors used Gaussian mixture models, or in \cite{Dou2016LaneClassifiers}, where support vector machines and artificial neural networks were trained based on recorded traffic data. Since finite-action POMDPs are the natural way of modeling reinforcement learning problems, a high number of research papers deal with this level, as can be seen in the sections of this paper. To carry out the strategy defined by the behavioral layer, the motion planning layer needs to design a feasible trajectory consisting of the expected speed, yaw, and position states of the vehicle on a short horizon. Naturally, on this level, the vehicle dynamics has to be considered, hence classic exact solutions of motion planning are impractical since they usually assume holonomic dynamics. It has long been known that solving the motion planning problem with nonholonomic dynamics is PSPACE-hard \cite{Reif1979ComplexityGeneralizations}, meaning it is hard to elaborate an overall solution by solving the nonlinear programming problem in real-time \cite{Hegedus2018HybridNetworks}. On the other hand, the output representation of the layer makes it hard to handle directly with "pure" reinforcement learning; hence only a few papers deal solely with this layer, and they usually use DRL to define splines as a result of the training \cite{Saxena2019DrivingLearning,Feher2019HybridPlanning}. At the lowest level, the local feedback control is responsible for minimizing the deviation from the prescribed path or trajectory. A significant amount of the papers reviewed in this article deals with the aspects of this task, where lane-keeping, trajectory following, or car following is the higher-level strategy.
At this level, however, the action space becomes continuous, which classical approaches of RL can not handle; hence discretization of the control outputs is needed, or - as in some papers - continuous variants of DRL are used. \subsection{Reinforcement Learning} As an area of Artificial Intelligence and Machine Learning, Reinforcement Learning (RL) deals with the problem of a learning agent placed in an environment to achieve a goal. Contrary to supervised learning, where the learner structure gets examples of good and bad behavior, the RL agent must discover by trial and error how to behave to get the most reward \cite{Sutton2017ReinforcementIntroduction}. For this task, the agent must perceive the state of the environment at some level and, based on this information, take actions that result in a new state. As a result of its action, the agent receives a reward, which aids in the development of future behavior. To formulate the problem completely, the state transitions of the environment, based on the actions of the agent, also need to be modeled. This leads to the formulation of a POMDP defined by the tuple $(\mathcal{S}, \mathcal{A}, T, R, \Omega, O)$, where $\mathcal{S}$ is the set of environment states, $\mathcal{A}$ is the set of possible actions in that particular state, $T$ is the transition function between the states based on the actions, $R$ is the reward for the given $(\mathcal{S}, \mathcal{A})$ pair, while $\Omega$ is the set of observations, and $O$ is the sensor model. The agent in this context can be formulated by any inference model whose parameters can be modified in response to the experience gained. In the context of Deep Reinforcement Learning, this model is implemented by neural networks.
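As an illustration of how this tuple maps onto code, the sketch below shows a minimal, gym-style environment skeleton. The state variables, the transition rule, and the reward are hypothetical placeholders, not taken from any of the cited works:

```python
class HighwayEnv:
    """Minimal POMDP-style environment skeleton: step() applies the
    transition function T, computes the reward R, and returns only a
    partial observation (Omega, O) instead of the full state."""

    def __init__(self):
        self.state = None

    def reset(self):
        self.state = {"ego_speed": 0.0, "lane": 1, "t": 0}
        return self._observe()

    def step(self, action):
        # T: hypothetical transition - actions 0/1/2 = decelerate/keep/accelerate
        self.state["ego_speed"] += (action - 1) * 1.0
        self.state["t"] += 1
        # R: hypothetical reward for holding a desired speed of 30 m/s
        reward = -abs(self.state["ego_speed"] - 30.0)
        done = self.state["t"] >= 100
        return self._observe(), reward, done

    def _observe(self):
        # O: the agent sees speed and lane, but not the internal step counter
        return (self.state["ego_speed"], self.state["lane"])
```

The `reset`/`step` pair mirrors the interaction loop of the POMDP: the agent only ever sees the output of `_observe`, never the full simulator state.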
\begin{figure*}[tbhp] \centering \includegraphics[width=\textwidth]{Figures/POMDP.pdf} \caption{The POMDP model for Deep Reinforcement Learning based autonomous driving} \label{fig:pomdp} \end{figure*} The problem in the POMDP scenario is that the current actions affect the future states and, therefore, the future rewards, meaning that for optimizing the behavior for the cumulative reward throughout the entire episode, the agent needs to have information about the future consequences of its actions. RL has two main approaches for determining the optimal behavior: value-based and policy-based methods. The original concept using a value-based method is the Deep Q-Learning Network (DQN) introduced in \cite{Mnih2013PlayingLearning}. Described briefly, the agent predicts a so-called Q value for each state-action pair, which formulates the expected immediate and future reward. From this set, the agent can choose the action with the highest value as an optimal policy or can use the values for exploration during the training process. The main goal is to learn the optimal Q function, represented by a neural network in this case. This can be done by conducting experiments, calculating the discounted rewards of the future states for each action, and updating the network by using the Bellman-equation \cite{Bellman1957DynamicProgramming} as a target. Using the same network for value evaluation and action selection results in unstable behavior and slow learning in noisy environments. Meta-heuristics, such as experience replay, can handle this problem, while other variants of the original DQN also exist, such as Double DQN \cite{VanHasselt2016DeepQ-learning} or Dueling DQN \cite{Wang2015DuelingLearning}, separating the action and the value prediction streams, leading to faster and more stable learning. Policy-based methods aim at choosing the optimal behavior directly, where the policy $\pi_\Theta$ is a function of $(\mathcal{S}, \mathcal{A})$.
Represented by a neural network with a softmax head, the agent generally predicts a normalized probability of the expected goodness of the actions. In the most natural implementation, this output integrates the exploration property of the RL process. In advanced variants, such as the actor-critic, the agent uses different predictions for the value and the action \cite{Silver2014DeterministicAlgorithms}. Classic RL algorithms use a finite action space, which is not suitable for many control problems. To overcome this issue, \cite{Lillicrap2015ContinuousLearning} introduced the Deep Deterministic Policy Gradient (DDPG) agent, where the actor directly maps states to continuous actions. For complex problems, the learning process can still be long or even unsuccessful. This can be mitigated in several ways: \begin{itemize} \item Curriculum learning describes a type of learning in which the training starts with only easy examples of a task and then gradually increases the difficulty. This approach is used in \cite{Qiao2018AutomaticallyEnvironment, Bouton2019Cooperation-AwareTraffic, Kaushik2018OvertakingLearning}. \item Adversarial learning aims to fool models through malicious input. Papers using variants of this technique are: \cite{Ferdowsi2018RobustSystems, Ma2018ImprovedLearning}. \item Model-based action choice, such as the MCTS-based solution of AlphaGo, can reduce the effect of the problem of distant rewarding. \end{itemize} Since reinforcement learning models the problem as a POMDP, a discrete-time stochastic control process, the solutions need to provide a mathematical framework for decision making in situations where outcomes are partly random and partly under the control of a decision-maker, while the states are also only partly observable \cite{Kaelbling1998PlanningDomains}. In the case of motion planning for autonomous or highly automated vehicles, the tuple $(\mathcal{S}, \mathcal{A}, T, R, \Omega, O)$ of the POMDP is illustrated in Fig.
\ref{fig:pomdp} and can be interpreted as follows: $\mathcal{S}, \mathcal{A}, T,$ and $R$ describe the MDP, the modeling environment of the learning process. It can vary depending on the goals, though in our case it needs to model the dynamics of the vehicle, the surrounding static and dynamic objects, such as other participants of the traffic, the road topology, lane markings, signs, traffic rules, etc. $\mathcal{S}$ holds the current actual state of the simulation. $\mathcal{A}$ is the possible set of actions of the agent driving the ego-car, while $T$, the so-called state-transition function, updates the vehicle state and also the states of the traffic participants depending on the action of the vehicle. The different levels of abstraction are described in section \ref{ss_model}. Many research papers use different software platforms for modeling the environment. A brief collection of the used frameworks is presented in section \ref{ss_simulators}. $R$ is the reward function of the MDP; section \ref{ss_rewarding} gives a summary of this topic. $\Omega$ is the set of observations the agent can experience in the world, while $O$ is the observation function giving a probability distribution over the possible observations. In simpler cases, the studies assume full observability and formulate the problem as an MDP, though in many cases, the vehicle does not possess all information. Another interesting topic is the representation of the state observation, which is a crucial factor for the architecture choice and performance of Deep RL agents. The observation models used in the literature are summarized in section \ref{ss_observation}. \section{Modeling for Reinforcement Learning} \subsection{Vehicle modeling}\label{ss_model} Modeling the movement of the ego-vehicle is a crucial part of the training process since it raises the trade-off problem between model accuracy and computational resources.
Since RL techniques use a massive number of episodes for determining the optimal policy, the step time of the environment, which highly depends on the evaluation time of the vehicle dynamics model, profoundly affects training time. Therefore, during environment design, one needs to choose from the simplest kinematic model to more sophisticated dynamics models, ranging from the 2 Degrees of Freedom (2DoF) lateral model to more and more complex models with a higher number of parameters and complicated tire models. In rigid kinematic single-track vehicle models, which neglect tire slip and skid, lateral motion is affected only by the geometric parameters; therefore, they are usually limited to low-speed applications. More details about the model can be found in \cite{Kong2015KinematicDesign}. The simplest dynamic models with longitudinal and lateral movements are based on the 3 Degrees of Freedom (3DoF) dynamic bicycle model, usually with a linear tire model. They consider $(V_x, V_y, \dot{\Psi})$ as independent variables, namely longitudinal and lateral speed, and yaw rate. A more complex model is the four-tire 9 Degrees of Freedom (9DoF) vehicle model, where besides the parameters of the 3DoF model, body roll and pitch $(\dot{\Theta}, \dot{\Phi})$ and the angular velocities of the four wheels $({\omega_{fl},\omega_{fr},\omega_{rl},\omega_{rr}})$ are also considered to calculate tire forces more precisely. Hence the model takes into account both the coupling of longitudinal and lateral slips and the load transfer between tires. The kinematic model seems quite simplified, and as stated in \cite{Polack2017TheVehicles}, such a model can behave significantly differently from an actual vehicle; still, for many control situations, its accuracy is suitable \cite{Kong2015KinematicDesign}.
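For reference, one Euler step of the kinematic single-track (bicycle) model mentioned above can be written in a few lines; the wheelbase and step-size values below are arbitrary examples, not taken from the cited works:

```python
import math

def kinematic_bicycle_step(x, y, psi, v, a, delta, L=2.7, dt=0.05):
    """One Euler step of the rear-axle kinematic bicycle model.
    x, y: position [m]; psi: yaw angle [rad]; v: speed [m/s];
    a: acceleration command [m/s^2]; delta: steering angle [rad];
    L: wheelbase [m]; dt: integration step [s]."""
    x += v * math.cos(psi) * dt
    y += v * math.sin(psi) * dt
    psi += v / L * math.tan(delta) * dt
    v += a * dt
    return x, y, psi, v
```

Its evaluation cost is a handful of trigonometric calls per step, which is why, as noted above, the RL community tends to prefer it over the far more expensive 3DoF and 9DoF models.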
According to \cite{Polack2017TheVehicles}, using a kinematic bicycle model with a limitation on the lateral acceleration at around $0.5g$ or less provides appropriate results, but only with the assumption of a dry road. Above this limit, the model is unable to handle the dynamics; hence a more accurate vehicle model should be used when dealing with higher accelerations to push the vehicle's dynamics near its handling limits. Regarding calculation time, compared to the kinematic model, the calculation of the 3DoF model can be $10\dots 50$ times higher, and the precise calculation of a 9DoF model with a nonlinear tire model can be $100\dots 300$ times higher, which is the main reason for the RL community to use a low level of abstraction. Modeling traffic and surrounding vehicles is often performed by using unique simulators, as described in section \ref{ss_simulators}. Some authors develop their own environments, using cellular automata models \cite{You2019AdvancedLearning}. Some use MOBIL, a general model (minimizing overall braking induced by lane change) to derive lane-changing rules for discretionary and mandatory lane changes for a broad class of car-following models \cite{Kesting2007GeneralModels}, or the Intelligent Driver Model (IDM), a continuous microscopic single-lane model \cite{Treiber2000CongestedSimulations}. \subsection{Simulators} \label{ss_simulators} Some authors create self-made environments to achieve full control over the model, though there are commercial and open-source environments that can provide this feature. This section briefly identifies some of those used in recent research in motion planning with RL. In modeling the traffic environment, the most popular choice is SUMO (Simulation of Urban MObility), which is a microscopic, inter- and multi-modal, space-continuous and time-discrete traffic flow simulation platform \cite{Krajzewicz2012RecentMObility}.
It can convert networks from other traffic simulators such as VISUM, Vissim, or MATSim and also reads other standard digital road network formats, such as OpenStreetMap or OpenDRIVE. It also provides interfaces to several environments, such as Python, Matlab, .Net, C++, etc. Though the abstraction level, in this case, is microscopic and vehicle behavior is limited, its ease of use and high speed make it an excellent choice for training agents to handle traffic, though it does not provide any sensor model besides the ground truth state of the vehicles. Another popular microscopic simulator that has been used both commercially and for research is VISSIM \cite{Fellendorf2010MicroscopicVISSIM}. In \cite{Ye2019AutomatedEnvironment} it is used for developing car-following behavior and lane-changing decisions. Considering only vehicle dynamics, the most popular choice is TORCS (The Open Racing Car Simulator), which is a modern, modular, highly portable multi-player, multi-agent car simulator. Its high degree of modularity and portability render it ideal for artificial intelligence research \cite{Wymann2014TORCS:Simulator}. Interfacing with Python, the most popular AI research environment, is comfortable, and the simulator runs at an acceptable speed. TORCS also comes with different tracks, competing robots, and several sensor models. It is assumed that for vehicle dynamics, the best choices would be professional tools, such as CarSIM \cite{CarSIMCorporation} or CarMaker \cite{CarMakerAutomotive}, though the utilization of this software can not be found in the reinforcement learning literature. This may be caused by the fact that these are expensive commercial platforms, though more importantly, their lack of Python interfaces and their high-precision but resource-intensive models prevent them from running several episodes within a reasonable time.
For more detailed sensor models or traffic, the authors usually use AirSim, Udacity, Gazebo/ROS, and CARLA: AirSim, used by recent research in \cite{An2019Decision-MakingDriving}, is a simulator initially developed for drones, built on Unreal Engine, which now has a vehicle extension with different weather conditions and scenarios. Udacity, used in \cite{Wang2019LaneConstraints}, is a simulator that was built for Udacity's Self-Driving Car Nanodegree \cite{WelcomeSimulator}; it provides various sensors, such as high-quality rendered camera images, LIDAR, and infrared information, and also has capabilities to model other traffic participants. Another notable mention is CARLA, an open-source simulator for autonomous driving research. CARLA has been developed from the ground up to support the development, training, and validation of autonomous urban driving systems. In addition to open-source code and protocols, CARLA provides open digital assets (urban layouts, buildings, vehicles) that were created for this purpose and can be used freely. The simulation platform supports flexible specification of sensor suites and environmental conditions \cite{Dosovitskiy2017CARLA:Simulator}. Though this section provides only a brief description of the simulators, a more systematic review of the topic can be found in \cite{Rosique2019AResearch}. \subsection{Action Space}\label{ss_action} The choice of action space highly depends on the vehicle model and the task designed for the reinforcement learning problem in each previous research. Two main levels of control can be found: one is the direct control of the vehicle by steering, braking, and accelerating commands, and the other acts on the behavioral layer and defines choices on the strategic level, such as lane change, lane keeping, setting an ACC reference point, etc. At this level, the agent gives a command to low-level controllers, which calculate the actual trajectory.
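The two levels can be contrasted in code: a behavioral-layer agent picks from a handful of strategic options, while a control-level agent using a finite-action RL technique needs a discretized grid of steering and acceleration commands. The following sketch is purely illustrative; the action names, level counts, and limits are hypothetical:

```python
from itertools import product

# Behavioral level: a small finite set of strategic choices (names illustrative).
BEHAVIORS = ["keep_lane", "change_left", "change_right", "accelerate", "brake"]

def discretize_control(steer_levels=5, accel_levels=3,
                       max_steer=0.3, max_accel=2.0):
    """Control level: build a finite action set as the Cartesian product of
    discretized steering [rad] and acceleration [m/s^2] commands, spread
    evenly over [-max, +max] in each channel."""
    steers = [max_steer * (2 * i / (steer_levels - 1) - 1)
              for i in range(steer_levels)]
    accels = [max_accel * (2 * i / (accel_levels - 1) - 1)
              for i in range(accel_levels)]
    return list(product(steers, accels))

actions = discretize_control()
```

Note how the control-level set already has $5 \times 3 = 15$ elements; adding channels or levels multiplies the count, which is the exponential growth mentioned below.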
Only a few papers deal with the motion planning layer, where the task defines the endpoints $(x,y,\theta)$, and the agent defines the knots of the trajectory to follow, represented as a spline, as can be seen in \cite{Feher2019HybridPlanning}. A few papers also deviate from vehicle motion restrictions and generate actions by stepping in a grid, like in classic cellular automata microscopic models \cite{Kashihara2017DeepJunction}. Some papers combine the control and behavioral layers by separating longitudinal and lateral tasks, where longitudinal acceleration is a direct command, while lane changing is a strategic decision, like in \cite{Nageshrao2019AutonomousLearning}. The behavioral layer usually holds a few distinct choices, from which the underlying neural network needs to choose, making it a classic reinforcement learning task with finite actions. On the level of control, however, the actuation of vehicles, i.e., steering, throttle, and braking, are continuous parameters, and many reinforcement learning techniques like DQN and PG can not handle this since they need a finite action set, while some, like DDPG, work with continuous action spaces. To adapt to the finite action requirements of the RL technique used, most papers discretize the steering and acceleration commands to 3 to 9 possibilities per channel. The low number of possible choices pushes the solution farther from reality, which could raise vehicle dynamics issues with uncontrollable slips, massive jerk, and yaw-rate, though the utilization of kinematic models sometimes covers this in the papers. A large number of discrete choices, however, ends up in an exponential growth of the possible outcomes in the POMDP approach, which slows down the learning process. \subsection{Rewarding} \label{ss_rewarding} During training, the agent tries to fulfill a task, generally consisting of more than one step. This task is called an episode.
An episode ends if one of the following conditions is met: \begin{itemize} \item The agent successfully fulfills the task; \item The episode reaches a previously defined number of steps; \item A terminating condition arises. \end{itemize} The first two cases are trivial and depend on the design of the actual problem. Terminal conditions are typically situations where the agent reaches a state from which the actual task is impossible to fulfill, or the agent makes a mistake that is not acceptable. Vehicle motion planning agents usually use terminating conditions such as collision with other participants or obstacles, or leaving the track or lane, since these inevitably end the episode. There are lighter approaches, where the episode terminates with failure before the accident occurs, for example when the vehicle has too high a tangent angle to the track or gets too close to other participants. These "before accident" terminations speed up the training by bringing the information of failure forward in time, though their design needs caution \cite{Alizadeh2019AutomatedEnvironment}. Rewarding plays the role of evaluating the goodness of the choices the agent made during the episode, giving feedback to improve the policy. The first important aspect is the timing of the reward, where the designer of the reinforcement learning solution needs to choose a mixture of the following strategies, all having their pros and cons: \begin{itemize} \item Giving reward only at the end of the episode and discounting it back to the previous $(\mathcal{S}, \mathcal{A})$ pairs, which could result in a slower learning process, though it minimizes the human-driven shaping of the policy.
\item Giving an immediate reward at each step by evaluating the current state; naturally, discounting also appears in this solution, which results in significantly faster learning, though the choice of the immediate reward highly affects the established strategy, which sometimes prevents the agent from developing better overall solutions than the one intended by the designed reward. \item An intermediate solution can be to give a reward in predefined periods or travel distances \cite{Feher2018Q-learningKeeping}, or when a good or bad decision occurs. \end{itemize} In the area of motion planning, the end-of-episode rewards are calculated from the fulfillment or failure of the driving task. The overall performance factors are generally: time of finishing the task, keeping the desired speed or achieving as high an average speed as possible, minimizing yaw or distance from the lane middle or the desired trajectory, overtaking more vehicles, performing as few lane changes as possible \cite{Bai2019DeepTraffic}, keeping right \cite{Wolf2018AdaptiveStates, Aradi2018PolicyDriving}, etc. Rewarding systems can also represent passenger comfort, where the smoothness of the vehicle dynamics is enforced. The most used quantitative measures are the longitudinal acceleration \cite{Xu2018AHighways}, lateral acceleration \cite{Wang2018AManeuvers, Ronecker2019DeepDriving}, and jerk \cite{Zhu2019SafeDriving, Saxena2019DrivingLearning}. In some research, the reward is based on the deviation from a dataset \cite{Zhu2018Human-likeLearning}, or calculated as a deviation from a reference model, like in \cite{Hoel2018AutomatedLearning}. These approaches can provide favorable results, though they deviate a bit from the original philosophy of reinforcement learning, since a previously known strategy could guide the learning. \subsection{Observation Space} \label{ss_observation} The observation space describes the world to the agent.
It needs to give sufficient information for choosing the appropriate action, hence - depending on the task - it contains the following knowledge: \begin{itemize} \item The state of the vehicle in the world, e.g., position, speed, yaw, etc. \item Topological information like lanes, signs, rules, etc. \item Other participants: surrounding vehicles, obstacles, etc. \end{itemize} The reference frame of the observation can be absolute and fixed to the coordinate system of the world, though as the decision process focuses on the ego-vehicle, it is more straightforward to choose an ego-centric reference frame pinned to the vehicle's coordinate system, or to the vehicle's position in the world and the orientation of the road. It allows concentrating the distribution of visited states around the origin in position, heading, and velocity space, as other vehicles are often close to the ego-vehicle and have similar speed and heading, reducing the region of state-space in which the policy must perform \cite{Leurent2018}. \subsubsection{Vehicle state observation} For lane keeping, navigation, simple racing, overtaking, or maneuvering tasks, the most commonly used and also the simplest observation for the ego vehicle consists of the continuous variables $(|e|, v, \theta_e)$, describing the lateral position from the center-line of the lane, the vehicle speed, and the yaw angle, respectively (see Fig. \ref{fig:lane_keeping_state}). This information is the absolute minimum for guiding car-like vehicles and is only eligible for the control of classical kinematic car-like models, where the system implies the motion-without-skidding assumption. Though in many cases in the literature, this can be sufficient, since the vehicles remain deep in the dynamically stable region.
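A minimal sketch of assembling this $(|e|, v, \theta_e)$ observation for a straight lane, together with a simple immediate reward computed from it, could look as follows; the reward weights and the reference speed are hypothetical choices, not values from the cited papers:

```python
def ego_observation(x, y, psi, v, lane_center_y=0.0, lane_heading=0.0):
    """Build the minimal (e, v, theta_e) observation for a straight lane
    running along the x axis: lateral offset from the lane center,
    vehicle speed, and heading error relative to the lane direction."""
    e = y - lane_center_y
    theta_e = psi - lane_heading
    return e, v, theta_e

def immediate_reward(e, v, theta_e, v_ref=30.0,
                     w_lat=1.0, w_speed=0.1, w_head=0.5):
    """Hypothetical immediate reward: penalize lateral offset, speed error,
    and heading error (all weights are illustrative)."""
    return -(w_lat * abs(e) + w_speed * abs(v - v_ref) + w_head * abs(theta_e))
```

A perfectly centered, aligned vehicle at the reference speed receives a reward of zero; any deviation makes it negative, matching the immediate-rewarding strategy discussed in the previous subsection.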
\begin{figure}[htpb] \centering \includegraphics[width=\linewidth]{Figobservation/basicobserv.png} \caption{Observation for basic vehicle state (source: \cite{Paden2016})} \label{fig:lane_keeping_state} \end{figure} For tasks where more complex vehicle dynamics is inevitable, such as racing situations, or where the stability of the vehicle is essential, this set of observable states would not be enough, and it should be extended with yaw, pitch, roll, tire dynamics, and slip. \subsubsection{Environment observation} Getting information about the surroundings of the vehicle and representing it to the learning agent shows high diversity in the literature. Different levels of sensor abstraction can be observed: \begin{itemize} \item sensor level, where camera images, lidar, or radar information is passed to the agent; \item intermediate level, where idealized sensor information is provided; \item ground truth level, where all detectable and non-detectable information is given. \end{itemize} The structure of the sensor model also affects the neural network structure of the Deep RL agent, since image-like or array-like inputs imply 2D or 1D CNN structures, while a simple set of scalar information results in a simple dense network. There are cases where these two kinds of inputs are mixed; hence the network needs to have two different types of input layers. Image-based solutions usually use front-facing camera images extracted from 3D simulators to represent the observation space. The data is structured in a ($C$ x $W$ x $H$) sized matrix, where $C$ is the number of channels, usually one for intensity images and three for RGB, while $W$ and $H$ are the width and height resolution of the image. In some cases, for the detection of movement, multiple images are fed to the network in parallel.
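Feeding multiple images in parallel is usually implemented as frame stacking: the observation becomes the last $k$ frames arranged as a ($k$ x $H$ x $W$) tensor. A minimal stdlib sketch, with plain nested lists standing in for image arrays and all sizes illustrative:

```python
from collections import deque

class FrameStack:
    """Keep the last k grayscale frames and expose them as a (k, H, W)
    observation, so a CNN can infer motion from consecutive frames."""

    def __init__(self, k=4, height=84, width=84):
        blank = [[0.0] * width for _ in range(height)]
        # Start with k blank frames so the observation shape is constant.
        self.frames = deque([blank] * k, maxlen=k)

    def push(self, frame):
        """Append the newest frame; the oldest one is dropped automatically."""
        self.frames.append(frame)
        return list(self.frames)
```

In practice the nested lists would be replaced by the simulator's image arrays; the stacking logic stays the same.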
Sometimes it is convenient to down-sample the images - like ($1$x$48$x$27$) in \cite{Wolf2017LearningQ-Networks} or ($3$x$84$x$84$) in \cite{Jaritz2018End-to-EndLearning, Perot2017End-to-EndLearning} - for data and network compression purposes. Since images hold the information in an unstructured manner, i.e., the state information, such as object positions or lane information, is deeply encoded in the data, deep neural networks, such as CNNs, usually need large sample counts and much time to converge \cite{Li2019ReinforcementNotes}. This problem escalates with the high number of steps that the RL process requires, resulting in a lengthy learning process, like $1.5M$ steps in \cite{Wolf2017LearningQ-Networks} or $100M$ steps in \cite{Jaritz2018End-to-EndLearning}. Many image-based solutions propose some kind of preprocessing of the data to overcome this issue. In \cite{Li2019ReinforcementNotes}, the authors propose a framework for vision-based lateral control, which combines DL and RL methods. To improve the perception accuracy, an MTL (Multitask learning) CNN model is proposed to learn the critical track features, which are used to locate the vehicle in the track coordinate system, and a policy gradient RL controller is trained to solve the continuous sequential decision-making problem. Naturally, this approach can also be viewed as an RL solution with structured features, though the combined approach has its place among the image-based solutions as well. Another approach could be the simplification of the unstructured data. In \cite{Kotyan2019SelfAgent}, Kotyan et al. use the difference image, the background subtraction between two consecutive frames, as an input, assuming this image contains the motion of the foreground and the underlying neural network would focus more on the features of the foreground than on the background.
Using the same training algorithm, their results showed that including the difference image instead of the original unprocessed input needs approximately ten times fewer training steps to achieve the same performance. The second possibility is, instead of using the original image as an input, to drive it through an image semantic segmentation network, as proposed in \cite{Xu2018AutonomousTranslation}. As the authors state: "Semantic image contains less information compared to the original image, but includes most information needed by the agent to take actions. In other words, semantic image neglects useless information in the original image." Another advantage of this approach is that the trained agent can use the segmented output of images obtained from real-world scenarios, since on this level, the difference is much smaller between the simulated and real-world data than in the case of simulated and real-world images. Fig. \ref{Fig:segmentedimage} shows the $640x400$ resolution inputs used in this research. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{Figobservation/Xu2018_Segmented_image.png} \caption{Real images from the driving data and their semantic segmentations (source: \cite{Xu2018AutonomousTranslation})} \label{Fig:segmentedimage} \end{figure} 2D or 3D Lidar-like sensor models are not common among the recent studies, though they could provide excellent depth-map-like information about the environment. The same problem arises as with the camera images: the provided data - a vector for 2D, and a matrix for 3D Lidars - is unstructured. The usage of this type of input can only be found in \cite{Lee2017AutonomousQ-learning}, where the observation emulates a 2D Lidar that provides the distance from obstacles in $31$ directions within a field-of-view of $150^{\circ}$, and the agent uses the sensor data as its state.
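Such beam-style observations are easy to emulate for a straight track section; the sketch below computes distances to the edges of a corridor along evenly spread rays. The geometry is deliberately simplified, and the parameter values merely mimic the setups described here, not any specific implementation:

```python
import math

def beam_distances(lane_half_width=4.0, n_beams=19, fov_deg=180.0,
                   max_range=200.0):
    """Simplified beam-sensor observation: for a straight corridor of
    half-width `lane_half_width` [m] aligned with the heading, return the
    distance to the track edge along `n_beams` rays spread over `fov_deg`
    degrees, capped at `max_range` [m]."""
    readings = []
    for i in range(n_beams):
        angle = math.radians(-fov_deg / 2 + i * fov_deg / (n_beams - 1))
        sin_a = math.sin(angle)
        if abs(sin_a) < 1e-9:          # beam parallel to the track edge
            readings.append(max_range)
        else:
            readings.append(min(lane_half_width / abs(sin_a), max_range))
    return readings
```

The result is a fixed-length vector of scalars, which is exactly the 1D input shape that, as noted above, suits a 1D CNN or a dense network.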
A similar input structure - though not modeling a Lidar, since there is no reflection - is provided by TORCS and used in \cite{Kaushik2018OvertakingLearning}, where the lane markings are represented by imagined beam sensors. The agent in the cited example uses readings from 19 sensors with a 200m range, positioned at every $10^\circ$ on the front half of the car, returning the distance to the track edge. Grid-based path planning methods, like A* or various SLAM (Simultaneous Localization and Mapping) algorithms, exist and are widely used in the area of mobile robot navigation, where the environment is represented as a spatial map \cite{Elfes1989UsingNavigation}, usually formulated as a 2D matrix assigning to each 2D location in a surface grid one of three possible values: occupied, free, and unknown \cite{Thrun2006Stanley:Challenge}. This approach can also be used for representing probabilistic maneuvers of surrounding vehicles \cite{Deo2018Multi-ModalLSTMs}, or, by generating a spatiotemporal map from a predicted sequence of movements, motion planning in a dynamic environment can also be achieved \cite{Hegedus2019Graph-basedVehicles}. Though the previously cited examples did not use RL techniques, they prove that the grid representation holds high potential in this field. Navigation in a static environment, using a grid map together with the position and yaw of the vehicle as the observation of an RL agent, is presented in \cite{Folkers2019ControllingLearning} (see Fig. \ref{fig:occupancygrid}). Grid maps are also unstructured data, and their complexity is similar to that of semantically segmented images, since the cells store class information in both cases; hence their optimal handling also uses the CNN architecture.
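A minimal ego-centered occupancy-grid builder illustrates the representation; for brevity it uses only two of the three values (occupied/free), and the cell size and grid extent are arbitrary example choices:

```python
import math

def occupancy_grid(obstacles, size=20, cell=1.0, ego=(0.0, 0.0)):
    """Ego-centered occupancy grid: 0 = free, 1 = occupied.
    `obstacles` is a list of (x, y) world positions; the grid covers
    size*cell meters around the ego vehicle, one cell per `cell` meters."""
    half = size // 2
    grid = [[0] * size for _ in range(size)]
    for ox, oy in obstacles:
        # floor() keeps the mapping consistent for negative offsets, too
        col = math.floor((ox - ego[0]) / cell) + half
        row = math.floor((oy - ego[1]) / cell) + half
        if 0 <= row < size and 0 <= col < size:
            grid[row][col] = 1
    return grid
```

Obstacles outside the covered area are simply dropped, which mirrors the limited spatial extent of the grid observations discussed above.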
\begin{figure}[htbp] \begin{subfigure}{0.31\linewidth} \includegraphics[width=\linewidth]{Figobservation/Folkers2019_grid.png} \caption{Sensors} \end{subfigure} \hspace*{\fill} \begin{subfigure}{0.31\linewidth} \includegraphics[width=\linewidth]{Figobservation/Folkers2019_gridb.png} \caption{Target state $z^t$} \end{subfigure} \hspace*{\fill} \begin{subfigure}{0.31\linewidth} \includegraphics[width=\linewidth]{Figobservation/Folkers2019_gridc.png} \caption{Perception Ø} \end{subfigure} \caption{The surrounding from the perspective of the vehicle can be described by a coarse perception map where the target is represented by a red dot (c) (source: \cite{Folkers2019ControllingLearning})} \label{fig:occupancygrid} \end{figure} Representing moving objects, i.e., surrounding vehicles, in a grid needs not only occupancy but also other information; hence the spatial grid's cells need to hold additional information. In \cite{Bai2019DeepTraffic} the authors used an equidistant grid, where the ego-vehicle is placed in the center, and the cells occupied by other vehicles represent the longitudinal velocity of the corresponding car (see Fig. \ref{Fig:trafficgrid}). The same approach can also be found in \cite{Ronecker2019DeepDriving}. Naturally, this simple representation can not provide information about the lateral movement of the other traffic participants, though it gives significantly more than the simple occupancy-based ones.
\begin{figure}[htbp] \centering \begin{subfigure}{0.9\linewidth} \includegraphics[width=\linewidth]{Figobservation/Bai2019_grida.png} \caption{Mathematical model for the traffic} \end{subfigure} \hspace*{\fill} \begin{subfigure}{0.9\linewidth} \includegraphics[width=\linewidth]{Figobservation/Bai2019_gridb.png} \caption{Visualization of the Hyper Grid Matrix} \end{subfigure} \hspace*{\fill} \caption{The visualization of the HDM mapping process (source:\cite{Bai2019DeepTraffic})} \label{Fig:trafficgrid} \end{figure} Equidistant grids are a logical choice for generic environments, where the moving directions of the mobile robot are free; in the case of road vehicles, however, the vehicle mainly follows the direction of the traffic flow. In this case, the spatial representation can be fixed to the road topology, namely the lanes of the road, regardless of its curvature or width. In these lane-based grid solutions, the grid representing the highway has as many rows as the actual lane count, and the lanes are discretized longitudinally. The simplest utilization of this approach can be found in \cite{You2019AdvancedLearning}, where the length of the cells is equivalent to the unit vehicle length, and the behavior of the traffic acts similarly to the classic cellular automata-based microscopic models \cite{Esser1997MicroscopicAutomata}. This representation, similarly to the equidistant ones, can be used for occupancy, though it still does not hold any information on vehicle dynamics. The solution in \cite{Wang2019CooperativeLearning} is to feed multiple consecutive traffic snapshots into the underlying CNN structure, which inherently extracts the velocity of the moving objects. Representing speed in grid cells is also possible in this setup; an example can be found in \cite{Wang2019LaneConstraints}, where the authors converted the traffic extracted from the Udacity simulator to the lane-based grid.
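A minimal sketch of such a lane-based grid follows; the cell length, grid extent, and the speed-in-cell encoding are illustrative assumptions rather than the parameters of any cited paper:

```python
def lane_based_grid(ego_s, vehicles, n_lanes=3, cell_len=5.0, n_cells=21):
    """Illustrative lane-based grid: one row per lane, with cells discretizing
    the longitudinal direction around the ego vehicle. An occupied cell stores
    the other vehicle's longitudinal speed; 0.0 marks a free cell."""
    grid = [[0.0] * n_cells for _ in range(n_lanes)]
    half = n_cells // 2                      # ego vehicle sits in the middle column
    for lane, s, v in vehicles:              # (lane index, longitudinal pos, speed)
        cell = half + int(round((s - ego_s) / cell_len))
        if 0 <= lane < n_lanes and 0 <= cell < n_cells:
            grid[lane][cell] = v
    return grid
```

The resulting $n_{lanes} \times n_{cells}$ matrix keeps the representation independent of road curvature, since positions are measured along the lane rather than in Cartesian space.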
Besides the position and the longitudinal speed of the surrounding vehicles, which are essential for decision making, other features (such as heading, acceleration, and lateral speed) should also be considered. Multi-layer grid maps, with one layer for each vital parameter, could be used to overcome this issue. In \cite{Saxena2019DrivingLearning} the authors processed the simulator state to calculate an observation tensor of size $4 \times 3 \times (2 \times \mathrm{FoV} + 1)$, where FoV stands for Field of View and represents the maximum distance of the observation in cell count. There is one channel (first dimension) each for on-road occupancy, relative velocities of vehicles, relative lateral displacements, and relative headings to the ego-vehicle. Fig.\ref{Fig:lanebasedgrid} shows an example of the simulator state and corresponding input observation used for their network. \begin{figure}[thpb] \centering \includegraphics[width=\linewidth]{Figobservation/Saxena2019Lanebasedgrid.png} \caption{ The simulator state (top, zoomed in) gets converted to a $4 \times 3 \times (2 \times \mathrm{FoV} + 1)$ input observation tensor (bottom) (source:\cite{Saxena2019DrivingLearning})} \label{Fig:lanebasedgrid} \end{figure} The previous observation models (image, Lidar, or grid-based) share some common properties: all of them are unstructured datasets and need a CNN architecture to process, which complicates the learning process, since the agent simultaneously needs to extract the relevant features and form the policy for action. An obvious alternative is to pre-process the unstructured data and feed structured information to the agent's network. Structured data refers to any data that resides in a fixed field within a record or file. For example, when navigating in traffic, the parameters of a given surrounding vehicle are, depending on the task, always represented at the same element of the input.
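Before moving to structured inputs, the multi-layer tensor described above can be sketched as stacked lane-based channels; the field names and the exact per-channel encoding below are assumptions for illustration:

```python
def multilayer_observation(ego, vehicles, fov=10, n_lanes=3):
    """Sketch of a 4 x n_lanes x (2*fov+1) observation tensor with channels
    for occupancy, relative velocity, relative lateral displacement, and
    relative heading of the surrounding vehicles (encoding is assumed)."""
    length = 2 * fov + 1
    obs = [[[0.0] * length for _ in range(n_lanes)] for _ in range(4)]
    for v in vehicles:                       # each v: lane, cell offset, vel, lat, head
        cell = fov + v["offset"]             # longitudinal offset in cells from the ego
        if 0 <= v["lane"] < n_lanes and 0 <= cell < length:
            obs[0][v["lane"]][cell] = 1.0                      # occupancy
            obs[1][v["lane"]][cell] = v["vel"] - ego["vel"]    # relative speed
            obs[2][v["lane"]][cell] = v["lat"] - ego["lat"]    # lateral displacement
            obs[3][v["lane"]][cell] = v["head"] - ego["head"]  # relative heading
    return obs
```

Stacking the features as channels lets a CNN treat them the same way it treats the color channels of an image.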
In the simplest scenario of car following, the agent only focuses on the leading vehicle, and the input beside the state of the ego vehicle consists of $(d,v)$ as in \cite{Zhu2018Human-likeLearning} or $(d,v,a)$ as in \cite{Shi2019DrivingLearning}, where these parameters are the headway distance, speed, and acceleration of the leading vehicle. Contrary to the unstructured data, these approaches significantly reduce the size of the input and can be handled with simple DNN structures, which profoundly affects the convergence of the agent's performance. For navigating in traffic, i.e., performing merging or lane changing maneuvers, not only the leading vehicle's but also the other surrounding vehicles' states need to be considered. In a merging scenario, the most crucial information is the relative longitudinal position and speed $2 \times (dx, dv)$ of the two vehicles bounding the target gap, as used by \cite{Wang2017FormulationMerge}. Naturally, this is the absolute minimal representation of such a problem, and more sophisticated representations may be developed in the future. In highway maneuvering situations, both ego-lane and neighboring-lane vehicles need to be considered; in \cite{Nageshrao2019AutonomousLearning} the above-mentioned scheme is used as a $6 \times (dx, dv)$ scalar vector for the front and rear vehicles in the three interesting lanes. In \cite{Becsi2018HighwayLearning} the authors extended this information with the occupancy of the neighboring lanes right at the side of the ego-vehicle (see Fig. \ref{Fig:aradihighway}). The same approach can be seen in \cite{Alizadeh2019AutomatedEnvironment}, though extending the number of traced objects to nine.
These researches lack lateral information, though in \cite{Nageshrao2019AutonomousLearning} the lateral positions and speeds are also involved in the input vector, resulting in a $6 \times (dx, dy, dv_x, dv_y)$ structure, logically representing the longitudinal and lateral distances and speed differences to the ego, respectively. \begin{figure}[htbp] \centering \includegraphics[width=0.6\linewidth]{Figobservation/aradi_sensormodel.png} \caption{Environment state on the highway \cite{Becsi2018HighwayLearning}} \label{Fig:aradihighway} \end{figure} In the special case of handling an unsignalized intersection, the authors of \cite{Bouton2019ReinforcementDriving} also used this formulation scheme, where the other vehicles' Cartesian coordinates, speed, and heading were considered. \section{Scenario-based Classification of the Approaches} Though this survey focuses on Deep Reinforcement Learning based motion planning research, it is essential to mention that some papers try to solve subtasks of automated driving through classic reinforcement techniques. One problem with these classic methods is that they can not handle unstructured data, such as images or mid-level radar or lidar sensing. The other problem comes from the need to maintain the Q-table for all $(\mathcal{S},\mathcal{A})$ state-action pairs. This results in a space complexity explosion, since the size of the table equals the product of the sizes of all classes in both state and action. As an example, consider the Q-learning approach of \cite{Loiacono2010LearningLearning}. The authors trained an agent in TORCS, which tries to achieve a policy for the best overtaking maneuver by taking advantage of the aerodynamic drag. There are only two participants in the scenario, the overtaking vehicle and the vehicle in front, on a long straight track. The state representation contains the longitudinal and lateral distance of the two vehicles, and also the lateral position of the ego-vehicle and the speed difference of the two.
\begin{table}[htbp] \centering \caption{State representation discretization in \cite{Loiacono2010LearningLearning}} \label{tab:torcsclassic} \begin{tabular}{l l l} \hline Name & Size & Class bounds \\ \hline $dist_y[m]$ & 6 & \begin{tabular}[c]{@{}l@{}}\{0, 10, 20 ,30, 50, 100, 200\}\end{tabular} \\ $dist_x[m]$ & 10 & \begin{tabular}[c]{@{}l@{}}\{-25, -15, -5, -3 , -1, 0, 1, 3, 5, 15, 25\}\end{tabular} \\ $pos[m]$ & 8 & \begin{tabular}[c]{@{}l@{}}\{-10, -5, -2, -1, 0, 1, 2, 5, 10\}\end{tabular} \\ $\Delta speed[km/h]$ & 9 & \begin{tabular}[c]{@{}l@{}}\{-300, 0, 30, 60, 90, 120,\\ 150, 200, 250, 300\}\end{tabular} \\ \hline \end{tabular} \end{table} The authors discretized this state space to classes of size $(6, 10, 8, 9)$, respectively (see table \ref{tab:torcsclassic}), and used the minimal lateral action set size of 3, where the actions are sweeping $1m$ to the left or right and maintaining lateral position. Together, this problem generates a Q-table with $6 \times 10 \times 8 \times 9 \times 3 = 12960$ elements. Though a table of this size can be easily handled nowadays, it is easy to imagine that with more complex problems involving more vehicles, more sensors, complex dynamics, and denser state and action representations, the table can grow to an enormous size. A possible reduction is the utilization of the Multiple-Goal Reinforcement Learning Method, dividing the overall problem into sub-tasks, as can be seen in \cite{Ngai2007AutomatedFramework} for the overtaking maneuver. In later research, the authors widened the problem and separated the driving problem into the tasks of collision avoidance, target seeking, lane following, lane choice, speed keeping, and steady steering \cite{Ngai2011AManeuvers}. To reduce the problem size, the authors of \cite{Desjardins2011CooperativeApproach} used strategic-level decisions to set movement targets for the vehicles concerning the surrounding ones, and left the low-level control to classic solutions, which significantly reduced the action space.
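A minimal sketch shows how the discretization of table \ref{tab:torcsclassic} yields the $12960$-element Q-table; the interior class bounds serve as bin edges, and the exact binning convention at the boundaries is an assumption:

```python
import bisect

# Interior class bounds from the table above: binning a value by the
# interval it falls into yields class counts of (6, 10, 8, 9).
BOUNDS = {
    "dist_y": [10, 20, 30, 50, 100],
    "dist_x": [-15, -5, -3, -1, 0, 1, 3, 5, 15],
    "pos":    [-5, -2, -1, 0, 1, 2, 5],
    "dspeed": [0, 30, 60, 90, 120, 150, 200, 250],
}

def discretize(dist_y, dist_x, pos, dspeed):
    """Map a continuous state to its discrete class indices."""
    return (bisect.bisect(BOUNDS["dist_y"], dist_y),
            bisect.bisect(BOUNDS["dist_x"], dist_x),
            bisect.bisect(BOUNDS["pos"], pos),
            bisect.bisect(BOUNDS["dspeed"], dspeed))

N_ACTIONS = 3                    # sweep left, keep position, sweep right
N_STATES = 6 * 10 * 8 * 9        # product of the class counts
q_table = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
```

Every additional state variable or finer binning multiplies `N_STATES`, which is exactly the space-complexity explosion discussed above.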
Another interesting example of classic Q-learning is described in \cite{Gomez2012OptimalVehicles}, where the authors designed an agent for the path planning problem of a ground vehicle with Ackermann steering considering obstacles, using $(v,x,y,\theta)$ (speed, positions, and heading) as the state representation, and used reinforcement learning as an optimizer (see Fig. \ref{Fig:gomez}). \begin{figure}[htbp] \centering \includegraphics[width=0.8\linewidth]{Figures/Gomez_pathplanning.png} \caption{Path planning results from \cite{Gomez2012OptimalVehicles}} \label{Fig:gomez} \end{figure} Though one would expect that machine learning could give an overall end-to-end solution to automated driving, the study of the recent literature shows that Reinforcement Learning research can give answers to certain sub-tasks of this problem. The papers of recent years can be organized around these problems, where a well-dedicated situation or scenario is chosen and examined whether a self-learning agent can solve it. These problem statements vary in complexity. As mentioned earlier, the complexity of reinforcement learning, and thus the training time, is greatly influenced by the complexity of the problem chosen, the nature of the action space, and the timeliness and proper formulation of rewards. The simplest problems, such as lane-keeping or vehicle following, can generally be traced back to simple convex optimization or control problems. In these cases, however, secondary control goals, such as passenger comfort, are easier to articulate. At the other end of the imagined complexity scale there are problems, like maneuvering in dense traffic, where the efficient fulfillment of the task is hard to formulate, and the agent needs predictive "thinking" to achieve its goals. In the following, these approaches are presented.
\subsection{Car following} Car following is the simplest task in this survey, where the problem is formulated as follows: there are two participants of the simulation, a leading and a following vehicle, both keeping their lateral positions in a lane, while the following vehicle adjusts its longitudinal speed to keep a safe following distance. The observation space consists of the $(v, dv, ds)$ tuple, representing the agent's speed, the speed difference to the lead, and the headway distance. The action is the acceleration command. Reward systems naturally use the collision of the two vehicles as a failure, while the performance of the agent is based on jerk, TTC (time to collision) \cite{Zhu2019SafeDriving}, or passenger comfort \cite{YeAutomatedEnvironment}. Another approach is shown in \cite{Zhu2018Human-likeLearning}, where the performance of the car following agent is evaluated against real-world measurements to achieve human-like behavior. \subsection{Lane keeping} Lane-keeping or trajectory following is still a simple control task, but contrary to car following, this problem focuses on lateral control. The observation space in these studies uses two different approaches: one is the "ground truth" lateral position and angle of the vehicle in the lane \cite{Sallab2016End-to-EndAssist, Lee2017AutonomousQ-learning, Ma2018ImprovedLearning}, while the second is the image of a front-facing camera \cite{Wolf2017LearningQ-Networks, Xu2018AutonomousTranslation, Li2019ReinforcementNotes}. Naturally, for image-based control, the agents use external simulators, TORCS and GAZEBO/ROS in these cases. Reward systems almost always consider the distance from the center-line of the lane as an immediate reward. It is important to mention that these agents hardly consider vehicle dynamics and, surprisingly, do not focus on joint longitudinal control. \subsection{Merging} The ramp merge problem deals with the on-ramp highway scenario (see Fig.
\ref{Fig:merge}), where the ego vehicle needs to find the acceptable gap between two vehicles to get on the highway. In the simplest approach, it is sufficient to learn the longitudinal control, with which the agent reaches this position, as can be seen in \cite{Wang2018AutonomousSpace, Wolf2018AdaptiveStates, Bouton2019Cooperation-AwareTraffic}. Other papers, like \cite{Wang2017FormulationMerge}, use full steering and acceleration control. In \cite{Wolf2018AdaptiveStates}, the actions control the longitudinal movement of the vehicle (accelerate and decelerate), and while executing these actions, the ego vehicle keeps its lane. The actions "lane change left" and "lane change right" imply lateral movement. Only a single action is executed at a time, and actions are executed in their entirety; the vehicle is not able to prematurely abort an action. \begin{figure}[] \centering \includegraphics[width=\linewidth]{Figures/rampmerge_wang_formulation.png} \caption{Ramp merge: (a) simulated scenario and (b) real-world location (source: \cite{Wang2017FormulationMerge})} \label{Fig:merge} \end{figure} An exciting addition can be examined in \cite{Bouton2019Cooperation-AwareTraffic}, where the surrounding vehicles act differently, as there are cooperative and non-cooperative drivers among them. The authors trained their agents with the knowledge about cooperative behavior, and also compared the results with three differently built MCTS planners. Full-information MCTS naturally outperforms RL, though it is computationally expensive. The authors used a curriculum learning approach to train the agent by gradually increasing the traffic density. As they stated: "When training an RL agent in dense traffic directly, the policy converged to a suboptimal solution which is to stay still in the merge lane and does not leverage the cooperativeness of other drivers. Such a policy avoids collisions but fails at achieving the maneuver."
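The quoted curriculum strategy, gradually increasing traffic density as the agent improves, can be sketched as a simple schedule; the threshold, step size, and density bounds below are hypothetical values, not those of the cited paper:

```python
def curriculum_density(success_rate, density, d_min=0.1, d_max=1.0,
                       step=0.1, threshold=0.8):
    """Raise the simulated traffic density once the agent's recent success
    rate passes a threshold (all numeric values are illustrative)."""
    if success_rate >= threshold and density < d_max:
        return min(d_max, density + step)
    return max(d_min, density)
```

Calling this between training batches lets the agent master sparse traffic before facing the dense case that otherwise led to the degenerate stay-still policy.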
The most detailed description of this problem is given by \cite{Wang2017FormulationMerge}, where "the driving environment is trained as an LSTM architecture to incorporate the influence of historical and interactive driving behaviors on the action selection. The Deep Q-learning process takes the internal state from LSTM as the input to the Q-function approximator, using it for the action selection based on more past information. The Q-network parameters are updated with an experience replay, and a second target Q-network is used to relieve the problems of local optima and instability." With this approach, the researchers try to combine the possibilities of behavior prediction and learning, simultaneously achieving better performance. \subsection{Driving in traffic} The most complicated scenarios examined in the recent papers are those where the autonomous agent drives in traffic. Naturally, this task is also scalable by the topology of the network, the amount and behavior of the surrounding vehicles, the application of traffic rules, and many other properties. Therefore almost all of the current solutions deal with highway driving, where the scenario lacks intersections and pedestrians, and the traffic flows in one direction in all lanes. Sub-tasks of this scenario were examined in the previous sections, such as lane-keeping or car following. In the following, two types of highway driving will be presented. First, the hierarchical approaches are outlined, where the agents act on the behavioral layer, making decisions about lane changing or overtaking, and perform these actions with an underlying controller using classic control approaches. Secondly, end-to-end solutions are presented, where the agents directly control the vehicle by steering and acceleration. As the problem gets more complicated, it is important to mention that an agent trained this way would only be able to solve the types of situations that it is exposed to in the simulations.
It is, therefore, crucial that the design of the simulated traffic environment covers the intended case \cite{Hoel2018AutomatedLearning}. Making decisions on the behavioral layer consists of at least three discrete actions: keeping the current lane, changing to the left, and changing to the right, as can be seen in \cite{Alizadeh2019AutomatedEnvironment}. In this paper, the authors used the ground truth information about the ego vehicle's speed and lane position, and the relative position and speed of the eight surrounding vehicles as the observation space. They trained and tested the agents in three categories of observation noise: noise-free, mid-level noise (5\%), and high-level noise (15\%), and showed that the training environments with higher noise resulted in more robust and reliable performance, also outperforming the rule-based MOBIL model, by using DQN with a DNN of $\{64, 128, 128, 64\}$ hidden layers with $\tanh$ activation. In a quite similar environment and observation space, \cite{Hoel2018AutomatedLearning} used a widened set of actions, extending the lane changing decisions with acceleration commands, resulting in six different actions, as can be seen in table \ref{tab:hoel}. They also achieved the result that the DQN agent, using two convolutional and one dense layer, performed on par with, or better than, a reference model based on the IDM \cite{Treiber2000CongestedSimulations} and MOBIL \cite{Kesting2007GeneralModels} models. In another publication from the same author \cite{Hoel2019CombiningDriving}, the action space is changed slightly by replacing the acceleration commands with increasing and decreasing the ACC set-points, letting the underlying controller perform these actions.
\begin{table}[htb] \centering \caption{Action space in \cite{Hoel2018AutomatedLearning}} \label{tab:hoel} \begin{tabular}{l l} \hline $a_1$ & Stay in current lane, keep current speed \\ $a_2$ & Stay in current lane, accelerate with $-2 m/s^2$ \\ $a_3$ & Stay in current lane, accelerate with $-9 m/s^2$ \\ $a_4$ & Stay in current lane, accelerate with $2 m/s^2$ \\ $a_5$ & Change lanes to the left, keep current speed \\ $a_6$ & Change lanes to the right, keep current speed \\ \hline \end{tabular} \end{table} In \cite{Shi2019DrivingLearning}, a two-lane scenario is considered to distribute the hierarchical decisions further. First, a DQN makes a binary decision about "to or not to change lane", and afterward, another Q-network is responsible for the longitudinal acceleration, based on the previous decision. Hence the second layer, integrated with classic control modules (e.g., Pure Pursuit Control), outputs appropriate control actions for adjusting the vehicle's position. In \cite{Xu2018AHighways}, the above-mentioned two-lane scenario is considered, though the authors used an actor-critic-like learning agent. An interesting question in automated driving is the cooperative behavior of the trained agent. In \cite{Wang2019CooperativeLearning} the authors considered a three-lane highway with a lane-based grid representation as the observation space and a simple tuple of four actions \{left, right, speedup, none\}, and used the reward function to achieve cooperative and non-cooperative behaviors. Not only the classic performance indicators of the ego vehicle are considered in the reward function, but also the speed of the surrounding traffic, which is naturally affected by the behavior of the agent. The underlying network uses two convolutional layers with 16 filters of patch size (2,2) and ReLU activation, and two dense layers with 500 neurons each.
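For scale, the parameter count of such a network can be sketched as follows; the input grid size, the 'valid' (no-padding) convolutions, and a four-action output head are assumptions not stated in the cited paper:

```python
def conv2d_out(h, w, k):
    """Output size of a 'valid' (no-padding, stride-1) k x k convolution."""
    return h - k + 1, w - k + 1

def count_params(h=3, w=16, actions=4):
    """Weights and biases of the architecture described above, assuming a
    single-channel h x w lane-based grid input (h, w, and the action count
    are hypothetical)."""
    p = 0
    ch = 1
    for _ in range(2):                   # two conv layers, 16 filters of 2x2
        p += ch * 16 * 2 * 2 + 16        # kernel weights plus biases
        h, w = conv2d_out(h, w, 2)
        ch = 16
    feat = ch * h * w                    # flattened feature size
    p += feat * 500 + 500                # first dense layer, 500 neurons
    p += 500 * 500 + 500                 # second dense layer, 500 neurons
    p += 500 * actions + actions         # action-value output head
    return p
```

Even for this small grid input, the two 500-neuron dense layers dominate the parameter count, which illustrates why grid observations are kept coarse.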
To evaluate the effects of the cooperative behavior, the authors collected traffic data by virtual loops in the simulation and visualized the performance of the resulting traffic in the classic flow-density diagram (see Fig. \ref{Fig:Wang2019CooperativeLearning}). It is shown that the cooperative behavior results in higher traffic flow, hence better highway capacity and lower overall travel time. \begin{figure}[] \centering \includegraphics[width=\linewidth]{Figures/wang2019CooperativeLearning.png} \caption{Flow-density relations detected by the virtual loops under different strategies (source:\cite{Wang2019CooperativeLearning})} \label{Fig:Wang2019CooperativeLearning} \end{figure} End-to-end solutions can still be differentiated by the realism of their models. For example, in \cite{Bai2019DeepTraffic}, instead of using the nonholonomic Ackermann steering geometry, the authors use a holonomic robot model for the action space, which greatly reduces the complexity of the control problem. Their actions are Acceleration, Deceleration, Change lane to the left, Change lane to the right, and Take no action, where the first two apply maximal acceleration and deceleration, while the two lane-changing actions simply use constant-speed lateral movements. They use Dueling DQN and prioritized experience replay with a grid-based observation model. A similar control method and nonholonomic kinematics are used in \cite{Nageshrao2019AutonomousLearning}. The importance of this research is that it considers safety aspects during the learning process. By using an MPC-like safety check, the agent avoids actions that lead to a collision, which makes the training faster and more robust. Using nonholonomic kinematics needs acceleration and steering commands. In \cite{Becsi2018HighwayLearning, Aradi2018PolicyDriving}, the authors used a continuous observation space of the structured information of the surrounding vehicles and a policy-gradient RL structure to achieve end-to-end driving.
Since the utilized method has a discrete action-space, the steering and acceleration commands needed to be quantized. The complexity of driving in traffic with an end-to-end solution can be well examined by the number of training episodes needed by the agent. While in simple lane-keeping scenarios the agents finished the task in a few hundred episodes, the agents used for these problems needed 300,000. \section{Future Challenges} The recent achievements in the field showed that different deep reinforcement learning techniques can be effectively used on different levels of autonomous vehicles' motion planning problems, though many questions remain unanswered. The main advantage of these methods is that they can handle unstructured data such as raw or slightly pre-processed radar or camera-based image information. However, using neural networks and deep learning techniques as universal function approximators in automotive systems poses several questions. As stated in \cite{Falcini2017DeepSoftware}, function development for automotive applications realized in electronic control units (ECUs) is subject to proprietary OEM norms and several international standards, such as Automotive SPICE (Software Process Improvement and Capability Determination) \cite{2015AutomotiveModel} and ISO 26262 \cite{2011ISOVocabulary}. However, these standards are still far from addressing deep learning with dedicated statements, since verification and validation are not solved issues in this domain. Some papers deal with these issues by using an underlying safety layer, which verifies the safety of a planned trajectory before the vehicle control system executes it. However, full functional safety coverage can not be guaranteed this way in complex scenarios. One of the main benefits of using deep neural networks trained by a reinforcement learning agent in motion planning is the relatively low computational requirement of the trained network.
Gaining this property, however, needs a vast amount of trials in the learning phase. As mentioned before, for simple convex optimization problems, the convergence of the process is fast, but for complex scenarios, the training can quickly reach millions of steps, meaning that evaluating one setup of hyper-parameters or one reward hypothesis can last hours or even days. Since complicated reinforcement learning tasks need continuous iteration on the environment design, network structure, reward scheme, or even the used algorithm itself, designing such a system is a time-consuming project. Besides the appropriate result analysis and inference, the evaluation time highly depends on the computational capacities assigned. On this basis, it is not a surprise that most papers nowadays deal with minor subtasks of the motion planning problem, and the most complex scenarios, such as navigating in urban traffic, can not be found in the literature. By examining the observation element of the recent articles, it can be stated that most studies ignore complex sensor models. Some papers use "ground truth" environment representations or "ideal" sensor models, and only a few articles utilize sensor noise. On the one hand, transferring the knowledge acquired from ideal observations to real-world applications poses several feasibility questions \cite{Szalay2018DevelopmentConsiderations}; on the other hand, using noisy or erroneous models could actually lead to more robust agents, as stated in \cite{Alizadeh2019AutomatedEnvironment}. The same applies to the environment, which can be examined best amongst the group of highway learners, where the road topology is almost always fixed, and the surrounding vehicles' behavior is limited. Validation of these agents is usually performed in the same environment setup, which contradicts the basic techniques of machine learning, where the training and validation scenarios should differ in some aspects.
As a reinforcement learning agent can generally act well only in situations close to those it has experienced, it is crucial to focus on developing more realistic and diverse environments, including better modeling of the interacting traffic participants, to achieve agents that are easily transferable to real-world applications. This also applies to vehicle dynamics, where more diverse and more realistic modeling would be needed. Naturally, these improvements increase the numerical complexity of the environment model, which is one of the main issues in these applications. Tending towards mixed or hierarchical system designs, combining classic control approaches and deep RL, could be a solution to this problem in the future. Also, the use of extended learning techniques, such as curriculum learning, transfer learning, or AlphaGo-like planning agents, would profoundly affect the efficiency of these projects. Overall, it can be said that many problems need to be solved in this field, such as the detail of the environment and sensor modeling, the computational requirements, the transferability to real applications, and the robustness and validation of the agents. Because of these issues, it is hard to predict whether reinforcement learning is an appropriate tool for automotive applications. \section*{Acknowledgment} The research reported in this paper was supported by the Higher Education Excellence Program of the Ministry of Human Capacities in the frame of the Artificial Intelligence research area of Budapest University of Technology and Economics (BME FIKPMI/FM). \bibliographystyle{IEEEtran}
{ "redpajama_set_name": "RedPajamaArXiv" }
4,193
{"url":"https:\/\/puzzling.stackexchange.com\/questions\/12171\/hamlet-and-the-theatre\/12172","text":"# Hamlet and the theatre\n\nHamlet is preparing a play to find out the truth about his father's death. The theatre has $2015$ numbered seats and can contain all the members of King Claudius's court. He gives to each member a ticket with the number of their seat.\nKing Claudius (the first to sit) doesn't look at his ticket and chooses at random a seat. All the other guests arrive one by one and if their seat is available they sit according to the ticket, if it isn't they sit at random.\nOphelia is the last to enter the theatre and she occupies the only empty spot.\n\nWhat's the probability that she will sit in the seat assigned to her by her ticket?\n\n\u2022 I hope it's not a duplicate; if it is, let me know and I'll delete this. \u2013\u00a0leoll2 Apr 15 '15 at 19:10\n\u2022 But the real question is where do Rosencrantz and Gildenstern sit??? \u2013\u00a0Ian MacDonald Apr 15 '15 at 19:37\n\u2022 Doesn't matter, @IanMacDonald, they're dead. \u2013\u00a0Duncan Apr 15 '15 at 22:53\n\u2022 50\/50 - she'll either sit on it or she wont ;P \u2013\u00a0G.Rassovsky Apr 16 '15 at 9:05\n\u2022 @JoeZ. surely that's why Ophelia takes her seat last? Because she's (drumroll) ... late (ba-dum tish) \u2013\u00a0Joe Apr 17 '15 at 12:21\n\nIf the king sits in his own seat, then each guest will sit in their own seat and Ophelia will always sit in her own seat. This occurs with probability $1\/2015$.\n\nIf the king sits in Ophelia's seat, then each guest will sit in their own seat and Ophelia will end up sitting in the king's seat. 
This also occurs with probability $1\/2015$.\n\nIf the king sits in any other seat, then as soon as that guest arrives, he will choose another seat at random.\n\n\u2022 If the guest chooses the king's seat, then Ophelia will end up sitting in her own seat.\n\u2022 If the guest chooses Ophelia's seat, then Ophelia will end up sitting in the king's seat.\n\u2022 If the guest chooses another guest's seat, then as soon as the guest who was assigned that seat arrives, he will also have to choose one at random, and the cycle will continue until one of the above two scenarios happens.\n\nSince the probability is equal that each guest will choose either the king's seat or Ophelia's seat, the eventual overall probability of each event is $1\/2$. The above scenario happens with probability $2013\/2015$.\n\nSo the grand total probability that Ophelia ends up in her own seat is $1\/2015 + 0 + 1\/2 \\times 2013\/2015 = 1\/2$.\n\n\u2022 Brilliant! I knew there'd be a quick and simple way to see the answer to this one. \u2013\u00a0Rand al'Thor Apr 15 '15 at 19:25\n\nHere's a simpler way to reach the answer:\n\nIf anyone sits in the King's seat before Ophelia's seat is occupied, then the rest of the guests will all sit in their assigned seats, and Ophelia will be in her seat.\n\nIf anyone sits in the Ophelia's seat before the King's seat is occupied, then Ophelia obviously cannot be in her seat.\n\nBut none of the guests know which seat is the King's and which is Ophelia's! Therefore the two scenarios are equally likely, and Ophelia has a probability of 1\/2 of ending up in her assigned seat.\n\nYet another way to the answer;\n\nThere is exactly one guest(including the king) sitting in a seat that we do not know belongs to a guest that has already arrived.\n\nThis means that when a guest arrives he will find his seat taken with probability 1\/(Number of free seats+1). 
This continues until Ophelia arrives, so Ophelia has probability 1\/(1+1) of finding her seat taken.\n\nMy guess and an attempt at a proof:\n\nP(x) = 1 - [1\/2015 + \u03a3P(1\/(2015-x))] where x is the seat # of the person whose seat was taken. Initially there is a 1 in 2015 chance the king will take Ophelia's seat. Then there is a 1\/(2015-x) chance that the person whose seat the king took will take Ophelia's seat... Continue the process for all the people whose seats were taken, add all the probabilities up and you should get the chance her seat is taken. One minus that probability is the chance the seat is empty. The end result should be 1\/2 as stated by Joe Z.\n\n\u2022 I'm guessing this will eventually evaluate to 1\/2. \u2013\u00a0Joe Z. Apr 15 '15 at 19:26\n\u2022 The king can take his own seat! It's rare, but he can! \u2013\u00a0leoll2 Apr 15 '15 at 19:40\n\u2022 I think I fixed it. :) \u2013\u00a0Mark N Apr 15 '15 at 19:42\n\u2022 I still can read this sentence in your post. \"Assume The king will always take 1 seat that wasn't his\" The king can assume any place! 
\u2013\u00a0leoll2 Apr 15 '15 at 19:56","date":"2020-07-05 16:12:40","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.41570690274238586, \"perplexity\": 1389.310003539467}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 5, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-29\/segments\/1593655887377.70\/warc\/CC-MAIN-20200705152852-20200705182852-00214.warc.gz\"}"}
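The 1/2 answer argued in the thread above can be sanity-checked with a quick Monte Carlo run. This sketch is not from the thread; the seat count is shrunk to 50 for speed, which is harmless because the answers show the probability does not depend on the number of seats:

```python
import random

def ophelia_gets_her_seat(n):
    """One showing with n seats: the king (guest 0) sits at random; every
    later guest takes their ticketed seat if it is free, otherwise a random
    free seat. Returns True if the last guest (Ophelia) gets her own seat."""
    free = set(range(n))                      # guest i holds the ticket for seat i
    free.remove(random.choice(tuple(free)))   # the king ignores his ticket
    for guest in range(1, n - 1):
        if guest in free:
            free.remove(guest)                # own seat still available
        else:
            free.remove(random.choice(tuple(free)))
    return (n - 1) in free                    # the one remaining seat is Ophelia's

random.seed(1)
trials = 5000
p = sum(ophelia_gets_her_seat(50) for _ in range(trials)) / trials
print(p)  # hovers around 0.5, matching the argument above
```

Running this for several seat counts gives estimates clustered around 0.5 every time, in line with Joe Z.'s symmetry argument.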
{"url":"https:\/\/www.freemathhelp.com\/forum\/threads\/centroid-and-area-problem.114518\/","text":"# Centroid and Area Problem\n\n#### Turan jae\n\n##### New member\nHi, I am in Calculus II in college and I have been stuck on this problem for a long time. I would really appreciate it if someone could break down how to solve this.\n\nR is the region bounded by y = x +x, y = 0, and x = 2.\nS is the solid obtained by rotating R about x = 5.\nT is the solid obtained by rotating R about y = 0.\n\n\u2022 Provide integrals for the x and y values of the centroid of R.\n\u2022 Provide an integral for the volume of S. (I know how to do this one.)\n\u2022 Provide an integral for the surface area of the boundary surface of T.\n\n#### HallsofIvy\n\n##### Elite Member\n\"y= x+ x\" seems very peculiar as it would normally be written \"y= 2x\". I am inclined to think that the second x was supposed to be a specific number. In any case, the centroid of a plane region, R, with area A is the point $$\\displaystyle \\left(\\overline{x}, \\overline{y}\\right)$$ where $$\\displaystyle \\overline{x}= \\frac{1}{A}\\int_R x\\, dA$$ and $$\\displaystyle \\overline{y}= \\frac{1}{A}\\int_R y\\, dA$$.","date":"2019-10-22 23:48:14","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8319380283355713, \"perplexity\": 349.01591129269264}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, 
\"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-43\/segments\/1570987826436.88\/warc\/CC-MAIN-20191022232751-20191023020251-00535.warc.gz\"}"}
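If HallsofIvy's reading y = 2x is taken at face value, the centroid of the region in the thread above can be checked numerically using the standard single-variable formulas for a region between y = f(x) ≥ 0 and the x-axis. This is an illustrative sketch, not part of the thread, and the curve y = 2x is an assumption (the poster's "y = x + x" is ambiguous):

```python
def centroid_under_curve(f, a, b, n=100_000):
    """Centroid of the region bounded by y = f(x) (>= 0), y = 0, x = a, x = b,
    using the single-integral formulas
        A     = integral of f(x) dx
        x_bar = (1/A) * integral of x * f(x) dx
        y_bar = (1/A) * integral of f(x)**2 / 2 dx
    evaluated with the midpoint rule."""
    h = (b - a) / n
    A = xm = ym = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        y = f(x)
        A += y * h
        xm += x * y * h
        ym += 0.5 * y * y * h
    return xm / A, ym / A

# Assumed curve: y = 2x on [0, 2] (one reading of the thread's "y = x + x").
xbar, ybar = centroid_under_curve(lambda x: 2 * x, 0.0, 2.0)
print(round(xbar, 3), round(ybar, 3))  # prints 1.333 1.333
```

The region is the triangle with vertices (0, 0), (2, 0), (2, 4), whose centroid is the vertex average (4/3, 4/3), so the numerical result agrees with the formulas.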
{"url":"https:\/\/physics.stackexchange.com\/questions\/264725\/the-directions-of-the-frictional-force-acting-on-the-cylinder-while-ascending-an","text":"# The directions of the frictional force acting on the cylinder while ascending and descending the incline\n\nA cylinder rolls up an inclined plane, reaches some height, and then rolls down, without slipping throughout these motions. The directions of the frictional force acting on the cylinder\n\n(A) while ascending the incline\n\n(B) while descending the incline\n\nare?\n\nIn my book it is given that friction acts up the incline in both cases. I find it difficult to understand. Can someone please explain?\n\nP.S.: Explain with a diagram if possible. I have always had trouble understanding the direction of friction in rolling. Even my school teacher couldn't explain it properly. Please help me. Thanks :-)\n\nWhile descending, the cylinder is momentarily at rest at the top, so it has no initial translational motion. The force $mg\\sin\\theta$ pulls the cylinder down the incline, so the cylinder tends to slide downwards; friction opposes that tendency and acts up the incline again.","date":"2019-07-20 06:01:31","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.630351722240448, \"perplexity\": 1505.8081374515857}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, 
\"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-30\/segments\/1563195526446.61\/warc\/CC-MAIN-20190720045157-20190720071157-00526.warc.gz\"}"}
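The book's claim in the thread above ("up the incline in both cases") can be made quantitative. Assuming a uniform solid cylinder (I = mR²/2 — the question never fixes the mass distribution) rolling without slipping, Newton's second law along the slope and the torque equation about the axis yield the same static friction force, directed up the incline, for both ascent and descent. A sketch of that calculation:

```python
import math

def rolling_friction(m, theta_deg, I_over_mR2=0.5):
    """Static friction (N) on a body rolling without slipping on an incline.

    Taking 'down the slope' as positive while descending:
        m g sin(theta) - f = m a        (translation along the incline)
        f R = I * (a / R)               (rotation about the axis)
    Eliminating a gives f = m g sin(theta) * k / (1 + k), k = I / (m R^2).
    The same f, still up the slope, slows the spin during ascent, so the
    direction of friction is the same in both cases.
    """
    g = 9.81                      # assumed standard gravity, m/s^2
    k = I_over_mR2
    return m * g * math.sin(math.radians(theta_deg)) * k / (1 + k)

f = rolling_friction(m=1.0, theta_deg=30.0)
print(round(f, 3))  # prints 1.635, i.e. (1/3) * m * g * sin(theta), up the incline
```

For a uniform cylinder k = 1/2, so k/(1 + k) = 1/3; note that a frictionless body (k = 0) needs no friction at all to "roll", consistent with the formula.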
{"url":"http:\/\/marmota.app\/docs\/editing\/math\/","text":"# Show Math Formulas Using LaTeX Math\n\nYou can use LaTeX to add math formulas to your presentation.\n\nAll you have to do is add a multi-line code block with type math and write your math formula there:\n\n```math\n(F f)(y) = \\frac{1}{\\sqrt{2 \\pi}^{n}} \\int_{\\R^n}{f(x) e^{-iyx}}\\, dx\n```\n\nmarmota.app uses KaTeX to render these math formulas. Please refer to the KaTeX documentation to find out which functions and symbols are supported.","date":"2022-05-28 09:39:34","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8689172863960266, \"perplexity\": 3762.0287796251505}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": false}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-21\/segments\/1652663016373.86\/warc\/CC-MAIN-20220528093113-20220528123113-00284.warc.gz\"}"}