Q: How do you debug an app in Xcode and Swift? I am working on a keyboard app and I come from a PHP background. I was asked to see if I can get long presses working on the keyboard (when you hold down on A, for instance, all the accents should show). I have tried a few things, and now when you press A it acts as if something fires off but fails, and the keyboard switches to the default emoji board instead. The debug window in Xcode is blank. There doesn't seem to be an active console log like you would use with JavaScript, so I am completely stumped about how to figure out what's happening. If I could see that it's failing on a certain function, I would know where to start fixing problems. I also tried using breakpoints, but they were never hit.
package com.github.highcharts4gwt.generator.option.klass;

import java.io.IOException;

import com.github.highcharts4gwt.generator.option.Option;
import com.github.highcharts4gwt.generator.option.OptionTree;
import com.sun.codemodel.JClassAlreadyExistsException;

public interface OptionClassWriter {

    OptionClassWriter setPackageName(String packageName);

    void write() throws IOException, JClassAlreadyExistsException;

    OptionClassWriter setOption(Option optionSpec);

    OptionClassWriter setTree(OptionTree tree);
}
No arrangement is more beautiful... Or more complicated Only a few years after completing his Ninth Symphony, while very sick and profoundly deaf, Beethoven created his late string quartets, works of such inner struggle and profound spiritual yearning that many consider them his supreme achievement. "A Late Quartet" stars Christopher Walken, Philip Seymour Hoffman, Catherine Keener and Mark Ivanir as a celebrated New York City chamber-music ensemble about to tackle Op. 131. It's a marathon composition played without pauses between movements. Over the course of the demanding piece, the players weaken and instruments lose pitch. Physical exhaustion becomes part of the music as Beethoven challenges the performers to struggle against their limitations. In "A Late Quartet," writer/director Yaron Zilberman finds telling parallels between the piece and the fraying relationships among the creative partners of the Fugue String Quartet. (The Brentano String Quartet provides the actual music.) Walken, the group's esteemed cellist and founder, mourns his wife's death and struggles with the debilitating effect of Parkinson's disease. Hoffman and Keener, the long-married second violin and viola, face his thwarted desire to move up to first chair and her disillusion with their sputtering relationship. First violinist Ivanir, an exacting egoist, tutors their lovely, talented daughter, Imogen Poots, a relationship fraught with perilous emotional undercurrents. The Fugue is wonderfully harmonious onstage, but their real lives are struggle and discord. While the story of artistic and romantic rivalries hits familiar notes, you could hardly hope for a more accomplished group to perform it. Zilberman renders the story's upscale, intellectual setting with a pictorial precision that counterpoints the raw immediacy of the performances. Keener and Hoffman, co-stars in "Capote" and "Synecdoche, New York," are flawless together, giving us an emotional CAT scan of a troubled couple. 
It's not just their verbal skirmishes that are wounding. Their very sighs and silences inflict -- and reflect -- serious pain. The usually outré Walken works in a subtle mode here. He's affecting as a man beginning his own dance of death while hoping to preserve his legacy. Ivanir, a background figure in countless TV shows and films, steps into his key role with confidence. He refuses to soften his uncompromising character's sharp edges, giving us a man who commands respect but mistrusts love. "A Late Quartet" is the essence of a prestige film, an auspicious feature debut for a director whose sensitivity to emotional harmonies is as rewarding as his reverence for timeless, transcendent music. Courtesy: Colin Covert, Minneapolis Star Tribune. Director: Yaron Zilberman. Cast: Christopher Walken, Philip Seymour Hoffman, Catherine Keener, Imogen Poots, Mark Ivanir. Writers: Seth Grossman, Yaron Zilberman.
The Magic Eye is a 1918 American silent film directed by Rae Berger. Cast: Henry A. Barrows as John Bowman; Claire Du Brey as Bowman; Zoe Rae as Shirley Bowman; Charles Hill Mailes as Sam Bullard; William A. Carroll as Jack.
Q: Java - Thread vs Runnable

While reading about the significant difference between Thread and Runnable here, I encountered this difference: when you extend the Thread class, each of your threads creates and is associated with a unique object, whereas when you implement Runnable, the same object is shared by multiple threads. The code given is:

class ImplementsRunnable implements Runnable {
    private int counter = 0;

    public void run() {
        counter++;
        System.out.println("ImplementsRunnable : Counter : " + counter);
    }
}

class ExtendsThread extends Thread {
    private int counter = 0;

    public void run() {
        counter++;
        System.out.println("ExtendsThread : Counter : " + counter);
    }
}

public class ThreadVsRunnable {
    public static void main(String args[]) throws Exception {
        // Multiple threads share the same object.
        ImplementsRunnable rc = new ImplementsRunnable();
        Thread t1 = new Thread(rc);
        t1.start();
        Thread.sleep(1000); // Waiting for 1 second before starting next thread
        Thread t2 = new Thread(rc);
        t2.start();
        Thread.sleep(1000); // Waiting for 1 second before starting next thread
        Thread t3 = new Thread(rc);
        t3.start();

        // Creating a new instance for every thread access.
        ExtendsThread tc1 = new ExtendsThread();
        tc1.start();
        Thread.sleep(1000); // Waiting for 1 second before starting next thread
        ExtendsThread tc2 = new ExtendsThread();
        tc2.start();
        Thread.sleep(1000); // Waiting for 1 second before starting next thread
        ExtendsThread tc3 = new ExtendsThread();
        tc3.start();
    }
}

The output is something like this:

ImplementsRunnable : Counter : 1
ImplementsRunnable : Counter : 2
ImplementsRunnable : Counter : 3
ExtendsThread : Counter : 1
ExtendsThread : Counter : 1
ExtendsThread : Counter : 1

This demonstrates the difference described above.

I made a slight modification to the code: the two classes are unchanged, but in main the ExtendsThread section now reads:

// Modification done here. Only one object is shared by multiple threads here as well.
ExtendsThread extendsThread = new ExtendsThread();
Thread thread11 = new Thread(extendsThread);
thread11.start();
Thread.sleep(1000);
Thread thread12 = new Thread(extendsThread);
thread12.start();
Thread.sleep(1000);
Thread thread13 = new Thread(extendsThread);
thread13.start();
Thread.sleep(1000);

Now the output is:

ImplementsRunnable : Counter : 1
ImplementsRunnable : Counter : 2
ImplementsRunnable : Counter : 3
ExtendsThread : Counter : 1
ExtendsThread : Counter : 2
ExtendsThread : Counter : 3

I understand that here the same object (extendsThread) is shared by three threads. But I am confused about how this differs from implementing Runnable: even though ExtendsThread extends Thread, we are still able to share an object of this class among other threads. To my mind, the difference stated above does not make sense. Thanks.

A: The principal difference in implementing Runnable is that you do not 'consume' your single inheritance. Consider these class declarations:

public class HelloRunnable extends AbstractHello implements Runnable

public class HelloRunnable extends Thread

You can do more with Runnable when it comes to inheritance.

A: Here's what the javadoc states:

There are two ways to create a new thread of execution. One is to declare a class to be a subclass of Thread. This subclass should override the run method of class Thread. An instance of the subclass can then be allocated and started. For example, a thread that computes primes larger than a stated value could be written as follows: [...] The other way to create a thread is to declare a class that implements the Runnable interface. That class then implements the run method. An instance of the class can then be allocated, passed as an argument when creating Thread, and started. The same example in this other style looks like the following: [...]

So the two ways:

public class MyThread extends Thread {
    // overridden from Runnable, which Thread implements
    public void run() { ... }
}
...
MyThread thread = new MyThread();
thread.start();

Or:

public class MyRunnable implements Runnable {
    public void run() { ... }
}
...
Thread thread = new Thread(new MyRunnable());
thread.start();

Your counter field is an instance field. In your first case, each of the objects created here

ExtendsThread tc1 = new ExtendsThread();
tc1.start();
Thread.sleep(1000); // Waiting for 1 second before starting next thread
ExtendsThread tc2 = new ExtendsThread();
tc2.start();
Thread.sleep(1000); // Waiting for 1 second before starting next thread
ExtendsThread tc3 = new ExtendsThread();
tc3.start();

will have its own copy (that's how instance variables work). So when you start each thread, each one increments its own copy of the field.

In your second case, you are using your Thread subclass as the Runnable argument to the Thread constructor:

ExtendsThread extendsThread = new ExtendsThread();
Thread thread11 = new Thread(extendsThread);
thread11.start();
Thread.sleep(1000);
Thread thread12 = new Thread(extendsThread);
thread12.start();
Thread.sleep(1000);
Thread thread13 = new Thread(extendsThread);
thread13.start();
Thread.sleep(1000);

It is the same ExtendsThread object that you pass, so its counter field gets incremented by all the threads. It's pretty much equivalent to your previous usage of ImplementsRunnable.

To add from the comments: the first thing to understand is that the Thread class itself implements Runnable, so you can use a Thread instance anywhere you can use a Runnable. For example:

new Thread(new Thread()); // won't do anything, but just to demonstrate

When you create a Thread with new Thread(someRunnable) and start it, the thread calls the given Runnable instance's run() method. If that Runnable instance happens to also be an instance of Thread, so be it; that doesn't change anything. When you create a custom thread like new ExtendsThread() and start it, it calls run() on itself.

A: @BalwantChauhan: One common use of the Runnable interface follows from the fact that multiple inheritance is not possible in Java. Suppose you have a scenario where you want to extend a class and also run it as a thread. If you extend Thread, you cannot do both. For example (in the case of Java Swing), if you want to create a frame and also implement a thread in that frame class, you cannot extend both JFrame and Thread, so you extend JFrame and implement Runnable:

public class HelloFrame extends JFrame implements Runnable {
    ...
    public void run() {
        // thread code
    }
    ...
}
Q: How to import just part of __init__.py?

I have a package and I'm having some import issues. Let's say I have the following files:

main.py
Lib
├─ __init__.py
├─ file1.py
└─ file2.py

main.py:

from Lib import ClassA
foo = ClassA('anything')

Lib/__init__.py:

from .file1 import ClassA
from .file2 import ClassB

file1.py:

import a_lot_of_things

class ClassA:
    pass

file2.py:

import a_lot_of_other_things

class ClassB:
    pass

This code works, but Python will also import all the other classes, such as ClassB. The problem is that Python takes a lot of time importing all the libraries of file2.py, which I don't want to use. I know this happens because Python runs __init__.py and imports all the classes, even if I ask for just one. But I think it should do this only if I write:

from Lib import *

Is there a way, inside __init__.py, to check whether I'm importing everything or just one specific class, and run only that file/import? I also tried to structure my directory this way:

main.py
Lib
├─ __init__.py
├─ ClassA
│  ├─ __init__.py
│  └─ file1.py
└─ ClassB
   ├─ __init__.py
   └─ file2.py

So I cleaned Lib/__init__.py and put the imports into each sub-package's __init__.py.

ClassA/__init__.py:

from .file1 import ClassA

ClassB/__init__.py:

from .file2 import ClassB

But now I need to use it this way in main.py:

from Lib import ClassA
foo = ClassA.ClassA('anything')

And I'd like to use it directly, as I wrote before. Is there a way to do this?

A: If you import a module (or import something from within a module), everything that that module imports will also be imported. You can only avoid this by placing imports within a local context. For example, my_module:

import some_stuff

def do_stuff():
    pass

def do_some_specific_things():
    import some_specific_stuff_only_relevant_to_this_function
    pass

If you now call from my_module import do_stuff, then import some_stuff will be executed, but import some_specific_stuff_only_relevant_to_this_function will not.

By the way, structuring your code as follows does not make sense:

main.py
Lib
├─ __init__.py
├─ ClassA
│  ├─ __init__.py
│  └─ file1.py
└─ ClassB
   ├─ __init__.py
   └─ file2.py

A class lives inside a module (file1 and file2 are modules); a module cannot sit inside a class.
You can book your chosen lesson here with our trainers; the booking is valid for 3 months. Once you have booked, you will receive a unique booking code and contact details for the trainer, and you can then arrange the lesson directly with the trainer at a mutually convenient time. Your training will be recorded as BD training, counting towards regional team selection policies.
Q: Object instances share sub-properties?

Given the following code:

var Car = function() {};
Car.prototype = {
    wheels: {
        rims: 'steel'
    }
};

var volvo = new Car;
var mercedes = new Car;

volvo.wheels.rims = 'aluminium';
console.log(volvo.wheels.rims, mercedes.wheels.rims);
// 'aluminium', 'aluminium'

Can you explain why the instance mercedes of Car automatically inherits the sub-property definition for rims from volvo? Note that the following code works as expected with the same setup:

volvo.wheels = 4;
console.log(volvo.wheels, mercedes.wheels);
// 4, Object { rims: 'steel' }

A: You only ever create a single object for wheels. You assign this object to the prototype, so every instance inherits its value. JavaScript will never automatically copy an object. Instead, you should create the object in the constructor so that you get a new object for each instance.
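The answer's suggested fix can be sketched as follows: a minimal variant of the question's Car in which the wheels object is created per instance in the constructor instead of once on the prototype.

```javascript
var Car = function() {
  // Create a fresh wheels object for each instance, rather than
  // sharing one object through Car.prototype.
  this.wheels = { rims: 'steel' };
};

var volvo = new Car();
var mercedes = new Car();

volvo.wheels.rims = 'aluminium';
console.log(volvo.wheels.rims, mercedes.wheels.rims); // aluminium steel
```

Each call to the constructor now allocates its own wheels object, so mutating volvo.wheels no longer affects mercedes.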
The Lotus 79 was a Formula 1 car designed in late 1977 by Colin Chapman, Geoff Aldridge, Martin Ogilvie, Tony Rudd and Peter Wright of Team Lotus. History: The 79 proved almost invincible during the 1978 season, delivering an unprecedented level of domination. The car took six more victories over the course of the season, giving the Drivers' Championship to Mario Andretti and the Constructors' Championship to Team Lotus. Its only serious rivals during the season were the Ferrari 312T3, aided by the advantage its Michelin tyres gave in hot conditions, and the Brabham BT46B "fan car". The fan car raced only once, winning the 1978 Swedish Grand Prix, before Brabham voluntarily withdrew it; the Ferraris, meanwhile, won only when the Lotus failed to finish. So superior was the car that most races turned into a scrap for the minor placings, as Andretti and Ronnie Peterson regularly finished first and second, usually by a considerable margin over the rest of the field. On the rare occasions when the 79 did not win or failed, one or the other driver was usually still on the podium. Andretti became World Champion comfortably in 1978, and Peterson finished the season as runner-up, albeit posthumously: he died after a crash at the start at Monza, the race where Andretti clinched the championship. Peterson was not in the 79 for that race; he drove the previous year's 78 after a heavy practice crash and because he could not fit into Andretti's spare car. Jean-Pierre Jarier took over the second Lotus for the rest of the season and was leading both the United States and Canadian races (taking pole position at the latter) until the 79 suffered mechanical failures in both. Even so, he showed that the 79 remained competitive even with a lesser driver.
In 1979 the 79 was due to be replaced by the Lotus 80, intended to be the next step in the evolution of ground effect. Martini Racing replaced JPS as sponsor that year, so the car appeared in the British races. The 80 proved a total failure and Lotus was forced to fall back on the 79, driven by Andretti and the Argentine Carlos Reutemann. Several podium finishes were taken and the 79 was in contention for victory in the early part of the season, but the next generation of ground-effect cars took the lead, first the Ligier JS11, then the Ferrari 312T4 and finally the Williams FW07, a car heavily based on the 79. Although the car was updated with revised bodywork and a new rear wing, Lotus fell to fourth place in the Constructors' Championship and the car was retired at the end of the 1979 season without winning another race. The 79 did, however, give Nigel Mansell his first Formula 1 test, in December 1979 at Paul Ricard. Over its life the 79 took 7 victories, 10 pole positions and 121 points, and won the last Drivers' and Constructors' World Championships for Lotus. The car is credited with pushing Formula 1 into the aerodynamics era, and its influence is still strongly felt in modern cars. Formula 1 results (table not reproduced): 1 This total includes points scored with the Lotus 78. 2 This total includes points scored with the Lotus 80, used by Andretti in three races.
# The egg shell and mineral metabolism

##### Regulation of bone-renal mineral and energy metabolism ...

2012-1-1 Regulation of bone-renal mineral and energy metabolism: the PHEX, FGF23, DMP1, MEPE ASARM pathway. ... One new innovation, the egg shell, contained an ancestral protein (ovocleidin-116) that likely first appeared with the dinosaurs and was preserved through the theropod lineage in modern birds and reptiles. Ovocleidin-116 is an avian homolog of ...

##### Trace mineral metabolism in the avian embryo

2001-1-1 A second potential route of trace mineral transfer to the egg involves the oviduct and its synthesis of egg albumen, shell and the shell membrane. Estrogen in conjunction with other gonadal steroids (progesterone and testosterone) is involved in the induction of albumen protein synthesis by the tubular gland cells of the magnum portion of the ...

##### Effect of 1α-hydroxyvitamin D3 and egg-shell calcium on ...

Egg-shell calcium (Ca) is one of the effective Ca sources for bone metabolism. In the present study, we investigated whether egg-shell Ca had similar effects compared with calcium carbonate (CaCO3) when vitamin D3 (1α(OH)D3) treatment was given to an osteoporotic rat model. In both 1α(OH)D3-supplemented and -unsupplemented rats, the bone mineral density (BMD) of the lumbar spine in the ...

##### PoultryWorld - Improving shell quality and egg quantity ...

2021-7-16 Egg formation. Hibbert continues: "There are many different ways that trace minerals have an interaction with the quality of egg shells. Zinc, copper and manganese are the 3 most important trace minerals and they have both catalytic and structural properties. There are various enzymatic reactions required to make the components of the shell."

##### Short-term effects of a chicken egg shell powder enriched ...

Based on the high calcium content, chicken egg shells are an interesting source of calcium. We studied the short-term effects on bone mineral density (BMD) of the lumbar spine and hip in 9 women and one man (mean age +/- SD, 63.9 +/- 8.1 years) with osteoporosis or osteopenia. Also the effects on pa ...

##### Effect of Types of Egg Shell Calcium Salts and Egg Shell ...

This study was carried out to investigate the effect of egg shell calcium salt types and egg shell membrane on calcium metabolism in rats. Sprague-Dawley male rats, 4 weeks of age, were fed on ...

##### Dieldrin, Ca and P balance, and characteristics of the egg ...

A study was made of the Ca and P balance, Ca and P content in the femur, physical characteristics of the egg, mineral structure of the shell, and the number of eggs in quails treated with dieldrin (20 mg/kg of diet) for 48 days. The diet contained 3.24% Ca and 0.72% P.

##### The Chemistry of Eggs Egg Shells – Compound Interest

2016-3-26 As the egg shell contains thousands of pores that allow carbon dioxide gas to diffuse out of the egg, the egg white's pH rises from around 7.6 to around 9.2 after around a week of storage. The cooked albumen adheres more strongly to the inside of the shell at the lower pH, meaning fresh eggs make for a more frustrating egg ...

##### The Avian Shell Gland: A Study in Calcium Translocation ...

A segment of the avian oviduct exemplifies such a natural model, and this chapter is an effort to codify and interpret current research findings that elucidate the cellular and molecular details by which the tissues of the avian shell gland channel the movement of calcium from the blood to ultimate crystallization as an egg shell in the form of ...

##### (PDF) Effect of dietary xylanase on energy, amino acid and ...

The effect of dietary xylanase on energy, amino acid and mineral metabolism and egg ... Egg and egg shell quality analyses were performed on a total of 270 eggs ...

##### Effect of Agaricus blazei β-Glucan and Egg Shell Calcium ...

This study was designed to evaluate the effect of Agaricus blazei β-glucan and egg shell calcium complex on bone metabolism in ovariectomized (OVX) rats. Forty Sprague-Dawley female rats, 10 weeks of age (248 ± 1.7 g), were divided into 4 groups and fed on the experimental diets for 6 weeks: sham operated control treated with normal diet containing 0.5% calcium (Sham-C), OVX ... In a related study, the effect of the egg-shell Ca on bone metabolism was determined in ovariectomized rats by using the same technique to evaluate the effect of several other Ca sources ...

##### Novel Biomarkers for Calcium and Phosphorus

2018-8-9 Egg fertility and hatchability are heavily influenced by the thickness of the egg shell, the mineral calcium carbonate shell of the egg necessary for protecting the embryo growing inside. Many factors affect egg shell quality including age of the hen, diet, environmental conditions, genetic strain, stress, disease, and nutrition.

##### Egg Shell Quality III: Calcium and phosphorus

Egg shell quality. II. Effect of dietary manipulations of protein, amino acids, energy, and calcium in young hens on egg weight, shell weight, shell quality and egg production. Poultry Science 59: 2047–2054.

##### Water and Mineral Metabolism - Biology Discussion

2021-7-26 Mineral Metabolism: Living beings have organic and inorganic types of chemical constituents. The organic constituents, i.e. proteins, carbohydrates, fats etc., are made up of C, H, O and N. The inorganic constituents described as 'minerals' comprise the elements present in the body other than C, H, O and N.

##### Trace mineral metabolism in the avian embryo. Semantic ...

Trace mineral metabolism in the developing avian embryo begins with the formation of the egg and the trace mineral stores contained within it. Vitellogenin, the yolk precursor protein, serves as a trace mineral transporting protein that mediates the transfer of these essential nutrients from stores within the liver of the hen to the ovary and developing oocyte, and hence, to the yolk of the egg.

##### COMPARISON OF CHANGES IN THE BONE MINERAL

2020-8-4 Changes in the hens' bone mineral content, egg shell weight and egg shell ratio were monitored four-weekly, between 20 and 72 weeks of age. The bone mineral content of the birds was always determined in vivo by means of computer tomography (CT) at the Institute of Diagnostic Imaging and Radiation Oncology of Kaposvár University.

##### (PDF) Mineral, amino acid, and hormonal composition of ...

Mineral, amino acid, and hormonal composition of chicken eggshell powder and the evaluation of its use in human nutrition. Available via license: CC BY-NC-ND 4.0. Content may be subject to ...

##### Stress and its effect on mineral metabolism in the ...

White Leghorn hens were given supplements of Ca, P, thyroprotein, acetylsalicylic acid, ethylenediamine tetra-acetic acid or ascorbic acid individually or in different combinations. Increase of Ca from 3 to 4% or thyroprotein, but not ascorbic acid, improved quality of egg shell. With more than 0.6% P and 2.77 or 4.23% Ca, egg production and shell thickness were reduced.

##### METABOLISM AND NUTRITION: Effects of Reducing ...

Effects of Reducing Dietary Protein, Methionine, Choline, Folic Acid, and Vitamin B12 During the Late Stages of the Egg Production Cycle on Performance and Eggshell Quality. K. Keshavarz, Department of Animal Science, Cornell University, Ithaca, New York 14853.

##### Prevent Osteoporosis with Egg Shell Calcium, Vitamins D ...

Prevent Osteoporosis with Egg Shell Calcium, Vitamins D and K2, and Other Nutrients. Osteoporosis kills women and men. It's a well known fact that one out of two women older than age 50 suffers an osteoporosis-related fracture during her lifetime.

##### Quality and mineral composition of eggs from hens ...

2020-6-19 Table 2 also contains quality indicators of egg shells. In the group of laying hens fed with CHL, hens were laying eggs with a much heavier shell and larger volume, which explained the greater weight of the egg. However, the proportion of the shell in the egg mass and the shell strength recorded in this study did not change as a result of the ...

##### Critical points on egg production: causes, importance and ...

Shell defects, irregularities in shell shape, texture and surface are commonly observed during a regular egg laying cycle and the causes are varied. The incidence of downgraded eggs still represents an important source of economic loss for the egg industry due to ...
Royal Geographical Society unlocks its archives with online catalogue

One of the world's largest collections of geographical knowledge opens to the public for the first time at a new study centre at the Royal Geographical Society (with the Institute of British Geographers) in London. The resources include two million items – maps, photographs, books, artefacts and documents – that tell the story of 500 years of geographical research and exploration.

The Heritage Lottery Fund-supported project is entitled 'Unlocking the Archives'. The £7.1 million scheme gives full public access to study the Society's heritage resources for the first time in its 174-year history. A new study centre has been created as an extension to the Society's building on Exhibition Road, Kensington, London; a free catalogue of the heritage resources is on the Internet.

There are many important records from the so-called golden age of geographical exploration in the 19th and early 20th centuries. These include the South Polar Times, edited by Shackleton during Captain Scott's 1902-03 expedition to Antarctica, and Dr Livingstone's watercolour sketches of the Victoria Falls in Africa. From more recent times, there are maps used for the Normandy D-day landings 60 years ago, and the diaries of Lord Hunt, who led the first successful ascent of Everest.

The computerisation of substantial parts of the card catalogues for the first time enables users to search the heritage holdings via the Internet. More than 210,000 card records have been painstakingly transferred to an electronic catalogue so that people can discover what the Society holds in its archives.

Education for schools, universities and adult learners is a key part of the project. Maps, documents and photographs offer insights into the histories of communities who migrated to the UK from the Caribbean, Africa and Asia.

Dr Rita Gardner CBE, Director of the Royal Geographical Society (with IBG), said: "The transparency of the glass pavilion symbolises the opening up of the Society visually, intellectually and physically to all those interested in learning about our geographical heritage and its relevance to understanding and managing the modern world. I believe that everyone should have the opportunity to understand the world in which they live and the impact of their lives upon it."
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
2,884
The Journal of Historical Review is a quarterly historical journal published by the Institute for Historical Review in Torrance, California, from 1980 to 1986 and again from 1987 to 2002. Since 2002 the Institute for Historical Review has distributed its publications on its official website and by e-mail; however, already-published issues continue to be distributed and sold. The articles published in the journal mainly concern the history of Germany in the era of National Socialism, the period of the Second World War, and the Holocaust and its denial, although some articles are devoted to earlier periods of history.

External links: Contents of all issues from 1986 to 2002; Official site of the Institute for Historical Review

See also: Holocaust revisionism; Barnes Review; Holocaust denial

Categories: Print publications established in 1980; Magazines of the United States; History journals; Quarterly journals
{ "redpajama_set_name": "RedPajamaWikipedia" }
316
BRUSSELS - European NATO allies are to urge President Barack Obama to remove all remaining US nuclear weapons from European soil, as domestic pressure grows to rid the continent of outdated Cold War-era aerial bombs.

Belgium, Germany, Luxembourg, The Netherlands and Norway will call "in the coming weeks" for more than 200 American warheads, mostly stocked in Italy and Turkey, to be taken back, a spokesman for Prime Minister Yves Leterme told AFP. A joint proposal by the five NATO members will demand "that nuclear arms on European soil belonging to other NATO member states are removed," Dominique Dehaene said. Only the United States has nuclear arms stored in other NATO member states in Europe, he added. The proposal does not refer to the distinct, and more modern, British and French nuclear arsenals.

"It's a question of launching the debate at the heart of NATO," Dehaene stressed, underlining that it would form part of broader disarmament talks also focused on conventional weapons.

Former NATO chief Willy Claes and three other senior Belgian political figures urged such a call in Friday's Belgian press, citing "Obama's pledge to work to eliminate all nuclear weapons." A statement from Leterme stressed that "Belgium is in favour of a world without nuclear weapons and advocates this position at the heart of NATO," in preparation for a New York conference in May on global nuclear arms non-proliferation efforts. Leterme said an initiative would be launched under a strategic NATO rethink due to be adopted by leaders of NATO countries in Lisbon in November.

Spokesman Dehaene said that plan includes addressing what to do about some 220 aerial atomic bombs held on military bases in Belgium, Germany, Italy and Turkey. According to experts, Italy and Turkey house about 90 of these nuclear warheads each. There are about 20 each in Germany, where 130 atomic bombs were withdrawn in 2004, and Belgium. These bombs are considered outdated by military experts because they must essentially be dropped by pilots.

"The Cold War is over. It's time to adapt our nuclear policy to the new circumstances," wrote Claes, fellow former Belgian foreign minister Louis Michel and former prime ministers Jean-Luc Dehaene and Guy Verhofstadt.

It was agreed at the end of last year, after Germany sought the withdrawal of the warheads there, that all calls for the removal of these weapons be made on a NATO-wide basis, not unilaterally. Allied diplomats stressed that the removal of these arms from Europe would represent neither the end of the US nuclear deterrent on behalf of its allies nor the denuclearisation of NATO.

The call coincides with a new threat by Russia to base missiles in its western exclave Kaliningrad, which borders the European Union, amid growing controversy over a new US missile shield plan. Moscow said in September that it had scrapped plans to place short-range Iskander missiles in Kaliningrad after the United States shelved a controversial missile shield plan for central Europe. But it expressed concern after Romania said this month that it would hold talks with Washington on hosting US missile interceptors, and Bulgaria showed an interest in taking part in a US missile shield.
{ "redpajama_set_name": "RedPajamaC4" }
6,749
Old Jewish cemetery in Wyszogród. New Jewish cemetery in Wyszogród.
{ "redpajama_set_name": "RedPajamaWikipedia" }
2,458
Windows 7 and 8 have a built-in firewall. It isn't necessary to keep it running if you have a good firewall such as Bitdefender in place. WF slows down net browsing by doing its own thing in the background – that's my experience. Manually switching it off leads to it restarting when the computer is restarted. Very pesky indeed. Stopping it from restarting requires going into the Advanced Security Settings. The video below explains how to do that. See also this link. Labels: captain walker , computing , Did you know?
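For those comfortable with the command line, the same thing can be sketched from an elevated Command Prompt. This is only a sketch, not a full walkthrough: `MpsSvc` is the standard service name for Windows Firewall on Windows 7/8, but check your own system before relying on it.

```
:: Turn Windows Firewall off for all profiles
netsh advfirewall set allprofiles state off

:: Stop it coming back after a reboot by disabling the service
sc config MpsSvc start= disabled
```

Run both in a Command Prompt opened with "Run as administrator"; note that `sc` really does want a space after `start=`.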
{ "redpajama_set_name": "RedPajamaC4" }
6,711
The current GATK version is 3.7-0

# Benefits of running UnifiedGenotyper on multiple samples at the same time

Member Posts: 1

The best practice guide states to call variants across all samples simultaneously. Besides the ease of working with one multi-sample VCF, what advantages are there to calling the variants at the same time? Does GATK leverage information across all samples when making calls? If so, what assumptions is the UnifiedGenotyper making about the relationship of these samples to each other, and what are the effects on the variant calls?

thanks,
Justin
Joining us in the Queen City for the third time, the legendary pianist Garrick Ohlsson will perform Beethoven's innovative and virtuosic Fourth Piano Concerto at Beethoven's Fifth, October 5-7.

Pianist Garrick Ohlsson has established himself worldwide as a musician of magisterial interpretive and technical prowess. Although long regarded as one of the world's leading exponents of the music of Chopin, Mr. Ohlsson commands an enormous repertoire ranging over the entire piano literature, and he has come to be noted for his masterly performances of the works of Mozart, Beethoven and Schubert, as well as the Romantic repertoire. To date he has at his command more than 80 concertos, ranging from Haydn and Mozart to works of the 21st century.

A native of White Plains, N.Y., Garrick Ohlsson began his piano studies at the age of 8 at the Westchester Conservatory of Music; at 13 he entered The Juilliard School in New York City. He has been awarded first prizes in the Busoni and Montreal piano competitions, the Gold Medal at the International Chopin Competition in Warsaw (1970), the Avery Fisher Prize (1994), the University Musical Society Distinguished Artist Award in Ann Arbor, MI (1998), and the Jean Gimbel Lane Prize in Piano Performance from the Northwestern University Bienen School of Music (2014).
{ "redpajama_set_name": "RedPajamaC4" }
6,418
\section{Introduction} For a complex homogeneous form $F$ of degree $d$, the \defining{Waring rank} $r(F)$ is the least $r$ such that there exist linear forms $\ell_1,\dotsc,\ell_r$ and scalars $c_1,\dotsc,c_r$ satisfying $F = c_1 \ell_1^d + \dotsb + c_r \ell_r^d$. For example, \[ xyz = \frac{1}{24} \Big\{ (x+y+z)^3 - (x+y-z)^3 - (x-y+z)^3 - (-x+y+z)^3 \Big\} \] which shows $r(xyz) \leq 4$; and one can show in fact $r(xyz) = 4$. For extensive introductions to Waring rank, including several different proofs that $r(xyz)=4$, and including discussions of the history and applications of Waring rank, see for example \cite{MR1735271,MR2865915,Teitler:2014gf,MR2447451,Geramita,Reznick:2013uq}. By the Alexander--Hirschowitz theorem \cite{MR1311347} a general form $F$ of degree $d > 1$ in $n$ variables has rank $r(F)$ equal to \[ \left\lceil \frac{1}{n} \binom{n+d-1}{n-1} \right\rceil, \] except if $d=2$ (then $r(F) = n$) or $(n,d) = (3,4), (4,4), (5,4), (5,3)$ (then $r(F)$ is $1$ more than the above expression). This value is called the \defining{generic rank}. We denote it $r_{\mathrm{gen}}(n,d)$. It is an open question what is the maximum Waring rank of forms of degree $d$ in $n$ variables for each $(n,d)$, known only in some small cases. We write $r_{\mathrm{max}}(n,d)$ for the maximum Waring rank. Of course the maximum rank must be greater than or equal to the rank of a general form: $r_{\mathrm{max}}(n,d) \geq r_{\mathrm{gen}}(n,d)$. Several upper bounds are known, such as $r_{\mathrm{max}}(n,d) \leq 2r_{\mathrm{gen}}(n,d)$ \cite{MR3368091} (see also \cite{MR2383331}, \cite{MR3196960}, \cite{Ballico:2013sf}). For $d=2$ it is known that $r_{\mathrm{max}}(n,d) = r_{\mathrm{gen}}(n,d) = n$. For $n=2$, $d \geq 3$, it is known that $r_{\mathrm{max}}(n,d) = d > r_{\mathrm{gen}}(n,d) = \lfloor \frac{d+2}{2} \rfloor$. For larger values $n, d \geq 3$ much less is known. One might ask whether the difference between the maximum Waring rank and the generic rank is unbounded. 
But it is not even known whether this difference is positive, i.e., the maximum Waring rank is strictly greater than the generic rank. We focus on the latter question: for each $n,d \geq 3$ does there exist a form with rank strictly greater than the generic rank? The answer is known for some small cases. For plane cubics $r_{\mathrm{max}}(3,3) = 5$ and $r_{\mathrm{gen}}(3,3) = 4$, see for example \cite[\textsection96]{MR0008171}, \cite{comonmour96}, \cite[\textsection8]{Landsberg:2009yq}. For plane quartics $r_{\mathrm{max}}(3,4) = 7$ and $r_{\mathrm{gen}}(3,4) = 6$, see \cite{Kleppe:1999fk,Paris:2013fk}. For cubic surfaces $r_{\mathrm{max}}(4,3) = 7$ while $r_{\mathrm{gen}}(4,3) = 5$, see \cite[\textsection97]{MR0008171}. (See \cite{Holmes:2014hl} for the form $F = x_1 x_2^2 + x_3 x_4^2$ of degree $d=3$ in $n=4$ variables which has rank $6$.) To our knowledge, the maximum Waring rank is not known up to now for any other values of $(n,d)$. For $n=3$ and $d \geq 5$, while the maximum Waring rank is not yet known, it is known that there exist forms with strictly greater than the generic rank. The greatest Waring rank of a form in $3$ variables previously known is attained by monomials, see \cite{Carlini20125}. Explicitly, if $d$ is odd, the monomial $x y^{(d-1)/2} z^{(d-1)/2}$ has rank $r(x y^{(d-1)/2} z^{(d-1)/2}) = ((d+1)/2)^2$; if $d$ is even, the monomial $x y^{(d-2)/2} z^{d/2}$ has rank $r(x y^{(d-2)/2} z^{d/2}) = d(d+2)/4$. For $d \geq 5$ these are the greatest known ranks of forms in $3$ variables, until now. In particular, for $d \geq 5$ their ranks are strictly greater than generic ranks. See Table~\ref{table_ranks_in_3_variables}. As far as we know, these monomials in $3$ variables are the only forms in $n \geq 3$ variables known to have greater than the generic rank, except in the cases $(n,d) = (3,3), (3,4), (4,3)$ discussed above, and one more example with $(n,d) =(5,3)$, see~\cite{MR3333949}. 
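As a quick sanity check on the $d=5$ column of Table~\ref{table_ranks_in_3_variables} (a routine computation, recorded here only for the reader's convenience):
\[
r_{\mathrm{gen}}(3,5) \;=\; \left\lceil \frac{1}{3}\binom{7}{2} \right\rceil \;=\; \left\lceil \frac{21}{3} \right\rceil \;=\; 7,
\qquad
r\left(xy^{2}z^{2}\right) \;=\; \left(\frac{5+1}{2}\right)^{2} \;=\; 9 ,
\]
the first value by the Alexander--Hirschowitz theorem ($d=5$ is not an exceptional case) and the second by the monomial rank formula above.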
We give a lower bound for Waring rank and some new examples of forms whose Waring ranks are strictly greater than previously known examples. \begin{table}[htb] \textsc{Forms in $3$ variables}\\[0.25\baselineskip] \begin{tabular}{ll rrr rrrrr rr} \toprule \multicolumn{2}{l}{degree} & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\ \midrule \multicolumn{2}{l}{Generic rank} & 4 & 6 & 7 & 10 & 12 & 15 & 19 & 22 & 26 & 31 \\ \hline \multicolumn{2}{l}{Greatest rank of monomial} & 4 & 6 & 9 & 12 & 16 & 20 & 25 & 30 & 36 & 42 \\ \hline \multirow{2}{*}{Maximum rank}&lower bound&\multirow{2}{*}{5} &\multirow{2}{*}{7}&\multirow{2}{*}{10} & 12 & 17 & 20 & 26 & 30 & 37 & 42\\ &upper bound & & & & 19 & 24 & 30 & 40 & 44 & 60 & 62 \\ \bottomrule \end{tabular} \caption{Generic, maximum, and monomial ranks in $n=3$ variables. The upper bound on maximum rank is provided by \cite{Ballico:2013sf,MR3368091,MR3349108}. The lower bound on maximum rank is mostly provided by monomials (even degrees $d \ge 6$), and Theorem~\ref{thm: n=3 greater than monomial} (odd degrees $d\ge 5$).}\label{table_ranks_in_3_variables} \end{table} \begin{theorem}\label{thm: n=3 greater than monomial} Let $d \geq 3$ be odd. There exist forms of degree $d$ in $n = 3$ variables of rank strictly greater than $((d+1)/2)^2$, the maximum rank of a monomial: $r_{\mathrm{max}}(3,d) > ((d+1)/2)^2$. \end{theorem} In particular, De~Paris had previously shown that for forms of degree $d=5$ in $n=3$ variables the maximum Waring rank is either $9$ or $10$, see \cite{MR3349108}. The monomial $xy^2z^2$ has $r(xy^2z^2) = 9$, and De Paris shows the upper bound $r_{\mathrm{max}}(3,5) \leq 10$. We show that $r_{\mathrm{max}}(3,5) > 9$, i.e., there exists a form of rank $10$, so the maximum rank is $10$. Explicitly we show that $F = xyz^3 + y^4z$ has $r(F) = 10$. 
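Similarly, the $d=5$ entries of Table~\ref{table_ranks_in_4_variables} can be recomputed directly; the generic rank comes from the Alexander--Hirschowitz theorem and the monomial value from the formula of \cite{Carlini20125}:
\[
r_{\mathrm{gen}}(4,5) \;=\; \left\lceil \frac{1}{4}\binom{8}{3} \right\rceil \;=\; \left\lceil \frac{56}{4} \right\rceil \;=\; 14,
\qquad
r\left(x_{1}x_{2}x_{3}x_{4}^{2}\right) \;=\; (1+1)(1+1)(2+1) \;=\; 12 .
\]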
\begin{table}[htb] \textsc{Forms in $4$ variables}\\[0.25\baselineskip] \begin{tabular}{ll rrr rrrrr} \toprule \multicolumn{2}{l}{degree} & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \midrule \multicolumn{2}{l}{Generic rank} & 5 & 10 & 14 & 21 & 30 & 42 & 55 & 72 \\ \hline \multicolumn{2}{l}{Greatest rank of monomial} & 4 & 8 & 12 & 18 & 27 & 36 & 48 & 64 \\ \hline \multirow{2}{*}{Maximum rank}&lower bound& \multirow{2}{*}{7} &10& 15 & 21 & 31 & 42 & 56 & 72 \\ &upper bound& & 17 & 28 & 42 & 60 & 84 & 110 & 144 \\ \bottomrule \end{tabular} \caption{Generic, maximum, and monomial ranks in $n=4$ variables. The upper bound on maximum rank is provided by \cite{Ballico:2013sf,MR3368091}. The lower bound on maximum rank is provided by generic rank (even degrees), and Theorem~\ref{thm: n=4 greater than generic} (odd degrees).}\label{table_ranks_in_4_variables} \end{table} And we show the following: \begin{theorem}\label{thm: n=4 greater than generic} Let $d \geq 3$ be odd. There exist forms of degree $d$ in $n = 4$ variables of rank strictly greater than the generic rank: $r_{\mathrm{max}}(4,d) > r_{\mathrm{gen}}(4,d)$. \end{theorem} These are the first cases with $n \geq 4$, except for $(n,d) = (4,3)$ or $(5,3)$ mentioned previously. The key idea for the lower bound that we use has been observed independently by Carlini, Catalisano, Chiantini, Geramita, and Woo, and applied by them to show new cases of the Strassen Additivity Conjecture \cite{Carlini:2015sp}. \section*{Acknowledgements} We are grateful to Enrico Carlini, Luca Chiantini, and Alessandro De Paris for helpful comments. The authors would like to thank Simons Institute for the Theory of Computing and the organizers of the thematic semester ``Algorithms and Complexity in Algebraic Geometry'' for providing an excellent environment for scientific activity. 
The article is written as a part of ``Computational complexity, generalised Waring type problems and tensor decompositions'', a project within ``Canaletto'', the executive program for scientific and technological cooperation between Italy and Poland, 2013-2015. The paper is also a part of the activities of the AGATES research group. \section{Preliminaries} We work over the complex numbers $\mathbb{C}$. Fix $S = \mathbb{C}[x_1,\dotsc,x_n]$ and the dual ring $T = \mathbb{C}[\alpha_1,\dotsc,\alpha_n]$ acting on $S$ by letting each $\alpha_i$ act as $\partial/\partial x_i$; this is called the \defining{apolarity} action. We denote it by the symbol $\aa$, as in $\alpha_i \aa x_i^k = k x_i^{k-1}$. In small dimensions we may take variables $x, y, z$ and dual variables $\alpha, \beta, \gamma$. In any case, elements of $S$ are denoted by Roman letters and elements of $T$ are denoted by Greek letters. For example, $\alpha^2 \beta^3 \aa x^4 y^5 = 240 x^2 y^3$. For a vector space $V$ we denote by $\mathbb{P} V$ the projective space of lines in $V$. For a nonzero vector $v \in V$ we write $[v] \in \mathbb{P} V$ for the line in $V$ spanned by $v$. For an ideal $I\subset T$ or form $\Theta \in T$ we write $V(I)$ or $V(\Theta)$ for the affine scheme or variety in $S_1 = T_1^*$ defined by $I$ or $\Theta$. When $I$ or $\Theta$ is homogeneous we write $\mathbb{P} V(I)$ or $\mathbb{P} V(\Theta)$ for the corresponding projective scheme or variety. For $F \in S$ let $F^\perp = \{ \Theta \in T \mid \Theta \aa F = 0 \}$, the \defining{apolar} or \defining{annihilating ideal} of $F$. Recall the Apolarity Lemma, that for a scheme $Z \subset \mathbb{P}^{n-1} = \mathbb{P} S_1$ with saturated homogeneous ideal $I$, $[F]$ lies in the linear span of the Veronese image $v_d(Z)$ if and only if $I \subset F^\perp$; see for example \cite[Lemma 1.15]{MR1735271}. 
When $Z = \{[\ell_1],\dotsc,[\ell_r]\}$ is reduced, this says there are scalars $c_i$ such that $F = \sum c_i \ell_i^d$ if and only if $I \subset F^\perp$. We may replace each $\ell_i$ by $c_i^{1/d} \ell_i$ and write simply $F = \sum \ell_i^d$; so $F = \sum \ell_i^d$, up to scaling, if and only if $I = I(\{[\ell_1],\dotsc,[\ell_r]\}) \subset F^\perp$. Hence the Waring rank $r(F)$ is the least length of a reduced saturated homogeneous one-dimensional ideal $I \subset F^\perp$. A scheme $Z$ or ideal $I$ is called \defining{apolar to $F$} if $I \subset F^\perp$, equivalently if $[F]$ lies in the span of the $d$'th Veronese image of $Z$; so the Waring rank of $F$ is equal to the least length of a zero-dimensional reduced apolar scheme to $F$. A typical approach to giving lower bounds for $r(F)$ is to analyze reduced apolar schemes to $F$. This is the approach we take here. Some related notions are worth mentioning. The \defining{cactus rank} $cr(F)$, or \defining{scheme length}, of $F$ is the least length of a saturated homogeneous one-dimensional ideal $I \subset F^\perp$ (not necessarily reduced). The \defining{smoothable rank} $sr(F)$ is the least length of a smoothable zero-dimensional apolar scheme (recall that a scheme is smoothable if it lies in an irreducible family whose general member is smooth). The $r$'th \defining{secant variety} of the Veronese variety is the Zariski closure of the locus of forms of Waring rank $r$. The \defining{border rank} $br(F)$ is the least $r$ such that $[F]$ lies in the $r$'th secant variety, that is, $F$ is a limit of forms of rank $r$. Evidently $cr(F) \leq sr(F) \leq r(F)$ and $br(F) \leq r(F)$. In fact $br(F) \leq sr(F)$. All these inequalities may be strict, or may be equalities. For examples with $cr(F) < br(F)$, see for instance \cite{MR2996880}. For examples with $cr(F) > br(F)$, see \cite{MR3333949}. Let $A^F = T/F^\perp$, the \defining{apolar algebra} of $F$. 
Let $\Diff(F) = T \aa F = \{ \Theta \aa F \mid \Theta \in T \} \subset S$; note $\Diff(F) \cong A^F$ as $\mathbb{C}$-vector spaces. Recall (see for example \cite{Derksen:2014hb}) that for any $F \in S$, $\Theta \in T$ we have \begin{equation}\label{equ: colon ideal and derivation} F^\perp : \Theta = (\Theta \aa F)^\perp \end{equation} and we have the short exact sequence \[ 0 \to T/(F^\perp : \Theta) \overset{\Theta}{\longrightarrow} T/F^\perp \to T/(F^\perp + \Theta) \to 0. \] In particular $\length(T/(F^\perp + \Theta)) = \dim \Diff(F) - \dim \Diff(\Theta \aa F)$. Let $\al(F) = \length(A^F) = \dim \Diff(F)$, the \defining{apolar length} of $F$, so \[ \length(T/(F^\perp + \Theta)) = \al(F) - \al(\Theta \aa F) = \length(A^F/\Theta A^F) . \] The following was essentially observed in \cite{Derksen:2014hb}. \begin{proposition}\label{prop: no point bound} Let $F \in S$ be a homogeneous form of degree $d$, let $\alpha \in T_1$ be a linear form, and let $I \subseteq F^\perp$ be a saturated homogeneous one-dimensional apolar ideal. Suppose that the zero-dimensional scheme $\mathbb{P} V(I)$ has no point of support on the hyperplane $\mathbb{P} V(\alpha)$; equivalently, $I = I:\alpha$. Then $\deg I \geq \al(F) - \al(\alpha \aa F)$. \end{proposition} \begin{proof} From $I + \alpha \subseteq F^\perp + \alpha$ we get $\Spec(T/(F^\perp + \alpha)) \subseteq V(I) \cap V(\alpha)$, a proper intersection by hypothesis, having length equal to $\deg I$. Thus $\deg I \geq \length(T/(F^\perp + \alpha)) = \al(F) - \al(\alpha \aa F)$. \end{proof} This does not require $I$ to be reduced, so it leads to a bound for cactus rank $cr(F)$. The hypothesis that $\mathbb{P} V(I)$ has no point of support on $\mathbb{P} V(\alpha)$ can be realized by, for example, taking $\alpha$ general: for $\alpha$ general, $cr(F) \geq \al(F) - \al(\alpha \aa F)$; this is Theorem 3.1 of \cite{Derksen:2014hb}. Here are the new observations which form the starting point for this paper. 
\begin{proposition}\label{prop: bound points off alpha} Let $F \in S$ be a homogeneous form of degree $d$, let $\alpha \in T_1$ be a linear form, and let $I \subseteq F^\perp$ be a reduced saturated homogeneous one-dimensional apolar ideal. Then $\deg(I:\alpha) \geq \al(\alpha \aa F) - \al(\alpha^2 \aa F)$. In particular $Z = \mathbb{P} V(I)$ has at least $\al(\alpha \aa F) - \al(\alpha^2 \aa F)$ points of support off of the hyperplane $\mathbb{P} V(\alpha)$. \end{proposition} \begin{proof} Note $I:\alpha$ is a saturated homogeneous ideal and $I:\alpha \subset F^\perp:\alpha = (\alpha \aa F)^\perp$. And $\mathbb{P} V(I:\alpha)$ has no point of support on $\mathbb{P} V(\alpha)$. The result follows by Proposition~\ref{prop: no point bound}. \end{proof} \begin{remark} If $Z$ is a zero-dimensional scheme with multiplicity at most $k$ at each support point in $\mathbb{P} V(\alpha)$ then $Z - (Z \cap \mathbb{P} V(\alpha))$ has length at least $\al(\alpha^k \aa F) - \al(\alpha^{k+1} \aa F)$. \end{remark} \begin{remark} In particular this ignores multiplicities (or reducedness) of $Z$ outside of $\mathbb{P} V(\alpha)$. At this time we do not know how to exploit reducedness of $Z$ outside of $\mathbb{P} V(\alpha)$ to give an improved bound. \end{remark} A somewhat more general version of the next statement was observed independently by Carlini, Catalisano, Chiantini, Geramita, and Woo \cite{Carlini:2015sp}. \begin{theorem}\label{thm: waring bound} Let $F \in S$ be a homogeneous form of degree $d$ and let $\alpha \in T_1$ be a linear form. Then $r(F) \geq \al(\alpha \aa F) - \al(\alpha^2 \aa F)$. \end{theorem} \begin{proof} Let $I \subset F^\perp$ be a reduced apolar ideal of degree $\deg I = r(F)$. Then $r(F) = \deg I \geq \deg(I:\alpha) \geq \al(\alpha \aa F) - \al(\alpha^2 \aa F)$ by Proposition~\ref{prop: bound points off alpha}. \end{proof} The next statement does not seem to have been previously observed, to our knowledge. 
\begin{cor}\label{corollary: improve by 1} If $\al(F) - \al(\alpha \aa F) > \al(\alpha \aa F) - \al(\alpha^2 \aa F)$ then $r(F) > \al(\alpha \aa F) - \al(\alpha^2 \aa F)$. \end{cor} \begin{proof} Let $I \subset F^\perp$ be a reduced apolar ideal of degree $\deg I = r(F)$. By Proposition~\ref{prop: bound points off alpha} $\mathbb{P} V(I)$ has at least $\al(\alpha \aa F) - \al(\alpha^2 \aa F)$ points off of $\mathbb{P} V(\alpha)$. If $r(F) = \al(\alpha \aa F) - \al(\alpha^2 \aa F) = \deg(I)$ then $\mathbb{P} V(I)$ has no support on $\mathbb{P} V(\alpha)$. In this case Proposition~\ref{prop: no point bound} yields $r(F) = \deg I \geq \al(F) - \al(\alpha \aa F)$, as claimed. \end{proof} \begin{example}\label{example: rank of special form} Let $F = G(x)H(y) + K(y)$, where $x$ and $y$ denote tuples of independent variables, and suppose $\alpha \in T_1$ is differentiation by one of the $x$ variables, so that $\alpha \aa H = \alpha \aa K = 0$. Then $\alpha \aa F = (\alpha \aa G)H$ and $\Diff(\alpha \aa F) \cong \Diff(\alpha \aa G) \otimes \Diff(H)$. Here $\otimes$ denotes the usual tensor product of complex vector spaces, and the isomorphism follows since $\alpha \aa G$ and $H$ are polynomials in independent variables. Similarly $\alpha^2 \aa F = (\alpha^2 \aa G)H$ and $\Diff(\alpha^2 \aa F) \cong \Diff(\alpha^2 \aa G) \otimes \Diff(H)$. Then \[ r(F) \geq (\dim \Diff(\alpha \aa G) - \dim \Diff(\alpha^2 \aa G))(\dim \Diff(H)) = (\al(\alpha \aa G) - \al(\alpha^2 \aa G)) \al(H) . \] In particular $r(x^a H(y) + K(y)) \geq \al(H)$. Especially, let $F = x_1^{a_1} \dotsm x_n^{a_n}$, $1 \leq a_1 \leq \dotsb \leq a_n$, $\alpha = \alpha_1$. Then we obtain $r(F) \geq \al(x_2^{a_2} \dotsm x_n^{a_n}) = (a_2+1) \dotsm (a_n+1)$. This recovers the theorem of Carlini--Catalisano--Geramita on Waring ranks of monomials \cite{Carlini20125}, see also \cite{MR2842085,MR3017012}. 
(In fact the proof given by Carlini--Catalisano--Geramita is quite close to the idea of Theorem~\ref{thm: waring bound}.) \end{example} \begin{example}\label{example: complete intersection} It is shown in \cite{MR2842085} that if $F^\perp$ is a complete intersection generated in degrees $d_1 \leq \dotsb \leq d_n$ then $cr(F) = d_1 \dotsm d_{n-1} \leq r(F) \leq d_2 \dotsm d_n$. Suppose $F^\perp = (\phi_1,\dotsc,\phi_n)$ is a complete intersection with $\deg \phi_i = d_i$ for each $i$, where $d_1 \leq \dotsb \leq d_n$, and suppose $\alpha \in T_1$ is such that $\alpha^2 \mid \phi_1$. Note that $F^\perp : \alpha = (\phi_1/\alpha,\phi_2,\dotsc,\phi_n)$ and $F^\perp : \alpha^2 = (\phi_1/\alpha^2,\phi_2,\dotsc,\phi_n)$. Hence $r(F) \geq \al(\alpha \aa F) - \al(\alpha^2 \aa F) = d_2 \dotsm d_n \geq r(F)$. This generalizes the example of monomials. Compare Theorem~4.14 of \cite{Carlini:2015sp}. \end{example} \section{Forms with higher than general rank} We adopt a slightly modified form of notation of \cite{MR1735271}: \begin{definition} Fix integers $n,d,s$. Recall that $T = \mathbb{C}[\alpha_1,\dotsc,\alpha_n]$. Let $H(n,d)$ be the function $H(n,d)(i) = \min\{\dim_\mathbb{C} T_i, \dim_\mathbb{C} T_{d-i}\}$. Let $H(n,d,s)$ be the function $H(n,d,s)(i) = \min\{\dim_\mathbb{C} T_i, \dim_\mathbb{C} T_{d-i}, s\}$. \end{definition} (In \cite{MR1735271} these are written $H(d,n)$ and $H(s,d,n)$ respectively, although \cite{MR1735271} uses $j$ in place of $d$ and $r$ in place of $n$.) As usual we may write these functions by writing their sequences of values for $i = 0, 1, \dotsc$: thus, for example, $H(3,6,8) = 1,3,6,8,6,3,1$, all subsequent values being zero. Recall the following well-known facts. \begin{proposition}\label{prop_Hilbert_function_of_general_form} The Hilbert functions of apolar algebras behave as follows. \begin{enumerate} \item \textup{(}\cite[Prop.~3.12]{MR1735271}\textup{)} Fix integers $n$ and $d$. Let $G \in S_d$ be general. 
Then the Hilbert function of $A^G$ is $H(n,d)$. \item \textup{(}\cite[Lemma 1.17]{MR1735271}\textup{)}\label{item_Hilb_function_of_general_sum_of_powers} Fix integers $n, d, s$. Let $\ell_1,\dotsc,\ell_s \in S_1$ be general linear forms and $G = \ell_1^d + \dotsb + \ell_s^d$. Then the Hilbert function of $A^G$ is $H(n,d,s)$. \end{enumerate} \end{proposition} In the first case the algebra $A^G$ is called \defining{compressed} (see \cite{MR1735271} for a more general notion of compressed algebras which are not necessarily Gorenstein or graded). These statements hold also in positive characteristic by taking $G$ to be a DP-form, see \cite{MR1735271}. \begin{lemma}\label{lem_apolar_length_of_general_form} \begin{enumerate} \item \label{item_apolar_length_of_general_form_in_S_d} Fix integers $n$ and $d$. Let $G \in S_d$ be any form such that $A^G$ has Hilbert function $H(n,d)$. Then the apolar length of $G$ is $\al(G) = \binom{n + \lfloor (d-1)/2 \rfloor}{n} + \binom{n + \lceil (d-1)/2 \rceil}{n}$. \item \label{item_apolar_length_of_general_form_in_sigma_s} Fix integers $n,d,s$. Let $G \in S_d$ be any form such that $A^G$ has Hilbert function $H(n,d,s)$. Suppose $\dim T_i \leq s < \dim T_{i+1}$, where $i < d/2$. Then the apolar length of $G$ is $\al(G) = 2\binom{n+i}{i} + s(d-2i-1)$. \end{enumerate} \end{lemma} The proof is an easy computation which we leave to the reader. We write $\algen(n,d)$ for the apolar length of a general form in $n$ variables of degree $d$; that is, $\algen(n,d) = \binom{n+\lfloor (d-1)/2 \rfloor}{n} + \binom{n + \lceil (d-1)/2 \rceil}{n}$. Before we produce forms with strictly greater rank than previously known examples, we carry out some preliminary computations that involve producing new forms with rank at least as great as previously known examples. By Theorem~\ref{thm: waring bound} (or Example~\ref{example: rank of special form}), $r(x_1 H(x_2,\dotsc,x_n)+K(x_2,\dotsc,x_n)) \geq \al(H)$, independent of the choice of $K$. 
In particular if $H$ is general this shows that \begin{equation}\label{eqn: bound with H general} r_{\mathrm{max}}(n,d) \geq \algen(n-1,d-1). \end{equation} An easy computation by hand shows for $d$ odd, \begin{equation}\label{eqn: general apolar length n=3} \algen(3,d-1) = r_{\mathrm{gen}}(4,d) \end{equation} (the left hand side is a binomial formula in Lemma~\ref{lem_apolar_length_of_general_form}.(\ref{item_apolar_length_of_general_form_in_S_d}); the right hand side is given by the Alexander-Hirschowitz Theorem). It is also easy to see that $\algen(3,d-1) < r_{\mathrm{gen}}(4,d)$ for $d$ even; $\algen(n-1,d-1) < r_{\mathrm{gen}}(n,d)$ for $n \geq 5$ and $d \gg 0$; on the other hand $\algen(2,d-1) > r_{\mathrm{gen}}(3,d)$ for $d \geq 5$. \begin{example} Let $H(y,z)$ be a general binary form of degree $d-1$ and let $K(y,z)$ be an arbitrary binary form of degree $d$. Then $r(xH + K) \geq \al(H)$. Since $H$ is general we compute $\al(H) = (d^2+2d)/4$ if $d$ is even, $(d+1)^2/4$ if $d$ is odd. In any case $\al(H) \approx d^2/4$. By the Alexander--Hirschowitz theorem the general rank of a form of degree $d$ in $3$ variables is \[ \left\lceil \frac{1}{3} \binom{d+2}{2} \right\rceil = \left\lceil \frac{(d+2)(d+1)}{6} \right\rceil \approx \frac{d^2}{6}, \] or one more than this if $d=4$. Thus the forms $xH+K$ have higher than general rank for $d$ large enough; $d \geq 5$ will do. Note that this is independent of the choice of $K$! The ternary monomials considered in \cite{Carlini20125} are given by $H = y^{\lfloor (d-1)/2 \rfloor} z^{\lceil (d-1)/2 \rceil}$, $K = 0$. \end{example} \begin{example} In $n=4$ variables, with $d \geq 3$ odd, for $H(x_2,x_3,x_4)$ general of degree $d-1$ and $K(x_2,x_3,x_4)$ arbitrary of degree $d$, $F = x_1 H + K$ has rank $r(F) \geq \al(H) = \algen(3,d-1) = r_{\mathrm{gen}}(4,d)$. So these forms have rank at least as great as general rank. 
\end{example} So taking $H$ general, and $K$ arbitrary, shows explicitly that $F$ realizes the obvious inequality $r_{\mathrm{max}}(4,d) \geq r_{\mathrm{gen}}(4,d)$ for $d$ odd; and $r_{\mathrm{max}}(3,d)$ is greater than or equal to the maximum rank of a ternary monomial. Now the idea is that we can improve \eqref{eqn: bound with H general} by choosing $H$ to be not general, and $K$ meeting certain conditions. \begin{lemma}\label{lemma: surjectivity of differentiation} For any $a \geq 0$ and any nonzero $\Psi \in T_b$ the linear map $D = D_{\Psi,a} : S_{a+b} \to S_a$, $F \mapsto \Psi \aa F$, is surjective. \end{lemma} \begin{proof} Fix a monomial order $<$, such as lexicographic order. Let $\alpha^{m_0}$ be the $<$-last monomial in $\Psi$. For any monomial $x^m$ of degree $a$, $x^m$ is the leading monomial of $\Psi \aa x^{m+m_0}$. This gives a triangular system of linear equations whose solution expresses each monomial $x^m$ as an element of the image of $D_{\Psi,a}$. \end{proof} \begin{theorem}\label{thm: general case higher rank} Let $n \geq 3$ and $d = 2k+1 \geq 3$. There exists a form of degree $d$ in $n$ variables of rank strictly greater than the apolar length of a general form of degree $d-1$ in $n-1$ variables: $r_{\mathrm{max}}(n,d) > \algen(n-1,d-1)$. \end{theorem} \begin{proof} We use the $n$ variables $x_1, x_2,\dotsc,x_n$; for convenience we write $x = x_1$ and $\alpha = \alpha_1$. Let $s = \binom{n+k-2}{k} - 1$ and let $G(x_2,\dotsc,x_n)$ be a general sum of $s$ $(d-1)$st powers of linear forms in variables $x_2,\dotsc,x_n$. Let $F = xG + K(x_2,\dotsc,x_n)$, with $K$ a form of degree $d$ to be determined later. Eventually, $K$ will be a general form of degree $d$ in $n-1$ variables; however, for the sake of argument, we do not assume anything about $K$ yet. Since $\alpha \aa F = G$ and $\alpha^2 \aa F = 0$ we get $r(F) \geq \al(G)$.
By construction and Proposition~\ref{prop_Hilbert_function_of_general_form}~(\ref{item_Hilb_function_of_general_sum_of_powers}) $A^G$ has Hilbert function $H(n-1,d-1,s)$ and $\al(G) = \algen(n-1,d-1)-1$. That is, \[ \begin{split} r(F) &\geq \al(G) = \binom{n-1 + \lfloor (d-2)/2 \rfloor}{n-1} + \binom{n-1 + \lceil (d-2)/2 \rceil}{n-1} - 1 \\ &= \binom{n+k-2}{n-1} + \binom{n+k-1}{n-1} - 1. \end{split} \] This holds regardless of the choice of $K$. We have $G^\perp = F^\perp : \alpha$ by \eqref{equ: colon ideal and derivation}, so $F^\perp \subseteq G^\perp$. From the Hilbert function of $A^G$ we see that $G^\perp$ has the minimal generator $\alpha$, a single minimal generator in degree $k$, and all other minimal generators must be in degrees $k+1$ or higher. It follows that for degrees $2 \leq i \leq k-1$ we have $(F^\perp)_i \subseteq (G^\perp)_i = (\alpha)_i$. But if $\alpha \Theta \in F^\perp$ for some $\Theta \in T_{i-1}$ then $\Theta \in (G^\perp)_{i-1} = (\alpha)_{i-1}$, so $\alpha \Theta \in (\alpha^2)$. This shows $(F^\perp)_i = (\alpha^2)_i$ for $2 \leq i \leq k-1$. Now, let $K$ be chosen so that $(F^\perp)_k = (\alpha^2)_k$. We will show later that there exists an open dense subset of such $K$, in fact satisfying an additional constraint that we will describe. From this we can compute the apolar length of $F$: \[ \begin{split} \al(F) &= 2\left\{ 1 + n + \left( \tbinom{n+1}{2} - 1 \right) + \left( \tbinom{n+2}{3} - n \right) + \dotsb + \left( \tbinom{n+k-1}{k} - \tbinom{n+k-3}{k-2} \right) \right\} \\ &= 2\left\{ \binom{n+k-2}{k-1} + \binom{n+k-1}{k} \right\} \\ &= 2 \al(G) + 2 \\ &= 2 \algen(n-1,d-1). \end{split} \] By Corollary~\ref{corollary: improve by 1} we get \[ r(F) \geq \binom{n+k-2}{n-1} + \binom{n+k-1}{n-1} = \al(G) + 1 = \algen(n-1,d-1). \] So far, this is the same value we would get by taking $G$ to be general. Now we will show that we can increase the bound on $r(F)$ by $1$. We claim that $r(F) \geq \al(G) + 2$. 
So, suppose to the contrary that $r(F) = r = \binom{n+k-2}{n-1} + \binom{n+k-1}{n-1} = \al(G)+1$. Let $F = \ell_1^d + \dotsb + \ell_r^d$ and let $I = I(\{[\ell_1],\dotsc,[\ell_r]\})$. By Proposition~\ref{prop: bound points off alpha}, there must be at least $\al(G)$ points off of the hyperplane $\mathbb{P} V(\alpha)$. If all of the points $[\ell_i]$ are off of $\mathbb{P} V(\alpha)$ then by Proposition~\ref{prop: no point bound} we have in fact $r(F) \geq \al(F) - \al(\alpha \aa F) = \al(F) - \al(G) = \al(G)+2$, giving the claimed improvement. (Here the fact $\al(G)$ is $1$ less than generic means $\al(F) - \al(G)$ is $1$ more than we would have if $G$ were generic; this is where the non-genericity of $G$ gives an improvement in the bound for $r(F)$.) Otherwise there is exactly one $[\ell_i]$ lying on $\mathbb{P} V(\alpha)$. Without loss of generality $[\ell_r]$ lies on $\mathbb{P} V(\alpha)$ and the others lie off of it. That is, $\ell_r = \ell_r(x_2,\dotsc,x_n)$ does not depend on $x$. Let $F' = F - \ell_r^d = \ell_1^d + \dotsb + \ell_{r-1}^d = xG + (K - \ell_r^d)$. We will choose $K$ in such a way that $(F^\perp)_k = (F'^\perp)_k = (\alpha^2)_k$. Then the above arguments will apply to $F'$ and give us $r(F') \geq \al(G)+1$. That is, $r-1 \geq r(F') \geq \al(G) + 1$. Thus $r(F) \geq \al(G) + 2$, as claimed. What is left is to show that there exists some $K$ such that $(F^\perp)_k = (\alpha^2)_k$ and for any linear form $\ell = \ell(x_2,\dotsc,x_n)$, $((F - \ell^d)^\perp)_k = (\alpha^2)_k$. Let $\Psi \in (G^\perp)_k$ be the minimal generator of $G^\perp$ of degree $k$. Since $\alpha \in G^\perp$ we can take $\Psi$ to only involve $\alpha_2,\dotsc,\alpha_n$. Recall that $T_{k-1} \aa G \subseteq S_{k+1}$ is the subspace consisting of $(k-1)$st derivatives of $G$; we have $\dim T_{k-1} \aa G = \binom{n+k-3}{n-2}$ by the Hilbert function of $A^G$. Let $S' \subset S$ be the subring $\mathbb{C}[x_2,\dotsc,x_n]$. 
Since $G \in S'$, its derivatives also do not involve $x_1$, in particular $T_{k-1} \aa G \subseteq S'_{k+1}$. But $\dim S'_{k+1} = \binom{n+k-1}{n-2}$. So $T_{k-1} \aa G \subsetneqq S'_{k+1}$. Let $K \in \mathbb{C}[x_2,\dotsc,x_n]_d$ be any form so that $\Psi \aa K \notin T_{k-1} \aa G$. There exists a plethora of such forms by Lemma~\ref{lemma: surjectivity of differentiation}. With such a choice we claim $(F^\perp)_k = (\alpha^2)_k$. Suppose $\Theta = \Theta(\alpha,\alpha_2,\dotsc,\alpha_n) \in (F^\perp)_k$. We may discard all terms containing $\alpha^2$, so we may write $\Theta = \alpha \phi + \psi$ where $\phi, \psi$ only involve $\alpha_2,\dotsc,\alpha_n$, and $\phi \in T_{k-1}$, $\psi \in T_k$. Then $0 = \Theta \aa F = x (\psi \aa G) + \phi \aa G + \psi \aa K$, so $\psi \aa G = \phi \aa G + \psi \aa K = 0$. Thus $\psi \in (G^\perp)_k$. Since $\psi$ only involves $\alpha_2,\dotsc,\alpha_n$, $\psi = c \Psi$ for some $c \in \mathbb{C}$. So $0 = \phi \aa G + \psi\aa K = \phi \aa G + c \Psi \aa K$. Since $\Psi \aa K \notin T_{k-1} \aa G$ it must be that $c = 0$ and $\phi \aa G = 0$, so $\Theta = \alpha \phi$ where $\phi \in (G^\perp)_{k-1} = (\alpha)_{k-1}$. Then $\Theta \in (\alpha^2)_k$. Now the idea is to choose a form $K \in \mathbb{C}[x_2,\dotsc,x_n]_d$ so that not only $\Psi\aa K \notin T_{k-1}\aa G$, but in fact $\Psi\aa (K - \ell^d) \notin T_{k-1}\aa G$ for all $\ell = \ell(x_2,\dotsc,x_n)$. The linear map $D = D_{\Psi,k+1} \colon S'_d \to S'_{k+1}$ is surjective, so $D^{-1}(T_{k-1} \aa G)$ has codimension equal to the codimension of $T_{k-1} \aa G$, which is $\binom{n+k-1}{n-2} - \binom{n+k-3}{n-2}$. The projective Veronese variety $\{[\ell^d] : \ell = \ell(x_2,\dotsc,x_n)\}$ has dimension $n-2$. We have $\binom{n+k-1}{n-2} - \binom{n+k-3}{n-2} = \binom{n+k-3}{n-3} + \binom{n+k-2}{n-3} \geq \binom{n-2}{n-3}+\binom{n-1}{n-3} > n-2$. So a general translate of the Veronese variety is disjoint from $\mathbb{P}(D^{-1}(T_{k-1} \aa G))$.
This shows that for general $K$, $\Psi\aa (K-\ell^d) \notin T_{k-1} \aa G$ as claimed. By the above calculation, $((F-\ell^d)^\perp)_k = (\alpha^2)_k$ so $r(F-\ell^d) \geq \al(G)+1$ for all $\ell = \ell(x_2,\dotsc,x_n)$. As discussed above, then $r(F) \geq \al(G)+2$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm: n=3 greater than monomial}] The apolar length of a general binary form of degree $d-1=2k$ is $(k+1)^2 = ((d+1)/2)^2$. So there exists a form of degree $d$ in $3$ variables of rank strictly greater than $((d+1)/2)^2$, as claimed. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm: n=4 greater than generic}] There exists a form in $4$ variables of degree $d$ of rank strictly greater than the apolar length of a general form in $3$ variables of degree $d-1=2k$, which is $\binom{k+2}{3} + \binom{k+3}{3}$, which is equal to the generic rank of a form in $4$ variables of degree $d$. \end{proof} The genericity conditions in the proof of Theorem \ref{thm: general case higher rank} are very explicit and can be easily applied in practice. We illustrate this in the case of ternary quintics. De Paris has shown that every ternary quintic (form of degree $d=5$ in $n=3$ variables) has Waring rank at most $10$, see \cite{MR3349108}. It is well known $r(x y^2 z^2) = 9$. But it is left open by De Paris whether the maximum rank of a ternary quintic is $9$ or $10$. \begin{theorem} There exists a ternary quintic form of rank $10$. Explicitly, $F = xyz^3 + y^4z$ has $r(F) = 10$. \end{theorem} \begin{proof} Here is an explicit expression showing $r(F) \leq 10$: $F = (xyz^3 - 2y^2z^3 - (1/5)z^5) + (y^4z + 2y^2z^3 + (1/5)z^5)$. Here $(y^4z + 2y^2z^3 + (1/5)z^5)^\perp = (\beta^2 - \gamma^2 , \beta \gamma^4)$, and since $\beta^2-\gamma^2$ has distinct roots, this binary form has $r(y^4z + 2y^2z^3 + (1/5)z^5) = 2$. 
And compute $(xyz^3 - 2y^2z^3 - (1/5)z^5)^\perp = (\alpha^2 , 4 \alpha \beta + \beta^2 , \beta^2 \gamma^2 - \gamma^4 )$, which is a complete intersection generated in degrees $2, 2, 4$ with the first generator divisible by (equal to) the square of a linear form; by Example~\ref{example: complete intersection} $r(xyz^3 - 2y^2z^3 - (1/5)z^5) = 2 \cdot 4 = 8$. Thus $r(F) \leq 2 + 8 = 10$. However the more important point is to show $r(F) \geq 10$. We compute: \begin{align*} \alpha^2 \aa F &= 0, & \beta^2 \aa F &= 12 y^2 z, \\ \alpha \beta \aa F &= z^3, & \beta \gamma \aa F &= 3xz^2 + 4y^3, \\ \alpha \gamma \aa F &= 3yz^2, & \gamma^2 \aa F &= 6xyz. \end{align*} Observe that the nonzero derivatives listed above are linearly independent: in fact no monomial appears in more than one of them. So $(F^\perp)_2$ is spanned by $\alpha^2$. This shows that the Hilbert function of $A^F$ is $1,3,5,5,3,1$. In particular $\al(F) = 18$. Observe also that $\alpha \aa F= y z^3$, $\alpha^2 \aa F = 0$. By Theorem~\ref{thm: waring bound}, $r(F) \geq \dim \Diff(y z^3) = 8$. If $r(F) = 8$ then $r(F) \geq \dim \Diff(F) - \dim \Diff(y z^3) = 18 - 8 = 10$, by Corollary~\ref{corollary: improve by 1}. So $r(F) > 8$. Now we rule out the possibility $r(F) = 9$. Suppose to the contrary $F = \ell_1^5 + \dotsb + \ell_9^5$. Proposition~\ref{prop: bound points off alpha} shows at least $8$ of the $[\ell_i]$ lie off of the hyperplane $\mathbb{P} V(\alpha)$; but if all $9$ lie off of the hyperplane, then by Proposition~\ref{prop: no point bound} $r(F) \geq \dim \Diff(F) - \dim \Diff(\alpha \aa F) = 10$. So say $\ell_1,\dotsc,\ell_8$ lie off of $V(\alpha)$ and $\ell_9 = ay+bz$ lies on $V(\alpha)$. Let $G = F - (ay+bz)^5 = \ell_1^5 + \dotsb + \ell_8^5$, so that $r(G) = 8$. Note $\alpha \aa G = \alpha \aa F = y z^3$. 
We compute again: \begin{align*} \alpha^2 \aa G &= 0, & \beta^2 \aa G &= 12 y^2 z - 20a^2(ay+bz)^3, \\ \alpha \beta \aa G &= z^3, & \beta \gamma \aa G &= 3xz^2 + 4y^3 - 20ab(ay+bz)^3, \\ \alpha \gamma \aa G &= 3yz^2, & \gamma^2 \aa G &= 6xyz - 20b^2(ay+bz)^3. \end{align*} If $a \neq 0$ then $\alpha \beta \aa G$, $\alpha \gamma \aa G$, $\beta^2 \aa G$ are linearly independent as $\beta^2 \aa G$ is the only one with a nonzero $y^3$ term. If $a = 0$ then the same three derivatives are still linearly independent as they are distinct monomials. And $\beta \gamma \aa G$, $\gamma^2 \aa G$ are linearly independent modulo the other derivatives because they involve different monomials with $x$. In conclusion, the nonzero derivatives of $G$ listed above are linearly independent, so $(G^\perp)_2$ is spanned by $\alpha^2$. It follows that $A^G$ has Hilbert function $1, 3, 5, 5, 3, 1$, the same as $A^F$. Now the same argument applies to $G$: $\alpha \aa G = \alpha \aa F = y z^3$, $\alpha^2 \aa G = 0$, so $r(G) \geq \dim \Diff(yz^3) = 8$, and if $r(G) = 8$ then $r(G) \geq \dim \Diff(G) - \dim \Diff(\alpha \aa G) = 10$, hence $r(G) > 8$, by Corollary~\ref{corollary: improve by 1}. This contradicts the construction of $G$ which shows $r(G) = 8$. It follows that $r(F) > 9$, so $r(F) = 10$. \end{proof} \begin{remark} The result of Theorem~\ref{thm: n=3 greater than monomial} is the best possible for degrees $d = 3,5$: the result $r_{\mathrm{max}}(3,d) \geq 1+((d+1)/2)^2$ is equality for these degrees. For other degrees, and for $n > 3$, one may ask if this bound can be improved. Two potential routes for improvement suggest themselves. First, Carlini, et al, show a more general and potentially stronger version of Theorem~\ref{thm: waring bound}, see \cite[Corollary~3.4]{Carlini:2015sp}. 
Second, one might try modifying the proof of Theorem~\ref{thm: general case higher rank} by taking $G$ of apolar length $2$ less than the general apolar length, and showing that in appropriate cases $\mathbb{P}(D^{-1}(T_{k-1} \aa G))$ is disjoint from not only a general translate of the Veronese but in fact from a general translate of the secant variety of the Veronese. \end{remark}
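The second-order derivatives of $F = xyz^3 + y^4z$ tabulated in the proof above are easy to verify mechanically. Here is a short, self-contained Python sketch of that computation (an illustration only; polynomials are represented as dictionaries from exponent triples $(i,j,k)$ for $x^i y^j z^k$ to coefficients):

```python
def diff(poly, var):
    """Partial derivative of poly with respect to variable index var."""
    out = {}
    for exps, c in poly.items():
        if exps[var] > 0:
            e = list(exps)
            coeff = c * e[var]
            e[var] -= 1
            key = tuple(e)
            out[key] = out.get(key, 0) + coeff
    return {k: v for k, v in out.items() if v != 0}

F = {(1, 1, 3): 1, (0, 4, 1): 1}  # F = x*y*z^3 + y^4*z

# All second-order partials, indexed by the pair of differentiated variables.
second = {(a, b): diff(diff(F, a), b)
          for a in range(3) for b in range(3) if a <= b}

# Each assertion corresponds to one entry of the displayed table.
assert second[(0, 0)] == {}                            # alpha^2 . F = 0
assert second[(0, 1)] == {(0, 0, 3): 1}                # alpha beta . F = z^3
assert second[(0, 2)] == {(0, 1, 2): 3}                # alpha gamma . F = 3yz^2
assert second[(1, 1)] == {(0, 2, 1): 12}               # beta^2 . F = 12y^2z
assert second[(1, 2)] == {(1, 0, 2): 3, (0, 3, 0): 4}  # beta gamma . F = 3xz^2 + 4y^3
assert second[(2, 2)] == {(1, 1, 1): 6}                # gamma^2 . F = 6xyz
print("all six second derivatives match the table")
```

In particular the sketch confirms that no monomial appears in more than one of the nonzero derivatives, which is the linear-independence observation used in the proof.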
/* jshint node:true, asi:true */ "use strict"; var _ = require('underscore') var nid = require('nid') var common = require('./common') var allcmds = ['save','load','list','remove','close','native'] /* Standard meta-query parameters: sort$: {fieldname: +/-1}; sort by single fieldname, -1 => descending, +1 => ascending limit$: size (integer); number of results to return skip$: size (integer); number of results to skip over fields$: array of field names to include these can all be used together native$: anything; pass value to database connection as store specific query everything else is ignored each store needs to document this value format */ // TODO: what if an entity object is passed in as a query param? convert to id? var wrap = { list: function( cmdfunc ) { var outfunc = function( args, done ) { if( _.isString(args.sort) ) { var sort = {} if( '-' == args.sort[0] ) { sort[args.sort.substring(1)] = -1 } else { sort[args.sort] = +1 } args.sort = sort } return cmdfunc.call(this,args,done) } for( var p in cmdfunc ) { outfunc[p] = cmdfunc[p] } return outfunc } } exports.cmds = allcmds.slice(0) /* opts.map = { canon: [cmds] } * canon is in string format zone/base/name, with empty or - indicating undefined * opts.taglen = length of instance tag, default 3 */ exports.init = function(si,opts,store,cb) { /* jshint loopfunc:true */ // TODO: parambulator validation var entspecs = [] if( opts.map ) { for( var canon in opts.map ) { var cmds = opts.map[canon] if( '*' == cmds ) { cmds = allcmds } entspecs.push({canon:canon,cmds:cmds}) } } else { entspecs.push({canon:'-/-/-',cmds:allcmds}) } var tagnid = nid({length:opts.taglen||3,alphabet:'ABCDEFGHIJKLMNOPQRSTUVWXYZ'}) var tag = tagnid() var storedesc = [store.name,tag] for( var esI = 0; esI < entspecs.length; esI++ ) { var entspec = entspecs[esI] storedesc.push(entspec.canon) var zone,base,name // FIX: should use parsecanon var m = /^(\w*|-)\/(\w*|-)\/(\w*|-)$/.exec(entspec.canon) if( m ) { zone = m[1] base = m[2] name = m[3] 
} else if( (m = /^(\w*|-)\/(\w*|-)$/.exec(entspec.canon)) ) { base = m[1] name = m[2] } else if( (m = /^(\w*|-)$/.exec(entspec.canon)) ) { name = m[1] } zone = '-'===zone ? void 0 : zone base = '-'===base ? void 0 : base name = '-'===name ? void 0 : name var entargs = {} if( void 0!== name ) entargs.name = name; if( void 0!== base ) entargs.base = base; if( void 0!== zone ) entargs.zone = zone; _.each(entspec.cmds, function(cmd){ var args = _.extend({role:'entity',cmd:cmd},entargs) var cmdfunc = store[cmd] if( wrap[cmd] ) { cmdfunc = wrap[cmd](cmdfunc) } if( cmdfunc ) { if( 'close' != cmd ) { si.add( args, cmdfunc ) } } else throw si.fail('store_cmd_missing',{cmd:cmd,store:storedesc}) if( 'close' == cmd ) { si.add('role:seneca,cmd:close',function( close_args, done ) { var closer = this if( !store.closed$ ) { cmdfunc.call(closer,close_args,function(err){ if( err ) closer.log.error('close-error',close_args,err); store.closed$ = true closer.prior(close_args,done) }) } else return closer.prior(close_args,done); }) } }) } // legacy if( cb ) { cb.call(si,null,tag,storedesc.join('~')) } else return { tag:tag, desc:storedesc.join('~') } }
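Two small pieces of logic in this module are easiest to follow in isolation: the sort normalization done inside `wrap.list`, and the canon parsing done inside `init`. The following standalone sketch mirrors both (the function names are ours; nothing here depends on seneca, underscore or nid):

```javascript
// 1. wrap.list normalizes a string sort spec into the object form:
//    '-field' => { field: -1 } (descending), 'field' => { field: 1 }.
function normalizeSort(sort) {
  if ('string' !== typeof sort) return sort
  var out = {}
  if ('-' === sort[0]) out[sort.substring(1)] = -1
  else out[sort] = +1
  return out
}

// 2. init parses a canon string 'zone/base/name'; a '-' segment (or a
//    missing leading segment) leaves that part of the canon undefined.
function parseCanon(canon) {
  var zone, base, name, m
  if ((m = /^(\w*|-)\/(\w*|-)\/(\w*|-)$/.exec(canon))) {
    zone = m[1]; base = m[2]; name = m[3]
  } else if ((m = /^(\w*|-)\/(\w*|-)$/.exec(canon))) {
    base = m[1]; name = m[2]
  } else if ((m = /^(\w*|-)$/.exec(canon))) {
    name = m[1]
  }
  if ('-' === zone) zone = void 0
  if ('-' === base) base = void 0
  if ('-' === name) name = void 0
  return { zone: zone, base: base, name: name }
}

console.log(normalizeSort('-age'))    // { age: -1 }
console.log(parseCanon('-/sys/user')) // { zone: undefined, base: 'sys', name: 'user' }
```

As in the module, a two-segment canon such as `'sys/user'` sets only `base` and `name`, leaving `zone` undefined.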
Richard Craven
Professional Support Lawyer
rcraven@mayerbrown.com
London +44 20 3130 3656

Contentious: over 25 years' experience of resolving building and civil engineering disputes in arbitration, litigation and adjudication and advising on contractual construction matters from defects to delays and final account claims
many of the matters have been substantial and complex, requiring the efficient management of very substantial volumes of documents
disputes have involved a wide range of projects, both international and in the UK, from hospitals, hotels, shopping centres, housing, office and industrial developments to pipeline dredging and airport construction
acted for all sides of the industry, employers, contractors, sub-contractors, consultants and insurers, giving an invaluable perspective.

Non-Contentious: substantial experience of the drafting and negotiation of project documentation, acting for employers, contractors, sub-contractors and consultants on a variety of projects and procurement routes, including PFI/PPP, traditional, design and build and construction management, and in the preparation of sub-contracts, appointments, warranties and associated contract documents.
Commonwealth Games Stadium in Manchester – advising the contractor acting as Construction Manager and one of the major Trade Contractors on the terms of its two contracts
A13 Thames Gateway DBFO Road Project – advising the consortium contractor on sub-contract documentation
Arbitration proceedings in respect of final account and delay claims against the main contractor in connection with the construction of a major hotel, office and retail development in Singapore
Advising a livery company on substantial delay and loss and expense claims by the design and build contractor responsible for refurbishment of a listed office block
Acting for main contractor in negotiating design and build contracts for refurbishment of office blocks
Advising a major financial services group on the defence of substantial delay and loss and expense claims by the design and build contractor responsible for construction of its headquarters building
PFI contracts – advising on construction aspects as part of the Projects Group team

Construction & Engineering
London Legal Update: Issue 65
The contract, the whole contract and nothing but.....?
Good intentions can be difficult to interpret

Group's Training and Client Seminar Programmes
Runs the Group's training and client seminar programmes

Original Issue Discount
World Trade Institute Seminars on New Financial Products

University of Oxford, Merton College
Chartered Institute of Arbitrators
\section{Introduction} Recent work in language modelling and word embeddings has led to a sharp increase in use of context-dependent models such as ELMo \cite{PetersEtAl18ELMO} and BERT \cite{devlin2019bert}. These models, by providing representations of words which depend on the surrounding context, allow us to take account of the effects not only of discrete differences in word sense but of the more graded effects of context. However, evaluation of these models has generally been in terms of either their performance as language models, or their effect on downstream tasks such as sentiment classification \cite{PetersEtAl18ELMO}: there are few resources available which allow evaluation in terms of the properties of the embeddings themselves, or in terms of their ability to model human perceptions of meaning. There are established methods to evaluate word embedding models intrinsically via their ability to reflect human similarity judgements (see e.g.\ WordSim-353 \cite{finkelstein2002placing} and SimLex-999 \cite{HillEtAl15SimLex}) or model analogies \cite{mikolov2013efficient}; however, these have generally ignored context and treated words in isolation. The few that do provide context (e.g.\ SCWS \cite{huang2012improving} and WiC \cite{pilehvar2019wic}) focus on word sense and discrete effects, thus missing some of the effects that context has on words in general, and some of the benefits of context-dependent models. To evaluate current models, we need a way to evaluate their ability to reflect similarity judgements \emph{in context}: how well do they model the effects that context has on word meaning? In this paper we present our ongoing efforts to define and build a new dataset that tries to fill that gap: \textbf{CoSimLex}. CoSimLex builds on the familiar pairwise, graded similarity task of SimLex-999, but extends it to pairs of words as they occur in context, and specifically provides two different shared contexts for each pair of words. 
This will provide a dataset suitable for intrinsic evaluation of state-of-the-art contextual word embedding models, by testing their ability to reflect human judgements of word meaning similarity in context, and crucially, the way in which this varies as context is changed. It goes beyond other existing context-based datasets by taking the \emph{gradedness} of human judgements into account, thus applying not only to polysemous words, or words with distinct senses, but to the phenomenon of context-dependency of word meaning in general. In addition the new dataset is multi-lingual, and includes four less-resourced European languages: Croatian, Estonian, Finnish and Slovene. The dataset will be used as the gold standard for the final evaluation of a currently running task at SemEval2020: \textbf{Task 3 Graded Word Similarity in Context}.\footnote{https://competitions.codalab.org/competitions/20905} \section{Background}\label{sec:bg} From the outset, our main motivation for the development of this dataset came from an interest in the cognitive and psychological mechanisms by which context affects our perception of the meaning of words. There have been many different ways in the literature to look at this phenomenon, which lie in the intersection of several different fields of research, and a detailed discussion of the different approaches to this problem is out of the scope of this paper; here, we present two of the most prominent ideas that helped define what we were trying to capture, and made an impact in the design of the dataset and its annotation process. We then look at previous datasets that deal with similarity in context. \subsection{Contextual Modulation} Within the field of lexical semantics, \newcite{cruse1986lexical} proposed an interesting compromise between those linguists that saw words as associated with a number of discrete senses and those that thought that the perceived discreteness of lexical senses is just an illusion. 
He distinguishes two different manners in which sentential context modifies the meaning of a word. First, the context can select for different discrete senses; if that is the case, the word is described as \emph{ambiguous}, and the process is referred as \textbf{contextual selection of senses}. The second way works within the scope of a single sense, modifying it in an unlimited number of ways by \emph{highlighting} certain semantic traits and \emph{backgrounding} others. This process is called \textbf{contextual modulation of meaning}, and the word is said to be \emph{general} with respect to the traits that are being modulated. This effect is by nature not discrete but continuous and fluid, and since every word is \emph{general} to some extent: it can be argued that a word has a different meaning in every context in which it appears. Some examples can help to see the different ways in which these phenomena work in real life: \newcounter{list_counter} \begin{enumerate} \item We finally reached the bank. \item At this point, the bank was covered with brambles. \item Sue is visiting her pregnant cousin. \item Arthur poured the butter into a dish. \setcounter{list_counter}{\value{enumi}} \end{enumerate} In the first sentence the context doesn't really help us to select a sense for the word \emph{bank}. This creates some tension: because bank is such an ambiguous word, we need to select a sense in order for the sentence to properly work. This is an example of \emph{ambiguity} as opposed to \emph{generality}. In the second sentence one of the senses is clearly more \emph{normal} than the other. \newcite{cruse1986lexical} sees the evaluation of \emph{contextual normality} as the main mechanism for sense selection. In the third sentence, the word \emph{cousin} could in principle refer to a male or a female. 
The context is clearly telling us that we are talking about a female cousin, however in this case \emph{cousin} is a \emph{general} word that includes male, female, but as well tall, short, happy and sad cousins. The meaning of \emph{cousin} is being \emph{modulated} by the context to promote the ``female'' trait; but notice that the sentence ``Sue is visiting her cousin'' doesn't create any tension: \emph{cousin} is not ambiguous in the true sense. The last sentence is another example of \emph{contextual modulation} highlighting the ``liquid'' trait for \emph{butter}. It is interesting to notice that in this case not only "liquid" is highlighted, related traits like "warm" can be highlighted as a consequence. These two processes happen very commonly together, with the same context forcing a sense and then modulating its expression. Many different explanations have been proposed for the emergence of these discrete senses, and some may have their origins in very commonly modulated meaning but, according to Cruse, once a discrete sense is established it become some different to \emph{contextual modulation} and follows different rules: \begin{enumerate} \setcounter{enumi}{\value{list_counter}} \item John prefers bitches to dogs. \item ? John prefers bitches to canines. \item ? Mary likes mares better than horses. \end{enumerate} Here the first sentence works because one of the discrete senses associated to the word \emph{dog} refers only to male dogs. This cannot be explained by \emph{contextual modulation}. If that was the case the second sentence, which replaces \emph{dog} with \emph{canine}, would work and \emph{canine} would be modulated in the same way than \emph{dog} was. The fact that neither \emph{canine} nor \emph{horse} can be modulated in this same way indicates that meaning modulation and sense selection are two, strongly interconnected, but distinctive mechanisms of contextual variability. 
Given this, it seems clear that the contextual selection of senses would modify human judgements of similarity. For example, the word \emph{bank}, when used in a context which selects its financial institution sense, should be scored as more similar to other kinds of financial institution (e.g.\ \emph{building society}) than when in a context which selects the geographic sense of the word. However, we should also expect that a word like \emph{butter}, when contextually modulated to highlight its ``liquid'', ``hot'' and ``frying'' traits, should score more similar to \emph{vegetable oil} than when contextually modulated to highlight its ``animal sourced'', ``dairy'', and ``creamy'' traits. This kind of hypothesis would be testable given a new context-dependent similarity dataset. Interestingly, Cruse doesn't find the contrast between polysemy and homonymy particularly helpful, and dislikes the use of these terms because they promote the idea that the primary semantic unit is some common lexeme and each of the different senses are just variants of it. He instead believes the primary semantic unit should be the \emph{lexical units}, a union of a single sense and a lexical form, and finds it more useful to look at the contrast between discrete and continuous semantic variability. It is true that homonymous words will always fall into the discrete category, but most common understandings of polysemy would include both discrete and continuous variations. \subsection{Salience Manipulation} Until now we have looked at contextual variability as an exclusively linguistic phenomenon, a point of view rooted in lexical semantics. We looked at how the context of the sentence affects the meaning of the word. In contrast, cognitive linguistics, and the more specific cognitive semantics, look at language and meaning as an expression of human cognition more generally \cite{evans2018}. 
This approach champions concepts, more specifically \emph{conceptual structures}, as the true recipients of meaning, replacing words or lexical units. These linguistic units no longer refer to objects in an external world but to concepts in the mind of the speaker. Words get their meaning only by association with \emph{conceptual structures} in our minds. The process by which we construct meaning is called conceptualisation, an embodied phenomenon based in social interaction and sensory experience. Cognitive linguists gravitate towards themes that focus on the flexibility of the interaction between language and conceptual structures and on its ability to model continuous phenomena, like prototyping effects, categories, metaphor theory and new ways to look at polysemy. Within the cognitive tradition, the idea of \emph{conceptual spaces}, characterised by \emph{conceptual dimensions}, has been especially influential \cite{gardenfors2000,gardenfors2014geometry}. These dimensions can range from concrete ones like weight, temperature and brightness, to very abstract ones like awkwardness or goodness. Once a domain, or selection of dimensions, is established, a concept is defined as a region (usually a convex one) of the conceptual space. An example would be to define the colour \emph{brown} as a region of a space made of the dimensions \emph{Red}, \emph{Green} and \emph{Blue}. This geometric approach lends itself perfectly to modelling phenomena like prototyping (the central point of the region), similarity (distance), metaphor (projection between different dimensions) and, more importantly for our concerns here, fluid changes in meaning due to the effects of context. \newcite{warglien2015meaning} use conceptual spaces to look at \emph{meaning negotiation} in conversation. They investigate the mechanisms, conscious or unconscious, employed by the people involved in conversation to negotiate the meaning of vague predicates, in order to satisfy the coordination needed for communication.
These tools also help them to identify the areas in which they do not agree. All these processes work by manipulating the conceptual dimensions in which meaning is represented. We will refer to them as \textbf{salience manipulation} because their main role is to dynamically raise or lower the perceived importance of certain conceptual dimensions. The main mechanism by which speakers can modify the salience of conceptual dimensions is the automatic \emph{priming} effect described by, for example, \newcite{pickering2004toward}: mentioning specific words early in the conversation can make the dimensions associated with such words more relevant. Speakers can also explicitly try to remove dimensions from the domain in order to promote agreement, or bring in new dimensions by using \emph{metaphoric projections}. Because metaphors can be understood as mappings that transfer structure from one domain to another, they can introduce new dimensions and meaning to the conversation. \begin{quote}The lion Ulysses emphasizes Ulysses' courage but hides his condition of a castaway in Ogiya. Thus metaphors act by orienting communication and selecting dimensions that may be more or less favorable to the speaker. By suggesting that a storm hit the financial markets, a bank manager can move the conversation away from dimensions pertaining to his own responsibilities and instead focus on dimensions over which he has no control. \cite{warglien2015meaning}\end{quote} From this perspective, then, the change in meaning is no longer a change in the meaning of a specific word, but a change in the mind of the hearer (or reader), a change in their \emph{mental state} triggered by their interaction with the context.
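To make the geometric picture concrete, the following sketch models similarity in a toy conceptual space as a weighted distance over conceptual dimensions, and models salience manipulation as raising or lowering a dimension's weight. All dimensions, coordinates and weights here are invented purely for illustration; this is a sketch of the general idea, not an implementation of any specific model from the literature.

```python
import math

# Toy conceptual space: each concept is a point over named dimensions.
# All coordinates are invented for illustration only.
DIMENSIONS = ["liquid", "dairy", "used_for_frying"]

butter        = {"liquid": 0.3, "dairy": 1.0, "used_for_frying": 0.8}
vegetable_oil = {"liquid": 1.0, "dairy": 0.0, "used_for_frying": 0.9}

def similarity(a, b, salience):
    """Similarity as a decaying function of the salience-weighted
    Euclidean distance between two points in the conceptual space."""
    dist = math.sqrt(sum(salience[d] * (a[d] - b[d]) ** 2
                         for d in DIMENSIONS))
    return 1.0 / (1.0 + dist)

# With no contextual priming, every dimension is equally salient.
neutral = {d: 1.0 for d in DIMENSIONS}

# A frying-related context primes "liquid" and "used_for_frying" and
# de-emphasises "dairy"; a dairy-related context does the opposite.
frying_context = {"liquid": 1.5, "dairy": 0.1, "used_for_frying": 1.5}
dairy_context  = {"liquid": 0.5, "dairy": 2.0, "used_for_frying": 0.5}

for name, w in [("neutral", neutral),
                ("frying context", frying_context),
                ("dairy context", dairy_context)]:
    print(f"{name}: {similarity(butter, vegetable_oil, w):.3f}")
```

Under these invented weights, \emph{butter} and \emph{vegetable oil} come out as more similar in the frying context and less similar in the dairy context than under neutral salience, mirroring the kind of graded effect the dataset is designed to capture.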
In addition, the expectation that priming is the main mechanism for modifying salience has its own implications: \newcite{branigan2000syntactic} found that priming effects are much stronger in dialog that is as natural as possible, when speakers have no time constraints and can respond at their own pace. This has implications for the design of our dataset and annotation methodology: it is crucial for us to create an annotation process in which the annotator interacts with the context, and does so in as natural a way as possible, before they rate the similarity. Because priming is an automatic process, it then becomes much less important for annotators to know that they should be annotating similarity in context. One last interesting consequence of looking at this type of contextual effect is that, because the change is in the mind of the annotator, the words that we are rating don't need to be part of the context. From the classical lexical semantics perspective, meaning change comes from the interaction between the word and the rest of the context; but the cognitive approach suggests that if the context triggers changes in the salience of conceptual dimensions related to particular words being annotated, we should see change in the scoring of similarity even if those words are not explicitly present in the context. Our goal in this dataset is therefore to create an annotation process that allows us to capture both of these possible contextual phenomena. \begin{figure*}[h!] \begin{tabular}{|llr|} \hline \bf Word1: population & \bf Word2: people & \bf SimLex: $\mu$ 7.68 $\sigma$ 0.80 \\ \hline \bf Context1 & & \bf Context1: $\mu$ 6.49 $\sigma$ 1.40 \\ \multicolumn{3}{|p{\linewidth}|}{ Disease also kills off a lot of the gazelle \textbf{population}. There are many \textbf{people} and domesticated animals that come onto their land. If they pick up a disease from one of these domesticated species they may not be able to fight it off and die.
Also, a big reason for the decline of this gazelle population is habitat destruction. }\\ \hline \bf Context2 & & \bf Context2: $\mu$ 7.73 $\sigma$ 1.77 \\ \multicolumn{3}{|p{\linewidth}|}{ But the discontent of the underprivileged, landless and the unemployed sections remained even after the reforms. The crumbling industries give rise to extreme unemployment, in addition to the rapidly growing \textbf{population}. These \textbf{people} mostly belong to the SC/ST or the OBC. In most cases, they join the extremist organizations, mentioned earlier, as an alternative to earn their livelihoods. }\\ \hline \end{tabular} \caption{Example from the English pilot, showing a word pair with two contexts, mean and standard deviation of human similarity judgements, and the original SimLex equivalent values for comparison.}\label{fig:example} \end{figure*} \subsection{Existing Datasets} There are a few examples of datasets which take context into account. However, so far these have been motivated by discrete \emph{sense disambiguation}, and therefore take a view of word meaning as discrete (taking one of a finite set of senses) rather than continuous; they are thus not suited for the more graded effects we are interested in. The \textbf{Stanford Contextual Word Similarity (SCWS)} dataset \cite{huang2012improving} does contain graded similarity judgements of pairs of words in the context of organically occurring sentences (from Wikipedia). However, it was designed to evaluate a discrete multi-prototype model, so the focus was on the contexts selecting for one of the word senses. This resulted in each of the two words of the pair being presented in its own distinct context.
From our point of view this approach has some drawbacks. First, even in the cases where they annotated the same pair twice, we find ourselves with four different contexts, each affecting the meaning of each of the instances of the words independently, and it is not possible to produce a systematic comparison of contextual effects on pairwise similarity. Second, beyond the independent lexical semantics of each word being affected by its own independent \emph{local context}, the annotator is being presented with two completely independently occurring contexts at the same time. Even if the two contexts did organically occur on their own, this combination of the two did not, and we have argued above how crucial it is to keep the interaction with the context as natural as possible. There is no easy way to know how this newly assembled \emph{global context} affects the cognitive state of the annotators and their perception of similarity. The same goes for the contextually-aware models trying to predict their results. Joining the contexts before feeding them to the model could create conflicting, difficult-to-predict effects, but feeding each context independently is fundamentally different from what human annotators were presented with. In addition to these limitations of the independent-contexts approach, the scores found in SCWS show a worryingly low inter-rater agreement (IRA), measured as the Spearman correlation between different annotators. As pointed out by \newcite{pilehvar2019wic}, the mean IRA between each annotator and the average of the rest, which is considered a human-level upper bound for model performance, is 0.52, while the performance of a simple context-independent model like word2vec \cite{mikolov2013efficient} is 0.65. Examining the scores in more detail, we find that many scores show a very large standard deviation, with annotators rating the same pair very differently.
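The agreement statistic quoted above --- the mean Spearman correlation between each annotator and the average of the remaining annotators --- can be sketched as follows. The small rating matrix is invented for illustration; a real computation would run over the full set of SCWS ratings.

```python
import math

def average_ranks(values):
    """1-based ranks, with tied values sharing their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return cov / var

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the ranks."""
    return pearson(average_ranks(x), average_ranks(y))

def mean_ira(ratings):
    """Mean Spearman correlation between each annotator's ratings and
    the item-wise average of all the other annotators."""
    n = len(ratings)
    scores = []
    for i in range(n):
        others = [sum(col) / (n - 1) for col in
                  zip(*(r for j, r in enumerate(ratings) if j != i))]
        scores.append(spearman(ratings[i], others))
    return sum(scores) / n

# Invented ratings: 3 annotators x 5 word pairs, in perfect rank agreement.
print(mean_ira([[1, 3, 5, 7, 9], [2, 4, 5, 8, 9], [0, 2, 4, 6, 8]]))
```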
One possible reason for this may lie in the annotation design: the task itself does not directly enforce engagement with the context, and the words were presented to annotators highlighted in boldface, making it easy to pick them out from the context without reading it, thus potentially leading to a lack of engagement of the annotators with the context. Many of these limitations were addressed by the more recent \textbf{Words-in-Context (WiC)} dataset \cite{pilehvar2019wic}. With a more direct and straightforward take on word sense disambiguation, each entry of the dataset is made of two lexicographer examples of the same word. The entry is completed with a positive value (T) if the word sense in the two examples/contexts is the same, or with a negative value (F) if the contexts point to different word senses. One advantage of this design is that it forces engagement with the context; another is that it creates a task in which context-independent models like word2vec ``would perform no better than a random baseline''. Human annotators are shown to produce healthy inter-rater agreement scores for this dataset. However, the dataset is again focused on discrete word senses and therefore cannot capture continuous effects of context on judgements of similarity between different words. These datasets are also available only in English, and do not allow models to be evaluated across different languages. \section{Dataset and Task Design} CoSimLex will be based on pairs of words from SimLex-999 \cite{HillEtAl15SimLex}; the reliability and common use of this dataset make it a good starting point and allow comparison of judgements and model outputs to the context-independent case. For Croatian, Estonian and Finnish we are using existing translations of SimLex-999 \cite{Mrksic.etal2017,venekoski2017finnish,KittaskThesis2019}.
In the case of Slovene, we have produced our own new translation, following the methodology used by \newcite{Mrksic.etal2017} for Croatian. The English dataset consists of 333 pairs; the Croatian, Estonian, Finnish and Slovene datasets of 111 pairs each. Each pair is rated within two different contexts, giving a total of 1554 scores of contextual similarity. This poses a difficult task: finding suitable, organically occurring contexts for each pair. The difficulty is more pronounced for languages with fewer resources, and as a result the selection of pairs is different for each language. Each line of CoSimLex will be made of a pair of words selected from SimLex-999; two different contexts extracted from Wikipedia in which these two words appear; two scores of similarity, each one related to one of the contexts; and two scores of standard deviation. Please see Figure~\ref{fig:example} for an example from our English pilot. \paragraph{Evaluation Tasks and Metrics} The first practical use of CoSimLex will be as a gold standard for the public SemEval 2020 task 3: \textit{Graded Word Similarity in Context}. The goal of this task is to evaluate how well modern context-dependent embeddings can predict the effect of context on human perception of similarity. In order to do so we define two subtasks and two metrics: \paragraph{Subtask 1 - Predicting Changes:} In subtask 1, participants must predict the \emph{change} in similarity ratings between the two contexts. In order to evaluate it we calculate the difference between the scores produced by the model when the pair is rated within each one of the two contexts. We do the same with the averages of the scores produced by the human annotators. Finally, we calculate the uncentered Pearson correlation between the two sets of differences. A key property of this method is that any context-independent model will predict no change and be strongly penalised in this task.
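To illustrate the Subtask 1 metric, the sketch below computes per-pair changes between the two contexts for both the human means and a hypothetical model, and compares the two change vectors with the uncentered Pearson correlation (a cosine over the raw vectors, with no mean subtraction). The human numbers are taken from four pairs of our English pilot (Table~\ref{tab:eng-pilot}); the model predictions are invented.

```python
import math

def uncentered_pearson(x, y):
    """Uncentered Pearson correlation: a cosine over the raw values,
    without subtracting the means."""
    num = sum(a * b for a, b in zip(x, y))
    den = math.sqrt(sum(a * a for a in x) * sum(b * b for b in y))
    return num / den if den else 0.0

def changes(ctx1, ctx2):
    """Per-pair change in similarity from context 1 to context 2."""
    return [b - a for a, b in zip(ctx1, ctx2)]

# Human mean ratings for four pilot pairs, in each of the two contexts.
human_ctx1 = [6.49, 7.78, 5.20, 1.62]
human_ctx2 = [7.73, 6.14, 6.67, 2.54]
# Invented predictions from a hypothetical context-dependent model.
model_ctx1 = [6.10, 7.90, 5.50, 1.80]
model_ctx2 = [7.40, 6.50, 6.90, 2.60]

score = uncentered_pearson(changes(human_ctx1, human_ctx2),
                           changes(model_ctx1, model_ctx2))

# A context-independent model predicts the same score in both contexts,
# so its change vector is all zeros and the metric penalises it maximally.
flat_score = uncentered_pearson(changes(human_ctx1, human_ctx2),
                                changes(model_ctx1, model_ctx1))
print(f"context-dependent: {score:.3f}, context-independent: {flat_score:.3f}")
```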
\paragraph{Subtask 2 - Predicting Ratings:} In subtask 2, participants must predict the absolute similarity rating for each pair in each context. This will be evaluated using Spearman correlation with gold-standard judgements, following the standard evaluation methodology for similarity datasets \cite{HillEtAl15SimLex,huang2012improving}. Good context-independent models could theoretically give competitive results in this task; however, we still expect context-dependent models to have a considerable advantage. \begin{figure*}[ht] \begin{tabular}{|llr|} \hline \bf Word1: čovjek (adult male) & \bf Word2: dijete (child) & \bf \\ \hline \bf Context1 & & \bf Context1: $\mu$ 2.5 $\sigma$ 1.76 \\ \multicolumn{3}{|p{0.975\linewidth}|}{ Špinat ima dosta željeza, ali i oksalne kiseline. Oksalna kiselina veže kalcij i čini ga neupotrebljivim za ljudski organizam. Prema novijim istraživanjima, špinat se ne preporuča kao česta hrana mlađim osobama i \textbf{djeci}, ali je izvrsna hrana za starije \textbf{ljude}.}\\ \multicolumn{3}{|p{0.975\linewidth}|}{ (Spinach has plenty of iron but also oxalic acid. Oxalic acid binds calcium and renders it unusable for the human body. According to recent research, spinach is not recommended as a common food for young people and \textbf{children}, but it is an excellent food for older \textbf{people}.)}\\ \hline \bf Context2 & & \bf Context2: $\mu$ 4.25 $\sigma$ 0.95 \\ \multicolumn{3}{|p{0.975\linewidth}|}{ Nakon što su \textbf{ljudi} u selu saznali da je trudna, počinju sumnjati na dr. Richardsona jer je on proveo najviše vremena s njom. Kako vrijeme prolazi, pritisak glasina na kraju prisiljava liječnika da se preseli. Odluči se oženiti s Belindom i uzeti \textbf{dijete} sa sobom.}\\ \multicolumn{3}{|p{0.975\linewidth}|}{ (After \textbf{people} in the village find out she is pregnant, they begin to suspect Dr. Richardson because he spent the most time with her.
As time goes on, the pressure of the rumors eventually forces the doctor to move. She decides to marry Belinda and take her \textbf{child} with her.)}\\ \hline \end{tabular} \caption{Example from the Croatian pilot (translated to English using Google Translate), showing the word pair with two contexts, and the mean and standard deviation of human similarity judgements. This example showed one of the most significant contextual effects in the pilot; it went in the opposite direction to the one predicted by the expert annotator. Note the effect of stemming: the target word \textit{čovjek} appears in both cases via its irregular plural, \textit{ljudi} (nominative) or \textit{ljude} (accusative); and \textit{dijete} appears in Context 1 in its dative plural form \textit{djeci}.}\label{fig:croatian} \end{figure*} \section{Annotation Methodology} As a starting point for our annotation methodology, we adapted the annotation instructions used for SimLex-999. This way we benefit from its tested method of explaining how to focus on \emph{similarity} rather than \emph{relatedness} or \emph{association} \cite{HillEtAl15SimLex}. For English we adopted a modified version of their crowd-sourcing process: we use \emph{Amazon Mechanical Turk}, with the same post-processing and cleaning of the data (a necessary step when working with this kind of crowd-sourcing platform), and achieve similarly good inter-annotator agreement. For the less-resourced languages, crowdsourcing is not a viable option due to the lack of available speakers, and we recruit annotators directly. This means fewer annotators (for Croatian and Slovene, 12 annotators vs 27 in English); however, the average quality of annotation is much higher and the data requires less post-processing --- see Section~\ref{sec:status} for details. \subsection{Finding Suitable Contexts} For each word pair we need to find two suitable contexts. These contexts are extracted from each language's Wikipedia.
They are made of three consecutive sentences and need to contain the pair of words, each appearing exactly once. English is by far the easiest language to work with, not only because of the amount and quality of the text contained in the English version of Wikipedia, but also because the other four languages (Croatian, Estonian, Finnish and Slovene) are highly inflected. In order to overcome this we work with data from \cite{11234/1-1989}\footnote{http://hdl.handle.net/11234/1-1989}, which contains tokenised and lemmatised versions of Wikipedia for 45 languages. We first find all the possible candidate contexts for each word pair, and then select those candidates that are most likely to produce different ratings of similarity. The differences are expected to be small, especially for words that do not have several senses and are not highly polysemous, so we need a process that has the best chance of finding contexts that make a difference. We use a dual process in which ELMo and BERT rate the similarity between the target pair within each of the candidate contexts. We then select the two contexts in which ELMo scored the pair as most similar, and the two contexts in which it scored them as most different; we do the same using the BERT scores. This gives us four contexts in which our target words are scored as very similar by the models and four contexts in which they are scored as very different. The final selection of two contexts is made by expert human annotators, one per language. We construct online surveys with these eight contexts and ask the experts to select the two in which they think the word pair is the most and the least similar, trying to maximise the potential contrast in similarity. In addition, we ask them how much potential for a difference they see in the contexts selected. This gives us not only the contexts we need, but also a predicted performance and direction of change for use in later analysis.
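The candidate selection step can be sketched as follows. The per-context similarity scores for the target pair would, in our pipeline, come from ELMo and BERT; here both score lists and the candidate contexts are invented placeholders.

```python
def shortlist(candidates, elmo_scores, bert_scores, k=2):
    """Keep, for each model, the k contexts in which it scored the
    target pair as most similar and the k in which it scored them as
    most different; return the union (up to 4 * k candidates)."""
    selected = set()
    for scores in (elmo_scores, bert_scores):
        order = sorted(range(len(candidates)), key=lambda i: scores[i])
        selected.update(order[:k])    # pair scored as most different
        selected.update(order[-k:])   # pair scored as most similar
    return [candidates[i] for i in sorted(selected)]

# Invented similarity scores for six candidate contexts.
contexts = [f"context_{i}" for i in range(6)]
elmo = [0.10, 0.80, 0.30, 0.95, 0.20, 0.60]
bert = [0.15, 0.90, 0.25, 0.50, 0.05, 0.85]

print(shortlist(contexts, elmo, bert))
```

The shortlisted contexts would then be presented to the expert annotator for the final choice of two.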
In the case of less-resourced languages, the smaller size and lower quality of the Wikipedia text resources require some extra steps to ensure the quality of the final annotation. For these languages we run the contexts through a set of heuristic filters to try to remove badly constructed ones. In addition, we produce sixteen candidates instead of eight for the expert annotators to choose from, and we add the possibility for them to delete parts of the context in order to make them easier to read. Adding text is not allowed, in order to ensure that contexts are natural. \subsection{Contextual Similarity Annotation} The next step is to obtain the contextualised similarity annotations. Our goal is to capture the kinds of contextual phenomena discussed in Section~\ref{sec:bg}: lexical meaning modulation and conceptual salience manipulation. In order to maximise our chances we define three goals: \begin{itemize} \item We want the interaction with the context to be as natural as possible, so as to maximise priming effects and capture the potential change in the salience of conceptual dimensions. \item We need a way in which annotators have the chance to account for lexical modulation within the sentence. \item We need to avoid the apparent lack of engagement we saw in the SCWS annotators. \end{itemize} With these goals in mind we designed a two-step mixed annotation process. Our online survey interface is composed of two pages per pair of words and context (each annotator scores only one of the contexts). On the first page the annotators are presented with the context, and asked to read it and come up with two words ``inspired by it''. Once this is complete, the second page presents the context again, but with the pair of words now highlighted in bold; the annotators are now asked to rate the similarity of the pair of words within the sentence.
The second page is the main scoring task; it is designed to capture changes in scores of similarity due both to lexical modulation and --- because we hope the annotators are still primed by their recent previous engagement with the context --- to changes in the salience of conceptual dimensions. The separate task on the first page is intended to make annotators engage fully with the whole context, while maintaining a natural interaction with it to maximise any priming effects. One of the possible problems we identified in the SCWS annotation process is the fact that the words were always highlighted in bold, making it easy for annotators (Amazon Mechanical Turk workers) to just look at the pair of words in isolation and not read the rest of the contexts. Our initial task is designed to prevent this (the words are not bold on the first page). In English, given the resources available, we follow SimLex-999 closely: we will use Amazon Mechanical Turk to get 27 annotators per pair and context. Annotators do not score the same pair twice: 27 annotators score the pair within one context and another 27 in the other. This means the whole dataset can be annotated at the same time. Reliability of annotations will be ensured by an adapted version of SimLex-999's post-processing, which includes rating calibration and the filtering of annotators with very low correlation to the average rating. In addition, we will use responses to the first annotation question to check annotator engagement with the context text and thus filter out low-quality raters. For Croatian, Estonian, Finnish and Slovene we recruit annotators directly: this means we have fewer of them (12 vs 27) but we expect the quality of the annotation to be better (and pilots confirm this --- see below). It also means, however, that we must use the same annotators to rate the two contexts of each pair.
This has an advantage, because it controls for the variation in the particular judgements of different annotators, but it means that we introduce a week's delay between annotations in order to make sure that annotators do not remember, and are not influenced by, their own previous scores. \section{Current Status}\label{sec:status} \paragraph{Methodology prototyping} We have run three pilots with 13 pairs of words each to confirm the annotation design and methodology. Each study tested a slight variation: in the first pilot, annotators rated \emph{relatedness} in addition to similarity; the second focused on similarity, and tested the use of contexts related to the target words but not containing them; the third experimented with marking the target words in the context paragraphs using boldface font. The first pilot confirmed that (as with SimLex) similarity is a more useful metric for this task than relatedness, displaying a higher inter-annotator agreement and more variation between contexts; we therefore use similarity as the basis of our dataset, as described above. The second pilot saw significant contextual effects in many examples, including some in which the target words weren't included in the contexts. This indicates that our method seems suitable for capturing priming effects and salience manipulation, or at least some kind of cognitive effect different from lexical contextualisation. The third pilot showed much lower agreement and lower difference between contexts: we take this as confirmation of our suspicion (from analysis of SCWS) that marking the target words makes it easy for annotators to ignore the rest of the context paragraph, and we therefore use the two-stage annotation methodology described above, in which target words are \emph{not} initially marked. \paragraph{Results} The results from tests so far are very promising in terms of both the difference in judgements between contexts and inter-annotator agreement.
In the English pilot with the design closest to the current one (the second pilot described above), we collected 27 different ratings for each pair and each context: see Table~\ref{tab:eng-pilot} for detailed results. In addition to the English pilots we have run two pilots in Croatian and Slovene. Please see Table~\ref{tab:cro-pilot} and Figure~\ref{fig:croatian} for the general results of the Croatian pilot and one of the best examples that came from it, respectively. Inter-rater agreement (IRA) was measured as the Spearman correlation between each rating and the average: for the English pilot, the mean was $\rho$ = 0.79, with average standard deviation $\sigma$ = 1.6; these compare well to other related datasets (SimLex-999 $\rho$ = 0.78, SCWS $\rho$ = 0.52). IRA was very high for the Slovene pilot ($\rho$ = 0.82) and significantly lower, but still reasonable, for the Croatian one ($\rho$ = 0.68). In the English pilot, about a third of the pairs show a significant difference in the ratings between contexts, as assessed by a Mann-Whitney U test at $p<0.05$. The Slovene and Croatian pilots are very small (6 annotators per pair/context) and it is currently difficult to know how significant their results are (but see Table~\ref{tab:cro-pilot} for indications as to the most likely differences); they have however provided invaluable feedback on the methods required for the particularities of these highly inflected, less-resourced languages. At the time of writing, we are preparing to run a second round of pilots in Croatian and Slovene to test the design presented in the previous section.
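The significance assessment above relies on the Mann-Whitney U test; a minimal self-contained version, using the normal approximation without tie correction (so it differs slightly from exact implementations such as scipy.stats.mannwhitneyu), is sketched below. The two rating samples are invented for illustration.

```python
import math

def average_ranks(values):
    """1-based ranks, with tied values sharing their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test (normal approximation, no tie
    correction). Returns the U statistic for sample a and the p-value."""
    n1, n2 = len(a), len(b)
    ranks = average_ranks(list(a) + list(b))
    u1 = sum(ranks[:n1]) - n1 * (n1 + 1) / 2
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mu) / sigma
    p = 1 - math.erf(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))
    return u1, p

# Invented ratings of one word pair under its two contexts.
ctx1_ratings = [6, 7, 6, 7, 8, 6, 7, 7, 6, 8]
ctx2_ratings = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3]
u, p = mann_whitney_u(ctx1_ratings, ctx2_ratings)
print(f"U = {u}, p = {p:.5f}")
```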
In the pilots so far, annotators were not asked explicitly to rate the words ``within the contexts''. While this should have encouraged pure priming effects while minimising lexical modulation effects, and the fact that we obtained significant differences is therefore encouraging, we expect that larger and more reliable differences will be obtained if annotators are explicitly told to consider the contexts. Our new pilots therefore use a more explicit question about similarity ``in the context of the sentence'' in order to promote strong lexical effects. \begin{table*}[ht] \begin{center} \begin{tabular}{|l|l|r|r|r|r|r|r|} \hline Word1 & Word2 & SimLex & Context1 & Context2 & STDev\_SL & STDev\_C1 & STDev\_C2 \\ \hline \rowcolor{mylightgray} captain & sailor & 5.00 & 5.20 & 6.44 & 1.43 & 1.93 & 1.77 \\ \hline corporation & business & 9.02 & 9.24 & 9.51 & 1.44 & 0.78 & 0.69 \\ \hline god & spirit & 7.30 & 5.65 & 5.30 & 1.63 & 2.47 & 1.90 \\ \hline \rowcolor{mylightgray} guilty & ashamed & 6.38 & 7.78 & 6.14 & 0.47 & 1.88 & 1.73 \\ \hline \rowcolor{mylightgray} lawyer & banker & 1.88 & 1.62 & 2.54 & 1.18 & 1.51 & 2.01 \\ \hline leader & manager & 7.27 & 8.08 & 7.65 & 1.43 & 1.19 & 1.34 \\ \hline \rowcolor{mylightgray} population & people & 7.68 & 6.49 & 7.73 & 0.80 & 2.37 & 1.92 \\ \hline rabbi & minister & 7.62 & 7.85 & 8.11 & 1.35 & 2.29 & 1.21 \\ \hline sheep & cattle & 4.77 & 4.37 & 4.47 & 0.47 & 2.36 & 2.04 \\ \hline task & woman & 0.68 & 0.15 & 0.15 & 0.34 & 0.42 & 0.40 \\ \hline \rowcolor{mylightgray} wealth & prestige & 6.07 & 5.20 & 6.67 & 1.55 & 2.05 & 1.74 \\ \hline \end{tabular} \caption{Results from the second English pilot including mean ratings and standard deviation for each context, and the original SimLex values for comparison.
Rows shown shaded show a significant difference between ratings for Context1 and Context 2 (Mann-Whitney U test $p<0.05$).} \label{tab:eng-pilot} \end{center} \end{table*} \begin{table*}[ht] \begin{center} \begin{tabular}{|l|l|l|r|r|r|r|} \hline Word1 & Word2 & Predicted Potential & Context1 & Context2 & STDev\_C1 & STDev\_C2 \\ \hline bog & duh & Noticeable difference & 3.75 & 2.50 & 0.96 & 2.17 \\ \hline \rowcolor{mylightgray} čovjek & dijete & Small difference & 2.50 & 4.25 & 1.76 & 0.96 \\ \hline \rowcolor{mylightgray} ideja & slika & Noticeable difference & 3.33 & 2.00 & 2.16 & 0.82 \\ \hline nedavan & nov & Big difference & 4.17 & 3.25 & 1.47 & 2.22 \\ \hline područje & regija & Small difference & 5.50 & 5.33 & 0.58 & 0.82 \\ \hline presudan & važan & Small difference & 5.33 & 5.00 & 0.82 & 0.82 \\ \hline \rowcolor{mylightgray} rijeka & dolina & Noticeable difference & 0.33 & 0.75 & 0.82 & 0.50 \\ \hline \rowcolor{mylightgray} škola & pravo & Noticeable difference & 1.75 & 0.50 & 2.22 & 0.84 \\ \hline sunce & nebo & Small difference & 1.50 & 2.50 & 1.87 & 1.73 \\ \hline uništiti & izgraditi & Small difference & 0.25 & 0.83 & 0.50 & 1.60 \\ \hline \rowcolor{mylightgray} velik & težak & Noticeable difference & 3.75 & 1.67 & 1.71 & 2.66 \\ \hline znati & vjerovati & Small difference & 2.25 & 2.17 & 1.71 & 1.72 \\ \hline \end{tabular} \caption{Results from the Croatian pilot. In addition to the mean score values it shows the \emph{Predicted Potential} for contextual differences, as judged by the single expert annotator. In each case, Context 1 was the context in which the expert annotator expected the words to be perceived as more similar, and Context 2 as less similar (this applies only to order of presentation here, not to the annotators). 
Rows shown shaded suggest a trend towards a significant difference between ratings for Context1 and Context2 (Mann-Whitney U test $p<0.15$).} \label{tab:cro-pilot} \end{center} \end{table*} \section{Conclusion} The growing use of context-dependent language models and representations in NLP motivates the need for a dataset against which they can be evaluated, and which can test their ability to reflect human perceptions of context-dependent meaning. CoSimLex will provide such a dataset, and do so across a number of less-resourced languages as well as English. The full dataset will be available for the evaluation stage of SemEval 2020 at the beginning of February 2020, and will be made public when the competition is over (before the LREC 2020 conference). \section{Acknowledgements} This paper is supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No 825153, project EMBEDDIA (Cross-Lingual Embeddings for Less-Represented Languages in European News Media). The first author is also supported by the EPSRC and AHRC Centre for Doctoral Training in Media and Arts Technology (EP/L01632X/1). \section{Bibliographical References}\label{reference} \bibliographystyle{lrec}
{ "redpajama_set_name": "RedPajamaArXiv" }
4,043
I was on my way to a newly-opened milk tea franchise at Eastwood Mall in Quezon City when this electrifying scene welcomed me. It was just days after the Chinese New Year celebration and the administrators of the establishment had not taken down the Chinese-inspired lanterns which accentuated the lighting-outlines of the buildings that were facing the mall. See more of Karl Ace's snap shots. Discover Pangapisan on the Turista Trails blog. It was past 6:00am in my hometown of Pasig. While waiting for Jollibee to open to have my breakfast, I walked a few more meters to reach Vargas Bridge. Unfortunately, I had to stand in the middle of the road to get a full view of both the student and the street sweeper, but I timed the deed to escape being hit by an incoming truck. Baras, Rizal has a secret hideaway. Read more on the Turista Trails blog. Metro Manila is constantly changing. Your Camerasian is here to take snap shots of urban living, landmarks, and whatnot.
{ "redpajama_set_name": "RedPajamaC4" }
3,320
Q: Show that $\frac{(2n)!!}{(2n+1)!!}$ converges. Given a sequence $\{x_n\}, \ n\in\Bbb N$: $$ x_n = \frac{(2n)!!}{(2n+1)!!} $$ Show that $x_n$ converges. I'm wondering why I'm getting a seemingly wrong result (assuming the problem statement asks to prove convergence): $$ \begin{align} x_n &= \frac{(2n)!!}{(2n+1)!!} \\ &= \frac{2\cdot 4\cdot 6\cdots (2n-2)\cdot(2n)}{3\cdot 5\cdot 7\cdots (2n-1)\cdot(2n+1) } \\ &= \frac{2\cdot 4\cdot 6\cdots (2n-2)\cdot(2n)}{3\cdot 5\cdot 7\cdots (2n-1)\cdot(2n+1)} \cdot \frac{2^n\cdot n!}{2^n\cdot n!} \\ &= \frac{4^n (n!)^2}{(2n+1)!} \\ &= \frac{4^n (n!)^2}{(2n+1)\cdot (2n)!} \\ &=\frac{4^n}{2n+1}\cdot \frac{(n!)^2}{(2n)!} \end{align} $$ By Binomial coefficients: $$ {2n\choose n} = \frac{(2n)!}{n!(2n-n)!} = \frac{(2n)!}{(n!)^2} $$ Thus: $$ x_n = \frac{4^n}{2n+1}\cdot \frac{1}{{2n \choose n}} $$ Doesn't $\frac{4^n}{2n+1}$ grow faster than $\frac{1}{2n\choose n}$ is declining? Shouldn't $x_n$ diverge in that case? A: Using the Stirling approximation for factorials we can observe the following: $n! \sim \sqrt{2 \pi n} \left( \frac{n}{e} \right)^n$. This means that, while being a bit sloppy with the $\sim$ $ \begin{align*} x_n &= \frac{(2n)!!}{(2n+1)!!} \\ &= \frac{4^n (n!)^2}{(2n+1) (2n)!}\\ & \sim \frac{4^n 2 \pi n \left( \frac{n}{e} \right)^{2n}}{ (2n+1) 2\sqrt{\pi n} \left(\frac{2n}{e} \right)^{2n}}\\ &= \frac{\sqrt{\pi n}}{2 n + 1} \rightarrow 0 \end{align*} $ So $x_n \rightarrow 0$ A: You have $$0\leq x_{n+1} = x_n \cdot \frac{2n+2}{2n+3} \leq x_n$$ thus, your sequence is monotone decreasing and bounded from below and therefore convergent.
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,593
LEWISBURG — Dave Cecchini was tired of wearing Valparaiso and Lehigh brown. "I got a lot of earth tones in that wardrobe right now," he said. Instead, he turned in those bland threads for a new sharp Bucknell blue hat and sports coat, matching his blue and orange striped tie. Cecchini was introduced as the next Bucknell football coach on Saturday morning at the University's Graham Center. Bucknell Director of Athletics and Recreation Jermaine Truax, a former defensive back at Edinboro, welcomed Cecchini, a former All-American wide receiver at Lehigh, to a friendly offense vs. defense matchup at Christy Mathewson Stadium. Truax doesn't believe in his man-to-man coverage abilities any more, but he took the challenge of finding a new coach after nine-year coach Joe Susan resigned in January. Truax believes he found the right man in less than a month. Truax was also impressed by the ability of Cecchini to leave a program in better shape than when he arrived. After successful stints as an offensive coordinator at Harvard, The Citadel and Lehigh, Cecchini was named head coach of Valparaiso in 2014. He took over a program that won just three games, all against the same opponent, in the four seasons before his arrival. In his first season in Indiana, Cecchini won four games. After winning a combined five games in his next two years, he guided Valparaiso to its best season in over a decade with a 6-5 mark. For this, Cecchini was a finalist for the American Football Coaches Association's FCS National Coach of the Year and the STATS FCS Coach of the Year awards. He was also voted as the Pioneer Football League Coach of the Year and the AFCA FCS Region 4 Coach of the Year. Cecchini has his hands full with a Bucknell team that went 1-10 last season. But if his resume says anything, he'll have the Bison rising quickly.
{ "redpajama_set_name": "RedPajamaC4" }
7,013
Intoneren (guitar) Intoneren (organ)
{ "redpajama_set_name": "RedPajamaWikipedia" }
1,501
Q: Insert image from user's computer

Hi, I would like to know how to insert an image from the user's computer into my webpage. For example, a lot of websites have an "insert image" button to choose your profile picture. Could you please give me some code on how to do that? :) Thanks in advance!

A: That depends on which technology you want to use. If you are OK with JavaScript, I recommend the JS tool Dropzone.js. The implementation is simple.

Add this snippet to the head section of your HTML code:

<script src="./path/to/dropzone.js"></script>

Add this to the body section:

<form action="/file-upload" class="dropzone" id="my-awesome-dropzone"></form>

Add the code below too, so the file is still submitted with the form if the script cannot handle it:

<input type="file" name="file" />

To learn more about it, refer to the main website: http://www.dropzonejs.com/

A: This is a way of obtaining a file from the user's computer. Note that a form that uploads a file needs method="post" and enctype="multipart/form-data"; without them, only the file name is sent:

<form action="profile_pic.asp" method="post" enctype="multipart/form-data">
  <input type="file" name="pic" accept="image/*">
  <input type="submit">
</form>

Then you could assign an ID to the image and to the user and match them, for when the user logs in.

Will
{ "redpajama_set_name": "RedPajamaStackExchange" }
3,284
Dr. Tom Butcher provides Oil & Energy readers a first look at the oil-fired heat pump currently in development at NORA's Long Island laboratory. Is New York City's B5 Standard Becoming THE Standard? A uniform biodiesel-blending standard could be coming to the New York Metro Area by July 2018. Now that you have a negative result, and have put the new driver to work, you probably know that other drug tests will be required. But when do you send them? And how often? State economic engines are getting a real boost from energy efficiency programs. Each year, states spend billions on energy, and energy efficiency is helping them recapture some of those dollars.
{ "redpajama_set_name": "RedPajamaC4" }
2,673
Explores creativity from a Systems Perspective - as achievement resulting from a confluence of the Individual, the Domain, and the Field. Investigates creativity's role in the advance of culture; provides student opportunities to enhance personal creativity. Prerequisite(s): junior standing.
{ "redpajama_set_name": "RedPajamaC4" }
6,364
Aegoschema moniliferum is a species of longhorn beetle in the genus Aegoschema, tribe Acanthoderini, subfamily Lamiinae. It was scientifically described by White in 1855. Description It measures 13.2–24.4 millimetres in length. Distribution It is found in Bolivia, Brazil, Colombia, French Guiana and Peru.
{ "redpajama_set_name": "RedPajamaWikipedia" }
3,284
Pinguin Games Choose your character and take part in Snowball Olympics! Indiana Jones 2 Help Indy make his way through the desert and find the sacred fortune! Doll Clothing Dress Up Give this doll a sweet look in these frilly dresses. Moon Dance Dress Up What clothes are the best for dancing in the moonlight? Find out! World Class Chef: Portugal Try some Portuguese cuisine in your kitchen.
{ "redpajama_set_name": "RedPajamaC4" }
3,011
import { TestBitGo, TestBitGoAPI } from '@bitgo/sdk-test'; import { BitGoAPI } from '@bitgo/sdk-api'; import { Talgo, VerifyAlgoAddressOptions } from '../../src'; import { AssertionError } from 'assert'; const should = require('should'); describe('Algo class', function () { let bitgo: TestBitGoAPI; let basecoin; before(function () { bitgo = TestBitGo.decorate(BitGoAPI, { env: 'mock' }); bitgo.safeRegister('talgo', Talgo.createInstance); bitgo.initializeTestVars(); basecoin = bitgo.coin('talgo'); }); describe('for method isWalletAddress', () => { const ROOT_ADDRESS = 'AND6YSSWQMMFOROH6NINCLNDNHXYIICIPIOPORA5VWELJVZ5J6OBAZCB6E'; const USE_UNDEFINED = 'use undefined'; const INVALID_KEY = { pub: 'XH3WEL22VP6EAPVIUHUZCKBYZEJFSLZD3K4PK44MZCVSA4RFSB2ZH4SGLQ' }; const keychains = [ { pub: '5II7OEXHVZUDTTLYMX2VESDSD6NZZ3CJKSYHKNQRU7RW2AEWBBM46VSPRE' }, // '62228ac8c01c5500072dc71d' { pub: '6JMXSB37MWGTALLCUUVJGKMC4LQCU3EEJHRRUZMIOQS4CPLQU2KVYWLR3M' }, // '62228ac8c01c5500072dc726' { pub: 'OH3WEL22VP6EAPVIUHUZCKBYZEJFSLZD3K4PK44MZCVSA4RFSB2ZH4SGLQ' }, // '62228ac9c01c5500072dc72f' ]; const makeVerifyAddressOptions = ( address: string, rootAddress: string, bitgoPubKey?: string, useKeyChain?: typeof keychains | string ): VerifyAlgoAddressOptions => ({ address, chain: 0, index: 0, coin: 'talgo', wallet: '62228ae1c01c5500072dc7a1', coinSpecific: { rootAddress, bitgoKey: '62228ac8c01c5500072dc71d', addressVersion: 1, threshold: 2, ...(bitgoPubKey ? { bitgoPubKey } : {}), }, ...(typeof useKeyChain === 'string' ? 
{} : { keychains }), }); const otherAddress = 'AWSC7RL3RM72HSUW5QU4XTX3AOHY7QD3WLUZC2CAHWP6BTI5Q7IABVUXTA'; const receivingAddress = 'BIJ332IS63LGDG4HIPBMUWQLE4AMIS3D7A3IPNGGDS5FKGWURCHE5NVMXI'; // Test cases [ { title: 'should validate root address', address: ROOT_ADDRESS, expected: true, }, { title: 'should not validate address outside wallet', address: otherAddress, expected: false, }, { title: 'should validate a receiving address', address: receivingAddress, bitgoKey: '62237caa05ff6900076196d0', bitgoPubKey: 'TZUQM3QPLGLAWIRDSN6REATX6HOSZGBXZFSFGD55OKUVE4JHXFWI5SZ6GY', expected: true, }, { title: 'should not validate a not owned receiving address', address: 'GD64YIY3TWGDMCNPP553DZPPR6LDUSFQOIJVFDPPXWEG3FVOJCCDBBHU5A', bitgoKey: '62237caa05ff6900076196d0', bitgoPubKey: 'TZUQM3QPLGLAWIRDSN6REATX6HOSZGBXZFSFGD55OKUVE4JHXFWI5SZ6GY', expected: false, }, { title: 'should report error for invalid formatted address', address: 'GD64YIY3TWGDMCNPP553DZPPR6LDUSFQOIJVFDPPXWEG3FVOJ', throws: /invalid address/, }, { title: 'should report error for invalid checksum address', address: 'GD64YIY3TWGDMCNPP553DZPPR6LDUSFQOIJVFDPPXWEG3F999', throws: /invalid address/, }, { title: 'should report error if keychain is missing', address: ROOT_ADDRESS, keychains: USE_UNDEFINED, throws: /missing required param keychains/, }, { title: 'should report error if any key is invalid', address: ROOT_ADDRESS, keychains: [keychains[0], keychains[1], INVALID_KEY], throws: /invalid public key/, }, ].forEach(({ title, address, expected, bitgoPubKey, throws, keychains }) => { it(title, async () => { // GIVEN parameter options for created address const params = makeVerifyAddressOptions(address, ROOT_ADDRESS, bitgoPubKey, keychains); try { // WHEN checking address const result = await basecoin.isWalletAddress(params); // THEN no error was expected should(throws).be.undefined(); // THEN address is validated as expected result.should.be.equal(expected); } catch (e) { if (e instanceof AssertionError) { // 
// Do not hide other assertions
            throw e;
          }
          should(throws).be.not.undefined();
          should(expected).be.undefined();
          (() => {
            throw e;
          }).should.throw(throws || 'never reaches here but compiler is unhappy without this');
        }
      });
    });
  });
});
{ "redpajama_set_name": "RedPajamaGithub" }
3,325
# Counting the Number of Points on Elliptic Curve over finite field

(Source: https://math.stackexchange.com/questions/3514376/counting-the-number-of-points-on-elliptic-curve-over-finite-field, retrieved 2021-10-22)

I'm studying elliptic curves and have stumbled upon this problem:

Let $p$ be a prime number such that 3 does not divide $p - 1$. Let $E$ be an elliptic curve defined like this:

$$E = \{ (x,y) \in \mathbb{F}_{p}^2 \mid Y^{2} = X^{3} + 7 \}.$$

The goal is to compute $|E(\mathbb{F}_{p})|$.

I've seen Hasse's bound:
$$|E(\Bbb F_p)| \geq p+1-2\sqrt p > 1, \quad \forall p \geq 5$$

And that the number of points is
$$N = 1 + \sum_{x\in\Bbb{F}_p}\left(1+\left(\frac{x^3+ax+b}{p}\right)\right).$$

(I thought that for $p > 3$, $p$ can be written as $p = 3k + 2$ for some $k \in \mathbb{Z}$, since $p-1$ is not divisible by 3. However, I don't see how I could use this.)

Has anyone got an idea how to compute the number of points?

A: You made all the relevant observations, so let me outline an answer in the form of exercises for you.

Let $p$ be an odd prime with $p \equiv 2 \pmod{3}$.

Exercise 1: show that the map $\varphi: \mathbf F_p^\times \rightarrow \mathbf F_p^\times$ given by $\varphi(x) = x^3$ is an automorphism.

Exercise 2: use exercise 1 to show that
$$\sum_{x \in \mathbf F_p}\left(\frac{x^3 + 7}{p}\right) = \sum_{x \in \mathbf F_p}\left(\frac{x}{p}\right) = 0$$

Exercise 3: conclude that $|E(\mathbf F_p)| = p+1$.

Can you solve it now?
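The exercises above can be checked by brute force (my own sketch, not part of the original thread): for primes $p \equiv 2 \pmod 3$, the affine solutions of $y^2 = x^3 + 7$ over $\mathbf F_p$, plus the point at infinity, always number $p + 1$.

```python
def count_points(p):
    # |E(F_p)| for E: y^2 = x^3 + 7, including the point at infinity
    sq = {}  # residue -> number of square roots mod p
    for y in range(p):
        sq[y * y % p] = sq.get(y * y % p, 0) + 1
    return 1 + sum(sq.get((x**3 + 7) % p, 0) for x in range(p))

# Primes with p % 3 == 2, i.e. 3 does not divide p - 1
for p in (5, 11, 17, 23, 29, 41):
    assert count_points(p) == p + 1
```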
{"url":"https:\/\/pedestrianobservations.com\/page\/2\/","text":"# Why We Adjust Costs for\u00a0PPP\n\nThe Transit Costs Project adjusts all construction costs for purchasing power parities. This means that, for example, a Chinese subway is converted into dollars not at the exchange rate of $1 = 6.7\u00a5, but at the PPP rate of$1 = 4.2\u00a5; this means that present-day Chinese subways look 1.5 times more expensive in our analysis than in analyses that use exchange rate values, and projects from 10 years ago look twice as expensive. I believe our choice is correct, and would like to explain why, since it has gotten some criticism from serious people, who\u2019s prefer exchange rates.\n\nLocal costs\n\nI started this comparing mature developed countries. The US and Europe have largely separate markets for construction, and so American work is almost entirely done in dollars and European work in euros (or pounds, or kronor, etc.). Japan is likewise very local and so is China. In that case, local costs matter far more than international ones.\n\nBut what\u2019s interesting is that even in countries that use imported technology and international consultants and contractors and have low wages, costs are almost entirely local. I wrote about this last year, referencing an article out of India about the small cost impact of indigenization and an interview I made with a Philippine planner who told me 90% of the value of civil works is local. Rolling stock is internationally traded, but we exclude it from our cost estimates whenever possible.\n\nThe impact of currency changes\n\nUsing PPPs, if a country undergoes a bout of inflation, this should be reflected in changes in construction costs. This is intentional. The example given to me in the critique linked in the lede is that if Bangladeshi food prices rise, then this makes the PPP exchange rate look less favorable (a taka in Bangladesh can then buy less relative to a dollar in the US). 
But that\u2019s fine \u2013 if Bangladeshi food prices rise then this forces Dhaka to pay higher wages to MRT construction workers, so overall it\u2019s just domestic inflation. It\u2019s no different from how, today, we\u2019re seeing nominal construction cost growth in the United States and Europe because of high inflation.\n\nAt least the inflation today is moderate by any developing-country standard. Core inflation in the United States is 6%; in Germany it\u2019s 3%. This may introduce third-order errors into the database as we deflate costs to the midpoint of construction. In contrast, 50-60% annual inflation is sustained over years in some middle-income countries like Iran, and then the choice of year for prices has significant impact, to the point that Iranian costs have a significant error bar. But that\u2019s regardless of whether one adjusts for PPP or not, since usually inflation leads to deteriorating terms of trade.\n\nIn contrast, if prices are compared in exchange rate terms, then international fluctuations create fictitious changes in construction costs. When China permitted the renminbi to appreciate in the mid-2000s, this would have looked like an increase in costs of about 20% \u2013 but the costs of local inputs did not change, so in reality there was no increase in costs. The euro:dollar rate peaked around 1\u20ac = $1.58 in 2008, before tumbling to 1\u20ac =$1.28 in the financial crisis \u2013 but nothing material happened that would reduce European construction costs by 19% relative to American ones; right now it\u2019s trading at 1\u20ac = $1.05, but this again does not mean that construction in Europe is suddenly a third cheaper compared with in the US relative to 15 years ago. Unusual currency values Some patterns are systemic \u2013 richer countries have stronger currencies relative to PPP value than poor countries. But others are not, and it\u2019s important to control for them. 
A currency can be weak due to the risk of war or disaster; the Taiwanese dollar is unusually weak for how rich Taiwan is, and this should not mean that Taiwanese construction costs are half what they really are. Or it can be strong or weak based on long-term investment proposition: investors will bid up the value of a currency in a country they expect to profit in in the long term, perhaps due to population growth coming from high birthrates or immigration, and this does not mean that today, it builds infrastructure more expensively. In any of those cases, the unusual value of the currency really reflects capital availability. Capital for investment in Australia is plentiful, but this by itself does not raise its construction costs; capital for investment in Taiwan is scarce, but this certainly does not make it a cheap place to build infrastructure. Foreign-denominated construction In some peripheral countries with unstable currencies, costs are quoted in foreign currency \u2013 dollars or euros. Some Turkish contracts are so quoted, and this is also common in Latin America and sometimes Southeast Asia. But ultimately, the vast majority of the contract\u2019s value is paid out in the local currency, not just labor but also locally-made materials like concrete. This creates a weird-looking statistical artifact in which we convert dollars or euros to local currency in exchange rate terms and then back in PPP terms. This, we do because the quotation of the contract (in dollars or euros) is not the real value. Rather, it comes out of one of two artifacts. The first is data reporting: we rely on international trade media, and those often quote prices in exchange rate dollars or euros, even if the contract is in local currency (and in all cases where we\u2019ve seen both, they match in exchange rate value). 
The second is that an international consultancy may demand actual payment in foreign currency as a hedge against currency depreciation; in that case its rate of profit should be dollar- or euro-denominated. However, this again is a small minority of overall contract value. Moreover, if a country\u2019s institutions can\u2019t produce enough capital stability to do business in their own currency, it\u2019s a problem that should be reflected in global indices; ultimately, if costs are higher in PPP terms as a result, this means that the country really does have greater problem affording infrastructure. A posteriori justification The above reasoning is all a priori. When I started comparing costs in the early 2010s, I was comparing developed countries and the euro:dollar rate was in flux in the early financial crisis, so I just went with one long-term PPP rate. However, a posteriori, there is another positive feature of PPP adjustment: it levels the differences in construction costs by income. There is positive correlation between metro cost per km and the GDP per capita of the country the metro is built in, about 0.22, but it comes entirely out of the fact that poorer countries (especially India) build more elevated and fewer subway lines; correcting for this factor, the correlation vanishes. This is as it should be: PPP is a way of averaging out costs in different countries, first because it levels short-term fluctuations such as between different developed countries, and second because exchange rate value is dominated by internationally tradable goods, which are relatively more expensive in poor countries than non-tradable goods like food and housing. 
What this says is that infrastructure should be viewed as an average-tradable good, at least a posteriori: its variation in costs across the world is such that there is no correlation with GDP per capita, whereas food prices display positive correlation even after PPP adjustment, and tradables like smartphones display negative correlation (because they cost largely the same in exchange rate terms). # Tails on Commuter Rail An interesting discussion on Twitter came out of an alternatives analysis for Philadelphia commuter rail improvements. I don\u2019t want to discuss the issue at hand for now (namely, forced transfers), but the discussion of Philadelphia leads to a broader question about tails. Commuter rail systems sometimes have low-frequency tails with through-service to the core system and sometimes don\u2019t, and it\u2019s useful to understand both approaches. What is a tail? For the purposes of this post, a tail is whenever there is a frequent line with trains infrequently continuing farther out. Frequency here is relative, so a subway line running every 2.5 minutes to a destination with every fourth train continuing onward is a tail even though the tail still has 10-minute frequency, and a commuter line running every 20 minutes with every third train continuing onward also has a tail, even though in the latter case the core frequency is lower than the tail frequency in the former case. The key here is that the line serves two markets, one high-intensity and frequent and one lower-intensity warranting less service, with the outer travel market running through to the inner one. Usually the implication is that the inner segment can survive on its own and the contribution of the outer segment to ridership is not significant by itself. 
In contrast, it\u2019s common enough on S-Bahn systems to have a very frequent trunk (as in Berlin, or Munich, or Paris) that fundamentally depends on through-service from many suburban segments farther out combining to support high frequency in the core; if ridership farther out is significant enough that without it frequency in the core would suffer, I would not call this a tail. When are tails useful? Tails are useful whenever there is a core line that happens to be along the same route as a lower-intensity suburban line. In that case, the suburban line behind can benefit from the strong service in the core by having direct through-service to it at a frequency that\u2019s probably higher than it could support by itself. This is especially valuable as the ridership of the tail grows in proportion to that of the core segment \u2013 in the limiting case, it\u2019s not even a tail, just outer branches that combine to support strong core frequency. Tokyo makes extensive use of tails. The JR East commuter lines all have putative natural ends within the urban area. For example, most Chuo Rapid Line trains turn at Takao, at the western end of the built-up area of Tokyo \u2013 but some continue onward to the west, running as regional trains to Otsuki or as interregional or as intercity trains farther west to Shiojiri. Munich and Zurich both use tails as well on their S-Bahns. In Munich, the base frequency of each of the seven main services is every 20 minutes, but some have tails running hourly, and all have tails running two trains per hour with awkward alternation of 20- and 40-minute gaps. In Zurich, the system is more complex, and some lines have tails (for example, S4) and some do not (for example, S3); S4 is not a portion of an intercity line the way the Chuo Line is, and yet its terminus only gets hourly trains, while most of the line gets a train every 20 minutes. What are the drawbacks of tails? 
A tail is a commitment to running similar service as in the core, just at lower frequency. In Philadelphia, the proposal to avoid tails and instead force what would be tails into off-peak shuttle trains with timed transfers to the core system is bundled into separate brands for inner and outer service and a desire to keep the outer stations underbuilt, without accessibility or high platforms. Branding is an exercise in futility in this context, but there are, in other places than Philadelphia, legitimate reasons to avoid tails, as in Paris and Berlin: \u2022 Different construction standards \u2013 perhaps the core is electrified and an outer segment is not; historically, this was the reason Philadelphia ended commuter rail service past the limit of electrification, becoming the only all-electrified American commuter rail network. In Berlin, the electrification standards on the mainline and on the S-Bahn differ as the S-Bahn was electrified decades earlier and is run as an almost entirely self-contained system. \u2022 Train size difference \u2013 sometimes the gap in demand is such that the tail needs not just lower frequency than the core but also shorter trains. In the United States, Trenton is a good example of this \u2013 New York-Trenton is a much higher-demand line than Trenton-Philadelphia and runs longer trains, which is one reason commuter trains do not run through. \u2022 Extra tracks \u2013 if there are express tracks on the core segment, then it may be desirable to run a tail express, if it is part of an intercity line like the Chuo Line rather than an isolated regional line like S4 in Zurich, and not have it interface with the core commuter line at all to avoid timetabling complications. If there are no extra tracks, then the tail would have to terminate at the connection point with the core line, as is proposed in Philadelphia, and the forced transfer is a drawback that generally justifies running the tail. Do the drawbacks justify curtailment? 
Not really. On two-track lines, it\u2019s useful to provide service into city center from the entire line, just maybe not at high frequency on outer segments. This can create situations in which intercity-scale lines run as commuter rail lines that keep going farther than typical, and this is fine \u2013 the JR East lines do this on their rapid track pairs and within the built-up area of Tokyo people use those longer-range trains in the same way they would an ordinary rapid commuter train. This is especially important to understand in the United States, which is poor in four-track approaches of the kind that the largest European cities have. I think both Paris and Berlin should be incorporating their regional lines into the core RER and S-Bahn as tails, but they make it work without this by running those trains on dedicated tracks shared with intercity service but not commuter rail. Boston, New York, and Philadelphia do not have this ability, because they lack the ability to segregate S-Bahn and RegionalBahn services. This means Boston should be running trains to Cape Cod, Manchester, and Springfield as tails of the core system, and New York should electrify its entire system and run trains to the Hamptons as LIRR tails, and Philadelphia should run tail trains to the entire reach of its commuter rail system. # Quick Note: How to Incentivize Transit-Oriented Development The Biden administration recently put out a statement saying that it would work to increase national housing production. It talks about the need to close the housing shortfall, estimated at 1.5 million dwellings, and proposes to use the Bipartisan Infrastructure Law (BIL) to dole out transport funding based on housing production. This is a welcome development, and I\u2019d like to offer some guidelines for how this can be done most effectively. Incentives mean mistrust You do not need to give incentives to trustworthy people. 
The notion of incentives already assumes that the people who are so governed would behave poorly by themselves, and that the governing body, in this case the federal government, surveils them loosely so as to judge them by visible metrics set in advance. Once this fundamental fact is accepted \u2013 the use of BIL funding to encourage housing production implies mistrust of all local government to build housing \u2013 every other detail should be set up in support of it. Demand conflict with community Federal funding should, in all cases, require state and local governments to discipline community groups that fight housing and extract surplus from infrastructure. Regions that cannot or do not do so should receive less funding; the feds should communicate this in advance, stating both the principle and the rules by which it will be judged. For example, a history of surrender to local NIMBYs to avoid lawsuits, or else an unwillingness to fight said lawsuits, should make a region less favored for funds, since it\u2019s showing that they will be wasted. In contrast, a history of steamrolling community should be rewarded, showing that the government is in control and prioritizes explicit promises to the feds and the voters over implicit promises to the local notables who form the base of NIMBYism. Spend money in growth regions In cities without much housing demand, like Detroit and Cleveland, the problem of housing affordability is one of poverty; infrastructure spending wouldn\u2019t fix anything. This means that the housing grant should prioritize places with growth demand, where current prices greatly exceed construction costs. These include constrained expensive cities like New York and San Francisco, but increasingly also other wealthy cities like Denver and Nashville, whose economic booms translate to population increase as well as income growth, but unfortunately housing growth lags demand. 
Even poorer interior cities are seeing rent increases as people flee the high prices of richer places, and encouraging housing growth in their centers is welcome (but not in their suburbs, where housing is abundant and not as desirable). Look at residential, not commercial development In the United States, YIMBY groups have focused exclusively on residential development. This is partly for political reasons: it\u2019s easier to portray housing as more moral, benefiting residents who need affordable housing even if the building in question is market-rate, than to portray an office building as needing political support. In some cases it\u2019s due to perceived economic reasons \u2013 the two cities driving the American YIMBY discourse, New York and San Francisco, have unusually low levels of job sprawl for the United States, and in both cities YIMBY groups are based near city center, where jobs look especially plentiful. At the local and state level, this indifference to commercial YIMBY is bad, because it\u2019s necessary to build taller in city center and commercialize near-center neighborhoods like the West Village to fight off job sprawl. However, at the federal level, a focus on residential development is good. This is a consequence of the inherent mistrust assumed in the incentive system. While economically, American cities need city centers to grow beyond the few downtown blocks they currently occupy, politically it\u2019s too easy for local actors to bundle a city center expansion with an outrageously expensive urban renewal infrastructure plan. In New York, this is Penn Station redevelopment, including some office towers in the area that are pretty useful and yet have no reason to be attached to the ill-advised Penn Station South project digging up an entire block to build new tracks. 
Residential development is done at smaller scale and is harder to bundle with such unnecessary signature projects; the sort of projects that are bundled with it are extensions of urban rail to new neighborhoods to be redeveloped, and those are easier to judge on the usual transport metrics. # Trains are not Planes Trains and planes are both scheduled modes of intercity travel running large vehicles. Virgin runs both kinds of services, and this leads some systems to treat trains as if they are planes. France and Spain are at the forefront of trying to imitate low-cost airlines, with separately branded trains for different classes of passengers and yield management systems for pricing; France is even sending the low-cost OuiGo brand to peripheral train stations rather than the traditional Parisian terminals. This has not worked well, and unfortunately the growing belief throughout Europe is that airline-style competition on tracks is an example of private-sector innovation to be nourished. I\u2019d like to explain why this has failed, in the context of trains not being planes. How do trains and planes differ? All of the following features of trains and planes are relevant to service planning: Taken together, these features lead to differences in planning and pricing. Plane and train seats are perishable \u2013 once the vehicle leaves, an unsold seat is dead revenue and cannot be packaged for later. But trains have low enough variable costs that they do not need 100% seat occupancy to turn a profit \u2013 the increase in cost from running bigger trains is small enough that it is justified on other grounds. Conversely, trains can be precisely scheduled so as to provide timed connections, whereas planes cannot. This means the loci of innovation are different for these two technologies, and not always compatible. What are the main innovations of LCCs? European low-cost carriers reduce cost per seat-km to around 0.05\u20ac (source: the Spinetta report). 
They do so using a variety of strategies:

• Using peripheral, low-amenity airports located farther from the city, for lower landing fees (and often local subsidies).
• Eliminating such on-board services as free meals.
• Using crew for multiple purposes, as both boarding agents and air crew.
• Flying for longer hours, including early in the morning and later at night, to increase equipment utilization, charging lower fares at undesirable times.
• Running a single class of airplane (either all 737 or all 320) to simplify maintenance.

They additionally extract revenue from passengers through hidden fees only revealed at the last moment of purchase, aggressive marketing of on-board sales for ancillary revenue, and an opaque yield management system. But these are not cost cutting, just deceptive marketing – and the yield management system is in turn a legacy carrier response to the threat of competition from LCCs, which offer simpler one-way fares.

How are LCC innovations relevant to trains?

On many of the LCC vs. legacy carrier distinctions, daytime intercity trains have always been like LCCs. Trains sell meals at on-board cafes rather than providing complimentary food and drinks; high-speed rail carriers aim at fleet uniformity as much as practical, using scale to reduce unit maintenance costs; trains have high utilization rates using their low variable operating costs.

On others, it’s not even possible to implement the LCC feature on a railroad. SNCF is trying to make peripheral stations work on some OuiGo services, sending trains from Lyon and Marseille to Marne-la-Vallée and reserving Gare de Lyon for the premium-branded InOui trains. It doesn’t work: the introduction of OuiGo led to a fall in revenue but no increase in ridership, which on the eve of corona was barely higher than on the eve of the financial crisis despite the opening of three new lines.
The extra access and egress times at Marne-la-Vallée and the inconvenience imposed by the extra transfer with long lines at the ticketing machines for passengers arriving in Paris are high enough compared with the base trip time so as to frustrate ridership. This is not the same as with air travel, whose origins are often fairly diffuse because people closer to city center can more easily take trains.

What innovations does intercity rail use?

Good intercity train operating paradigms, which exist in East Asia and Northern Europe but not France or Southern Europe, are based on treating trains as trains and not as planes (East Asia treats them more like subways, Northern Europe more like regional trains). This leads to the following innovations:

• Integration of timetable and infrastructure planning, taking advantage of the fact that the infrastructure is built by the state and the operations are either by the state or by a company that is so tightly linked it might as well be the state (such as the Shinkansen operators). Northern European planning is based on repeating hourly or two-hourly clockface timetables.
• Timed connections and overtakes, taking advantage of precise timetabling.
• Very fast turnaround times, measured in minutes; Germany turns trains at terminal stations in 3-4 minutes when they go onward, such as from north of Frankfurt or Leipzig to south of them with a reversal of the train direction, and Japan turns trains at the end of the line in 12 minutes when it needs to.
• Short dwell times at intermediate stops – Shinkansen trains have 1-minute dwell times when they’re not sitting still at a local station waiting to be overtaken by an express train.
• A knot system in which trips are sped up so as to fit into neat slots with multiway timed connections at major stations – in Switzerland, trains arrive at Zurich, Basel, and Bern just before the hour every half hour and depart just after.
• Fare systems that reinforce spontaneous trips, with relatively simple fares such that passengers don’t need to plan trips weeks in advance. East Asia does no yield management whatsoever; Germany does it but only mildly.

All of these innovations require public planning and integration of timetable, equipment, and infrastructure. These are also the exact opposite of the creeping privatization of railways in Europe, born of a failed British ideological experiment and a French railway that was overtaken by airline executives bringing their own biases into the system. On a plane, my door-to-door time is so long that trips are never spontaneous, so there’s no need for a memorable takt or interchangeable itineraries; on a train, it’s the exact opposite.

# How Comparisons are Judged

I’m about to complete the report for the Transit Costs Project about Sweden. For the most part, Sweden is a good comparison case: its construction costs for public transport are fairly low, as are those of the rest of Scandinavia, and the projects being built are sound. And yet, the Nordic countries and higher-cost countries in the rest of Northern Europe, that is Germany and the Netherlands, share a common prejudice against Southern Europe, which in the last decade or so has been the world leader in cost-effective infrastructure. (Turkey is very cheap as well but in many ways resembles Southern Europe, complete with having imported Italian expertise early on.)

This is not usually an overt prejudice. Only one person who I’ve talked to openly discounted the idea that Italy could be good at this, and they are not Nordic. But I’ve been reading a lot of material out of Nordic countries discussing future strategy, and it engages in extensive international comparisons but only within Northern Europe, including high-cost Britain, ignoring Southern Europe. The idea that Italians can be associated with good engineering is too alien to Northern Europeans.
The best way to illustrate it is with a toy model, about the concept of livable cities.

Livable cities

Consider the following list of the world’s most livable cities:

1. Vienna
2. Stockholm
3. Auckland
4. Zurich
5. Amsterdam
6. Melbourne
7. Geneva
8. Copenhagen
9. Munich
10. Vancouver

The list, to be clear, is completely made up. These are roughly the cities I would expect to see on such a list from half-remembering Monocle’s actual lists and some of the discourse that they generate: they should be Northern European cities or cities of the peripheral (non-US/UK) Anglosphere, and not too big (Berlin might raise eyebrows). These are the cities that urbanist discourse associates with livability.

The thing is, prejudices like “Northern Europe is just more livable” can tolerate a moderate level of heresy. If I made the above list, but put Taipei at a high place shifting all others down and bumping Vancouver, explaining this on grounds like Taipei’s housing affordability, strong mass transit system, and low corona rates (Taiwan spent most of the last two years as a corona fortress, though it’s cracked this month), it could be believed. In effect, Taipei’s status as a hidden gem could be legitimized by its inclusion on a list alongside expected candidates like Vienna and Stockholm.

But if instead the list opened with Taipei, Kaohsiung, Taichung, and Tainan, it would raise eyebrows. This isn’t even because of any real criteria, though they exist (Taiwan’s secondary cities are motorcycle- and auto-oriented, with weak metro systems). It just makes the list too Taiwanese, which is not what one expects from such a list.
Ditto if the secondary Taiwanese cities were bumped for other rich Asian cities like Singapore or Seoul; Singapore is firmly in the one-heresy status – it can make such a list if every other city on the list is as expected – but people have certain prejudices of how it operates and certain words they associate with it, some right and some laughably wrong, and “livable” is not among them.

The implication for infrastructure

A single number is more objective than a multi-factor concept like livability. In the case of infrastructure, this is cost per kilometer for subways, and it’s possible to establish that the lowest-cost places for this are Southern Europe (including Turkey), South Korea, and Switzerland. The Nordic countries used to be as cheap but with last decade’s cost overruns are somewhat more expensive to dig in, though still cheaper than anywhere else in the world; Latin America runs the gamut, but some parts of it, like Chile, are Sweden-cheap.

Per the one-heresy rule, the low costs of Spain are decently acknowledged. Bent Flyvbjerg even summarized the planning style of Madrid as an exemplar of low costs recently – and he normally studies cost overruns and planning failures, not recipes for success. But it goes deeper than just this, in a number of ways.

1. While Madrid most likely has the world’s lowest urban subway costs, the rest of Southern Europe achieves comparable results and so does South Korea. So it’s important to look at shared features of those places and learn, rather than just treat Spain as an odd case out while sticking with Northern European paradigms.
2. Like Italy, Spain has not undergone the creeping privatization of state planning so typical in the UK and, through British soft power, other parts of Northern Europe. Design is done by in-house engineers; there’s extensive public-sector innovation, rather than an attempt to activate private-sector innovation in construction.
3. Southern European planning isn’t just cheap, but also good. Metro Milano says that M5 carries 176,000 passengers per day, for a cost of 1.35b€ across both phases; in today’s money it’s around $13,000 per rider, which is fairly low and within the Nordic range. Italian driverless metros push the envelope on throughput measured in peak trains per hour, and should be considered at the frontier of the technology alongside Paris. Milan, Barcelona, and Madrid have all been fairly good at installing barrier-free access to stations, roughly on a par with Berlin; Madrid is planning to go 100% accessible by 2028.
4. As a corollary of point #3, there are substantial similarities between Southern and Northern Europe. In particular, both were ravaged by austerity after the financial crisis; Northern Europe quickly recovered economically, but in both, infrastructure investment is lagging. In general, if you keep finding $10,000/rider and $15,000/rider subways to build, you should be spending more money on more subway lines. Turkey is the odd one out in that it builds aggressively, but on other infrastructure matters it should be viewed as part of the European umbrella.
5. Italian corruption levels in infrastructure are very low, and from a greater distance this also appears true of Spain. Italy’s governance problems are elsewhere – the institutional problems with tax avoidance drag down the private sector, which has too many family-scale businesses that can’t grow and too few large corporations, and not the public sector.

I’m not going to make a list of the cities with the best urban rail networks in the world, even in jest; people might take this list as authoritative in ways they wouldn’t take a list I made up about livability.
But in the same way that there are prejudices that militate in favor of associating livability with Northern Europe and the peripheral Anglosphere, there are prejudices that militate in favor of associating good public transport with Northern and Central Europe and the megacities of rich Asia. All of those places indeed have excellent public transportation, but this is equally true of the largest Southern European cities; Istanbul is lagging but it’s implementing two large metro networks, one for Europe and one for Asia, and already has Marmaray connecting them under the Bosporus.

And what’s more, just as Southern Europe has things to learn from Northern Europe, Northern Europe has things to learn from the South. But it doesn’t come naturally to Germans or Nordics. It’s expected that every list of the best places in Europe on every metric should show a north-south gradient, with France anywhere in between. If something shows the opposite, it must in this schema be unimportant, or even fraudulent. Northerners know that Southerners are lazy and corrupt – when they vacation in Alicante they don’t see anyone work outside the hospitality industry, so they come away with the conclusion that there is no high-skill professional work in the entire country.

But at a time when Germany is building necessary green infrastructure at glacial rates and France and Scandinavia have seen real costs go up maybe 50% in 20 years, it’s necessary to look beyond the prejudice. Madrid, Barcelona, Rome, Milan, Istanbul, Lisbon, and most likely also Athens have to be treated as part of the European core when it comes to urban rail infrastructure, with as much to teach Stockholm as the reverse and more to teach Berlin than the reverse.

# Consolidating Stops with Irregular Spacing

There was an interesting discussion on Twitter a few hours ago about stop consolidation on the subway in New York.
Hayden Clarkin, the founder of TransitCon, brings up the example of 21st Street on the G in Long Island City. The stop is lightly-used and very close to Court Square, which ordinarily makes it a good candidate for removal, a practice that has been done a handful of times in the city’s past. However, the spacing is irregular and in context this makes the stop’s removal a lower-value proposition; in all likelihood there should not be any change and trains should keep calling at the station as they do today.

What is 21st Street?

The G train, connecting Downtown Brooklyn with Long Island City directly, makes two stops in Queens today: Court Square, at the southern end of the Long Island City business district, and 21st Street, which lies farther south. Here is a map of the area:

At closest approach, the platforms of 21st are 300 meters away from those of Court Square on the G; taking train length into account, this is around 400 meters (the G runs short trains occupying only half the platform). Moreover, Court Square is a more in-demand area than 21st Street: Long Island City by now near-ties Downtown Brooklyn as the largest job center in the region outside Manhattan, and employment clusters around Queens Plaza, which used to be one stop farther north on the G before the G was curtailed to Court Square in order to make more room for Manhattan-bound trains at Queens Plaza. Court Square is still close to jobs, but 21st Street is 400 meters farther away from them, with little on its side of the neighborhood.

Stop spacing optimization

Subways cannot continuously optimize their stop spacing the way buses can. Building a new bus stop costs a few thousand dollars, or a few tens of thousands if you’re profligate. Building a new subway stop costs tens of millions, or a few hundred million if you’re profligate. This means that the question of subway stop optimization can only truly be dealt with during the original construction of a line.
Subsequently, it may be prudent to build a new stop but only at great expense and usually only in special circumstances (for example, in the 1950s New York built an infill express station on the 4 and 5 trains at 59th, previously a local-only station, to transfer with the N, R, and W). But deleting a stop is free; New York has done it a few times, such as at 18th Street on the 6 trains or 91st on the 1. Is it advisable in the case of 21st?

The answer has to start with the formula for stop spacing. Here is my earliest post about it, in the context of bus stops. The formula is,

$\mbox{Optimum spacing} = \sqrt{4\cdot\frac{\mbox{walk speed}}{\mbox{walk penalty}}\cdot\mbox{stop penalty}\cdot\mbox{average trip distance}}$

The factor of 4 in the formula depends on circumstances. If travel is purely isotropic along the line, then the optimum is at its minimum and the factor is 2. The less isotropic travel is, the higher the factor; the number 4 is when origins are purely isotropic, which reflects residential density in this part of New York, but destinations are purely anisotropic and can all be guaranteed to be at distinguished nodes, like business centers and transfer points. Because 21st Street is a residential area and Court Square is a commercial area and a transfer point, the factor of 4 is justified here.

Walk speed is around 1.33 m/s, the walk penalty is typically 2, the stop penalty on the subway is around 45 seconds, and the average unlinked trip on the subway is 6.21 km; the formula spits out an optimum of 863 m, which means that a stop that’s 400 meters from nearby stops should definitely be removed.

But there’s a snag.

The effect of irregular stop spacing

When the optimal interstation is 863 meters, the rationale for removing a stop that’s located 400 meters from adjacent stations is that the negative impact of removal is limited.
Passengers at the stop to be removed have to walk 400 meters extra, and passengers halfway between the stop and either of the adjacent stops have no more walking to do because they can just walk to the other stop; the average extra walk is then 200 meters. The formula is based on minimizing overall travel time (with a walk penalty) assuming that removing a stop located x meters from adjacent stops incurs an extra walk of x/2 meters on average near the station. Moreover, only half of the population lives near deleted stops, so the average of x/2 meters is only across half the line.

However, this works only when stop spacing is regular. If the stop to be removed is 400 meters from an adjacent stop, but much farther from the adjacent stop on the other side, then the formula stops applying. In the case of 21st Street, the next stop to the south, Greenpoint Avenue, is 1.8 km away in Brooklyn, across an unwalkable bridge. Removing this stop does not increase the average walk by 200 meters but by almost 400, because anywhere from 21st south in Long Island City the extra walk is 400. Moreover, because this is the entire southern rim of Long Island City, this is more than just half the line in this area.

In the irregular case, we need to halve the factor in the formula, in this case from 4 to 2 (or from 2 to 1 if travel is isotropic). Then the optimum falls to 610; this already takes into account that 21st Street is a weaker-demand area than Court Square, or else the factor in the formula would drop by another factor of 2. At 610 meters, the impact of removing a stop 400 meters from an adjacent stop is not clearly positive. In the long run, it is likely counterproductive, since Long Island City is a growth area and demand is likely to grow in the future.

Does this generalize?

Yes!

In New York, this situation occurs at borough boundaries, and also at the state boundary if more service runs between the city and New Jersey.
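As a sanity check, the stop-spacing arithmetic is easy to script. The sketch below is mine, not anything from the original discussion: the function name and defaults are made up, and the quoted 1.33 m/s walk speed is taken as exactly 4/3 m/s (4.8 km/h) so that the rounded outputs line up with the 863 m and 610 m figures.

```python
from math import sqrt

def optimum_spacing(factor, walk_speed=4/3, walk_penalty=2.0,
                    stop_penalty=45.0, trip_distance=6210.0):
    """Optimum interstation distance in meters.

    factor is 2 when travel along the line is fully isotropic and 4 when
    origins are isotropic but destinations cluster at distinguished nodes;
    halve it again when stop spacing is irregular, as at 21st Street.
    Walk speed is in m/s, stop penalty in seconds, trip distance in meters.
    """
    return sqrt(factor * (walk_speed / walk_penalty) * stop_penalty * trip_distance)

# Regular spacing with anisotropic destinations (the factor-of-4 case):
print(round(optimum_spacing(4)))  # 863
# Irregular spacing halves the factor to 2:
print(round(optimum_spacing(2)))  # 610
```

Halving the factor models the irregular case: a rider south of 21st Street gains no alternative stop nearby, so the average extra walk roughly doubles and the optimum interstation shrinks from 863 m to about 610 m, at which point deleting a stop 400 m from its one close neighbor is no longer clearly worthwhile.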
For example, in retrospect, it would have been better for the east-west subway lines in Manhattan to make a stop at 1st or 2nd Avenue, only 300-500 meters from the typical easternmost stop of Lexington. The L train does this, and if anything does not go far enough – there’s demand for opening a new entrance to the 1st Avenue stop (which is one of the busiest on the line) at Avenue A, and some demand for a likely-infeasible infill stop at Avenue C. These are all high-density areas, but they’re residential – most people from Queens are not going to 2nd Avenue but to Lex and points west, and yet, 2nd would shorten the walk for a large group of residential riders by around 400 meters, justifying its retrospective inclusion.

# Quick Note: Learning from the Past and the Present

There are two tendencies among Americans in the rail industry that, taken together, don’t really mesh. The first is to ignore knowledge produced outside North America, especially if it’s also outside the Anglosphere, on the grounds that the situations are too different and cannot be compared. The second is to dwell on the past and talk about how things could have been different and, therefore, to spend a lot of time looking at old proposals as a guideline.

The problem with this is that the past is a foreign country. They do things differently there. The world of the imagined past of modern-day Western romantics, usually placed in the 1950s or early 60s, is barely recognizable, economically or politically; Mad Men hits watchers on the head in its early seasons with how alien it is.
The United States was an apartheid state until around 1964; France only decolonized Algeria in 1962; Germany had a deep state until the Spiegel affair of 1962 started to dismantle it and wouldn’t truly apologize for its WW2 crimes until the Kniefall and the fallout therefrom.

So as a public service, let’s look at some economic indicators comparing the US to the three low-construction cost countries that the Transit Costs Project is doing case studies about:

The US is comparable to Sweden on net – the higher GDP per capita is mostly an artifact of shorter vacation times. It is a considerably more developed country than Italy, by most accounts (except health care, where the US is more or less the worst in the developed world). Italy is a more developed country than Turkey. And Turkey, today, is considerably more developed than the US was in the imagined postwar golden age, even if it’s urbanizing later. The one indicator where they look similar, female LFP, masks the fact that the gender gap for employed women today isn’t especially high in Turkey and that, after a fall in female LFP in the late 20th century, today working outside the home is more middle-class, whereas in early postwar America it was considered a marker of poverty for a married woman to work.

So in that supposed golden age of an America before the Interstates, or when the Interstates were still in their infancy, GDP per capita was about comparable to Mexico today (and underinvestment in public transportation was comparable too; the Mexico City Metro’s expansion ground to a halt after AMLO was elected mayor). Women were only starting to emerge from the More Work for Mother era. Black people were subjected to literal apartheid.
65 was an old age to retire at (the majority of the increase in life expectancy at birth has occurred past age 65 – it wasn’t mostly about declining child mortality).

Deindustrialization was nowhere on the horizon in 1960, which is a cause for celebration by people today who view industry as more moral than services. But the industrial jobs that are romanticized today were held by the era’s traditionalists to be morally inferior to the rapidly depleting farm jobs, and did not pay well until generations of wage increases brought about by unions. And Sweden, Italy, and Turkey are all deindustrializing rapidly; China today has a slightly lower manufacturing job share than the US had at its postwar peak, and elsewhere in the world than East Asia, there’s a serious issue of premature deindustrialization.

What about the law? Well, in 1960 the US had the same constitution as today, in theory, but the interpretative theories were completely different. The vast majority of the American constitution is unwritten (the word “filibuster” does not appear there) and there are vast differences in practice today and in the 1950s, when, again, members of the largest minority group risked being lynched if they tried voting in the states the majority of them lived in. The party system at the time was extraordinarily loose; Julia Azari speaks of strong partisanship and weak parties today, but by postwar standards, both American parties are characterized by ideological uniformity and congressional command-and-control systems, even if the distribution of power within the parties is dramatically different from the European norm. Turkey might be comparable to postwar America – it’s hard to exactly say, since the two entities’ democratic systems are flawed in completely different ways. Italy and Sweden are not.

So the only thing that’s left is the romanticism.
It’s the belief of 21st-century Americans that they could have ridden trains out of the old Penn Station, and worked in any of the prestige industries at the time, and done things differently. The constitution of the US today, its politics, its society, and its economy have little to do with their counterparts of 60+ years ago, but it’s useful for a lot of people to pretend that there’s continuity. It feels more stable this way. It just happens to be dangerously incorrect. Burn the past and look at the present.

# The Solution to Failed Process isn’t More Process

The US Department of Transportation has an equity action plan, and it’s not good. It suffers from the same fundamental problem of American governance, especially at the federal level: everything is about process, nothing is about visible outcomes for the people who use public services. If anything, visible change is constantly deprecated, and direct interference in that direction is Not What We Do. Everything is a nudge, everything has to be invisible. When the state does act, it must do so in the direction of ever more layers of red tape, which at this point are for their own sake.

Case in point: a 12-page PDF with many graphics and charts manages to fit in two giant red flags, both with serious implications for how USDOT views its mission. They showcase a state that exists to obstruct and delay and shrugs off social and developmental goals alike. The action plan should be dismissed and replaced with an approach that aims to dissolve anti-developmental institutions and favor action over talk.

Contractors, or users?

Most of the document does not concern itself with how to be more equitable for the users of public transportation in the United States.
It doesn’t talk about racial differences in commuting patterns – it says poor people spend more of their income on transportation (as is the case for other basic staples) but ignores the issue where 61% of American public transport commuters are racial or ethnic minorities in a country that’s 62% white.

What it does talk about is the needs of contractors. The US has special programs for disadvantaged business enterprises (DBEs). In contracting, this is called MWBE in New York – minority- and women-owned business enterprise. New York requires 20% of contract value to go to MWBE, and since construction is an oligopoly owned entirely by white men and there is no interest in breaking said oligopoly, everything goes through a web of subcontractors to satisfice the law while driving up costs for the end users; one source at the MTA quotes a 20% premium to me just from the subcontracting web caused by this and other special restrictions.

In anti-left American media, the black slumlord who complains that it is racist to levy fines on him for violating building codes is somehow a sympathetic figure, in preference to the people with the misfortune of living in one of his 100 apartments. Similarly, when Americans speak about income mobility in their country, they center the origin stories of billionaires, most of whom grew up comfortably upper middle-class, rather than whether a working poor person has much hope to ascend to the middle class.

It’s the same with the focus on MWBE. MWBE are not socially relevant. There is no social or developmental purpose in creating a class of business owners shielded from competition – in this case, federal contractors – and then trying to diversify it. Most people are not business owners; most people work for someone else and to get to work they need to commute, and for women and minorities, this is disproportionately likely to be public transport.
The path forward is a federal repeal of all MWBE laws and their replacement with preemption forbidding states to enact similar laws. Federal power should dissolve failed local arrangements, free from the need to kowtow to local power brokers who have limited power beyond the local level and none at the federal level.

Process for the sake of process

Community meetings in the United States are a failure. The action plan recognizes this problem, and even begins to understand why:

* Public meetings are a common public involvement strategy, but can be inconvenient or impossible to attend for some. Physical meeting locations may be inaccessible for some, including those with disabilities. Virtual public meetings are inaccessible for people without internet access or computer literacy.

* Various methods may be needed to allow people with diverse circumstances to have a voice in decisions that affect their community. Adaptive engagement strategies can be a resource-intensive but valuable endeavor that is responsive to specific community needs, including different language and cultural backgrounds.

Unfortunately, the solution wants to accrete more process for its own sake. There is no positive use for a community meeting; the defenders of the process in multiple American cities, when I challenged them on this point, could not name to me a single useful thing that came out of them. But the negatives are numerous, and not fixable through multilingual meetings:

• The times at which meetings are held tend to privilege people who can take time off during work hours – the same class of already overprivileged business owners, comfortable housewives, and retirees, to the exclusion of people who work for someone else.
• Community as a concept is exclusive; in Cultural Theory terms, egalitarian systems tend toward strong boundedness and this is inherently exclusive in ways that market- and state-based systems lack.
Outsiders who attempt to attend community meetings report being verbally harassed for not looking like the typical attendee, for example if they are much younger.
• Community meeting dynamics favor loudness and adversarial agitation. Social media has the same problem, with a growing body of published work about the effect of online harassment on people, disproportionately people from disadvantaged backgrounds. Yelling is believed to get results, and the idea that the state should punish it to let other voices than that of the biggest blowhard be heard is treated as so ridiculous that in popular culture it’s put in the mouth of a junta member.
• Local community is not relevant to how most people live in metropolitan areas. In New York, only 8% of workers work in the same community board that they live in (and even same-borough commutes are only 39%); the other 92% and their dependents socialize in citywide networks rather than locally. And yet, community boards, representing those 8% with local ties, are taken as closest to the people.
• People with limited English proficiency need not just government services in the relevant language but also relevant information. For example, Chinese immigrants receive information out of Chinese networks, which are not especially local to one specific Chinatown, but are often pan-Chinese or pan-Chinese-American. With much thinner sourcing than is available in English, they can form opinions about the issues most in the news, which tend to be national, but not about local issues.
This is something every intra-European immigrant gets very quickly – it’s easier to find someone who speaks the same language with opinions about Annalena Baerbock than someone who speaks the same language with opinions about Bettina Jarasch, let alone any borough-scale politician (I do not remember a single conversation within queer Berlin spaces about borough-scale politicians).
• Local knowledge, to the extent it even exists, is not important, but the community meeting foregrounds it. Long-timers insist on talking about the history of every parklet and mural and shop and not about jobs or rents or public services; the community meeting elevates their concerns above memorizing sports statistics or similar trivialities.

The community meeting as a source of knowledge for the state to use or as a source of informal or formal power is a social stain wherever it is tried, and the impacts disproportionately fall on women, the young, minorities, queers, and immigrants. And yet an equity action plan that understands at least some of the problems created by the process cannot bring itself to recommend its abolition in favor of top-down state action, informed by the academic research of ethnographers to create universal design standards. No: it is recommending even more process. Process cannot fail; it can only be failed. Fair outcomes are out; endless red tape with all talk and no action is in.

# Quick Note: Regional Rail and the Massachusetts State Legislature

The Massachusetts state legislature is shrugging off commuter rail improvements, and in particular ignoring calls to spend some starter money on the Regional Rail plan. The state’s climate bill ignores public transportation, and an amendment to include commuter rail electrification has been proposed but not yet added to the plan.
Much of the dithering appears to be the fault of one politician: Will Brownsberger, who represents Watertown, Belmont, Back Bay, and parts of Brighton.

What is Regional Rail?

Regional Rail is a proposal by TransitMatters to modernize the MBTA commuter rail network to align it with the standards that have emerged in the last 50-60 years. The centerpiece of the plan is electrification of the entire network, starting from the already-wired Providence Line and the short, urban Fairmount Line and inner Eastern Line (Newburyport/Rockport Lines on timetables).

Based on comparable projects in peer countries, full electrification should cost $0.8-1.5 billion, and station upgrades to permit step-free access should cost on the order of $2 billion; rolling stock costs extra upfront but has half the lifecycle costs of diesels. An investment program on the order of high hundreds of millions or very low billions should be sufficient to wire the early-action lines as well as some more, such as the Worcester Line; one in the mid-single-digit billions should be enough to wire everything, upgrade all stations, and procure modern trains.

Benefits include much faster trips (see the trip planner here), lower operating and maintenance costs, higher reliability, and lower air and noise pollution and greenhouse gas emissions. For a city the size of Boston, benefits exceed costs by such a margin that in the developed world outside North America, it would have been fully wired generations ago; today's frontier of commuter rail electrification is sub-million metro areas like Trondheim, Aarhus, and Cardiff.

Who is Will Brownsberger?

Brownsberger is a Massachusetts state senator, currently serving as the Senate's president pro tempore. His district is a mix of middle-class urban and middle-class inner-suburban; the great majority of his district would benefit from commuter rail modernization.

He has strong opinions on commuter rail, which are what someone unaware of any progress in the industry since roughly 1960 might think are the future. For example, here's a blog post he wrote in 2019, saying that diesel engines are more reliable than electric trains because what if there's a power outage (on American commuter rail systems that operate both kinds of vehicles, electric trains are about an order of magnitude more reliable), and ending up saying rail is an outdated 20th-century concept and proposing small-scale autonomous vehicles running on the right-of-way instead. More recently, he's told constituents that rail electrification with overhead wire is impossibly difficult and the only option is battery-electric trains.

Because he's written about the subject, and because of his position in the State Senate and the party caucus, he's treated as an authority on the subject. Hence the legislature's lack of interest in rail modernization. It's likely that what he tells constituents is also what he tells other legislators, who follow his lead while focusing on their own personal interests, such as health policy, education policy, taxes, or any other item on the liberal policy menu.

Why is he like this?

I don't know. It's not some kind of nefarious interest against modernization, such as the trenchant opposition of New York suburbanites to any policy that would make commuter trains useful for city residents, whom they look down on. Brownsberger's district is fairly urban, and in particular Watertown and Belmont residents would benefit greatly from a system that runs frequently all day at 2020s speeds and not 1920s speeds. Brownsberger's politics are pretty conventionally liberal and he is interested in sustainability.

More likely, it's not-invented-here syndrome. American mainline passenger rail is stuck in the 1950s. Every innovation in the field since then has come from outside North America, and many have not been implemented in any country that speaks English as its primary language. Brownsberger lacks this knowledge; a lifetime in politics does not lend itself well to forming a deep web of transnational relationships that one can leverage for the required learning.

Without the benefit of around 60 years of accumulated knowledge of French, German, Swiss, Swedish, Dutch, Japanese, Korean, Austrian, Hungarian, Czech, Turkish, Italian, and Spanish commuter rail planning, any American plan has to reinvent the wheel. Sometimes it happens to reinvent a wheel that is round and has spokes; more often, it invents a wheel with sharp corners or no place to even attach an axle.

When learning happens, it is so haphazard that it's very easy to learn wrong or speculative things. Battery-electric trains are a good example of this. Europe is currently experimenting with battery-electric trains on low-traffic lines, where the fact that battery-electrics cost around double what conventional electric multiple units do matters less because traffic is that light. The technology is thus on the vendors' minds, and so when Americans ask, the vendors offer to sell what they've made. Boston is a region of 8 million people running eight- and nine-car trains every 15 minutes at rush hour, whereas the places in Europe that experiment with battery tech run an hourly three-car train; but without enough background in how urban commuter rail works in Europe, it's easy for an American agency executive or politician to overlook this difference.

Is there a way forward?

Yes!

Here is a proposed amendment, numbered Amendment 13, by Senator Brendan Crighton.
Crighton represents some of the suburbs to the northeast of Boston, including working-class Lynn and very posh Marblehead; with only four years in the State Senate and three in the Assembly, he's not far up the food chain. But he proposed to require full electrification of the commuter rail network as part of the climate bill, on a loose schedule in which no new diesels may be procured after 2030, and lines would be electrified between 2028 (the above-named early-action lines) and 2035 (the rest of the system). There are so far four cosponsors in addition to Crighton, and good transit activists in Massachusetts should push for more sponsorship so that Amendment 13 makes it into the climate package and passes.

# Providence Should Use In-Motion Charging for Buses

The future of bus transit is in-motion charging. This technology, increasingly common in Central Europe, is a hybrid of the trolleybus and the battery-electric bus (BEB), offering significant off-wire range with no need for centralized recharge facilities. Moreover, the range of batteries is improving over time and so is the recharge rate; in the limit, a pure BEB system may work, but in the present and near future it is not yet reliable in cold weather and requires diesel or oil heaters when the temperature is below freezing.

My original post on IMC technology speaks largely of New York and Boston, but Providence is an excellent place for implementing this technology as well, at least as good as Boston and far better than New York. As Rhode Island is thinking of how to invest in urban transit, it should take this technology into consideration, in addition to proposals for light rail along its busiest route (the Rapid, formerly the 99 and 11 buses) or a diesel BRT.

Transit Forward RI 2040

The guiding program, adopted in 2020, is called Transit Forward and aims for a statewide plan including regional connections as well as the core of a solid mass transit network in Providence.
The Rapid route is, perhaps, to be turned into light rail, and multiple other core routes are to be upgraded to BRT standards (including the Rapid if light rail is rejected). This can be viewed here or here. Here is the metropolitan bus map:

Observe that multiple trunks are designed to have very high all-day frequency. Already today, service on Broadway and Westminster from Downcity to Olneyville interlines to a bus every 7.5 minutes; the proposal is to boost this to a bus every 7.5 minutes on Westminster and also one every 5 on Broadway. Past Olneyville, the buses branch at lower frequency. South Main is to have a core trunk route every 10 minutes and also a less frequent regional bus. The Angell/Waterman one-way pair is to have three routes running every 20 minutes, two every 30, and two less frequent express buses; closer in, this one-way pair shares the bus tunnel between Downcity and College Hill with routes running on Hope, labeled N117 in the plan.

On net, this is a massive expansion of the bus frequency available to people in and around Providence. Were it available when I lived there, I would have had an easier time traveling to Pawtucket, East Providence, and other such locations, often for gaming purposes; with the network as it is (or as it was in 2012), I would walk 6 km from my home in Fox Point to a gaming store in Pawtucket and it would still be faster than waiting 40 minutes for the bus in the evening.

IMC and branching

IMC as a technology permits buses to run about 10 km off-wire; the current frontier of the technology is that a minimum of 20-30% of the route needs to be wired. UITP presents this as an advantage in that the wiring cost is only 20-30% of that of a traditional trolleybus, but in fact the wiring cost is much lower, because the trunks can be wired while the branches are left unwired.

This advantage is hard to realize in a city like Chicago or Toronto, with a relentlessly gridded bus network and little branching. Both cities rely on rapid transit for downtown access and have a bus grid layered on top of their radial metro systems to provide everywhere-to-everywhere connectivity and feed the trains. In such an environment, IMC saves 70-80% of the cost of a trolleybus, minus the additional cost of procuring a bus with a backup battery. This may sound like a lot, but trolleybus expansion is rare globally, so reducing the cost by a factor of 4 does not necessarily turn it into an attractive investment.

But in Providence, there is no grid. About 4 km of wire in each direction, from Downcity to past the Henderson Bridge, is enough to electrify nearly the entire bus network connecting the city with East Providence. Another 3 km along South Main and I-195 completes electrification to the east. The N117 may need a short stub on Thayer in addition to the bus tunnel; Broadway and Westminster, totaling around 6 km, should be enough to electrify buses to and beyond Olneyville; the core of Broad is planned to carry the N12 and the R, both at high frequency, and is therefore a prime target for wiring as well; Charles should be enough to wire most of the buses going due north or northwest.

This way, a core trolleybus network with maybe 30 km of wire in each direction can electrify most of the bus network in Providence, without having to deal with the teething problems of BEBs.

The issue of legibility

One minor benefit of wire in Providence is that it helps casual riders make sense of the public transportation network. A big disadvantage of bus networks over rail is their poor legibility: the map has too many routes, a user is expected to know them all over an area, and there is no indication on the street as to where the buses go. Marked bus lanes help solve the latter problem, as does wire.

Trolleybuses are not streetcars. Their ride quality is that of a bus – usually better, occasionally worse, depending on whom I ask. Their network structure is usually like the core of an urban bus network, and not like that of a modern light rail network, which a casual user can grasp at a glance. The presence of wire makes the system easier to see on the ground, helping improve legibility.

This is especially important in cities without grid networks – precisely the environments in which, on purely technical grounds, IMC is already strongest. In Vancouver, the buses are largely gridded, and so it is generally clear where they go: they run on major streets like Broadway, King Edward, 41st, Arbutus, and MacDonald. But in Providence, it's not always clear, especially in the seams between two networks. Broadway has a few choices of street connections toward Kennedy Plaza – do buses go on Sabin? Or Fountain? Or Empire to Washington? Westminster has no clear connection – do buses turn left or right on Franklin/Dave Gavitt Way? Wire helps make it clear for the confused passenger who doesn't live in town, or who lives on the East Side and isn't familiar with the Federal Hill street network.

This can be better than light rail

RIPTA is interested in making its highest-intensity route, now the R and in the future the N12, into a light rail line. I get where it's coming from, but I have some worries. Providence development is frustratingly almost linear, but not quite; the train station is in a street loop off Main, and on the map above, the N12 veers off the straight path to connect to it. I don't know what the optimal way is of serving such a destination, and it's likely the answer will change over time based on changes in the technology and in other connections.

IMC can be good precisely for this. If the route is partly wired, then small deviations based on changes in the plan are viable, albeit at the cost of legibility.
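As a toy sketch of the IMC constraint described above, roughly 10 km of off-wire range with a minimum of 20-30% of the route under wire, here is how one might check whether a partly wired branch pattern is workable. The segment lengths in the example are made-up placeholders, not RIPTA data:

```python
# Toy feasibility check for in-motion charging (IMC) on a branched bus route.
# A route is an ordered list of (length_km, wired) segments; it is workable if
# no unwired stretch exceeds the off-wire battery range and the wired share of
# the route meets the minimum fraction.

def imc_feasible(segments, battery_range_km=10.0, min_wired_fraction=0.2):
    total = sum(length for length, _ in segments)
    wired = sum(length for length, is_wired in segments if is_wired)
    # Find the longest consecutive unwired stretch.
    longest_gap = gap = 0.0
    for length, is_wired in segments:
        gap = 0.0 if is_wired else gap + length
        longest_gap = max(longest_gap, gap)
    return wired / total >= min_wired_fraction and longest_gap <= battery_range_km

# A wired downtown trunk feeding an unwired branch (placeholder numbers):
route = [(4.0, True), (6.0, False), (2.0, False)]
print(imc_feasible(route))  # True: 33% wired, longest off-wire gap 8 km
```

The point of the exercise is the one made in the text: a short wired trunk shared by many branches satisfies the constraint for all of them at once, which is why branched networks like Providence's get more out of IMC than grids do.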
The same goes for uncertainty over which routes connect to which: the R today interlines the old 99 on North Main to Pawtucket and the old 11 on Broad to South Providence, but the plan is to instead connect South Providence to Downcity via the Jewelry District using the N8, and instead have the N12 primary route continue southwest to Warwick via Elmwood and Reservoir. Such changes require a commitment to mode: swaps are fine as long as both routes use the same mode – if they're both light rail then it is viable, and the same is true if they're both buses, but not if one is rail and the other is bus. IMC downgrades both to a bus, but in a way that permits higher ride quality to some extent and lower emissions at very low cost.
Shop posters, calendars, postcards, tapestries, acrylic and canvas boards, textile panels, and fine art of anime, manga, and original art by Japanese artists.

Tapestries: Love Live! Sunshine!! The School Idol Movie: Over the Rainbow B2-Size Tapestry Chika Takami Ver.

Postcards: Love Live! Sunshine!! Season 2 Uranohoshi Girls' High School Store Birthday Present Set: Yoshiko Tsushima Ver.
Q: How to delete more than one row in a matrix, not just the last row, when using a for loop in R

I tried to go about this myself and looked up online how to do this, but found no direct answer. Basically, I am trying to delete the rows in a matrix that have more than 3 characters. My code is only deleting the last row; rows 16-31 should be deleted. The i gets iterated, but only the last row which satisfies the condition is deleted. However, more rows must be deleted. Thanks for the help in advance!

setwd("~/Desktop/Rpractice")

c <- c("1", "2", "3", "4", "5")
combine <- function(x, y) {combn(y, x, paste, collapse = ",")}
combination_mat <- as.matrix(unlist(lapply(1:length(c), combine, c)))

for (i in length(combination_mat)) {
  if (nchar(combination_mat[i]) > 3) {
    newmat <- print(as.matrix(combination_mat[-i,]))
  }
}

A: You really do not need a loop to remove those rows; e.g., you can look for the rows with more than 3 characters and remove those (please note the drop = FALSE argument, which keeps the tabular format of the data instead of simplifying it to a vector):

> combination_mat[nchar(combination_mat[, 1]) <= 3, , drop = FALSE]
      [,1]
 [1,] "1"
 [2,] "2"
 [3,] "3"
 [4,] "4"
 [5,] "5"
 [6,] "1,2"
 [7,] "1,3"
 [8,] "1,4"
 [9,] "1,5"
[10,] "2,3"
[11,] "2,4"
[12,] "2,5"
[13,] "3,4"
[14,] "3,5"
[15,] "4,5"

As an aside, the reason your loop only ever touches the last row is that for (i in length(combination_mat)) iterates over the single value length(combination_mat); you would need for (i in 1:length(combination_mat)) or, better, seq_along(combination_mat). Even then, removing rows inside the loop shifts the remaining indices, which is one more reason to prefer the vectorized form above.
\section{Introduction} Generating smooth, projective, irreducible curves over finite fields with a given number of points on the curve or on its Jacobian is a hard and interesting problem, with valuable applications and connections to number theory. The case of elliptic curves, for example, has important applications in cryptography, and current solutions rely on computing Hilbert class polynomials associated to imaginary quadratic fields. For curves of genus~2, already additional interesting problems arise when trying to compute the analogous class polynomials, the Igusa class polynomials, since the coefficients are not integral as in the case of genus~1. This leads to the question of understanding and bounding primes of bad reduction for complex multiplication (CM) curves of genus~2, and connections with arithmetic intersection theory (\cite{GorenLauter, LauterViray}). The case of genus~3 is more complicated than the case of genus~2 for several reasons. First, an abelian threefold can be non-simple \emph{without} being isogenous to a product of elliptic curves. Second, it is possible for a sextic CM field to have both primitive \emph{and} non-primitive CM types. Third, the rank of the endomorphism algebra can be larger. Each of these complications requires new ideas to handle. In this paper, we prove the following result which gives a bound on primes of geometric bad reduction for CM curves of genus~3 with primitive CM type. \begin{theorem}\label{thm:main} Let $C/M$ be a (smooth projective geometrically irreducible) curve of genus $3$ over a number field~$M$. Suppose that the Jacobian $\Jacof{C}$ has complex multiplication (CM) by an order $\mathcal{O}$ inside a CM field $K$ of degree $6$ and that the CM type of $C$ is primitive. Let $\mathfrak{p}$ be a prime of $M$ lying over a rational prime $p$ such that $C$ does not have potential good reduction at $\mathfrak{p}$. Then the following upper bound holds on $p$. 
For every $\mu\in \mathcal{O}$ with $\mu^2$ totally real and $K = \mathbb{Q}(\mu)$, we have $p < \frac{1}{8} B^{10}$ where $B =-\frac{1}{2}\mathrm{Tr}_{K/\mathbb{Q}}(\mu^2)$. \end{theorem} As in the case of genus two~\cite{GorenLauter}, in order to prove Theorem~\ref{thm:main}, we use the fact that bad reduction of $C$ gives an embedding of the CM order $\mathcal{O}$ into the endomorphism ring of the reduced Jacobian such that the Rosati involution induces complex conjugation on $\mathcal{O}$ (see Lemma~\ref{lem:dual}). We show that such an embedding cannot exist for sufficiently large primes. The proof of Theorem \ref{thm:main} is given in Section \ref{sec:proofmain}. To deal with the new situation where the reduction is a product of an elliptic curve with an abelian surface with no natural decomposition, we needed to find a suitable and explicit decomposition. Just the existence of a decomposition is not enough, and our first main contribution is to find the `right' decomposition (Lemma~\ref{lem:s}). The second main new idea is using the primitivity of the CM type in the case where there exist non-primitive CM types. For this we use the reduction of the tangent space in Section~\ref{sec:primitive}. Primitivity is crucial for our methods, but we do give the following conjecture in the non-primitive case. \begin{conjecture}\label{conj} There is a constant $e\in\mathbb{R}_{\geq 0}$ such that the following holds. Let $C/M$ be a (smooth, projective, geometrically irreducible) curve of genus $g\leq 3$ over a number field~$M$. Suppose that $C$ has CM by an order $\mathcal{O}$ in a CM field $K$ of degree $2g$. Let $\mathfrak{p}$ be a prime of $M$ lying over a rational prime $p$ such that $C$ does not have potential good reduction at $\mathfrak{p}$. Then the following upper bound holds on $p$. For every $\mu\in \mathcal{O}$ with $\mu^2$ totally real and $K = \mathbb{Q}(\mu)$, we have $p < B^e$ where $B =-\frac{1}{2}\mathrm{Tr}_{K/\mathbb{Q}}(\mu^2)$. 
\end{conjecture} \begin{remark} The case $g=1$ is true even with $e=0$, as CM elliptic curves have potential good reduction everywhere. The case of primitive CM types is Goren-Lauter~\cite{GorenLauter} for $g=2$ and Theorem~\ref{thm:main} for $g=3$. The case of non-primitive CM types is an open problem even for $g=2$ as far as we know. We do have numerical evidence in the case $g=2$. Br\"oker-Lauter-Streng~\cite[Lemma 6.4, Table 2 and Table 1]{Broker-Lauter-Streng} give CM hyperelliptic curves $C_{-8}$, $C_{-3}$, $C_{-15}$, $C_{-20}^{i}$, $C_{-20}^{-i}$, $C_{-6}^{1}$ and $C_{-6}^2$ as well as explicit orders~$\mathcal{O}$, and each time the denominators of the absolute Igusa invariants are rather smooth numbers. For example, we have $(I_4I_6/I_{10})(C_{-7}) = (5^2 \cdot 101 \cdot 1186127) / (2^9 \cdot 7^2)$. A proof in the case where the CM type is non-primitive cannot use the tangent space in the way we use it in our proof. On the other hand, in the case of non-primitive CM types there are more endomorphisms that one could use. This is because (for $g\leq 3$) the endomorphism ring $\mathrm{End}(J_{\overline{M}})$ has rank $2g^2$ over $\mathbb{Z}$, whereas in the case of primitive CM types we have $\mathrm{End}(J_{\overline{M}})\cong \mathcal{O}$ of rank~$2g$. Here, and throughout, $\overline{M}$ denotes an algebraic closure of $M$. \end{remark} The following proposition, which is proven in Section \ref{sec:geom}, turns the bound of Theorem~\ref{thm:main} into an intrinsic bound, depending only on the discriminants of the orders involved. \begin{proposition}\label{prop:geomnumbers} Let $\mathcal{O}\subset K$ be an order in a sextic CM field. \begin{enumerate} \item If $K$ contains no imaginary quadratic subfield, then there exists $\mu$ as in Theorem~\ref{thm:main} satisfying $0 < -\frac{1}{2}\mathrm{Tr}_{K/\mathbb{Q}}(\mu^2) \leq (\frac{6}{\pi})^{2/3}|\Delta({\mathcal O})|^{1/3}$, where $\Delta({\mathcal O})$ is the discriminant of the order ${\mathcal O}$. 
\item If $K$ contains an imaginary quadratic subfield~$K_1$, let $K_+$ be the totally real cubic subfield and let \mbox{$\mathcal{O}_i = K_i\cap \mathcal{O}$} where $i\in\{1,+\}$. Then there exists $\mu$ as in Theorem~\ref{thm:main} with $0 < -\frac{1}{2}\mathrm{Tr}_{K/\mathbb{Q}}(\mu^2) \leq |\Delta({\mathcal O}_1)|(1+2\sqrt{|\Delta({\mathcal O}_+)|})$. \end{enumerate} \end{proposition} Our next result is a consequence of Theorem~\ref{thm:main} in the special case of hyperelliptic curves. A hyperelliptic curve of genus $3$ over a subfield $M$ of $\mathbb{C}$ is a curve with an affine model of the form $C : y^2 = F(x,1)$ such that~$F$ is a separable binary form over $M$ of degree~$8$. A hyperelliptic curve invariant of weight~$k$ for genus~$3$ is a polynomial $I$ over~$\mathbb{Z}$ in the coefficients of~$F$ satisfying $I(F\circ A) = \det(A)^k I(F)$ for all $A\in\mathrm{GL}_2(\mathbb{C})$. For example, the discriminant $\Delta$ of $F$ is an invariant of weight $56$. Shioda~\cite{Shioda} gives a set of invariants that uniquely determines the isomorphism class of $C$ over $\mathbb{C}$. A Picard curve of genus $3$ over a field $M$ of characteristic $0$ is a smooth plane projective curve given by an affine model $C : y^3 = f(x)$ such that $f$ is a monic separable polynomial over $M$ of degree $4$. Such a curve can be written as \begin{equation} \label{eq:picard} y^3 = x^4 + a_{2}x^2 + a_{3} x + a_{4} \end{equation} uniquely up to scalings $(x,y) \mapsto (ux,u^{4/3} y)$, which change $a_l$ into $u^l a_l$. We define the ring of invariants to be the graded ring generated over $\mathbb{Z}_{(3)}$ by the symbols $a_2$, $a_3$ and $a_4$. It contains the discriminant $\Delta$ of the polynomial on the right hand side of~\eqref{eq:picard}, which is an invariant of weight $12$. Note that $\Delta$ is not related to $\Delta(\mathcal{O})$. The following consequence of Theorem~\ref{thm:main} is derived in Section~\ref{sec:invariants}. 
\begin{theorem}\label{thm:invariantdenom} Let $C/M$ be a hyperelliptic curve of genus $3$ over a number field~$M$. Suppose that $C$ has CM by an order $\mathcal{O}$ inside a CM field $K$ of degree $6$ and that the CM type of $C$ is primitive. Let $l\in\mathbb{Z}_{>0}$ and let $j = u/\Delta^l$ be a quotient of invariants of hyperelliptic curves, such that the numerator $u$ has weight $56l$. Let $\mathfrak{p}$ be a prime over a prime number $p$ such that $\mathrm{ord}_{\mathfrak{p}}(j(C))< 0$. Then $p$ satisfies the bound of Theorem~\ref{thm:main}. \end{theorem} \begin{remark}\label{rem:picardcase} We also believe the Picard case of Theorem~\ref{thm:invariantdenom} to be true for quotients of invariants $u/\Delta^l$ where $u$ has weight~$12l$; for details see Section~\ref{sec:invariants}. Alternatively, in the Picard curve case, work in progress of K{\i}l{\i}\c{c}er, Lorenzo Garc\'ia and Streng~\cite{KLS} gives a much stronger version of Theorem~\ref{thm:invariantdenom} that works for the alternative invariants $a_2a_4/a_3^2 $ and $a_2^3/a_3^2 $. \end{remark} In the Picard curve case, we define $j_1 = a_2^{6}/\Delta$, $j_2 = a_2^3a_3^2/\Delta$, $j_3 = a_2^4a_4/\Delta$, $j_4 = a_3^4/\Delta$, $j_5=a_4^3/\Delta$, $j_6 = a_2a_3^2a_4/\Delta$ and $j_7 = a_2^2a_4^2/\Delta$. Over an algebraic closure $\overline{M}$ of $M$, any Picard curve has a model in one of the following forms: \begin{align*} y^3 &= x^4 + Ax^2 + Ax + B, & & A = j_1^{\phantom{1}}j_2^{-1},\ B = j_1^{\phantom{1}}j_2^{-2}j_3^{\phantom{1}}=j_3^{\phantom{1}}j_4^{-1} & & \mbox{if $j_2\not=0$},\\ y^3 &= x^4 + Ax^2 + Bx + B, & & A = j_6^{\phantom{1}}j_5^{-1},\ B = j_4^{\phantom{1}}j_5^{-1} & & \mbox{if $j_4j_5\not=0$},\\ y^3 &= x^4 + x^2 + A, & & A = j_3^{\phantom{1}}j_1^{-1} & & \mbox{if $j_1\not=0$, $j_2=0$},\\ y^3 &= x^4 + x, & & & & \mbox{if $j_1=j_5=0$},\\ y^3 &= x^4 + 1, & & & & \mbox{if $j_1=j_4=0$}. 
\end{align*} We use the same notation $j_l$ also in the hyperelliptic case, but there we take it to mean the following quotients of Shioda invariants appearing in Weng~\cite[(5)]{Weng}: $j_1 = I_2^7/\Delta$, $j_3 = I_2^5I_4/\Delta$, \mbox{$j_5 = I_2^4I_6/\Delta$}, $j_7=I_2^3I_8/\Delta$ and $j_9 = I_2^2 I_{10}/\Delta$. Note that these hyperelliptic curve invariants satisfy the hypothesis of Theorem~\ref{thm:invariantdenom}. Now suppose that $K$ is a sextic CM field containing a primitive $4$th root of unity and consider invariants of hyperelliptic curves. Alternatively, let $K$ be a sextic CM field containing a primitive $3$rd root of unity and consider invariants of Picard curves. Let $j=u/\Delta^l$ and $j'=u'/\Delta^l$ be quotients of invariants of hyperelliptic (respectively Picard) curves, such that the numerators $u$ and $u'$ have weight $56l$ (respectively $12l$). We define the class polynomials $H_{K,j}$ and $\widehat{H}_{K,j,j'}$ by \begin{align*} H_{K,j} &= \prod_{C} (X-j(C)) & & \in \mathbb{C}[X],\\ \widehat{H}_{K,j,j'} &= \sum_{C} j'(C) \prod_{D\not\cong C} (X-j(D)) & & \in \mathbb{C}[X], \end{align*} where the products and sum range over isomorphism classes of curves $C$ and $D$ over $\mathbb{C}$ with CM by $\mathcal{O}_K$ of primitive CM type, which are indeed hyperelliptic (resp.~Picard) by Weng \cite[Theorem 4.5]{Weng} (resp.\ Koike-Weng~\cite[Lemma 1]{KoikeWeng}). The polynomial $\widehat{H}_{K,j,j'}$ is the modified Lagrange interpolation of the roots of $H_{j'}$ introduced in \cite[Section~3]{GHKRW}. These polynomials have rational coefficients as they are fixed by $\mathrm{Aut}(\mathbb{C})$. Moreover, the polynomials $H_{j_l}$ and $\widehat{H}_{j_1,j_l}$, where $l$ ranges over $\{3, 5, 7, 9\}$ in the hyperelliptic case and over $\{2,3\}$ in the Picard case, can be used for the CM method for constructing curves over finite fields. See \cite[Section~3]{GHKRW} as well as \cite{Weng} (resp.~\cite{KoikeWeng}) for how to use these polynomials.
The complex numbers $j(C)$ and $j(D)$ (and hence the polynomials $H_{K, j}$ and $\widehat{H}_{K,j,j'}$) can be approximated numerically using the methods of Weng~\cite{Weng} and Balakrishnan-Ionica-Lauter-Vincent~\cite{BILV} in the hyperelliptic case and the methods of Koike-Weng~\cite{KoikeWeng} and Lario-Somoza~\cite{LarioSomoza} in the Picard case. The (rational) coefficients of the polynomials can then be recognized from such approximations using continued fractions or the LLL algorithm. However, to be absolutely sure of the coefficients, one would need a bound on the denominators. We view the following result as a first step towards obtaining such a bound. The following is now an immediate consequence of Theorem~\ref{thm:invariantdenom}. \begin{theorem}\label{thm:classpoly} Let $K$ be a sextic CM field containing a primitive $4$th root of unity and let $p$ be a prime number that divides the denominator of a class polynomial $H_{j}$ or $\widehat{H}_{j,j'}$ with quotients of invariants $j$ and $j'$ as in Theorem~\ref{thm:invariantdenom}. Then $p$ satisfies the bound of Theorem~\ref{thm:main}.\qed \end{theorem} For the Picard curve case, see Remark~\ref{rem:picardcase}. \subsection{Applications, further work and open problems} \subsubsection*{Bad reduction} We believe that the exponent $10$ in Theorem~\ref{thm:main} is far from optimal. For instance, in \cite{BCLLMNO}, for the special case of reduction to a product of 3 elliptic curves with $K$ not containing any proper CM subfield, one gets an exponent of $6$. In the general case, it should be possible to get smaller exponents using variants of our proof, for example with a different choice of isogeny $s$ in Section~\ref{sec:embprb}, or by considering bounds in Section~\ref{sec:coeffbound} coming not just from the matrix of $\mu$, but also from other elements. 
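To make the shape of the current bound explicit, note that substituting Proposition~\ref{prop:geomnumbers}(1) into Theorem~\ref{thm:main} gives, in the case that $K$ contains no imaginary quadratic subfield, a bound purely in terms of the discriminant; this is a direct substitution with no attempt at optimization:

```latex
% Choose mu as in Proposition 1.3(1), so that
% B <= (6/pi)^{2/3} |Delta(O)|^{1/3}, and substitute into Theorem 1.1:
\[
p \;<\; \frac{1}{8}\, B^{10}
  \;\leq\; \frac{1}{8}\left(\frac{6}{\pi}\right)^{20/3}
  |\Delta(\mathcal{O})|^{10/3}.
\]
```

In particular, any improvement of the exponent $10$ in Theorem~\ref{thm:main} directly improves the exponent of $|\Delta(\mathcal{O})|$ here.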
We also believe that it is now possible to combine our proofs with the techniques of Goren and Lauter~\cite{GorenLauter2} to get not only a bound on the primes in the denominator of Theorems \ref{thm:invariantdenom} and~\ref{thm:classpoly}, but also a bound on the valuations at those primes. Together, these bounds will give a bound on the denominator itself, which is required if one wants to prove that the output of a class-polynomial-computing algorithm is correct. This was done for genus $2$ by Streng~\cite{Streng}. As in the case of genus~$2$, the resulting bounds will be so large that the algorithm is purely theoretical and cannot be run in practice. However, we view our results as a first step towards a denominator formula such as that of Lauter and Viray~\cite{LauterViray}, which is small and explicit enough for yielding proven-correct CM curves, as shown by Bouyer and Streng~\cite{BouyerStreng, StrengImplementation}. \subsubsection*{Hyperelliptic reduction} The reason why Theorems \ref{thm:invariantdenom} and~\ref{thm:classpoly} are only for hyperelliptic (and possibly Picard) curves is that the locus of bad reduction in the compactification of the full moduli space of curves of genus three is of codimension $> 1$, hence is not the vanishing locus of an invariant. The hyperelliptic locus is, however, of codimension~$1$, and there is indeed an invariant among the Dixmier-Ohno invariants whose vanishing locus is the locus of hyperelliptic curves and decomposable Jacobians. Numerical experiments of K{\i}l{\i}{\c{c}}er, Labrande, Lercier, Ritzenthaler, Sijsling and Streng~\cite{KLLRSS} suggest that the analogues of Theorems~\ref{thm:invariantdenom} and~\ref{thm:classpoly} for arbitrary CM curves of genus three are false. More research is needed in that direction. \subsection{Acknowledgements} We thank Irene Bouw, Bas Edixhoven, Everett Howe, Christophe Ritzenthaler and Chia-Fu Yu for useful discussions and for pointing out some of the references. 
We are grateful to the anonymous referees for many helpful suggestions. Part of this work was carried out during visits to the Max Planck Institute for Mathematics (MPIM), the Istanbul Center for Mathematical Sciences (IMBM), the University of California, San Diego (UCSD) and the University of Warwick. We are grateful to these institutions for their hospitality and ideal working conditions. Ozman is supported by Bogazici University Research Fund Grant Number 15B06SUP3 and by the BAGEP award of the Science Academy, 2016. Streng is supported by NWO Vernieuwingsimpuls VENI. \section{Notation and strategy} For the reader's convenience, we begin by defining some well-known concepts that are essential for our approach. \begin{definition}\label{def:hasCM} Let $\mathcal{O}$ be an order in a \emph{CM field} $K$ of degree $2g$ over $\mathbb{Q}$, that is, an imaginary quadratic extension of a totally real number field. We say that a curve $C$ of genus $g$ over a number field $M$ has \emph{complex multiplication} by $\mathcal{O}$ if there exists an embedding $\phi$ of $\mathcal{O}$ into the endomorphism ring of the Jacobian $\Jacof{C}_{\overline{M}}$ of $C$ over the algebraic closure. \end{definition} \begin{definition}\label{def:primitivetype} Let $\Phi$ be a set of $g$ non-conjugate embeddings $K \hookrightarrow \mathbb{C}$, then we say that $\Phi$ is a \emph{complex multiplication type (CM type)} of $K$. We say that a CM type is primitive if its restriction to any strict CM subfield of $K$ is not a CM type. \end{definition} \begin{definition}\label{def:CMtypeof} Given $J$ and $\phi$ as in Definition~\ref{def:hasCM} with $\overline{M}\subset \mathbb{C}$, we obtain a CM type by diagonalizing the action of $K$ via $\phi$ on the tangent space of $\Jacof{C}_{\overline{M}}$ at~$0$, and we call this the \emph{CM type of $C$}. 
\end{definition} Now let $C$ be a curve of genus~$3$ defined over a number field $M$ and such that its Jacobian $J = \Jacof{C}$ has complex multiplication by an order $\mathcal{O}$ of a sextic complex multiplication field~$K$. Let us assume that the CM type is primitive. We fix a totally imaginary generator $\mu\in\mathcal{O}$ of $K$ over~$\mathbb{Q}$. Thus, $\mu^2$ is a totally negative element of $\mathcal{O}$ that generates the totally real subfield $K_+$ of $K$. Let $\mathfrak{p}\mid p$ be a prime such that $C$ does not have potential good reduction at $\mathfrak{p}$. In other words, $\mathfrak{p}$ is a prime of geometric bad reduction for $C$, in the sense that even after extension of the base field, the curve $C$ still has bad reduction at all primes above~$\mathfrak{p}$. As noted in \cite[Section~4.2]{BCLLMNO}, this is equivalent to the stable reduction of $C$ being non-smooth, where this type of reduction is simply called ``bad reduction''. As~$J$ has complex multiplication, it has potential good reduction at every prime by a result of Serre and Tate~\cite{SerreTate}. Without loss of generality of our main results, we extend the field $M$ so that $C$ has a stable model for the reduction at $\mathfrak{p}$ and $J$ has good reduction. Let $\overline{J} = J\ (\mathrm{mod}\ \mathfrak{p})$. By Corollary $4.3$ in \cite{BCLLMNO}, we know that, possibly after extending the base field again, there exists an isomorphism $\overline{J}\cong E\times A$ as principally polarized abelian varieties (p.p.a.v.) over the new base field, where $E$ is an elliptic curve with its natural polarization and $A$ is a principally polarized abelian surface. This includes the case where there is an isomorphism $\overline{J}\cong E_1\times E_2\times E_3$ as p.p.a.v., where $A \cong E_2\times E_3$ is a product of elliptic curves. Let us write $\text{End}(E)=\mathcal{R}$ and $\mathcal{B}=\mathcal{R}\otimes \mathbb{Q}$. 
We will see that there is an isogeny $s:E^2\rightarrow A$ (which is, in fact, already known by \cite[Theorem $4.5$]{BCLLMNO}). Once we fix an isogeny~$s$, there are natural embeddings \[\iota:\,\mathcal{O}\overset{\iota_0}{\hookrightarrow} \text{End}(E\times A)\overset{\iota_1}{\hookrightarrow} \text{End}(E^3)\otimes \mathbb{Q}\cong\Mat{3}(\mathcal{B}).\] Step 1 is to show that for sufficiently large primes~$p$, the entries of $\iota(\mu^2)$ lie in a field~$\Bone\subset\mathcal{B}$ of degree $\leq 2$ over~$\mathbb{Q}$. This is obvious in the case where $E$ is ordinary, and requires work in the supersingular case. As in Goren-Lauter~\cite{GorenLauter}, we prove this by bounding the coefficients of $\iota(\mu)$. The main difficulty here was finding an appropriate isogeny~$s$, as not every isogeny~$s$ allows us to find bounds. Step 2 is to show that in the situation of Step~1, the field $\Bone$ embeds into $K$ and the CM type is induced from $\Bone$, which contradicts the primitivity of the CM type. In order to show this, we use the tangent space of the N\'eron model at the zero section. No analogue of Step 2 was needed in the case of genus~$2$ because a quartic CM field containing an imaginary quadratic subfield has no primitive CM types. The special case of $\overline{J}\cong E_1\times E_2\times E_3$ as p.p.a.v.\ where $K$ does not contain an imaginary quadratic field is the main result of \cite{BCLLMNO}. The following bound will be convenient in the sense that it allows us to formulate Theorem~\ref{thm:main} and Proposition~\ref{prop:field} without the need for case distinctions. \begin{lemma} \label{lem:Bnot1} Let $B = -\frac{1}{2}\mathrm{Tr}_{K/\mathbb{Q}}(\mu^2)$. Then $B$ is an integer and $B\geq 2$. \end{lemma} \begin{proof} Recall that $K_+$ denotes the totally real cubic subfield of $K$. Since $\mu^2\in \mathcal{O}\cap K_+$, we have \mbox{$B = -\mathrm{Tr}_{K_+/\mathbb{Q}}(\mu^2)\in\mathbb{Z}$}. 
Since $K=\mathbb{Q}(\mu)$, the element $\mu^2$ is totally negative and hence $B>0$. Now suppose for contradiction that $B=1$. Let $a\geq b \geq c \geq 0$ be such that $-a, -b, -c$ are the images of $\mu^2$ inside $\mathbb{R}$ under the three embeddings of $K_+$ into $\mathbb{R}$. Then $a + b + c = B = 1$, so each of $a, b, c$ is in the interval $(0,1)$. In particular, we get $\mathrm{Tr}_{K_+/\mathbb{Q}}(\mu^4) = a^2+b^2+c^2 < a + b + c = 1$. As this trace is a non-negative integer, it is zero, hence $a=b=c=0$, contradiction.
\end{proof}
\section{An embedding problem}
\label{sec:embprb}
Throughout Sections \ref{sec:embprb}, \ref{sec:coeffbound} and \ref{sec:primitive}, we fix a prime ${\mathfrak p}\mid p$ that is of good reduction for $J=\Jacof{C}$ and not of potential good reduction for~$C$. In particular, possibly after extending the base field, the reduction satisfies $\overline{J}\cong E\times A$ as principally polarized abelian varieties for a principally polarized abelian surface $A$ and an elliptic curve~$E$. Let $\mathcal{R} = \mathrm{End}(E)$ and $\mathcal{B}=\mathcal{R}\otimes \mathbb{Q}$, which is either a quaternion algebra or a number field of degree $\leq 2$ over $\mathbb{Q}$. We write $K=\mathbb{Q}(\mu)$ where $\mu^2$ is a totally negative element of $\mathcal{O}$ that generates the totally real subfield $K_+$ of $K$.
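To make the quantity $B$ concrete, here is a small numerical sanity check (our own illustrative example, not taken from the text): for the cyclotomic field $K=\mathbb{Q}(\zeta_7)$ with totally imaginary generator $\mu=\zeta_7-\zeta_7^{-1}$, the element $\mu^2$ is totally negative and $B=-\frac{1}{2}\mathrm{Tr}_{K/\mathbb{Q}}(\mu^2)=7\geq 2$, consistent with Lemma~\ref{lem:Bnot1}.

```python
import cmath

# Illustrative sanity check (our own example): for K = Q(zeta_7) with
# totally imaginary generator mu = zeta_7 - zeta_7^{-1}, verify that
# mu^2 is totally negative and compute B = -(1/2) Tr_{K/Q}(mu^2).
zeta = cmath.exp(2j * cmath.pi / 7)

# The six embeddings K -> C send zeta_7 to zeta_7^k for k = 1, ..., 6.
mu_images = [zeta**k - zeta**(-k) for k in range(1, 7)]

# mu^2 is totally negative: each embedding sends it to a negative real.
squares = [m * m for m in mu_images]
assert all(abs(s.imag) < 1e-9 and s.real < 0 for s in squares)

# B is a positive integer with B >= 2, as Lemma Bnot1 predicts.
B = -0.5 * sum(s.real for s in squares)
print(round(B))  # B = 7 for this example
```

Here the trace is computed as the sum of $\mu^2$ over all six complex embeddings, which is how $B$ is defined in the text.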
\newcommand{\homEA}[1]{\fbox{\raisebox{0ex}[3ex][2ex]{$#1$}}}
\newcommand{\homAE}[1]{\framebox[3em][c]{$#1$}}
\newcommand{\homAA}[1]{\framebox[3em][c]{\raisebox{0ex}[3ex][2ex]{$#1$}}}
\newcommand{\EndEA}[4]{{\setlength\arraycolsep{0pt} \left(\begin{array}{cc} #1 & \homAE{#2} \\ \homEA{#3} & \homAA{#4}\end{array}\right)}}
\newcommand{\homEEA}[2]{{\setlength\arraycolsep{0pt} \left(\begin{array}{cc} \homEA{#1} &\homEA{#2}\end{array}\right)}}
\newcommand{\homEEEtoEA}[6]{{\setlength\arraycolsep{0pt} \left(\begin{array}{ccc} #1 & #2 & #3 \\ \homEA{#4} &\homEA{#5} &\homEA{#6} \end{array}\right)}}
Let $\iota_0:\,\mathcal{O}\hookrightarrow \text{End}(E\times A)$ be the injective ring homomorphism coming from reduction of $J$ at ${\mathfrak p}$ and write
\begin{equation}\label{eq:xyzw}
\iota_0(\mu)=:\EndEA{x}{y}{z}{w},
\end{equation}
where we have $x\in \mathcal{R}$, $y\in\text{Hom}(A,E)$, $z\in\text{Hom}(E,A)$ and $w\in \text{End}(A)$; and the sizes of the boxes reflect the dimensions of the (co)domains of the homomorphisms. We define a homomorphism
\begin{align*}
s = \homEEA{z}{\!\! wz \!\!} :E\times E &\longrightarrow A\\
(P,Q)&\longmapsto z(P) + wz(Q).
\end{align*}
We first quickly eliminate the degenerate case where $s$ is not an isogeny.
\begin{lemma}\label{lem:s}
The homomorphism $s$ is an isogeny.
\end{lemma}
\begin{proof}
We will prove that $z$ is not the zero map, and that the image $wz(E)$ of $wz$ is not contained in $z(E)$. It then follows that the image of $s$ has dimension $2$ and hence $s$ is an isogeny. Suppose that $z$ is the zero map. Then \eqref{eq:xyzw} gives that $x\in \mathcal{B}$ is a root of the minimal polynomial of $\mu$, which is irreducible of degree $6$, contradiction. Therefore, $z$ is non-zero and $z(E)\subset A$ is an elliptic curve. Now let $E'\subset A$ be an elliptic curve such that
\begin{align*}
s':E\times E'&\longrightarrow A\\
(Q,R)&\longmapsto z(Q)+R
\end{align*}
is an isogeny.
It follows that we have an isogeny \begin{align*} F' = 1 \times s' = \homEEEtoEA{1}{0}{0}{0}{z}{1} : E\times E\times E' &\longrightarrow E\times A\\ (P,Q,R) &\longmapsto (P, z(Q)+R) \end{align*} and hence a further embedding \begin{align*} \iota_1' : \text{End}(E\times A)&\longrightarrow \text{End}(E\times E\times E')\otimes \mathbb{Q}\\ f&\longmapsto (F')^{-1} f F'. \end{align*} Let $\iota' = \iota_1'\circ\iota_0 : \mathcal{O}\rightarrow \text{End}(E\times E\times E')\otimes \mathbb{Q}$. Next, we compute the matrix $\iota'(\mu)$. For the first column, we get \begin{equation}\label{eq:firstcolumn} (F')^{-1}\EndEA{x}{y}{z}{w} F'\left(\begin{array}{c} 1 \\ 0 \\ 0 \end{array}\right) = \homEEEtoEA{1}{0}{*}{0}{z}{*}^{-1} \left(\begin{array}{c} x \\ \homEA{z}\end{array}\right) = \left(\begin{array}{c} x \\ 1 \\ 0 \end{array}\right). \end{equation} Now suppose that $wz(E)$ is contained in $z(E)$. Then we get an element $z^{-1}wz\in \mathcal{B}$ and hence $$ \iota'(\mu)=\left(\begin{array}{ccc}x & * & * \\ 1 & z^{-1}wz & * \\ 0 & 0 & \delta\end{array}\right)\quad\mbox{for some $\delta\in \mathrm{End}(E')\otimes \mathbb{Q}$}. $$ But then $\delta$ is a root of the minimal polynomial of $\mu$, which is a contradiction, hence $wz(E)$ is not contained in $z(E)$ and the image of $s$ has dimension~$2$. \end{proof} It follows that we have an isogeny \begin{align*} F = 1 \times s = \homEEEtoEA{1}{0}{0}{0}{z}{\!\! wz \!\!} : E^3 &\longrightarrow E\times A\\ (P,Q,R) &\longmapsto (P, s(Q,R)) \end{align*} and hence a further embedding \begin{align*} \iota_1 : \text{End}(E\times A)&\longrightarrow \text{End}(E^3)\otimes \mathbb{Q}\cong\Mat{3}(\mathcal{B})\\ f&\longmapsto F^{-1} f F. \end{align*} Let $\iota = \iota_1\circ\iota_0 : \mathcal{O}\hookrightarrow \Mat{3}(\mathcal{B})$. Let $n$ be a positive integer such that $[n]\cdot \ker(s)=0$ (from Lemma~\ref{lem:pol_matrix} onwards we will use a specific~$n$). 
Then there exists an isogeny $\tilde{s}:\,A\rightarrow E\times E$ such that $s\cdot\tilde{s}=[n]$. \begin{lemma}\label{lem:matrix} We have $$ \iota(\mu)=\left(\begin{array}{ccc}x & a & b \\ 1 & 0 & c/n \\ 0 & 1 & d/n\end{array}\right), $$ where $x,a,b,c,d\in \mathcal{R}$. \end{lemma} \begin{proof} The first column is already computed in \eqref{eq:firstcolumn}, which is also valid with $F$ instead of~$F'$. For the second column, we compute $$ F^{-1}\EndEA{x}{y}{z}{w} F\left(\begin{array}{c} 0 \\ 1 \\ 0 \end{array}\right) = \homEEEtoEA{1}{0}{0}{0}{z}{\!\! wz \!\!}^{-1} \left(\begin{array}{c} * \\ \homEA{\!\! wz \!\!}\end{array}\right) = \left(\begin{array}{c} * \\ 0 \\ 1 \end{array}\right).$$ We have $F^{-1} = 1\times \frac{1}{n}\tilde{s}$, so all entries of $\iota(\mu)$ are in $\mathcal{R}$ except possibly the bottom two rows, which are in $\frac{1}{n}\mathcal{R}$. \end{proof} \section{Bounds on the coefficients}\label{sec:coeffbound} Our goal in this section is to prove the following. \begin{proposition}\label{prop:field} If $p>\frac{1}{8} B^{10}$, then the image $\iota(\mathcal{O})$ is inside the ring of $3\times 3$ matrices over a field $\Bone\subset \mathcal{B}$ of degree $\leq 2$ over $\mathbb{Q}$. \end{proposition} If $\mathcal{B}$ is a field, then we can take $\mathcal{B}_1=\mathcal{B}$. So in the proof of Proposition~\ref{prop:field}, we assume that $E$ is supersingular and $\mathcal{B}$ is a quaternion algebra. Then $\mathcal{B}$ is $B_{p,\infty}$, the quaternion algebra ramified exactly at $p$ and $\infty$. Let $\mathrm{Tr}$ and $\mathrm{N}$ denote the \emph{reduced} trace and norm on $\mathcal{B}$, and let $\cdot^\vee$ denote (quaternion) conjugation, so for all $x\in \mathcal{B}$, we have $\mathrm{N}(x) = xx^\vee = x^\vee x$, $\mathrm{Tr}(x) = x + x^\vee$, and $x^2 - \mathrm{Tr}(x) x + \mathrm{N}(x) = 0$. 
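For readers who want to experiment, the reduced trace and norm identities above can be checked concretely in the Hamilton quaternions $(-1,-1\mid\mathbb{Q})$, which is the algebra $B_{2,\infty}$. The following Python sketch is our own illustration (the tuple encoding of $a+bi+cj+dk$ and the sample element are assumptions of the example, not notation from the text):

```python
# Minimal sketch (our own illustration): reduced trace and norm
# identities for quaternions, checked in the Hamilton quaternions
# (-1, -1 | Q), i.e. B_{2,infinity}.  An element a + b*i + c*j + d*k
# is encoded as the tuple (a, b, c, d).

def qmul(x, y):
    a, b, c, d = x
    e, f, g, h = y
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qconj(x):                     # quaternion conjugation x -> x^vee
    a, b, c, d = x
    return (a, -b, -c, -d)

def qtrace(x):                    # Tr(x) = x + x^vee
    return 2 * x[0]

def qnorm(x):                     # N(x) = x * x^vee
    return sum(t * t for t in x)

x = (3, -1, 4, 2)                 # an arbitrary sample element

# N(x) = x x^vee = x^vee x, a non-negative central scalar
assert qmul(x, qconj(x)) == (qnorm(x), 0, 0, 0) == qmul(qconj(x), x)

# x^2 - Tr(x) x + N(x) = 0
t, nrm = qtrace(x), qnorm(x)
sq = qmul(x, x)
lhs = tuple(sq[i] - t * x[i] + (nrm if i == 0 else 0) for i in range(4))
assert lhs == (0, 0, 0, 0)
print("quaternion identities verified")
```

The definiteness claim is also visible here: the norm is a sum of four squares, hence non-negative and zero only for $x=0$.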
Note that $\mathcal{B}=B_{p,\infty}$ is a quaternion algebra ramified at infinity, hence a definite quaternion algebra, so the norm $\mathrm{N}(x)$ is a non-negative number and equal to zero if and only if $x=0$. The following result shows that quaternion order elements of small norm commute. We will use this to prove Proposition~\ref{prop:field}. \begin{lemma}[Goren and Lauter]\label{lem:quaternionfield} Let $\mathcal{R}$ be an order in the quaternion algebra $B_{p,\infty}$ and $x, y\in\mathcal{R}$. If $\mathrm{N}(x)\mathrm{N}(y) < p/4$, then $x$ and $y$ commute. \end{lemma} \begin{proof} We give the main idea for completeness. For details, see Lemma 2.1.1 and Corollary 2.1.2 of Goren and Lauter~\cite{GorenLauter} and the proof of Lemma~9.5 of Streng~\cite{Streng}. If $x$ and $y$ do not commute, then $1$, $x$, $y$, $xy$ span a $\mathbb{Z}$-lattice $L\subset \mathcal{R}\subset B_{p,\infty}$ of covolume $\leq 4\mathrm{N}(x)\mathrm{N}(y)$, while $\mathcal{R}$ is contained in a maximal order of covolume $p$. This is a contradiction if $\mathrm{N}(x)\mathrm{N}(y) < p/4$. \end{proof} Recall that $\overline{J}\cong E\times A$ as principally polarized abelian varieties, where $A = (A, \lambda_A)$ is a principally polarized abelian surface. In other words, the natural polarization on $\overline{J}$ corresponds to the product polarization $1\times \lambda_A$. \begin{lemma} \label{lem:pol_matrix} The polarization induced by $1\times \lambda_A$ on $E^3$ via the isogeny $F$ is $$ \lambda:=F^{\vee}(1\times\lambda_A)F=\left(\begin{array}{ccc} 1& 0&0\\0&\alpha & \beta \\ 0 & \beta^{\vee} & \gamma\end{array}\right), $$ for some $\alpha,\gamma\in\mathbb{Z}_{>0}$ and $\beta\in\mathcal{R}$ such that $\alpha\gamma-\beta\beta^{\vee}\in\mathbb{Z}_{>0}$. Here, $F^\vee$ denotes the dual isogeny. Let $n=\alpha\gamma-\beta\beta^{\vee}\in\mathbb{Z}_{>0}$. Then we have $GF = [n]$ for some isogeny~$G$, and therefore $[n]\ker(F)=0$. 
\end{lemma}
\begin{proof}
The first column and row of $\lambda$ are easy to compute. The symmetry (i.e., $\alpha,\gamma\in\mathbb{Z}$ and the occurrence of $\beta^\vee$) follows from Mumford \cite[(3) on page 190]{Mumford} (equivalently the first part of Application III on page 208 of loc.~cit.). The positive-definiteness (which implies $\alpha,\gamma,n > 0$) is proved in the last paragraph of Application III on page 210 of loc.~cit. It is now straightforward to compute $GF=[n]$ for
$$ G = \left(\begin{array}{ccc} n& 0&0\\0&\gamma & -\beta \\ 0 & -\beta^{\vee} & \alpha\end{array}\right)F^{\vee}(1\times\lambda_A). $$
It follows that the kernel of $F$ is contained in the kernel of~$[n]$.
\end{proof}
From now on, take $n$ as in Lemma~\ref{lem:pol_matrix}.
\begin{lemma}[{Proposition $4.8$ in \cite{BCLLMNO}}] \label{lem:dual}
For every $\eta\in K$, the complex conjugate $\overline{\eta}\in K$ satisfies
\[\iota(\overline{\eta}) = \lambda^{-1} \iota(\eta)^{\vee}\lambda,\]
where for a matrix $M$, we use $M^{\vee}$ to denote the transpose of $M$ with conjugate entries.
\end{lemma}
\begin{proof}
Complex conjugation is the Rosati involution, so $\iota_0(\overline{\eta}) = (1\times \lambda_A)^{-1}\iota_0(\eta)^{\vee}(1\times \lambda_A)$.
Conjugation with $F^{-1}$ now yields exactly the equality in the lemma: \begin{align*} \iota(\overline{\eta}) &= F^{-1}(1\times \lambda_A)^{-1} \iota_0(\eta)^\vee (1\times \lambda_A)F\\ &= (F^{-1}(1\times \lambda_A)^{-1} F^{-\vee}) (F^{\vee} \iota_0(\eta)^\vee F^{-\vee}) (F^{\vee}(1\times \lambda_A)F) \\ &= (F^{\vee}(1\times \lambda_A)F)^{-1} (F^{-1} \iota_0(\eta) F)^{\vee} (F^{\vee}(1\times \lambda_A)F) \\ &= \lambda^{-1} \iota(\eta)^{\vee}\lambda.\qedhere \end{align*} \end{proof} \noindent For $\eta=\mu$, Lemma~\ref{lem:dual} reads $-\lambda \iota(\mu)=\iota(\mu)^{\vee}\lambda$, that is, $$ \left(\begin{array}{ccc}- x & -a & -b\\ -\alpha & -\beta & -\alpha c/n-\beta d/n\\ -\beta^{\vee}& -\gamma & -\beta^{\vee}c/n-\gamma d/n\end{array}\right)= \left(\begin{array}{ccc} x^{\vee} & \alpha & \beta \\ a^{\vee} &\beta^{\vee}&\gamma\\b^{\vee} & (c^{\vee}/n)\alpha+(d^{\vee}/n)\beta^{\vee}&(c^{\vee}/n)\beta+(d^{\vee}/n)\gamma\end{array}\right). $$ We conclude \begin{align} x^{\vee}&=-x\qquad\mbox{(equivalently $\text{Tr}(x)=0$),}\nonumber\\ a&=-\alpha\qquad\mbox{(and we already knew $\alpha\in\mathbb{Z}_{>0}$),}\nonumber\\ b&=-\beta=\beta^{\vee}\qquad\mbox{(hence $\mathrm{Tr}(\beta)=0$),}\label{eq:identities}\\ \gamma&=-\alpha c/n-\beta d/n\qquad\mbox{(and we already knew $\gamma\in\mathbb{Z}_{>0}$),}\nonumber\\ \mathrm{Tr}(\beta^{\vee}c)+\text{Tr}(\gamma d)&=0.\nonumber \end{align} \begin{lemma}[{Lemma~6.12 in \cite{BCLLMNO}}]\label{lem:trace} For every $\eta\in K$, the trace $\mathrm{Tr}_{K/\mathbb{Q}}(\eta)$ is equal to the sum of the reduced traces of the three diagonal entries of $\iota(\eta)\in \Mat{3}(\mathcal{B})$. \end{lemma} \begin{proof} Choose a prime $l\nmid np$. Then $\mathrm{Tr}_{K/\mathbb{Q}}(\eta)$ equals the trace of $\eta$ when acting on $T_l(J)\otimes \mathbb{Q}$, where $T_l(J)$ is the $l$-adic Tate module of~$J$. This action is preserved by reduction modulo~${\mathfrak p}$. 
Moreover, the isogeny $F$ induces an isomorphism of $l$-adic Tate modules, hence $\mathrm{Tr}_{K/\mathbb{Q}}(\eta)$ equals the trace of $\iota(\eta)$ when acting on $T_l(E\times E\times E)\otimes \mathbb{Q}$. The latter trace is exactly the sum of the traces of the actions of the diagonal entries of $\iota(\eta)$ on $T_l(E)\otimes \mathbb{Q}$, which are the reduced traces.
\end{proof}
\begin{remark}
Lemma~6.12 in \cite{BCLLMNO} follows from a special case of Lemma~\ref{lem:trace} in which $\eta$ is an element of the totally real cubic subfield $K_+$ of $K$ and the diagonal entries of $\iota(\eta)$ are integers.
\end{remark}
Since both $\text{Tr}_{K/\mathbb{Q}}(\mu)$ and $\mathrm{Tr}(x)$ are~$0$, Lemma~\ref{lem:trace} applied to $\mu$ gives
\begin{equation}\label{eq:traced}
\text{Tr}(d)=0.
\end{equation}
Let $B = -\frac{1}{2}\text{Tr}_{K/\mathbb{Q}}(\mu^2)\in\mathbb{Z}_{>0}$. Then Lemmas \ref{lem:trace} and~\ref{lem:matrix} give
\begin{align}
B = & -\frac{1}{2}\left(\text{Tr}(x^2)+2\text{Tr}(a)+2\text{Tr}\Bigl(\frac{c}{n}\Bigr)+\text{Tr}\Bigl(\frac{d^2}{n^2}\Bigr)\right). \\
\intertext{On the other hand, the equality \eqref{eq:traced} implies $d^\vee = -d$ hence we have $\text{Tr}(d^2/n^2) = -2\text{N}(d/n)$ as $n\in\mathbb{Z}_{>0}$. Similarly, by \eqref{eq:identities} we have $\text{Tr}(x^2) = -2\text{N}(x)$. Moreover, the equality $\gamma=-\alpha c/n-\beta d/n$ in \eqref{eq:identities} and the fact that $\gamma$ and $\alpha$ are integers give $\text{Tr}(c/n) = - \text{Tr}(\gamma/\alpha + \beta d/(n\alpha)) = -2\gamma/\alpha-\text{Tr}(\beta d/(n\alpha))$. Therefore, by $a = -\alpha\in\mathbb{Z}$ in \eqref{eq:identities}, we get}
B = & \text{N}(x)+2\alpha+2\frac{\gamma}{\alpha}+\text{Tr}\Bigl(\frac{\beta d}{n\alpha}\Bigr)+\text{N}\Bigl(\frac{d}{n}\Bigr).
\end{align}
If we manage to rewrite this as a sum of terms that are all non-negative, then this bounds the individual terms from above by~$B$.
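The rewriting rests on a completion-of-squares identity: for quaternions $\beta, d$ and positive scalars $\alpha,\gamma$ with $n=\alpha\gamma-\mathrm{N}(\beta)$, one has $2\frac{\gamma}{\alpha}+\mathrm{Tr}\bigl(\frac{\beta d}{n\alpha}\bigr)+\mathrm{N}\bigl(\frac{d}{n}\bigr)=\frac{\gamma}{\alpha}+\frac{n}{\alpha^2}+\mathrm{N}\bigl(\frac{\beta}{\alpha}+\frac{d^{\vee}}{n}\bigr)$. This can be sanity-checked with exact rational arithmetic in the Hamilton quaternions; the following Python sketch is our own check, with arbitrary trace-zero sample values (an assumption of the example):

```python
# Sanity check (our own, with arbitrary sample values) of the
# completion-of-squares identity, in exact rational arithmetic in the
# Hamilton quaternions (-1, -1 | Q):
#   2*gamma/alpha + Tr(beta*d)/(n*alpha) + N(d)/n^2
#     = gamma/alpha + n/alpha^2 + N(beta/alpha + d^vee/n),
# where n = alpha*gamma - N(beta).
from fractions import Fraction as F

def qmul(x, y):
    a, b, c, d = x
    e, f, g, h = y
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qconj(x):
    return (x[0], -x[1], -x[2], -x[3])

def qtrace(x):
    return 2 * x[0]

def qnorm(x):
    return sum(t * t for t in x)

beta = (F(0), F(2), F(-1), F(3))   # trace zero, as beta has in the text
d    = (F(0), F(1), F(5), F(-2))   # trace zero, as d has in the text
alpha, gamma = F(3), F(7)
n = alpha * gamma - qnorm(beta)    # n = alpha*gamma - N(beta)
assert n > 0

lhs = 2 * gamma / alpha + qtrace(qmul(beta, d)) / (n * alpha) + qnorm(d) / n**2
s = tuple(b / alpha + dv / n for b, dv in zip(beta, qconj(d)))
rhs = gamma / alpha + n / alpha**2 + qnorm(s)
assert lhs == rhs
print("completion-of-squares identity verified")
```

The identity is purely formal (it needs only $\mathrm{N}(e+f)=\mathrm{N}(e)+\mathrm{Tr}(ef^{\vee})+\mathrm{N}(f)$ and $n=\alpha\gamma-\mathrm{N}(\beta)$), so the specific sample values do not matter.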
Note that we recognize the final two terms as terms in the expansion
$$ \text{N}\Bigl(\frac{\beta}{\alpha}+\frac{d^{\vee}}{n}\Bigr)=\frac{\text{N}(\beta)}{\alpha^2}+\text{Tr}\Bigl(\frac{\beta d}{\alpha n}\Bigr)+\text{N}\Bigl(\frac{d}{n}\Bigr), $$
so we get
$$ B=\text{N}(x)+2\alpha+2\frac{\gamma}{\alpha}-\frac{\text{N}(\beta)}{\alpha^2}+\text{N}\Bigl(\frac{\beta}{\alpha}+\frac{d^{\vee}}{n}\Bigr). $$
Next, by the definition of $n$ in Lemma~\ref{lem:pol_matrix}, we have $n = \alpha\gamma-\text{N}(\beta)$, so $n/\alpha^2 = \gamma/\alpha - \text{N}(\beta)/\alpha^2$, which again allows us to replace two terms, and get
\begin{equation}\label{eq:trace}
B = \text{N}(x)+2\alpha+\frac{\gamma}{\alpha}+\frac{n}{\alpha^2}+\text{N}\Bigl(\frac{\beta}{\alpha}+\frac{d^{\vee}}{n}\Bigr),
\end{equation}
in which finally all terms are non-negative, as the norm of an element of $B_{p,\infty}$ is non-negative. We immediately get that each of the individual terms is at most $B$. So, for example,
$$ \mathrm{N}(x)\leq B,\quad 2\alpha \leq B,\quad \gamma/\alpha \leq B.$$
Therefore, we also get
\begin{equation} \label{eq:boundbeta}
\text{N}(\beta)/\alpha^2 = \frac{\alpha\gamma-n}{\alpha^2} \leq \gamma/\alpha \leq B.
\end{equation}
In order to bound $\text{N}(d)$, we use the following well-known (in)equalities.
\begin{lemma}[Polarization identity for a quadratic form]
For all $e,f\in \mathcal{B}$, we have
$$\text{N}(e+f)+\text{N}(e-f) = 2(\text{N}(e)+\text{N}(f)).$$
\end{lemma}
\begin{proof}
This follows from writing it out. The cross terms cancel on the left-hand side and do not appear on the right-hand side.
\end{proof}
\begin{corollary}\label{cor:normdiff}
For all $g,f\in \mathcal{B}$, we have
$$\text{N}(g) \leq 2(\text{N}(g+f)+\text{N}(f)).$$
\end{corollary}
\begin{proof}
From the lemma, we have $\text{N}(e-f) \leq 2(\text{N}(e)+\text{N}(f))$, which we apply to $e = g+f$.
\end{proof}
Corollary~\ref{cor:normdiff}, with \eqref{eq:trace} and~\eqref{eq:boundbeta}, now gives
\begin{align*}
\mathrm{N}(d^\vee/n) &\leq 2\mathrm{N}(\beta)/\alpha^2 + 2\mathrm{N}(\beta/\alpha + d^\vee/n)\\
&\leq 2 (\gamma/\alpha + \mathrm{N}(\beta/\alpha+d^\vee/n))\leq 2B.
\end{align*}
As we also have
\begin{equation} \label{eq:n}
n\leq \alpha\gamma\leq \alpha^2 B \leq \frac{1}{4}B^3,
\end{equation}
this gives
$$\text{N}(d^\vee) = n^2 \text{N}(d^\vee/n) \leq \frac{1}{8} B^7.$$
\newtheorem*{propfield}{Proposition~\ref{prop:field}}
Recall that our goal is to prove the following.
\begin{propfield}
If $p> \frac{1}{8} B^{10}$, then the image $\iota(\mathcal{O})$ is inside the ring of $3\times 3$ matrices over a field $\Bone\subset \mathcal{B}$ of degree $\leq 2$ over $\mathbb{Q}$.
\end{propfield}
\begin{proof}
Suppose $p> \frac{1}{8} B^{10}$. As $\mu$ generates~$K$, it suffices to show that the entries $\{x, a, b, c/n, d/n\}$ of $\iota(\mu)$ are in a field~$\mathcal{B}_1$. Recall that \eqref{eq:identities} gives $a=-\alpha$ with $\alpha, \gamma, n\in\mathbb{Z}_{>0}$, as well as $b=-\beta$ and $c = -\frac{n\gamma}{\alpha} - \frac{\beta d}{\alpha}$. In particular, it suffices to prove that the elements of $\{x, \beta, d\}$ lie in a field $\mathcal{B}_1$, for which it suffices to prove that they commute. We have $\text{N}(x)\leq B$, $\text{N}(\beta)\leq \frac{1}{4} B^3$, $\text{N}(d)\leq \frac{1}{8}B^7$ and $B\geq 2$ (Lemma~\ref{lem:Bnot1}), hence the product of any pair of distinct elements of $\{x,\beta,d\}$ has norm less than $p/4$. Therefore, by Lemma \ref{lem:quaternionfield}, every pair of elements commutes.
\end{proof}
If $p > \frac{1}{8} B^{10}$, then $\iota(\mu)$ is a matrix over~$\Bone$. Let $f$ be the minimal polynomial of $\mu$ over~$\mathbb{Q}$, which has degree~$6$. Then $f(\iota(\mu))=0$, hence $f$ is divisible by the (at most cubic) minimal polynomial of $\iota(\mu)$ over the (at most quadratic) field~$\Bone$.
Therefore, the field $K = \mathbb{Q}(\mu)$ contains a subfield isomorphic to $\Bone$ and $\Bone$ is quadratic. We now identify $\Bone$ with this subfield through a choice of embedding. This finishes the proof of Theorem~\ref{thm:main} in the case where $K$ does not contain an imaginary quadratic subfield. \section{If the CM field contains an imaginary quadratic subfield} \label{sec:primitive} In this section, we finish the proof of Theorem~\ref{thm:main}. By the argument at the end of the previous section, we are left with the case where $\iota(\mu)$ has entries in an imaginary quadratic subfield $\Bone$ of~$\mathcal{B}$. We have identified $\Bone$ with the subfield $K_1\subset K$ through a choice of embedding. We fix a prime $p > \frac{1}{8} B^{10}$ where $B$ is as in the previous section. Recall that the curve~$C$ was defined over a number field which was assumed to be large enough such that $J = \Jacof{C}$ has good reduction at every prime above~$p$. We now replace this number field by a finite extension $M$ such that $M$ contains the images of all embeddings $K\hookrightarrow \overline{M}$. Let $\mathfrak{p}\mid p$ be a prime of~$M$. We assume that $C$ does not have potential good reduction at $\mathfrak{p}$. Recall that the CM type is primitive, hence is not induced by a CM type of~$\Bone=K_1\subset K$. This means that the CM type induces two distinct embeddings of $\Bone$ into $M$. This primitivity will play a crucial role in our proof of Theorem~\ref{thm:main}. We will need to be able to distinguish between the two embeddings in characteristic~$p$, for which we will use an element $\sqrt{-\delta}\in\mathcal{O}$ with $\delta\in\mathbb{Z}_{>0}$ and $p\nmid 2\delta$. Such an element automatically exists if $p\nmid 2\Delta(\mathcal{O})$, which is a relatively weak condition to have in a result like Theorem~\ref{thm:main}. However, we do not even need to add such a condition to the theorem because of the following lemma. 
\begin{lemma} \label{lem:distinctmodp}
Let $B = -\frac{1}{2}\mathrm{Tr}_{K/\mathbb{Q}}(\mu^2)$ and suppose that $p>\frac{1}{4} B^{7.5}$. Then there exists $\delta\in\mathbb{Z}_{>0}$ coprime to $p$ such that $\sqrt{-\delta}\in\mathcal{O}\cap \Bone$.
\end{lemma}
\begin{proof}
We will prove the lemma with $\delta=-\Delta(\mathcal{O}\cap \Bone)$. Then $\sqrt{-\delta}\in \mathcal{O}\cap \Bone$ and $\delta\in \mathbb{Z}_{>0}$ since $\Bone$ is imaginary quadratic. We must show that $\delta$ is coprime to $p$. Note that
\[\Delta(\mathcal{O}\cap \Bone)=[\mathcal{O}_{\Bone}:\mathcal{O}\cap \Bone]^2\Delta(\mathcal{O}_{\Bone})\]
so it will suffice to prove that both $[\mathcal{O}_{\Bone}:\mathcal{O}\cap \Bone]$ and $\Delta(\mathcal{O}_{\mathcal{B}_1})$ are coprime to~$p$, which we do by showing that they are smaller than $\frac{1}{4} B^{7.5}$ in absolute value. Let $a\geq b\geq c\geq 0$ be such that the images of $\mu$ for the embeddings $K\rightarrow \mathbb{C}$ are $\{\pm ai, \pm bi,\pm ci\}$, so $B = a^2+b^2+c^2$. We have
\begin{align}
[\mathcal{O}:\mathbb{Z}[\mu]]^2[\mathcal{O}_K:\mathcal{O}]^2|\Delta(\mathcal{O}_{K})| &=|\Delta(\mathbb{Z}[\mu])|\nonumber \\
&= (2a)^2(2b)^2(2c)^2(a-b)^4(a+b)^4(a-c)^4(a+c)^4(b-c)^4(b+c)^4\nonumber \\
&= 2^6 a^2b^2c^2(a^2-b^2)^4(a^2-c^2)^4(b^2-c^2)^4,\nonumber
\intertext{which, by the inequality between the arithmetic and the geometric mean, is}
&\leq 2^6 \left(\frac{a^2+b^2+c^2}{3}\right)^3 \left(\frac{a^2-b^2+a^2-c^2+b^2-c^2}{3}\right)^{12}\nonumber \\
&\leq 2^{6+12} 3^{-(3+12)} B^{15} < 0.019 B^{15}.\label{eq:disc2}
\end{align}
Since $\frac{\mathcal{O}_{\Bone}}{\mathcal{O}\cap \Bone} \hookrightarrow \frac{\mathcal{O}_K}{\mathcal{O}}$, by \eqref{eq:disc2} we get
\[[\mathcal{O}_{\Bone}:\mathcal{O}\cap \Bone]^2\leq [\mathcal{O}_K:\mathcal{O}]^2<0.019B^{15}\]
which gives $[\mathcal{O}_{\Bone}:\mathcal{O}\cap \Bone]<0.14B^{7.5}$, as desired.
Now for $\Delta(\mathcal{O}_{\mathcal{B}_1})$, we use the tower law for discriminants and \eqref{eq:disc2} to get \[|\Delta(\mathcal{O}_{\mathcal{B}_1})|^3\leq |\Delta(\mathcal{O}_{\mathcal{B}_1})^3 N_{\Bone/\mathbb{Q}}(\Delta_{K/\Bone})|=|\Delta(\mathcal{O}_K)|< 0.019B^{15}. \] Hence, $|\Delta(\mathcal{O}_{\mathcal{B}_1})|<0.27B^5<B^{7.5}$ and our proof is complete. \end{proof} \subsection{Some facts about tangent spaces} In order to detect the CM type (and its all-important primitivity), we will use the tangent space to $J=\Jacof{C}$ at the identity. We collect here some definitions and facts about tangent spaces which will be needed for our discussion. We will use the definition of tangent space as given by Demazure in Expos\'e~II of SGA~3~\cite{SGA3ExposeII} in the special case of a scheme over an affine base scheme. This requires the use of the ring of dual numbers. \begin{definition} For any commutative ring $R$, let $R[\epsilon]$ denote the $R$-algebra of dual numbers over $R$. It is free with basis $1, \epsilon$ as an $R$-module and the $R$-algebra structure comes from setting $\epsilon^2=0.$ \end{definition} The natural inclusion $R\hookrightarrow R[\epsilon]$ and the natural map $R[\epsilon]\rightarrow R$ which sends $\epsilon \mapsto 0$ induce respectively the structure morphism $\rho:\Spec(R[\epsilon])\rightarrow \Spec(R)$ and a section $\sigma:\Spec(R)\rightarrow \Spec(R[\epsilon])$ called the zero section. Let $X\rightarrow S=\Spec(R)$ be a morphism of schemes and let $u\in X(S)=\mathrm{Hom}_S(S,X)$. In \cite{SGA3ExposeII}, Demazure defines a commutative $S$-group scheme called the tangent space of $X/S$ at $u$. We will denote the tangent space of $X/S$ at $u$ by $T^u_{X/S}$. For a commutative $R$-algebra $R'$, let $t:\Spec(R')\rightarrow\Spec(R)$ denote the structure morphism. The set $T^u_{X/S}(R')$ is defined to be the collection of $S$-morphisms $\theta:\Spec(R'[\epsilon])\rightarrow X$ making the following diagram commute. 
\[ \xymatrix{\Spec(R'[\epsilon])\ar[r]^{\theta}& X\\ \Spec(R')\ar[r]_t \ar[u]_{\sigma}& \Spec(R)\ar[u]^u} \]
We now gather some general facts about tangent spaces that we will need in our discussion.
\begin{proposition} \label{prop:module}
The set $T^u_{X/S}(R')$ has a canonical $R'$-module structure. The zero element is the map $u\circ t\circ \rho$ where $\rho: \Spec(R'[\epsilon])\to \Spec(R')$ denotes the structure morphism.
\end{proposition}
\begin{proof}
This is a slight generalization of \href{http://stacks.math.columbia.edu/tag/0B2B}{the lemma with tag 0B2B} in the Stacks Project \cite{stacks-project}. The proof is the same; we recall the main ingredients here for the reader's convenience. We have a pushout in the category of schemes
$$ \Spec(R'[\epsilon]) \amalg_{\Spec(R')} \Spec(R'[\epsilon]) = \Spec(R'[\epsilon_1, \epsilon_2]) $$
where $R'[\epsilon_1, \epsilon_2]$ is the $R'$-algebra with basis $1, \epsilon_1, \epsilon_2$ and $\epsilon_1^2 = \epsilon_1\epsilon_2 = \epsilon_2^2 = 0$. Given two $S$-morphisms $\theta_1, \theta_2 : \Spec(R'[\epsilon]) \to X$, we construct an $S$-morphism
\begin{equation} \label{eq: addition}
\theta_1 + \theta_2 : \Spec(R'[\epsilon]) \to \Spec(R'[\epsilon_1, \epsilon_2]) \xrightarrow{\theta_1, \theta_2} X
\end{equation}
where the first arrow is given by $\epsilon_i \mapsto \epsilon$. Now for scalar multiplication, given $\lambda \in R'$ there is a self-map of $\Spec(R'[\epsilon])$ corresponding to the $R'$-algebra endomorphism of $R'[\epsilon]$ which sends $\epsilon$ to $\lambda \epsilon$. Precomposing $\theta : \Spec(R'[\epsilon]) \to X$ with this self-map gives $\lambda \cdot \theta$. The module axioms are verified by exhibiting suitable commutative diagrams of schemes. The statement about the zero element follows immediately from the description of the addition law (\ref{eq: addition}).
\end{proof}
\begin{proposition} \label{prop:link}
Let $v$ be the composite $v: \Spec(R')\xrightarrow{t} S\xrightarrow{u} X$.
Then there is an isomorphism of $R'$-modules \[T^u_{X/S}(R')\cong \mathrm{Hom}_{R'}(v^*(\Omega^1_{X/S}), R'),\] where $\Omega^1_{X/S}$ denotes the sheaf of relative differentials of $X/S$. \end{proposition} \begin{proof} See Remark 3.6.1 and footnote (25) in \cite{SGA3ExposeII}. \end{proof} \begin{proposition} \label{prop:linear} Let $X$ and $Y$ be schemes over $S$ and let $f:X\to Y$ be an $S$-morphism. Then $f$ induces an $S$-morphism $T(f):T^u_{X/S}\to T^{f\circ u}_{Y/S}$, called the derived morphism, with the following properties: \begin{enumerate} \item $T(f\circ g)=T(f)\circ T(g)$; \item $T(f)$ induces an $R'$-module homomorphism $T^u_{X/S}(R')\to T^{f\circ u}_{Y/S}(R').$ \end{enumerate} Furthermore, suppose that $G$ is a group scheme over $S$ with identity section $e$ and $n_G : G \to G$ is the $S$-morphism $g\to g^n$ for $n\in\mathbb{Z}$. Then the derived morphism $T(n_G) : T^e_{G/S}\to T^e_{G/S}$ is multiplication by $n$, meaning it sends $x\in T^e_{G/S}(R')$ to $nx$. \end{proposition} \begin{proof} See \cite{SGA3ExposeII}, Proposition 3.7.bis and Corollaire 3.9.4. If $\theta : \Spec(R'[\epsilon])\to X$ is an element of $T^u_{X/S}(R')$, then $T(f)$ sends $\theta$ to $f\circ \theta$. This clearly preserves the $R'$-module structure described in Proposition \ref{prop:module}. \end{proof} \begin{proposition} \label{prop:products} Let $X$ and $Y$ be schemes over $S$. Then \[T^u_{X/S}\times_S T^w_{Y/S}\cong T^{(u,w)}_{(X\times_S Y)/S}.\] \end{proposition} \begin{proof} See Proposition 3.8 in \cite{SGA3ExposeII}. \end{proof} \subsection{The proof of Theorem~\ref{thm:main}} \label{sec:proofmain} Now we will apply these general facts about tangent spaces to our specific case. We want to relate the tangent space of $J$ at the identity to the tangent space of its reduction modulo $\mathfrak{p}$ at the identity. For this we will use the tangent space at the identity section of a N\'{e}ron model of $J/M$. 
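As a concrete aside on the ring of dual numbers used in the definition of the tangent space above (our own illustration, not part of the argument): since $\epsilon^2=0$, evaluating a polynomial $f$ at $a+\epsilon$ in $R[\epsilon]$ records exactly first-order data, $f(a+\epsilon)=f(a)+f'(a)\epsilon$, which is the sense in which $R[\epsilon]$-points capture tangent vectors. A minimal Python sketch of this computation:

```python
# Aside (our own illustration): arithmetic in the ring of dual numbers
# R[eps] with eps^2 = 0.  Evaluating a polynomial at a + eps yields
# f(a) + f'(a)*eps, i.e. dual numbers record tangent (first-order) data.

class Dual:
    def __init__(self, a, b=0):
        self.a, self.b = a, b                 # represents a + b*eps

    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):                 # eps^2 = 0 kills the b*b term
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

eps = Dual(0, 1)
assert (eps * eps).a == 0 and (eps * eps).b == 0   # eps^2 = 0

def f(x):
    # f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2
    return x * x * x + Dual(2) * x

y = f(Dual(5) + eps)
assert y.a == 135   # f(5)  = 125 + 10
assert y.b == 77    # f'(5) = 75 + 2
print("dual numbers record tangent data")
```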
Let $\mathcal{O}_{{\mathfrak p}}$ be the valuation ring of ${\mathfrak p}$ and let $\mathfrak{K}=\mathcal{O}_M/\mathfrak{p}$ be the residue field. Let $\mathcal{J}/\mathcal{O}_{\mathfrak{p}}$ be a N\'{e}ron model for $J/M$ and let $\overline{J}/\mathfrak{K}$ be the special fibre of $\mathcal{J}$. Let $\tilde{e}:\Spec(\mathcal{O}_{{\mathfrak p}})\rightarrow \mathcal{J}$, $e:\Spec (M)\rightarrow J$ and $e_0: \Spec(\mathfrak{K})\rightarrow \overline{J}$ be the identity sections of $\mathcal{J}$, $J$ and $\overline{J}$ respectively. \begin{lemma} \label{lem:tensors} The $\mathcal{O}_{\mathfrak{p}}$-module $T^{\tilde{e}}_{\mathcal{J}/\mathcal{O}_{\mathfrak{p}}}(\mathcal{O}_{\mathfrak{p}})$ is free of rank~$3$. Furthermore, there are natural isomorphisms \begin{equation} \label{eq:tensorM} T^e_{J/M}(M) \cong T^{\tilde{e}}_{\mathcal{J}/\mathcal{O}_{\mathfrak{p}}}(\mathcal{O}_{\mathfrak{p}})\otimes_{\mathcal{O}_{\mathfrak{p}}}M \end{equation} and \begin{equation} \label{eq:tensork} T^{e_0}_{\overline{J}/\mathfrak{K}}(\mathfrak{K}) \cong T^{\tilde{e}}_{\mathcal{J}/\mathcal{O}_{\mathfrak{p}}}(\mathcal{O}_{\mathfrak{p}})\otimes_{\mathcal{O}_{\mathfrak{p}}}\mathfrak{K} \end{equation} as vector spaces over $M$ and $\mathfrak{K}$ respectively. Moreover, the isomorphisms \eqref{eq:tensorM} and \eqref{eq:tensork} respect the action of $T(f)$ for $f\in\mathrm{End}_M(J)=\mathrm{End}_{\mathcal{O}_\mathfrak{p}}(\mathcal{J})$. \end{lemma} \begin{proof} By \cite[Proposition 6.2.5]{Liu}, $\Omega^1_{\mathcal{J}/\mathcal{O}_{\mathfrak{p}}}$ is free of rank~$3$ in a neighborhood of the image of $e_0$. Note that any such neighborhood contains the image of $\tilde{e}$. Therefore, $\tilde{e}^*(\Omega^1_{\mathcal{J}/\mathcal{O}_{\mathfrak{p}}})$ is a free $\mathcal{O}_{\mathfrak{p}}$-module of rank~$3$. Now Proposition \ref{prop:link} implies that the same is true of $T^{\tilde{e}}_{\mathcal{J}/\mathcal{O}_{\mathfrak{p}}}(\mathcal{O}_{\mathfrak{p}})$. 
Likewise, $T^e_{J/M}(M)$ and $T^{e_0}_{\overline{J}/\mathfrak{K}}(\mathfrak{K})$ are vector spaces of dimension $3$ over $M$ and $\mathfrak{K}$, respectively. We have canonical identifications $T^e_{J/M}(M)=T^{\tilde{e}}_{\mathcal{J}/\mathcal{O}_{\mathfrak{p}}}(M)$ and $T^{e_0}_{\overline{J}/\mathfrak{K}}(\mathfrak{K})=T^{\tilde{e}}_{\mathcal{J}/\mathcal{O}_{\mathfrak{p}}}(\mathfrak{K})$. Let $F\in\{M,\mathfrak{K}\}$ and let $t:\Spec(F[\epsilon])\rightarrow \Spec(\mathcal{O}_\mathfrak{p}[\epsilon])$ be the natural map. Then precomposing an element $\theta\in T^{\tilde{e}}_{\mathcal{J}/\mathcal{O}_{\mathfrak{p}}}(\mathcal{O}_\mathfrak{p})$ with $t$ gives an element of $T^{\tilde{e}}_{\mathcal{J}/\mathcal{O}_{\mathfrak{p}}}(F)$. The $\mathcal{O}_\mathfrak{p}$-bilinear map $T^{\tilde{e}}_{\mathcal{J}/\mathcal{O}_{\mathfrak{p}}}(\mathcal{O}_\mathfrak{p})\times F\to T^{\tilde{e}}_{\mathcal{J}/\mathcal{O}_{\mathfrak{p}}}(F)$ given by $(\theta,\lambda)\mapsto \lambda\cdot (\theta\circ t)$ induces a homomorphism of $F$-vector spaces \[T^{\tilde{e}}_{\mathcal{J}/\mathcal{O}_{\mathfrak{p}}}(\mathcal{O}_\mathfrak{p})\otimes_{\mathcal{O}_\mathfrak{p}} F\to T^{\tilde{e}}_{\mathcal{J}/\mathcal{O}_{\mathfrak{p}}}(F).\] One can check that this map is injective using the description of the zero element given in Proposition~\ref{prop:module}. Surjectivity follows by comparing dimensions. Finally, the action of $T(f)$, as described in Proposition~\ref{prop:linear}, is clearly preserved. \end{proof} The main ingredient for the proof of Theorem~\ref{thm:main} is the following proposition. \begin{proposition} \label{prop:2evals} Let $B$ and $\delta$ be as in Lemma \ref{lem:distinctmodp}, and suppose $p > \frac{1}{8}B^{10}$. 
Then there is an invertible matrix $P\in \Mat{3}(\Bone)$ such that \[P\iota(\sqrt{-\delta}) P^{-1}=\pm\begin{pmatrix} \sqrt{-\delta} & 0 & 0\\ 0 & \sqrt{-\delta} & 0\\ 0 & 0 & -\sqrt{-\delta}\end{pmatrix}.\] \end{proposition} \begin{proof} By Proposition \ref{prop:field}, reduction at a prime above $p>\frac{1}{8} B^{10}$ induces a $\mathbb{Q}$-algebra homomorphism $\iota: K\hookrightarrow \mathrm{End}(E^3)\otimes \mathbb{Q}=\text{Mat}_3(\mathcal{B})$ with image contained in $\text{Mat}_3(\Bone)$, the ring of $3\times 3$ matrices over $\Bone$. Since $\iota(\sqrt{-\delta})^2=-\delta I_3$ and $\sqrt{-\delta}\in \Bone$, we can take a change of basis over $\Bone$ to diagonalise the matrix $\iota(\sqrt{-\delta})$. Moreover, the eigenvalues of $\iota(\sqrt{-\delta})$ are in $\{\pm\sqrt{-\delta}\}$. It suffices to show that $\iota(\sqrt{-\delta})$ has two distinct eigenvalues, i.e. $\iota(\sqrt{-\delta})\neq \pm \sqrt{-\delta}I_3$. For this we will use the primitivity of the CM type. In order to detect the CM type, we will use the tangent space to $J=\Jacof{C}$ at the identity. By the N\'{e}ron mapping property, $\sqrt{-\delta}$ has a unique extension to an $\mathcal{O}_\mathfrak{p}$-endomorphism of the N\'{e}ron model $\mathcal{J}$, which we will denote by $\varphi$. Applying Proposition \ref{prop:linear}, we get an endomorphism $T(\varphi)$ of $T^{\tilde{e}}_{\mathcal{J}/\mathcal{O}_{\mathfrak{p}}}$ which induces an $\mathcal{O}_{\mathfrak{p}}$-linear endomorphism of $T^{\tilde{e}}_{\mathcal{J}/\mathcal{O}_{\mathfrak{p}}}(\mathcal{O}_{\mathfrak{p}})$. By the definition of primitivity of the CM type (Definitions \ref{def:primitivetype} and~\ref{def:CMtypeof}), the action of $T(\varphi)$ on $T^e_{J/M}(M)$ has two distinct eigenvalues, $\sqrt{-\delta}$ and $-\sqrt{-\delta}\in\mathcal{O}_{\mathfrak{p}}$. 
By Lemmas~\ref{lem:Bnot1} and \ref{lem:distinctmodp}, the two eigenvalues $\pm\sqrt{-\delta}$ remain distinct in the residue field $\mathfrak{K}$, which has characteristic $p>\frac{1}{8} B^{10}\geq \frac{1}{4}B^{7.5}>2$. Therefore, applying Lemma \ref{lem:tensors} again, we see that the action of $T(\varphi)$ on $T^{e_0}_{\overline{J}/\mathfrak{K}}(\mathfrak{K})$ has two distinct eigenvalues. Now, let the isogeny $F: E^3\rightarrow \overline{J}$ be as in Sections \ref{sec:embprb} and~\ref{sec:coeffbound}. By Lemma~\ref{lem:pol_matrix}, there is an integer $n>0$ and an isogeny $G: \overline{J}\rightarrow E^3$ such that $GF$ is multiplication by $n$ on~$E^3$. Then $\iota(\sqrt{-\delta})=n^{-1}G\overline{\varphi}F$, where $\overline{\varphi}$ denotes the $\mathfrak{K}$-endomorphism of $\overline{J}$ induced by $\varphi$. Recall from \eqref{eq:n} in Section \ref{sec:coeffbound} that $n\leq \frac{B^3}{4}<\frac{1}{8} B^{10}<p$. Therefore, $n$ is invertible in $\mathfrak{K}$ and Proposition \ref{prop:linear} gives \[T(G\overline{\varphi} F) = T(G)\circ T(\varphi)\circ T(F) = n n^{-1}T(G)\circ T(\varphi)\circ T(F) = n T(F)^{-1} \circ T(\varphi)\circ T(F).\] The right-hand side is $n$ times a conjugate of $T(\varphi)$, whereby its eigenvalues in~$\overline{\mathfrak{K}}$ are $n$ times those of $T(\varphi)$. Therefore, $T(G\overline{\varphi} F)$ has two distinct eigenvalues in~$\mathfrak{K}$ for its action on the tangent space $T^0_{E^3/\mathfrak{K}}(\mathfrak{K})$ of $E^3$ at the identity. By Proposition \ref{prop:products}, we have \[T_{E^3/\mathfrak{K}}^0(\mathfrak{K})\cong T_{E/\mathfrak{K}}^0(\mathfrak{K})\oplus T_{E/\mathfrak{K}}^0(\mathfrak{K})\oplus T_{E/\mathfrak{K}}^0(\mathfrak{K})\] where $T_{E/\mathfrak{K}}^0(\mathfrak{K})$ denotes the tangent space of $E$ at the identity. Now suppose that $n\iota(\sqrt{-\delta})=\pm n\sqrt{-\delta}I_3$. Then $T(G\overline{\varphi} F) = T(n\iota(\sqrt{-\delta})) = \pm n \sqrt{-\delta}I_{3}$ has only one eigenvalue. Contradiction. 
\end{proof} \begin{proof}[Proof of Theorem~\ref{thm:main}.] Suppose $p > \frac{1}{8} B^{10}$. Recall from the end of Section~\ref{sec:coeffbound} that $\iota(\mu)$ has coefficients in a quadratic field $\Bone\ni \sqrt{-\delta}$. Applying Proposition \ref{prop:2evals}, we see that since $\mu$ commutes with $\sqrt{-\delta}$, the matrix $P\iota(\mu)P^{-1}$ is of the form \[\begin{pmatrix} * & * & 0\\ *& * & 0\\ 0 & 0 & *\end{pmatrix}.\] But this means that the bottom right entry of $P\iota(\mu)P^{-1}$ is a root of the (irreducible degree six) minimal polynomial of $\mu$ over $\mathbb{Q}$. This gives a contradiction because the entries of the matrix $P\iota(\mu)P^{-1}$ lie in the quadratic field $\Bone$. This completes the proof of Theorem~\ref{thm:main}. \end{proof} \section{Geometry of numbers}\label{sec:geom} The following is a reformulation and proof of Proposition~\ref{prop:geomnumbers}. \begin{proposition} Let $\mathcal{O}$ be an order in a sextic CM field $K$, and let $K_+$ denote the totally real cubic subfield of $K$. \begin{enumerate} \item If $K$ contains no imaginary quadratic subfield, then there exists \mbox{$\mu\in{\mathcal O}$} such that $K = \mathbb{Q}(\mu)$ and $\mu^2$ is a totally negative element in $K_+$ satisfying $0<-\mathrm{Tr}_{K_+/\mathbb{Q}}(\mu^2)\leq (\frac{6}{\pi})^{2/3}|\Delta({\mathcal O})|^{1/3}$. \item If $K$ contains an imaginary quadratic subfield~$K_1$, we write $\mathcal{O}_i = K_i\cap \mathcal{O}$ for $i\in\{1,+\}$. Then there exists \mbox{$\mu\in {\mathcal O}$} such that $K = \mathbb{Q}(\mu)$ and $\mu^2$ is a totally negative element in $K_+$ satisfying \mbox{$0 < -\mathrm{Tr}_{K_+/\mathbb{Q}}(\mu^2) \leq |\Delta({\mathcal O}_1)|(1+2\sqrt{|\Delta({\mathcal O}_+)|})$}. \end{enumerate} \end{proposition} \begin{proof} Let $\Phi=\{\phi_1,\phi_2,\phi_3\}$ be the set of embeddings of $K$ into $\mathbb{C}$ up to complex conjugation. 
We identify $K\otimes_{\mathbb{Q}}\mathbb{R}$ with $\mathbb{C}^3$ via the $\mathbb{R}$-algebra isomorphism $K\otimes_\mathbb{Q}\mathbb{R}\rightarrow\mathbb{C}^3: x\otimes a \mapsto (a\phi_1(x), a\phi_2(x),a\phi_3(x))$. \noindent \textit{Case 1.} The order ${\mathcal O}\subset K$ is a lattice of co-volume $2^{-3}|\Delta({\mathcal O})|^{1/2}$ in $\mathbb{C}^3$. We choose a symmetric convex body in $\mathbb{C}^3$: $$ {\mathcal C}_R = \{x\in \mathbb{C}^3: |\mathrm{Re}(x_i)|<1\ \text{for all $i$, }\ \sum_{i}\mathrm{Im}(x_i)^2<R^2\}. $$ Next, we claim that if $R=(\frac{3}{4\pi})^{1/3}|\Delta({\mathcal O})|^{1/6}+\epsilon$ for some $\epsilon>0$, then there is a non-zero $\gamma\in{\mathcal O}\cap {\mathcal C}_R$ such that $K = \mathbb{Q}(\gamma)$. Indeed, suppose that $R=(\frac{3}{4\pi})^{1/3}|\Delta({\mathcal O})|^{1/6}+\epsilon$. Then we have \[\mathrm{vol}({\mathcal C}_R) = 2^3(\frac{4}{3}\pi R^3)>2^3|\Delta({\mathcal O})|^{1/2} = 2^6\mathrm{covol}({\mathcal O}).\] By Minkowski's first convex body theorem (see Siegel~\cite[Theorem 10]{Siegel}), there is a non-zero element $\gamma$ in ${\mathcal O}\cap{\mathcal C}_R$. If $\gamma\in K_+$, then we have $|\mathrm{N}_{K_+/\mathbb{Q}}(\gamma)| = \prod_{\phi_i\in\Phi}|\mathrm{Re}(\phi_i(\gamma))|<1$, so we get $\gamma=0$, a contradiction. Hence $\gamma\in{\mathcal O}\cap {\mathcal C}_R$ and $\gamma\notin K_+$. To prove the claim, it now only remains to prove that $\gamma$ generates~$K$. As $K$ has degree $6$, the field generated by $\gamma$ has degree $1$, $2$, $3$ or $6$. Since any subfield of a CM field is either totally real or a CM field, we find that either $\gamma$ is totally real (hence in $K_+$, contradiction), or generates a CM subfield of $K$. As CM fields have even degree and we are in case \textit{1.}, where $K$ has no imaginary quadratic subfield, we find that $\gamma$ generates $K$. This proves the claim. Let $\mu = \gamma-\overline{\gamma}$.
Then $\mu^2$ is a totally negative element in $K_+$, hence $\mathbb{Q}(\mu)=K$. We get $$ -\mathrm{Tr}_{K_+/\mathbb{Q}}(\mu^2)=-\sum_{i}\phi_i(\mu^2) = -\sum_{\phi_i\in\Phi}\phi_i((\gamma-\overline{\gamma})^2)=4\sum_{i}\mathrm{Im}(\phi_i(\gamma))^2< 4R^2. $$ Since $\gamma$ is an algebraic integer in~$K$, we have $\mathrm{Tr}_{K_+/\mathbb{Q}}(\mu^2)\in\mathbb{Z}$. So when we let $\epsilon$ tend to $0$, we get $-\mathrm{Tr}_{K_+/\mathbb{Q}}(\mu^2)\leq 4(\frac{3}{4\pi})^{2/3}|\Delta({\mathcal O})|^{1/3}= (\frac{6}{\pi})^{2/3}|\Delta({\mathcal O})|^{1/3}$, which proves~\textit{1.}. \noindent {\textit{Case 2.}} The order ${\mathcal O}_+\subset K_+$ is a lattice of co-volume $|\Delta({\mathcal O}_+)|^{1/2}$ in $\mathbb{R}^3$. We choose a symmetric convex body in $\mathbb{R}^3$: $$ {\mathcal C}_R = \{x\in \mathbb{R}^3: |x_1| < 1, |x_2| < R, |x_3| < R\}. $$ Next, we claim that if $R=|\Delta({\mathcal O}_+)|^{1/4}+\epsilon$ for some $\epsilon>0$, then there is a non-zero $\gamma\in{\mathcal O}_+\cap {\mathcal C}_R$ such that $\gamma\in K_+\setminus \mathbb{Q}$. Indeed, we then have $\mathrm{vol}({\mathcal C}_R) = 2^3R^2>2^3|\Delta({\mathcal O}_+)|^{1/2} = 2^3\mathrm{covol}({\mathcal O}_+)$. By Minkowski's first convex body theorem (see Siegel~\cite[Theorem 10]{Siegel}), there is a non-zero element $\gamma$ in ${\mathcal O}_+\cap{\mathcal C}_R$. If $\gamma\in \mathbb{Q}$, then $\gamma\in\mathbb{Z}$, but $|\gamma|< 1$, so we get $\gamma=0$, a contradiction. Hence $\gamma\in{\mathcal O}_+\cap {\mathcal C}_R$ and $\gamma\notin \mathbb{Q}$. This proves the claim. Let $\mu = \sqrt{\Delta({\mathcal O}_1)}\gamma$. Then $\mu^2$ is a totally negative element in $K_+$. Since $K_+$ is cubic and $\gamma\notin\mathbb{Q}$, we have $\mathbb{Q}(\gamma^2)=K_+$; as $\mu^2=\Delta({\mathcal O}_1)\gamma^2$ and $\mu\notin K_+$, it follows that $\mathbb{Q}(\mu)=K$. We get $$ -\mathrm{Tr}_{K_+/\mathbb{Q}}(\mu^2)=|\Delta({\mathcal O}_1)| \sum_{i}\phi_i(\gamma^2) \leq |\Delta({\mathcal O}_1)| (1+2R^2). $$ Since $\gamma$ is an algebraic integer in~$K_+$, we have $\mathrm{Tr}_{K_+/\mathbb{Q}}(\mu^2)\in\mathbb{Z}$.
So when we let $\epsilon$ tend to $0$, we get $-\mathrm{Tr}_{K_+/\mathbb{Q}}(\mu^2)\leq |\Delta({\mathcal O}_1)|(1+2|\Delta({\mathcal O}_+)|^{1/2})$. Since $|\Delta({\mathcal O}_+)|$ is a positive integer, the result \textit{2.}~follows. \end{proof} \section{Invariants}\label{sec:invariants} In this section, we prove Theorem \ref{thm:invariantdenom}. Let $j=\frac{u}{\Delta^l}$ be as in that theorem: a quotient of invariants of hyperelliptic curves $y^2=F(x,1)$ of genus~$3$, where~$\Delta$ is the discriminant of~$F$ and $u$ is an invariant of weight $56l$. Let $C$ be such a curve over a number field~$M$. \begin{theorem}\label{thm:stablebad} In the situation above, if $j(C)$ has negative valuation at a prime $\mathfrak{p}$ with $\mathfrak{p}\nmid 6$, then $C$ does not have potential good reduction at~$\mathfrak{p}$. \end{theorem} \begin{proposition}[Example 10.1.26 in \cite{Liu}]\label{prop:hyper} Let $S=\Spec (\mathcal{O}_{\mathfrak{p}})$, where $\mathcal{O}_{\mathfrak{p}}$ is a discrete valuation ring with field of fractions $M$ and residue field $\mathfrak{K}$ with $\text{char}(\mathfrak{K})\neq 2$. Let $C$ be a hyperelliptic curve of genus $g\geq1$ over~$M$ defined by an affine equation $$ y^2=P(x), $$ with $P(x)\in M[x]$ separable. Then $C$ has good reduction if and only if $C$ is isomorphic to a curve given by an equation as above with $P(x)\in\mathcal{O}_{\mathfrak{p}}[x]$ such that the image of $P(x)$ in $\mathfrak{K}[x]$ is separable of degree $2g+1$ or $2g+2$. Furthermore, any such isomorphism is given by a change of variables as in \cite[Corollary 7.4.33]{Liu}.\qed \end{proposition} We believe the Picard curve analogue of Proposition~\ref{prop:hyper} to be true, and the Picard curve analogues of Theorems~\ref{thm:invariantdenom} and \ref{thm:classpoly} would follow from this. Work in progress of Bouw and Wewers gives a result similar to Proposition~\ref{prop:hyper} for Picard curves. 
\begin{proof}[Proof of Theorem~\ref{thm:stablebad}] Assume that $C$ has potential good reduction at $\mathfrak{p}$ with $\mathfrak{p}\nmid 6$. Extend the base field so that $C$ has good reduction, and then take a model $y^2=P(x)\in\mathcal{O}_{\mathfrak{p}}[x]$ such that the image of $P(x)$ in $\mathfrak{K}[x]$ is separable of degree $2g+1$ or $2g+2$, as in Proposition~\ref{prop:hyper}. This changes the coefficients, but not the normalized invariant $j = \frac{u}{\Delta^l}$ by the definition of hyperelliptic curve invariants. Since $P(x)$ has coefficients in $\mathcal{O}_{\mathfrak{p}}$, it follows that $u(P(x))\in\mathcal{O}_{\mathfrak{p}}$. Also, we have $\Delta(P(x))\in\mathcal{O}_{\mathfrak{p}}^*$ by Proposition~\ref{prop:hyper}, hence $j(C)\in\mathcal{O}_{\mathfrak{p}}$. This contradicts the assumption and, therefore, the theorem follows. \end{proof} \noindent\textit{Proof of Theorem~\ref{thm:invariantdenom}.} By Lemma~\ref{lem:Bnot1}, the bound $B$ appearing in Theorem~\ref{thm:main} satisfies $B\geq 2$. Therefore, for $p\leq 3$ we have $p<\frac{1}{8} B^{10}$ and there is nothing to prove. Hence we assume that $p>3$. In the situation of Theorem~\ref{thm:invariantdenom}, we showed in Theorem~\ref{thm:stablebad} that the curve does not have potential good reduction, hence Theorem~\ref{thm:main} applies. \qed \bibliographystyle{plain}
Moulin Rouge! is a film musical released in 2001, the concluding part of Baz Luhrmann's "Red Curtain Trilogy" (Strictly Ballroom, Romeo + Juliet). The story is set in Paris at the turn of the century. Even amid the flood of lights that is Paris, the Moulin Rouge shines out. In this cabaret gathered all those who needed still more luxury, desire, and wildness. Into this world stumbles the young poet Christian. Yet it is not this immense opulence that captivates him, but every man's dream, the district's most sought-after courtesan: Satine. The film presents the impossible, beautiful, and tragic love of these two people. True, we have seen this story in countless forms already, but Baz Luhrmann dresses it in an entirely unique, surprising, and beautiful new guise. The dazzling costumes and the production design earned Academy Awards, and Nicole Kidman was also nominated for Best Actress. Besides Craig Armstrong's original score, the musical numbers rework a great many hit songs by the most famous bands and stars.
Major awards and nominations
BAFTA Awards (2002)
won: Best Supporting Actor (Jim Broadbent)
won: Anthony Asquith Award for Film Music (Craig Armstrong, Marius De Vries)
won: Best Sound (Anna Behlmer, Andy Nelson, Roger Savage)
nominated: Best Visual Effects
nominated: Best Cinematography (Donald McAlpine)
nominated: Best Costume Design (Catherine Martin, Angus Strathie)
nominated: Best Editing (Jill Bilcock)
nominated: Best Film (Martin Brown, Baz Luhrmann, Fred Baron)
nominated: Best Makeup (Maurizio Silvi, Aldo Signoretti)
nominated: Best Production Design (Catherine Martin)
nominated: Best Screenplay (Baz Luhrmann, Craig Pearce)
nominated: Best Direction (Baz Luhrmann)
Golden Globe Awards (2002)
won: Best Motion Picture – Musical or Comedy
won: Best Actress (Nicole Kidman)
won: Best Original Score (Craig Armstrong)
nominated: Best Director (Baz Luhrmann)
nominated: Best Original Song (David Baerwald: Come What May)
nominated: Best Actor (Ewan McGregor)
MTV Movie Awards (2002)
won: Best Female Performance (Nicole Kidman)
won: Best Musical Sequence (Ewan McGregor)
nominated: Best Cameo (Kylie Minogue)
nominated: Best Kiss (Nicole Kidman and Ewan McGregor)
nominated: Best Musical Sequence (Nicole Kidman)
Academy Awards (2002)
won: Best Costume Design (Angus Strathie, Catherine Martin)
won: Best Art Direction (Brigitte Broch, Catherine Martin)
nominated: Best Picture
nominated: Best Actress (Nicole Kidman)
nominated: Best Cinematography (Donald McAlpine)
nominated: Best Editing (Jill Bilcock)
Categories: 2001 films; American romantic films; American musical films; Australian romantic films; Australian musical films; 20th Century Fox films; InterCom films; Films set in Paris
\subsection{\mbox{\textit{N}--body}{} simulations} \label{sec:Methods:Simulations} We calibrate our predictions for the number of substructures in a WDM model using high-resolution DM-only \mbox{\textit{N}--body}{} simulations of cosmological volumes. The Copernicus Complexio~(\mbox{{\sc COCO}}{}) suite consists of two zoom-in simulations: one of {$\Lambdaup$CDM}{} that we refer to as \mbox{{\sc COCO-COLD}}{} \citep{hellwing_copernicus_2016}, and the other of \keV[3.3] thermal relic {WDM}{}, hereafter \mbox{{\sc COCO-WARM}}{} \citep{bose_copernicus_2016}. These two versions differ only in the matter power spectra used to perturb the simulation particles in the initial conditions. Both \mbox{{\sc COCO-COLD}}{} and \mbox{{\sc COCO-WARM}}{} are simulated in periodic cubes of side \cMpc[70.4] using the \Gadget3{} code that was developed for the Aquarius{} Project \citep{springel_aquarius_2008}. The high-resolution regions correspond approximately to spherical volumes of radii \cMpc[{\sim}18] that each contain ${\sim}1.3\times10^{10}$ DM particles of mass, $m_p{=}\cMsun[1.135\times10^5]{}$. Haloes at the edges of these regions can become contaminated with high-mass simulation particles that disrupt their evolution. We identify these contaminated haloes as having a low-resolution DM particle within $3\times\R{200}{}$ of the halo centre at \z[0]. The cleaned catalogues provide large samples of haloes in both cosmological models and both simulations resolve the subhalo mass functions of DM haloes down to masses $\Msun[{\sim}10^7]$. The cosmological parameters assumed for this suite of simulations are derived from the {WMAP}{} seventh-year data release \citep{komatsu_seven-year_2011}: \ensuremath{{\ensuremath{H_0}{}=\kms[70.4]\pMpc{}},\,\allowbreak {\Omegaup_{\rm M}=0.272},\,\allowbreak {\Omegaup_\Lambdaup=0.728},\,\allowbreak {n_s=0.967},\, \sigma_8=0.81}{}. 
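As a rough consistency check (our own back-of-the-envelope arithmetic, not a statement taken from the COCO papers), the quoted particle mass can be combined with the box side and the mean matter density, $\rho_{\rm m}=\Omega_{\rm M}\,\rho_{\rm crit}$, to recover the effective resolution of the parent volume:

```python
import math

# Back-of-the-envelope check of the quoted COCO particle mass. All inputs
# are taken from the text; the derived particle count is approximate
# because m_p is quoted to four significant figures.
G = 4.30091e-9           # Mpc (km/s)^2 / Msun
h = 0.704
H0 = 70.4                # km/s/Mpc
Omega_M = 0.272
box_side = 70.4 / h      # comoving Mpc (the box is 70.4 Mpc/h on a side)
m_p = 1.135e5            # Msun, high-resolution particle mass

rho_crit = 3.0 * H0 ** 2 / (8.0 * math.pi * G)  # Msun / Mpc^3
total_mass = Omega_M * rho_crit * box_side ** 3
n_eff = (total_mass / m_p) ** (1.0 / 3.0)       # particles per dimension
print(f"effective resolution ~ {n_eff:.0f}^3 particles")  # ~6908^3
```

The derived value of roughly $6900^3$ effective particles is consistent with the quoted particle mass to within the rounding of $m_p$.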
In \mbox{\textit{N}--body}{} cosmological simulations the discreteness of the simulation particles can give rise to gravitational instabilities that produce artificial structures. Models such as {WDM}{} that impose a cut-off in the primordial matter power spectrum are especially susceptible to these effects \citep{wang_discreteness_2007,angulo_warm_2013,lovell_properties_2014}. The instabilities are resolution-dependent and lead to the artificial fragmentation of filaments, giving rise to small `spurious' haloes that create an upturn at the low-mass end of the {WDM}{} halo mass function. \Myref{lovell_properties_2014} developed a method to identify and prune these objects from the halo merger trees using their mass and particle content. The onset of numerical gravitational instabilities translates into a resolution-dependent mass threshold. Haloes that do not exceed this during their formation and subsequent evolution are likely to be spurious. This coarse requirement is refined further by a second criterion on the particles that compose the halo when its mass is half that of its maximum value, $M_{\rm max}\, /\, 2$. In the initial conditions of the simulation, the Lagrangian regions formed by the particles in spurious haloes are highly aspherical. Their shapes are parametrized by $\ensuremath{s_\text{half-max}}{}{=}c\, /\, a$, where \emph{a} and \emph{c} are the major and minor axes of the diagonalized moment of inertia tensor of the DM particles in Lagrangian coordinates. These considerations were applied to the \mbox{{\sc COCO-WARM}}{} simulation by \myref{bose_copernicus_2016} who find that almost all spurious haloes can be removed by applying the criteria: $M_{\rm max}{<}\cMsun[3.1\times10^7]$ and $\ensuremath{s_\text{half-max}}{}{<}0.165$. The details of the calculation of these threshold values can be found in \extsec{2.3} of \myref{bose_copernicus_2016}. 
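The two pruning cuts are simple to express in code. The sketch below is illustrative only: the input conventions and the `fails either cut' combination are our assumptions (see \extsec{2.3} of \myref{bose_copernicus_2016} for the exact procedure), and the Lagrangian sphericity is measured here from the eigenvalues of the second-moment tensor of the initial-condition particle positions.

```python
import math

# Illustrative sketch of the spurious-halo pruning described above.
# The "fails either cut" combination and the second-moment shape
# convention are our assumptions, not the verified COCO pipeline.
M_MAX_CUT = 3.1e7       # Msun
S_HALF_MAX_CUT = 0.165

def eigvals_sym3(a):
    """Eigenvalues (ascending) of a symmetric 3x3 matrix (analytic formula)."""
    p1 = a[0][1] ** 2 + a[0][2] ** 2 + a[1][2] ** 2
    if p1 == 0.0:  # matrix already diagonal
        return sorted([a[0][0], a[1][1], a[2][2]])
    q = (a[0][0] + a[1][1] + a[2][2]) / 3.0
    p2 = sum((a[i][i] - q) ** 2 for i in range(3)) + 2.0 * p1
    p = math.sqrt(p2 / 6.0)
    b = [[(a[i][j] - (q if i == j else 0.0)) / p for j in range(3)]
         for i in range(3)]
    det_b = (b[0][0] * (b[1][1] * b[2][2] - b[1][2] * b[2][1])
             - b[0][1] * (b[1][0] * b[2][2] - b[1][2] * b[2][0])
             + b[0][2] * (b[1][0] * b[2][1] - b[1][1] * b[2][0]))
    phi = math.acos(max(-1.0, min(1.0, det_b / 2.0))) / 3.0
    e_max = q + 2.0 * p * math.cos(phi)
    e_min = q + 2.0 * p * math.cos(phi + 2.0 * math.pi / 3.0)
    return sorted([e_min, 3.0 * q - e_max - e_min, e_max])

def sphericity(positions):
    """s = c/a from the second-moment tensor of the Lagrangian positions."""
    n = len(positions)
    com = [sum(pt[k] for pt in positions) / n for k in range(3)]
    m = [[sum((pt[i] - com[i]) * (pt[j] - com[j]) for pt in positions) / n
          for j in range(3)] for i in range(3)]
    lam = eigvals_sym3(m)
    return math.sqrt(lam[0] / lam[2])  # c/a = sqrt(lambda_min / lambda_max)

def is_spurious(m_max, s_half_max):
    """Flag a halo that fails either the mass or the sphericity cut."""
    return m_max < M_MAX_CUT or s_half_max < S_HALF_MAX_CUT

# A Lagrangian patch stretched 5:2:1 along the axes has s = 1/5 exactly.
patch = [(5, 0, 0), (-5, 0, 0), (0, 2, 0), (0, -2, 0), (0, 0, 1), (0, 0, -1)]
print(round(sphericity(patch), 6), is_spurious(5.0e7, sphericity(patch)))
# -> 0.2 False
```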
Applying such simple criteria means that some genuine haloes can be removed while some spurious haloes remain; however, the numbers of each are extremely small and do not affect our results. Therefore, we follow this prescription to `clean' the \mbox{{\sc COCO-WARM}}{} catalogues of spurious haloes for use throughout the rest of this paper. The resolution of a simulation also affects the identification of subhaloes in the inner regions of simulated haloes, e.g. \citep{springel_aquarius_2008,onions_subhaloes_2012}. Subhaloes that fall below the resolution limit at any time are discarded by some substructure finders, and some other subhaloes are disrupted artificially by numerical effects \citep{van_den_bosch_dark_2018,green_tidal_2019,errani_asymptotic_2021,green_tidal_2021}. Consequently, these objects do not appear in the subhalo catalogue, even though they may still exist at the present day. We correct for this by identifying such subhaloes in \mbox{{\sc COCO-COLD}}{} and \mbox{{\sc COCO-WARM}}{} before they are accreted and tracking them to \z[0], and restoring them to the subhalo catalogues that we use to calibrate our methodology. In \appref{app:Convergence_tests}, we discuss in more detail the procedure we use to recover these objects, and the effect that excluding them has on the halo mass function. \subsection{Model-independent radial density profile of the MW satellites} \label{sec:Methods:Model-independent_LF} \begin{figure}% \centering% \includegraphics[width=0.5\columnwidth]{coco_cdm_wdm_density_profile.pdf}% \vspace{-10pt}% \caption[Stacked subhalo radial number density profiles of \mbox{{\sc COCO}}{} haloes]{The radial number density of subhaloes with ${\ensuremath{v_\mathrm{peak}}\geq\kms[20]}$ normalized to the mean density within \R{200}. The solid lines show the profiles obtained by stacking $767$ and $764$ uncontaminated host haloes with masses $\MNFW{}\geq\Msun[10^{11}]$ from \mbox{{\sc COCO-COLD}}{} and \mbox{{\sc COCO-WARM}}{}, respectively. 
The \percent{68} bootstrapped uncertainties in the stacked profiles are approximately the same size as the line thicknesses and are not shown.} \label{fig:Methods:compare_COCO_C_vs_W}% \vspace{-10pt}% \end{figure}% To obtain the best constraints on the {WDM}{} particle mass we need a complete census of the Galactic satellites. The satellite population is dominated by ultra- and hyperfaint galaxies with absolute magnitudes fainter than $\MV{} = -8$ (see e.g. \citep{tollerud_hundreds_2008,hargis_too_2014,newton_total_2018}), which can be detected only in deep surveys. This means that large areas of the sky remain unexplored and that currently we have only a partial census of the MW satellites. However, there are several methods that use the current observations to infer the total satellite count of our Galaxy (see e.g. \citep{koposov_luminosity_2008,tollerud_hundreds_2008}). Here, we use the estimates from \myref{newton_total_2018} that are based on a Bayesian formalism that has been tested robustly using mock observations. These results were obtained by combining the observations of the Sloan Digital Sky Survey~({SDSS}{}) \citep{alam_eleventh_2015} and the Dark Energy Survey~({DES}{}) \citep{bechtol_eight_2015,drlica-wagner_eight_2015}, which together cover nearly half the sky, and estimating the MW satellite luminosity function down to a magnitude, \MV[0]. This roughly corresponds to galaxies with stellar mass higher than $\Msun[10^2]$ \citep{bose_no_2019}. The method of \myref{newton_total_2018}~(code implementing this is available from \citep{newton_mw_2018}) takes two input components. First, it uses the sky coverage of a given survey and the distance from the Sun within which a satellite galaxy of a given magnitude can be detected. This depends on the depth of the survey and the satellite detection algorithm. Secondly, the \myref{newton_total_2018} method requires the radial probability distribution function of satellite galaxies. 
Simulations of DM-only {CDM}{} haloes show that subhaloes selected by \ensuremath{v_\mathrm{peak}}{}, the highest maximum circular velocity achieved in their evolutionary histories, have the same radial number density profile as that of the observed satellites (see \myref{newton_total_2018}, and discussion therein). Furthermore, {CDM}{} simulations (e.g. \citep{springel_aquarius_2008,hellwing_copernicus_2016}) have shown that the radial distribution of satellites is largely independent of their mass as well as of the host mass when expressed in terms of the rescaled distance, $r/R_{200}$, where $r$ and $R_{200}$ denote the radial distance and the host halo radius, respectively. This is studied further in \figref{fig:Methods:compare_COCO_C_vs_W}, where we compare the normalized radial number density profiles of stacked populations of subhaloes in the \mbox{{\sc COCO-WARM}}{} and \mbox{{\sc COCO-COLD}}{} simulations. The fiducial populations were obtained by selecting subhaloes with ${\ensuremath{v_\mathrm{peak}}{}\geq\kms[20]}$ and identifying and including subhaloes that would exist at $z{=}0$ if they had not been prematurely destroyed or missed by substructure finders (for details see \appref{app:Convergence_tests}). We apply this correction after pruning the spurious haloes from the merger trees (see \secref{sec:Methods:Simulations}) to ensure that they are not inadvertently restored. \figref{fig:Methods:compare_COCO_C_vs_W} illustrates that both {CDM}{} and {WDM}{} predict the same radial distribution of satellites, which means that we can use the satellite distribution inferred from {CDM}{} to make predictions for {WDM}{} models. This is beneficial as {CDM}{} simulations sample better the inner radial profile, to which the \myref{newton_total_2018} result is particularly sensitive. 
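Profiles of the kind compared above reduce to a short computation: bin the subhalo radii in units of $R_{200}$ and normalize each shell's number density by the mean number density inside $R_{200}$. The sketch below (our own minimal implementation, run on synthetic input) verifies that a population distributed uniformly in volume yields a flat profile of unity.

```python
import math
import random

def normalized_profile(scaled_radii, n_bins=5):
    """Shell number densities in r/R200, normalized to the mean number
    density inside R200 (radii are assumed pre-scaled, so R200 = 1)."""
    inside = [r for r in scaled_radii if r <= 1.0]
    mean_density = len(inside) / (4.0 / 3.0 * math.pi)
    edges = [i / n_bins for i in range(n_bins + 1)]
    profile = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        count = sum(1 for r in inside if lo < r <= hi)
        shell_volume = 4.0 / 3.0 * math.pi * (hi ** 3 - lo ** 3)
        profile.append(count / shell_volume / mean_density)
    return profile

# Radii drawn uniformly in volume (r = u^(1/3)) must give a flat profile.
random.seed(4)
radii = [random.random() ** (1.0 / 3.0) for _ in range(200_000)]
print([round(x, 2) for x in normalized_profile(radii)])  # all entries near 1.0
```

Applied to real catalogues, the same function would take the stacked, $R_{200}$-scaled subhalo radii in place of the synthetic draw.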
To summarize, in this paper we infer the satellite galaxy luminosity function of the MW within \R{200}{} for assumed host halo masses in the range, $\MNFW{}=\Msun[{\left[0.5,\, 2.0\right]\times10^{12}}]$, using the Bayesian methodology presented in \myref{newton_total_2018}. As we mentioned above, this requires two components: \begin{enumerate} \item a tracer population of DM subhaloes with a radial profile that matches that of the observed satellites; and, \item a set of satellite galaxies detected in surveys for which the completeness is characterized well. \end{enumerate} For the former, we use the same \ensuremath{v_\mathrm{peak}}{}-selected $\left(\ensuremath{v_\mathrm{peak}}{}{\geq}\kms[10]\right)$ fiducial {CDM}{} subhalo populations as used in \myref{newton_total_2018}. These are obtained from five high-resolution {$\Lambdaup$CDM}{} DM-only \mbox{\textit{N}--body}{} simulations of isolated MW-like host haloes from the Aquarius{} suite of simulations \citep{springel_aquarius_2008}. For the latter, we use the observations of nearby dwarf galaxies from the {SDSS}{} and {DES}{} supplied in \extapp{A} of \myref{newton_total_2018} (compiled from \citep{watkins_substructure_2009,mcconnachie_observed_2012,drlica-wagner_eight_2015,kim_heros_2015,koposov_kinematics_2015,jethwa_magellanic_2016,kim_portrait_2016,walker_magellan/m2fs_2016,carlin_deep_2017,li_farthest_2017}). Later work to infer the luminosity function using more recent observational data and a better characterization of the {DES}{} completeness function is in good agreement with the \myref{newton_total_2018} results \citep{nadler_modeling_2019,drlica-wagner_milky_2020}. \subsection{Estimating the amount of halo substructure} \label{sec:Methods:EPS} Estimates of the average number of subhaloes in MW-like DM haloes can be obtained using the Extended Press-Schechter~(EPS) formalism \citep{press_formation_1974,bond_excursion_1991,bower_evolution_1991,lacey_merger_1993,parkinson_generating_2008}. 
In this approach, the linear matter density field is filtered with a window function to identify regions that are sufficiently dense to collapse to form virialized DM haloes. In {CDM}{} models the filter employed takes the form of a top-hat in real space. However, applying this to models such as {WDM}{} in which power is suppressed at small scales leads to an over-prediction of the number of low-mass haloes \citep{benson_dark_2013}. This occurs because the variance of the smoothed density field on small scales becomes independent of the shape of the linear matter power spectrum if the latter decreases faster than $k^{-3}$. Consequently, the halo mass function continues to increase at small masses rather than turning over [\citealp{lovell_satellite_2016}, \citealp{leo_new_2018} \extsec{3.1}], making the top-hat filter an inappropriate choice. Using a sharp \emph{k}-space filter seemed to address this by accounting for the shape of damped power spectra at all radii \citep{benson_dark_2013,schneider_halo_2013}; however, subsequent work by \myref{leo_new_2018} demonstrates that this over-suppresses the production of small haloes. They find that using a smoothed version of the sharp \emph{k}-space filter produces halo mass functions in best agreement with \mbox{\textit{N}--body}{} simulations. Throughout this paper, we use the \myref{leo_new_2018} smooth \emph{k}-space filter for the {WDM}{} models that we consider. To obtain our estimates of the number of substructures, \Nsub{}, within \R{200}{} of MW-like haloes we follow the approach described by \myref{giocoli_analytical_2008} that was subsequently modified in \extsec{4.4} of \myref{schneider_structure_2015} for use with sharp \emph{k}-space filters. Using the \myref{leo_new_2018} filter, a conditional halo mass function, $N_{\rm SK}$, is generated from the primordial linear matter power spectrum. 
\Myref{bode_halo_2001} showed that {WDM}{} power spectra, $P_{\rm {WDM}{}}\!\left(k\right),$ are related to the {CDM}{} power spectrum, $P_{\rm {CDM}{}}\!\left(k\right),$ by $P_{\rm {WDM}{}}\!\left(k\right)=T^2\!\left(k\right) P_{\rm {CDM}{}}\!\left(k\right),$ where $T\!\left(k\right)$ is the transfer function given by \begin{equation} \label{eq:WDM_constraint_method:Methods:EPS:T_k} T\!\left(k\right) = \left[1 + \left(\alpha k\right)^{2\nu}\right]^\frac{-5}{\nu}. \end{equation} Here, $\nu=1.12$ and $\alpha$ is described by \myref{viel_constraining_2005} as being a function of the {WDM}{} particle mass, \ensuremath{m_{\rm th}}{}, given by \begin{equation} \label{eq:WDM_constraint_method:Methods:EPS:alpha} \alpha = 0.049\, \left[\frac{\ensuremath{m_{\rm th}}{}}{\keV{}}\right]^{-1.11}\, \left[\frac{\upOmega_{\rm {WDM}{}}}{0.25}\right]^{0.11}\, \left[\frac{h}{0.7}\right]^{1.22} \cMpc{}. \end{equation} \Myref{schneider_structure_2015} showed that integrating the conditional halo mass function over the redshift-dependent spherical collapse threshold of a given progenitor, $\delta_c\!\left(z\right)$, gives the subhalo mass function \begin{equation} \fdv{\Nsub{}}{\ln M} = \frac{1}{N_{\rm norm}} \int_{\delta_c\!\left(0\right)}^\infty \fdv{N_{\rm SK}}{\ln M}\, \dv{\delta_c}\,, \end{equation} where $M$ is the filter mass and $N_{\rm norm}$ is a normalization constant. The latter term, which is a free parameter, corrects the total count for progenitor subhaloes that exist at multiple redshifts which are counted more than once. Using the \myref{leo_new_2018} filter introduces two other free parameters, $\hat{\beta}$ and $\hat{c}$, that control the `smoothness' and the mass-radius relationship of the filter function. We calibrate the free parameters of the EPS formalism by comparing its predictions of DM substructure with the fiducial subhalo populations of \mbox{{\sc COCO}}{} haloes in the mass bin\linebreak ${\MNFW{} = \left[0.95,\, 1.10\right]\Msun[\times10^{12}]}$. 
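These two relations are straightforward to evaluate. The sketch below implements $T\!\left(k\right)$ and $\alpha\!\left(\ensuremath{m_{\rm th}}{}\right)$ exactly as written above and locates the `half-mode' scale at which the power is suppressed by a factor of two; the default cosmological parameter values are illustrative:

```python
import numpy as np

def alpha_wdm(m_th_kev, omega_wdm=0.25, h=0.7):
    """Break scale alpha in comoving Mpc as a function of the thermal
    relic mass (Viel et al. 2005 fitting formula quoted in the text)."""
    return (0.049 * m_th_kev**-1.11
            * (omega_wdm / 0.25)**0.11
            * (h / 0.7)**1.22)

def transfer(k, m_th_kev, nu=1.12):
    """T(k) relating the WDM and CDM linear power spectra:
    P_WDM(k) = T(k)**2 * P_CDM(k)."""
    return (1.0 + (alpha_wdm(m_th_kev) * k)**(2.0 * nu))**(-5.0 / nu)

k = np.logspace(0, 2, 201)        # wavenumber in comoving Mpc^-1
for m_th in (1.0, 2.0, 4.0):      # thermal relic mass in keV
    # half-mode scale: where the power is suppressed by a factor of two
    k_half = k[np.argmin(np.abs(transfer(k, m_th)**2 - 0.5))]
    print(f"m_th = {m_th} keV  ->  k_1/2 ~ {k_half:.1f} / Mpc")
```

Heavier thermal relics have smaller $\alpha$, so the suppression of power moves to larger $k$ (smaller scales), which is why increasing \ensuremath{m_{\rm th}}{} brings the model closer to {CDM}{}.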
Specifically, we determine the EPS free parameters by applying the following two criteria: \begin{enumerate} \item the EPS estimate of the mean number of {CDM}{} subhaloes with mass $M \geq \Msun[10^9]$ must equal the mean number of objects with $\ensuremath{M_\mathrm{peak}}{}\geq\Msun[10^9]$ in \mbox{{\sc COCO-COLD}}{} haloes; and, \item the EPS prediction of the mean number of {WDM}{} subhaloes with ${M \geq \Msun[10^6]{}}$ must equal the mean number of objects with $\ensuremath{M_\mathrm{peak}}{}\geq\Msun[10^6]$ in \mbox{{\sc COCO-WARM}}{} haloes (i.e. all subhaloes). \end{enumerate} Here, \ensuremath{M_\mathrm{peak}}{} is determined using the {\sc subfind}{} definition of halo mass \citep{springel_populating_2001,dolag_substructures_2009} and represents the highest mass achieved by the subhaloes at any time during their evolutionary histories. Typically, haloes achieve \ensuremath{M_\mathrm{peak}}{} just before infall into a more massive halo. In the second calibration criterion, we compare the mass functions at \Msun[10^6] as this is below the turnover in the {WDM}{} power spectrum used in \mbox{{\sc COCO-WARM}}{}. We obtain excellent agreement between the mean EPS estimates and the \mbox{{\sc COCO}}{} simulation results by setting ${N_{\rm norm}=2.59},\, {\hat{\beta}=4.6},$ and ${\hat{c} = 3.9}$. This is shown in \figref{fig:Methods:calibrate_EPS}, which is discussed below. \begin{figure}% \centering% \includegraphics[width=0.5\columnwidth]{Thermal_calibration.pdf}% \vspace{-10pt}% \caption{The total number of DM subhaloes within \R{200}{} as a function of DM halo mass, \MNFW{}. The dashed line shows the mean number of subhaloes predicted by the EPS formalism and the dark shaded region indicates the associated \percent{68} Poisson scatter. The light shaded region gives the \percent{68} scatter modelled using the negative binomial distribution given by \eqnref{eq:WDM_constraint_method:Methods:EPS:Neg_binom}. 
Triangular symbols represent individual haloes from the \mbox{{\sc COCO-WARM}}{} simulations and circular symbols represent the mean number of subhaloes in each mass bin. The width of each halo mass bin is indicated by a horizontal dashed error bar and the vertical error bar displays the corresponding \percent{68} scatter. In both cases, unfilled symbols represent objects from a subhalo catalogue where the `prematurely destroyed' subhaloes have not been recovered, and filled symbols indicate the same haloes using the subhalo catalogue after restoration of the `prematurely destroyed' subhaloes.
}%
\label{fig:Methods:calibrate_EPS}%
\vspace{-10pt}%
\end{figure}%
The EPS formalism predicts only the mean number of subhaloes in DM haloes of a given mass, and not the host-to-host scatter in the subhalo count. As we will discuss later, including this scatter is very important to obtain unbiased results and thus needs to be accounted for. We do this using the results of cosmological \mbox{\textit{N}--body}{} simulations, which have shown that the scatter in the subhalo mass function is modelled well by a negative binomial distribution \citep{boylan-kolchin_theres_2010,cautun_subhalo_2014}. This takes the form
\begin{equation}
\label{eq:WDM_constraint_method:Methods:EPS:Neg_binom}
{\rm P}\left(N \right|\left. r,\, p \right) = \frac{\upGamma\!\left(N+r\right)} {\upGamma\!\left(r\right)\upGamma\!\left(N+1\right)}\, p^r\! \left(1 - p\right)^N\,,
\end{equation}
where \emph{N} is the number of subhaloes and $\upGamma$ is the Gamma function, which satisfies $\upGamma\!\left(x\right){=}\left(x-1\right)!$ for positive integer $x$. Here, ${p=\langle N \rangle\, /\, \upsigma^2}$, where $\langle N\rangle$ and $\upsigma^2$ are, respectively, the mean and the variance of the distribution.
This scatter in the subhalo count is described best as the convolution of a Poisson distribution with a second distribution that captures the additional intrinsic variability of the subhalo count within haloes of fixed mass, such that ${\upsigma^2=\upsigma^2_{\rm Poisson}+\upsigma^2_{\rm I}}$. The parameter \emph{r} then sets the relative contribution of these two terms: requiring the distribution to have mean $\langle N\rangle$ and variance $\upsigma^2$ fixes ${r=\langle N\rangle^2\, /\, \upsigma_{\rm I}^2}$, so that the negative binomial reduces to a Poisson distribution in the limit of vanishing intrinsic scatter. We find that the scatter in the subhalo count of haloes in the \mbox{{\sc COCO}}{} suite is modelled well by $\upsigma_{\rm I}{=}0.12\langle N\rangle$, as depicted in \figref{fig:Methods:calibrate_EPS}. We use this approach to characterize the scatter associated with the EPS predictions throughout the remainder of this paper. In \figref{fig:Methods:calibrate_EPS}, we compare the EPS predictions for haloes in the mass range ${\left[0.5,\,2.0\right]\times10^{12}\Msun{}}$ to the number of subhaloes in individual \mbox{{\sc COCO}}{} haloes of the same mass. We obtain excellent agreement with the \mbox{\textit{N}--body}{} results across the entire halo mass range of interest for this study. In particular, our approach reproduces very well both the mean number of subhaloes and its halo-to-halo scatter, which are represented by the grey shaded region and the vertical error bars, respectively.
\subsection{Calculating model acceptance probability}
\label{sec:Methods:Calculate_acceptance_probability}
We rule out sections of the viable thermal relic {WDM}{} parameter space by calculating the fraction, \ensuremath{f_{\rm v}}{}, of {WDM}{} systems that have at least as many subhaloes as the total number of MW satellites. We denote with $p^{\rm EPS}$ the probability density function of the number of DM subhaloes predicted by the EPS formalism.
Then, the fraction of haloes with $\Nsat[\rm MW]$ or more subhaloes is given by \begin{equation} \label{eq:WDM_constraint_method:Methods:EPS:F_acc_DMO} \ensuremath{f_{\rm v}}{}\!\left(\Nsub{} \geq \Nsat[\rm MW] \right) = \int_{\Nsat[\rm MW]}^{\infty} \dv{\Nsub{}}\, p^{\rm EPS}\!\left(\Nsub{}\right)\,. \end{equation} However, as we discussed in \secref{sec:Methods:Model-independent_LF}, the inferred total number of MW satellite galaxies is affected by uncertainties. We can account for these by marginalizing over the distribution of MW satellite counts, $p^{\rm MW}\!\left(\Nsat[\rm MW]\right)$. Combining everything, we find that the fraction of WDM haloes with at least as many subhaloes as the MW satellite count is given by \begin{equation} \label{eq:WDM_constraint_method:Methods:EPS:F_acc} \ensuremath{f_{\rm v}}{} = \int_0^\infty d\Nsat[\rm MW] \left[p^{\rm MW}\!\left(\Nsat[\rm MW]\right) \;\; \int_{\Nsat[\rm MW]}^{\infty} \dv{\Nsub{}}\, p^{\rm EPS}\!\left(\Nsub{}\right)\right]\,. \end{equation} While not explicitly stated, both the number of MW satellites and the number of subhaloes (e.g. see \figref{fig:Methods:calibrate_EPS}) depend on the assumed MW halo mass \citep{wang_missing_2012}, which is still uncertain at the \percent{20} level (e.g. \citep{wang_mass_2020}). This means that the fraction of valid {WDM}{} haloes depends strongly on the assumed mass of the Galactic halo. Note that the inferred total number of MW satellites depends weakly on the MW halo mass when calculated within a fixed physical distance, e.g. within 300\kpc{} from the Galactic Centre (see \extfig{10} in \myref{newton_total_2018}); however, here we calculate the expected number of satellites within \R{200}{} for each MW halo mass. 
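In practice, this double integral is evaluated easily by Monte Carlo sampling. The sketch below draws subhalo counts from the negative binomial scatter model described above (with $\upsigma_{\rm I}=0.12\langle N\rangle$) and satellite counts from a stand-in distribution for $p^{\rm MW}$; the mean subhalo counts and the parameters of the satellite count distribution are illustrative numbers only, not our measured values:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_nsub(mean, size, s_i=0.12):
    """Draw subhalo counts from the negative binomial model:
    variance = mean + (s_i * mean)**2, with p = mean / variance."""
    var = mean + (s_i * mean)**2
    p = mean / var
    r = 1.0 / s_i**2            # equivalently mean**2 / (s_i * mean)**2
    return rng.negative_binomial(r, p, size=size)

def viable_fraction(nsub_mean, nsat_samples):
    """Fraction of haloes with at least as many subhaloes as the
    (sampled) inferred total MW satellite count."""
    nsub = sample_nsub(nsub_mean, size=nsat_samples.size)
    return float(np.mean(nsub >= nsat_samples))

# Stand-in for p_MW(N_sat): a clipped Gaussian with illustrative parameters.
nsat = np.clip(rng.normal(120.0, 20.0, size=200_000), 0.0, None)

for nsub_mean in (80.0, 120.0, 200.0):   # hypothetical EPS mean counts
    print(nsub_mean, viable_fraction(nsub_mean, nsat))
```

A model whose marginalized fraction satisfies $\ensuremath{f_{\rm v}}{}\leq0.05$ would be rejected with \percent{95} confidence, exactly as in the rejection criterion used below.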
\begin{figure}% \centering% \includegraphics[width=0.5\columnwidth]{accepted_fraction_distributions.pdf}% \vspace{-10pt}% \caption{The fraction, \ensuremath{f_{\rm v}}{}, of {WDM}{} systems with at least as many DM subhaloes, \Nsub{}, as the inferred total number of MW satellites, \Nsat{}, for a DM halo with ${\MNFW{}=\Msun[1\times10^{12}]}$. Thermal relic masses for which $\ensuremath{f_{\rm v}}{}\leq0.05$ are ruled out with \percent{95} confidence. Earlier works that do not account for the uncertainty in \Nsat{} or the scatter in \Nsub{} at fixed halo mass (thin lines) artificially exclude too many thermal relic particle mass values. In this work (thick line) we include both sources of uncertainty in our calculation. The horizontal dotted line indicates the \percent{5} rejection threshold that we use to rule out parts of the {WDM}{} parameter space. }% \label{fig:Methods:accepted_fractions}% \vspace{-10pt}% \end{figure}% This approach to calculating the fraction of viable {WDM}{} systems for the first time incorporates the scatter in \Nsub{} at fixed halo mass \emph{and} the uncertainty in the inferred total MW satellite population. This is important, as excluding one, or both, of these sources of uncertainty produces constraints on \ensuremath{m_{\rm th}}{} that are too strict. We demonstrate this in \figref{fig:Methods:accepted_fractions} where, for each {WDM}{} particle mass, we plot the fraction of haloes with mass $\MNFW[{\Msun[10^{12}]}]$ that contain enough DM substructure to host the inferred population of MW satellite galaxies. We derive our constraints on \ensuremath{m_{\rm th}}{} from the intersection of these cumulative distributions with the \percent{5} rejection threshold indicated by the horizontal dotted line. 
In this example, neglecting both sources of uncertainty excludes thermal relic DM with particle masses \MConstraint{2.4}, which is \percent{{\sim}15} more restrictive than our reported value of $\ensuremath{m_{\rm th}}{}{\lesssim}\keV[2.1]$ (thickest solid line). Some previous analyses (e.g. \citep{polisensky_constraints_2011,lovell_properties_2014}) account for some of the uncertainty by modelling the scatter in the number of DM subhaloes at fixed halo mass. This weakens the constraint; however, the results are still artificially \percent{{\sim}5} more stringent than they should be with our more complete treatment of the uncertainties. In addition to these complications, earlier works also suffer from incompleteness in the \z[0] subhalo catalogues due to numerical resolution effects. This contributes to a much more significant overestimation of the constraints and we discuss this in detail in the next section. \subsection{Thermal relic particle mass constraints} \label{sec:Constraints_on_mth:Constraint} \begin{figure}% \centering% \includegraphics[width=0.5\columnwidth]{EPS_constraint.pdf}% \vspace{-10pt}% \caption{Constraints on the particle mass, \ensuremath{m_{\rm th}}{}, of the thermal relic {WDM}{}. These depend on the assumed mass of the MW halo, which is shown on the vertical axis. We exclude with \percent{95} confidence parameter combinations in the shaded region. The dotted line indicates the extent of this exclusion region if we do not include `prematurely destroyed' subhaloes when calibrating the EPS formalism with the \mbox{{\sc COCO}}{} simulations~(see \secref{sec:Methods:EPS} for details). The constraints obtained by previous works, which do not consider some of the highest MW halo masses displayed here, are indicated by the hatched regions. These rule out too much of the parameter space as they do not account for some sources of uncertainty~(see \secref{sec:Methods:Calculate_acceptance_probability} for details). 
The two dashed horizontal lines show the \percent{68} confidence range on the mass of the MW halo from \myref{callingham_mass_2019}. }% \label{fig:Results:Thermal_exclusion_region}% \vspace{-10pt}% \end{figure}% We calculate the model acceptance distributions of DM haloes in the mass range\linebreak $\MNFW[{\left[0.5,2.0\right]\times\Msun[10^{12}]}]$ for several thermal relic {WDM}{} models. We rule out with \percent{95} confidence all combinations of \MNFW{} and \ensuremath{m_{\rm th}}{} with ${\ensuremath{f_{\rm v}}{} \leq 0.05}$. Problems arising from resolution effects persist even when using high-resolution simulations, and these effects are amplified as the resolution decreases. In addition to incorporating the scatter in \Nsub{} and the uncertainty in \Nsat{}, we account for resolution effects in the \mbox{\textit{N}--body}{} simulations with which we calibrate the EPS formalism by including subhaloes that have been lost below the resolution limit at higher redshifts or destroyed artificially by tracking the most bound particle of these objects to \z[0] (for details see \appref{app:Convergence_tests}). The results that we obtain using this approach are displayed in \figref{fig:Results:Thermal_exclusion_region}. The shaded region represents the parameter combinations that we rule out with \percent{95}~confidence. Independently of MW halo mass, we find that all thermal relic models with particle mass \MthTopConstraint{} are inconsistent with observations of the MW satellite population. The exact constraints vary with the MW halo mass, such that for lower halo masses we exclude heavier DM particle masses. Recent studies, especially using \emph{Gaia} mission data, have provided more precise measurements of the MW halo mass (for a recent review, see \citep{wang_mass_2020}). We can take advantage of these results to marginalise over the uncertainties in the MW halo mass. 
For this, we use the \myref{callingham_mass_2019} estimate of the MW mass, which we illustrate in \figref{fig:Results:Thermal_exclusion_region} with two horizontal dashed lines indicating their \percent{68} confidence interval. This estimate is in good agreement with other MW mass measurements, such as estimates based on the rotation curve or on stellar halo dynamics \citep{cautun_milky_2020,wang_mass_2020}. Marginalising over the MW halo mass, we rule out all models with \marginalisedDMConstraint{}. These constraints do not depend on uncertain galaxy formation physics and therefore they are the most robust constraints to be placed on the thermal relic particle mass to date. A more realistic treatment of galaxy formation processes --- the effect of which would be to render a large number of low-mass subhaloes invisible --- would allow us to rule out more of this parameter space as fewer {WDM}{} models would produce a sufficient number of satellites to be consistent with the inferred total population. We consider this possibility in more detail in \secref{sec:Galaxy_formation:Modelling_galform}. In \figref{fig:Results:Thermal_exclusion_region}, we include for comparison the constraints obtained by \myrefs{lovell_properties_2014}{polisensky_constraints_2011} who use similar analysis techniques. These constraints suffer from resolution effects that suppress the identification of some substructures that survive to the present day. The dotted line demarcates the exclusion region that we would obtain in our analysis if we \emph{did not} account for these prematurely destroyed subhaloes. Such issues are not revealed by numerical convergence tests that are typically used to assess the reliability of particular simulations. For example, even the `level~$2$' simulations of Aquarius{} haloes, which are some of the highest resolution DM-only haloes available, are not fully converged. We explore this in more detail in \appref{app:Convergence_tests}. 
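Schematically, the marginalisation over the MW halo mass proceeds as in the sketch below. Both the functional form of $\ensuremath{f_{\rm v}}{}$ and the Gaussian mass prior are hypothetical stand-ins chosen only to illustrate the weighting; neither reproduces our measured acceptance fractions:

```python
import numpy as np

m_halo = np.linspace(0.5, 2.0, 301)            # MW halo mass / 1e12 Msun

# Illustrative Gaussian mass prior (a stand-in for a measured posterior).
prior = np.exp(-0.5 * ((m_halo - 1.17) / 0.2)**2)
prior /= prior.sum()                            # normalize on the grid

def f_v(m_th, m_halo_12):
    """Hypothetical viable fraction: rises with m_th, falls with halo mass."""
    return 1.0 / (1.0 + np.exp(-(m_th - 3.0 / m_halo_12) / 0.2))

def marginalised_fv(m_th):
    """Weight the viable fraction at each halo mass by the mass prior."""
    return float(np.sum(f_v(m_th, m_halo) * prior))

def excluded(m_th, threshold=0.05):
    """95 per cent confidence rejection of a thermal relic mass."""
    return marginalised_fv(m_th) <= threshold

print([(m, round(marginalised_fv(m), 3)) for m in (1.0, 2.0, 3.0, 4.0)])
```

Because lighter relics fail to produce enough substructure at every plausible halo mass, the marginalized fraction falls below the rejection threshold there, while heavier relics survive.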
\subsection{Modelling galaxy formation}
\label{sec:Galaxy_formation:Modelling_galform}
In the preceding sections we described an approach that gives a highly robust, albeit conservative, lower limit on the allowed mass of the {WDM}{} thermal relic particle. This ignores the effects of galaxy formation processes on the satellite complement of the MW. These mechanisms play an important role in the evolution of the satellite galaxy luminosity function but are still not fully understood. Semi-analytic models of galaxy formation enable the fast and efficient exploration of the parameter space of such processes and thus help us to understand how they affect the {WDM}{} constraints. {\sc galform}{} \citep{cole_recipe_1994,cole_hierarchical_2000} is one of the most advanced semi-analytic models currently available and is tuned to reproduce a selection of properties of the local galaxy population. A complete summary of the observational constraints used to calibrate the {\sc galform}{} model parameters is provided in \extsec{4.2} of \myref{lacey_unified_2016}; hereafter \citetalias{lacey_unified_2016}{}. Of particular interest to our study is the reionization of the Universe, which is the main process that affects the evolution of the faint end of the galaxy luminosity function. The UV radiation that permeates the Universe (and that is responsible for reionization) heats the intergalactic medium and prevents it from cooling into low-mass haloes, impeding the replenishment of the cold gas reservoir from which stars would form. In {\sc galform}{}, the effect of reionization on haloes is modelled using two parameters: a circular velocity cooling threshold, \vcut{}, and the redshift of reionization, \zreion{}. The intergalactic medium is taken to be fully ionized at redshift \z[\zreion{}], whereafter the cooling of gas into haloes with circular velocities, ${v_{\rm vir} < \vcut{}}$, is prevented.
This simple scheme has been verified against more sophisticated calculations of reionization, with which it has been shown to produce a good agreement \citep{benson_effects_2002,font_population_2011}. Recent studies by e.g.~\myref{bose_imprint_2018} have characterized the sensitivity of the satellite galaxy luminosity function to changes in these parameters: a later epoch of reionization allows more faint satellites to form, and a smaller circular velocity cooling threshold permits those faint satellites to become brighter. We use {\sc galform}{} to explore the effect of different parametrizations of reionization on the number of substructures containing a luminous component around the MW. Several previous works that have adopted a similar approach \citep{kennedy_constraining_2014,jethwa_upper_2017} used the \citetalias{lacey_unified_2016}{} model, which has \zreion[10] and \vcut[30]; however, this combination of parameters is now disfavoured by more recent theoretical calculations and the analysis of recent observational data, e.g. \citep{mason_universe_2018,planck_collaboration_planck_2020}. Additionally, others have noted that using \zreion[10] is not self-consistent and that a modified \citetalias{lacey_unified_2016}{} model with \zreion[6] is a more appropriate choice \citep{bose_imprint_2018}. In light of these theoretical and observational developments, for this study we consider parametrizations of reionization in the ranges $6 \leq \zreion{} \leq 8$ and $\kms[25] \leq \vcut{} \leq \kms[35]$ (see \citep{okamoto_mass_2008,font_population_2011,robertson_cosmic_2015,banados_800-million-solar-mass_2018,davies_quantitative_2018,mason_universe_2018,planck_collaboration_planck_2020}). 
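This two-parameter prescription reduces to a simple logical gate. The function below is a sketch of that logic only, not the {\sc galform}{} implementation:

```python
def cooling_allowed(z, v_vir_kms, z_reion=7.0, v_cut_kms=30.0):
    """Gas can cool onto a halo unless the IGM is already fully ionized
    (z < z_reion) AND the halo sits below the circular-velocity threshold."""
    return z >= z_reion or v_vir_kms >= v_cut_kms

# Raising v_cut shuts off cooling in more of the low-mass halo population
# after reionization (here a toy set of halo circular velocities at z = 2):
velocities = (10.0, 20.0, 26.0, 28.0, 32.0, 34.0, 40.0)
for v_cut in (25.0, 30.0, 35.0):
    n = sum(cooling_allowed(2.0, v, v_cut_kms=v_cut) for v in velocities)
    print(v_cut, n)   # -> 5, 3, 1 haloes can keep cooling
```

The defaults correspond to the fiducial parametrization adopted later (\zreion[7], \vcut[30]); varying them within the quoted ranges changes only which haloes pass the gate.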
\subsection{Constraints using GALFORM models} \label{sec:Thermal_relic_constraints:Galform_constraints} \begin{figure}% \centering% \includegraphics[width=0.5\columnwidth]{fiducial_galform_constraints.pdf}% \vspace{-10pt}% \caption{Constraints on \ensuremath{m_{\rm th}}{} obtained assuming our fiducial model of reionization with \zreion[7] and \vcut[30] within the {\sc galform}{} galaxy formation model (thick solid line). Parameter combinations to the left of and beneath this envelope are ruled out with \percent{95} confidence. The constraints obtained by previous works that adopted similar approaches are displayed by the hatched regions \citep{kennedy_constraining_2014,jethwa_upper_2017,nadler_milky_2020}. Arrows indicate the \keV[2]~\citep{safarzadeh_limit_2018}, \keV[2.96]~\citep{baur_lyman-alpha_2016}, \keV[3.3]~\citep{viel_warm_2013}, \keV[3.5]~\citep{irsic_new_2017}, and \keV[3.8]~\citep{hsueh_sharp_2020} envelopes of the most robust constraints on the thermal relic particle mass obtained from modelling of the \Lyman{\upalpha} forest. The shaded region shows the \percent{68} confidence interval on the mass of the MW halo from \myref{callingham_mass_2019}.}% \label{fig:Results:Fiducial_Galform_constraint}% \vspace{-10pt}% \end{figure}% Our exploration of different prescriptions for reionization assumes the \citetalias{lacey_unified_2016}{} {\sc galform}{} model as a reasonable description of various feedback and evolutionary processes in galaxy formation. We vary the reionization parameters in the ranges described in \secref{sec:Galaxy_formation:Modelling_galform} and apply {\sc galform}{} to Monte Carlo merger trees calibrated as closely as possible to the \mbox{{\sc COCO}}{} suite. The Monte Carlo algorithm used in {\sc galform}{} cannot be calibrated to match exactly the \mbox{\textit{N}--body}{} results as it lacks sufficient free parameters to match both the high- and low-mass ends of galaxy formation. 
Where a discrepancy exists between the Monte Carlo and \mbox{\textit{N}--body}{} luminosity functions, we remap the \MV{} values of Monte Carlo satellite galaxies to new values such that the resulting luminosity function is consistent with the \mbox{\textit{N}--body}{} results. Using these, we obtain predictions for the dwarf galaxy luminosity function for $500$ realizations of each MW halo mass, allowing us to compute the model acceptance distributions in the same manner as before (see \secref{sec:Methods:Calculate_acceptance_probability}). Details of the merger tree algorithm and the functions to remap the Monte Carlo satellite galaxy $V-$band magnitudes are provided in \appref{app:PCH_algorithm_adjustment}. \begin{figure}% \includegraphics[width=\textwidth]{fixed_vcut_zcut_horizontal.pdf}% \vspace{-10pt}% \caption{Constraints on \ensuremath{m_{\rm th}}{} obtained assuming different parametrizations of reionization in the {\sc galform}{} galaxy formation model. Combinations of \MNFW{} and \ensuremath{m_{\rm th}}{} to the left of and beneath the envelopes are ruled out with \percent{95} confidence. In both panels, our fiducial choice is indicated by the thick solid line; the shaded region represents the \percent{68} confidence interval on the mass of the MW halo from \myref{callingham_mass_2019}. \emph{Left panel:} here, the cooling threshold is fixed at \vcut[30] and the dotted, solid, and dashed lines represent constraints obtained assuming $\zreion[6],\, 7,$ and $8$, respectively. High values of \zreion{} produce more stringent constraints on the thermal relic mass at fixed MW halo mass. \emph{Right panel:} here, reionization is assumed to have ceased by \zreion[7], and the dotted, solid, and dashed lines represent the constraints obtained assuming cooling thresholds of $\vcut[25,\, 30,\, {\rm and}\, 35],$ respectively. 
Higher cooling thresholds produce more stringent constraints on the thermal relic mass.}%
\label{fig:Results:Galform_fixed_vcut_zcut}%
\vspace{-10pt}%
\end{figure}%
In \figref{fig:Results:Fiducial_Galform_constraint}, we plot our constraints on thermal relic {WDM}{} models assuming a fiducial model of reionization with \zreion[7] and \vcut[30]. This is a viable parametrization that is consistent with observations and resides in the centre of the parameter ranges that we explore. In this model, we rule out \emph{all} thermal relic {WDM}{} particle masses with \MthGFTopConstraint{} independently of the MW halo mass. When marginalising over the uncertainties in the estimate of the MW halo mass from \myref{callingham_mass_2019}, our constraints strengthen and we exclude with \percent{95} confidence all models with \marginalisedGalformConstraint{}. Our fiducial constraints are considerably stronger than our model-independent result and, in the MW halo mass regimes they consider, more stringent than those obtained by refs. \citep{kennedy_constraining_2014,jethwa_upper_2017}, who also model the effects of galaxy formation processes. More recently, \myref{nadler_milky_2020} carried out a similar analysis and obtained tighter constraints on the {WDM}{} particle mass than we find in this work. We discuss the reasons behind this and its implications in \secref{sec:Discussion}. In \figref{fig:Results:Fiducial_Galform_constraint}, we have also included for comparison the most conservative constraints derived from the \Lyman{\upalpha} forest by refs. \citep{viel_warm_2013,baur_lyman-alpha_2016,irsic_new_2017,safarzadeh_limit_2018,hsueh_sharp_2020}, which our results complement. In \figref{fig:Results:Galform_fixed_vcut_zcut}, we explore the effect on the constraints of varying \vcut{} or \zreion{} while holding the other parameter constant. The left panel shows the effect of varying the redshift at which reionization concludes while fixing \vcut[30].
An epoch of reionization that finishes later, characterized by a lower value of \zreion{}, allows more faint galaxies to form in low-mass DM haloes, which weakens the constraints that can be placed on thermal relic {WDM}{} models. The right panel shows the effect of curtailing further star formation in low-mass haloes after reionization finishes at \zreion[7]. As the \vcut{} cooling threshold increases, a larger fraction of the low-mass galaxy population is prevented from accreting new cold gas from the intergalactic medium after the end of reionization. Consequently, the reservoir of cold gas available for further star formation in these galaxies depletes over time, limiting how bright these objects become by \z[0]. When the cooling threshold is large, fewer faint galaxies evolve to become brighter than \MV[0] and populate the MW satellite galaxy luminosity function, leading to stronger constraints on the thermal relic mass. For completeness, in \appref{app:Galform_model_results} we provide the constraints obtained for the three values of \vcut{} assuming two scenarios with $\zreion[{6\, {\rm and}\, 8}]$, respectively. \section{Resolution effects in numerical convergence studies} \label{app:Convergence_tests} Numerical simulations are a useful tool to study the physical behaviour of cosmological models in the non-linear regime, where analytical approaches are unable to account fully for the complexity at these scales. While the dynamic range of such simulations is vast, spanning many orders of magnitude, \mbox{\textit{N}--body}{} simulations are limited by the resolution at which their smallest objects can be self-consistently modelled. It is important to understand whether the phenomena that are observed in the simulations occur for physical reasons, or whether they arise \emph{because} of this limitation. 
The traditional approach to identify the onset of resolution effects has been to conduct convergence studies, e.g.~\citep{efstathiou_numerical_1985,power_inner_2003}. These entail re-running the same simulation at different resolution levels and comparing the results: those that are unaffected by an increase in the resolution are deemed to be converged. A number of studies using several different \mbox{\textit{N}--body}{} simulations support this conclusion and suggest that the subhalo present-day mass function of DM haloes is converged down to approximately $100$ simulation particles per object, e.g.~\citep{springel_aquarius_2008,onions_subhaloes_2012,griffen_caterpillar_2016}. Some of these low-mass subhaloes could be disrupted by numerical effects from the limited resolution of the simulation \citep{van_den_bosch_dark_2018,green_tidal_2019,errani_asymptotic_2021,green_tidal_2021}. \Myref{onions_subhaloes_2012} also show that configuration space structure finders are ineffective at identifying all substructure near the centre of simulated haloes. This resolution-dependent deficiency of the halo finding algorithms implies that some substructures may be missed. These effects complicate attempts to understand and characterize the convergence of the subhalo \emph{peak} mass function, which is of interest for this study as peak mass correlates more strongly with the formation of a luminous component than the present-day halo mass. It also affects directly the calibration of the EPS formalism that we use to estimate the amount of substructure in MW-mass haloes. In the peak mass function, resolution limitations can also affect the high-mass end as even haloes with large peak mass can be excluded from the \z[0] halo catalogue if they fall below the resolution limit. This could occur after many orbits of the host during which the subhalo experiences continuous tidal stripping of mass. 
It is important to correct for missing and `prematurely disrupted' subhaloes as these can bias our results: as we discuss in the main text, under-predicting the true subhalo count produces overly stringent constraints on the {WDM}{} particle mass. We are also careful to distinguish these from the spurious haloes found in \mbox{\textit{N}--body}{} {WDM}{} simulations, which are produced by artificial fragmentation of filaments due to numerical effects and should be removed from the halo catalogues. The `prematurely destroyed' subhaloes may be recovered relatively easily by tracing their constituent particles through the simulations and identifying whether they survive to the present day. Details may be found in \extapp{B} of \myref{newton_total_2018}. Briefly, we use the \myref{simha_modelling_2017} merging scheme implemented in {\sc galform}{} to carry out this procedure. This tracks the most bound particle of objects that fall below the resolution limit from the last epoch at which they were associated with a resolved subhalo. From this, a population of substructures is recovered that contains the `prematurely destroyed' subhaloes and other objects that are disrupted by physical processes. We remove the latter from the recovered population if they satisfy one of the following criteria: \begin{enumerate} \item A time has elapsed after the subhalo fell below the resolution limit, which is equal to or greater than the dynamical friction timescale. \item At any time, the subhalo passes within the halo tidal disruption radius. \end{enumerate} In both cases, the effects of tidal stripping and of interactions between orbiting subhaloes are ignored. The size of this correction to the \mbox{{\sc COCO}}{} suite is not easy to ascertain as \mbox{{\sc COCO}}{} does not have counterpart simulations with different resolution levels with which to conduct a similar convergence study. 
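The two removal criteria amount to a simple filter over the tracked population, as in the sketch below; the attribute names, timescales, and the tidal radius value are hypothetical placeholders for quantities measured along each subhalo's orbit:

```python
from dataclasses import dataclass

@dataclass
class TrackedSubhalo:
    """Quantities tracked after a subhalo drops below the resolution limit
    (field names are illustrative)."""
    t_unresolved: float      # Gyr since the subhalo was last resolved
    t_dyn_friction: float    # Gyr; dynamical-friction (merger) timescale
    r_min: float             # kpc; closest approach to the host centre

def survives_to_z0(sub, r_tidal_kpc):
    # (i) reject if the dynamical-friction timescale has elapsed;
    # (ii) reject if it ever passed within the tidal disruption radius.
    return sub.t_unresolved < sub.t_dyn_friction and sub.r_min > r_tidal_kpc

tracked = [
    TrackedSubhalo(1.0, 5.0, 40.0),   # recovered
    TrackedSubhalo(6.0, 5.0, 40.0),   # merged via dynamical friction
    TrackedSubhalo(1.0, 5.0, 5.0),    # tidally disrupted
]
recovered = [s for s in tracked if survives_to_z0(s, r_tidal_kpc=10.0)]
print(len(recovered))   # -> 1
```

Only objects failing both removal criteria are added back to the \z[0] catalogue as `prematurely destroyed' subhaloes; the rest are treated as physically disrupted.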
Instead, we use the Aquarius{} suite \citep{springel_aquarius_2008}, the constituent simulations of which span a range of resolution levels that encompass that of \mbox{{\sc COCO}}{}, to estimate the size of the effect of excluding the prematurely destroyed subhaloes. \begin{figure}% \centering% \includegraphics[width=0.5\columnwidth]{aquarius_convergence_test.pdf}% \vspace{-10pt}% \caption{Cumulative subhalo peak mass functions of the Aquarius{} A halo simulated at different levels of resolution (coloured lines) and stacked \mbox{{\sc COCO-COLD}}{} haloes (grey lines) with masses $\MNFW{}\geq\Msun[1.5\times10^{12}]$. The dashed lines show the original, uncorrected number counts prior to recovering the `prematurely destroyed' subhalo population. The solid lines show the number counts after adding this population to the original one. The resolution level of the \mbox{{\sc COCO}}{} suite lies between Aquarius{} Level~3 and Level~4. }% \label{fig:Appendix:Aq_convergence_test}% \vspace{-20pt}% \end{figure}% In \figref{fig:Appendix:Aq_convergence_test}, we compare the subhalo peak mass functions of the Aquarius{} A halo simulated at four different resolution levels: 2, 3, 4, and 5. Aq~Level~5 is simulated coarsely, with a DM particle mass, $m_{\rm p}=\Msun[3.14\times10^6]$. The simulation resolution improves with decreasing level number, such that Aq~Level~2 is simulated with a DM particle mass, $m_{\rm p}=\Msun[1.37\times10^4]$ (i.e. a factor of ${\sim}200$ times better mass resolution). The figure shows the subhalo count before and after recovering the population of missing and prematurely destroyed subhaloes. At high halo mass, the original and `corrected' curves are consistent with the highest resolution simulation. As the resolution degrades, the lower-resolution simulations peel away from the Level~2 curves, with the lowest-resolution simulation turning off at the highest value of \ensuremath{M_\mathrm{peak}}{}. 
This demonstrates the major consequence of limited resolution, which is particularly acute for low-mass objects: in the cases considered here for haloes with mass $\MNFW\geq\Msun[{1.5\times10^{12}}]$, restoring the missing population increases the total subhalo abundance by an order of magnitude. However, as we discussed earlier, resolution effects are not confined to the low-mass regime and can also affect higher masses. Massive haloes can experience considerable tidal stripping after being accreted by a host, which can lead to their exclusion from the \z[0] halo catalogue. The resulting discrepancy between the original and corrected mass functions at high masses indicates that this population of `missing' objects composes a non-negligible fraction of the subhaloes even in the high-mass regime. Therefore, `traditional' convergence studies that do not account for missing and prematurely destroyed subhaloes cannot properly characterize these numerical effects on the peak mass function. In \figref{fig:Appendix:Aq_convergence_test}, we also plot for comparison the average mass function of \mbox{{\sc COCO-COLD}}{} haloes with masses similar to the Aquarius{}~A halo. The \mbox{{\sc COCO-COLD}}{} and \mbox{{\sc COCO-WARM}}{} simulations have a DM particle mass resolution that lies between that of the Aq~Level~$3$ and Level~$4$ runs. Comparing the subhalo mass functions of the incomplete subhalo catalogues of \mbox{{\sc COCO-COLD}}{} and Aq~Level~3 suggests that subhaloes with $\ensuremath{M_\mathrm{peak}}{} \gtrsim \Msun[3\times10^8]$ are resolved well. However, after recovering the prematurely destroyed subhaloes, a comparison of the mass functions implies consistency at masses $\ensuremath{M_\mathrm{peak}}{} \gtrsim \Msun[5\times10^6]$, approximately two orders of magnitude better than before. 
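The comparison underlying this convergence test amounts to constructing cumulative peak mass functions $N(>\ensuremath{M_\mathrm{peak}})$ for runs at different resolutions and identifying the mass above which they agree. A minimal sketch, with toy subhalo populations standing in for the Aquarius{} catalogues:

```python
import numpy as np

# Illustrative sketch only: toy subhalo populations stand in for the
# Aquarius catalogues; the 'low resolution' run is mimicked by simply
# discarding subhaloes below an artificial completeness limit.

def cumulative_mass_function(m_peak, m_grid):
    """N(>M): number of subhaloes with peak mass above each grid value."""
    m_sorted = np.sort(m_peak)
    return len(m_sorted) - np.searchsorted(m_sorted, m_grid, side='right')

rng = np.random.default_rng(1)
m_hi = 1e6 * (1.0 + rng.pareto(0.9, size=2000))  # 'high resolution' masses [Msun]
m_lo = m_hi[m_hi > 1e8]                          # 'low resolution': incomplete below 1e8

grid = np.logspace(6, 11, 6)                     # 1e6 ... 1e11 Msun
n_hi = cumulative_mass_function(m_hi, grid)
n_lo = cumulative_mass_function(m_lo, grid)
# the two mass functions agree only above the completeness limit,
# mirroring how lower-resolution curves 'peel away' in the figure
```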
This is consistent with the correction to the Aq~Level~4 simulation, which suggests that the same correction for prematurely destroyed subhaloes that we have shown to work well for the Aquarius{} Level~2 to 5 runs is also applicable to the two \mbox{{\sc COCO}}{} simulations. \section{Calibrating the Galform merger tree algorithm} \label{app:PCH_algorithm_adjustment} Monte Carlo merger trees are generated within {\sc galform}{} using an implementation of the \myref{parkinson_generating_2008} merger tree algorithm, which iteratively splits the present-day halo mass into different progenitor haloes as it progresses to higher redshifts. The algorithm depends on three free parameters: $G_0{=}0.57$, a normalization constant; $\gamma_1{=}0.38$, which controls the mass distribution of the progenitor haloes; and $\gamma_2{=}-0.01$, which controls the halo-splitting rate. \Myref{parkinson_generating_2008} calibrated these parameters by comparing the Monte Carlo progenitor halo mass functions at several redshifts with those from the Millennium simulation \citep{springel_simulations_2005}. The Millennium simulation follows the evolution of $2160^3$ particles with mass, $m_p = \cMsun[8.6\times10^8]$, resolving the halo mass function to $\cMsun[{\sim}1.7\times10^{10}]$, which is three orders of magnitude larger than the regime of interest for this study. The merger trees produced with the best-fitting free parameter values derived from the Millennium simulation predict a factor of two more galaxies at the faint end of the cumulative luminosity function in MW-mass haloes than is obtained by applying {\sc galform}{} to the \mbox{{\sc COCO}}{} suite. \begin{figure}% \centering% \includegraphics[width=\textwidth]{zcut7_vcut30_LFs.pdf}% \vspace{-10pt}% \caption{Cumulative satellite galaxy luminosity functions produced by our fiducial {\sc galform}{} model with \zreion[7] and \vcut[30] for haloes with masses in the range ${\MNFW{}=\Msun[{\left[1,\, 1.5\right]\times10^{12}}]}$.
Results for the \keV[3.3] thermal relic {WDM}{} model and {CDM}{} model are shown in the left and right panels, respectively. The mean luminosity functions produced from {\sc galform}{} applied to the \mbox{{\sc COCO}}{} simulations are represented by blue solid lines and error bars, which indicate their \percent{68} scatter. The solid purple lines represent the mean luminosity functions from {\sc galform}{} Monte Carlo realizations of each DM model and the corresponding shaded regions show their \percent{68} scatter. The green `corrected' Monte Carlo luminosity function is obtained by remapping the \MV{} of Monte Carlo satellite galaxies using the remapping relationships discussed in the text and shown in \figref{fig:Appendix:PCH-Tree_discrepancies:MV_remapping}. }% \label{fig:Appendix:PCH-Tree_discrepancies:LFs}% \vspace{-20pt}% \end{figure}% To attempt to address this overestimate, we repeated the \myref{parkinson_generating_2008} calibration procedure using the \mbox{{\sc COCO}}{} simulations and found best-fitting values of $G_0{=}0.75$, $\gamma_1{=}0.1$ and $\gamma_2{=}-0.12$. The resulting Monte Carlo merger trees match the \mbox{{\sc COCO}}{} merger trees better; however, discrepancies remain across the range in satellite brightness. Consequently, applying {\sc galform}{} to the new Monte Carlo merger trees overestimates the faint end of the cumulative satellite galaxy luminosity function by a factor of ${\sim}1.6$ compared with applying {\sc galform}{} to the \mbox{{\sc COCO}}{} merger trees directly (cf. the \mbox{{\sc COCO}}{}+{\sc galform}{} and Monte Carlo luminosity functions in \figref{fig:Appendix:PCH-Tree_discrepancies:LFs}).
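For orientation, the role of the three free parameters can be sketched as follows. In the \myref{parkinson_generating_2008} scheme, the extended Press-Schechter progenitor split rate is multiplied by a perturbing factor which, to our recollection of that paper, takes the form $G_0(\sigma_1/\sigma_2)^{\gamma_1}(\delta_1/\sigma_1)^{\gamma_2}$; only this factor is sketched here, not the full rate expression, and the sample arguments are illustrative:

```python
# Hedged sketch of the Parkinson, Cole & Helly (2008) perturbing factor;
# the full splitting-rate expression is NOT reproduced here, and the
# functional form quoted is an assumption based on that paper.

def pch_perturbing_factor(sigma_ratio, delta_over_sigma,
                          G0=0.57, gamma1=0.38, gamma2=-0.01):
    """Multiplicative correction applied to the EPS splitting rate.

    Defaults are the Millennium-calibrated values quoted in the text;
    the COCO recalibration gives G0=0.75, gamma1=0.1, gamma2=-0.12.
    """
    return G0 * sigma_ratio**gamma1 * delta_over_sigma**gamma2

# compare the two calibrations at illustrative arguments
f_millennium = pch_perturbing_factor(2.0, 1.5)
f_coco = pch_perturbing_factor(2.0, 1.5, G0=0.75, gamma1=0.1, gamma2=-0.12)
```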
This discrepancy could be removed self-consistently only by altering the \myref{parkinson_generating_2008} algorithm, which would require a more thorough investigation and possibly the introduction of one or more additional free parameters; this is beyond the scope of this work. Instead, to obtain a satellite luminosity function for the Monte Carlo merger trees that is in agreement with the cosmological predictions, we map the satellite magnitude, \MV{}, predicted in the `Monte Carlo merger trees + {\sc galform}{}' case to that of the `\mbox{{\sc COCO}}{} + {\sc galform}{}' case by matching objects at fixed abundance, i.e. fixed \Nsat{} per host. By carrying out this procedure, we construct a remapping relationship between the `old' \MV{} and new values that are consistent with the \mbox{\textit{N}--body}{} results. In \figref{fig:Appendix:PCH-Tree_discrepancies:MV_remapping}, we plot these relationships calculated for the {CDM}{} and \keV[3.3] thermal relic {WDM}{} models in three bins in halo mass. For clarity, we plot only the remapping functions obtained for our fiducial {\sc galform}{} model with \zreion[7] and \vcut[30]. The error bars ({CDM}{}) and shaded region ({WDM}{}) indicate the bootstrapped \percent{68} confidence intervals on the remapping relationships in the halo mass bin ${\MNFW{}=\Msun[{\left[1,\, 1.5\right]\times10^{12}}]}$ and are representative of the uncertainties on the remapping functions in the other halo mass bins. The remapping functions are in excellent agreement across the range in halo mass in both DM models, and across almost the entire range in satellite brightness, although there is a small discrepancy between the {CDM}{} and {WDM}{} functions at the faint end. This corresponds to low-mass subhaloes near the cut-off scale in the {WDM}{} power spectrum, whose properties differ the most from their equal-mass {CDM}{} counterparts.
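The matching at fixed abundance can be sketched as a rank-based interpolation between the two magnitude distributions. This is an illustrative reconstruction of the procedure, not the code used in this work; the toy magnitudes are assumptions:

```python
import numpy as np

# Illustrative reconstruction (not the code used in this work): pair
# satellites at fixed cumulative abundance by fractional rank, then
# interpolate to obtain a remapping function M_V(old) -> M_V(new).

def build_remapping(mv_montecarlo, mv_cocogalform):
    """Return a callable mapping Monte Carlo M_V to corrected M_V."""
    mc = np.sort(np.asarray(mv_montecarlo, dtype=float))
    nb = np.sort(np.asarray(mv_cocogalform, dtype=float))
    q_mc = np.linspace(0.0, 1.0, len(mc))  # fractional rank of each satellite
    q_nb = np.linspace(0.0, 1.0, len(nb))
    nb_at_q = np.interp(q_mc, q_nb, nb)    # target M_V at the same rank
    def remap(mv):
        return np.interp(mv, mc, nb_at_q)  # piecewise-linear remapping
    return remap

# toy magnitudes: Monte Carlo satellites are ~1 mag too faint at each rank
mv_mc = np.array([-12.0, -8.0, -6.0, -4.0])
mv_coco = np.array([-13.0, -9.0, -7.0, -5.0])
remap = build_remapping(mv_mc, mv_coco)
corrected = remap(mv_mc)
```

By construction, applying `remap` to the Monte Carlo sample reproduces the target distribution at each rank, which is the sense in which the corrected mean luminosity function matches the \mbox{\textit{N}--body}{} one.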
The differences in the formation histories of such objects in {WDM}{} and {CDM}{} models are modest \citep{lovell_properties_2014}, which explains the similarly modest discrepancy between the remapping functions of these models calculated here. We find similar results for the other parametrizations of reionization that we consider (not shown). Therefore, when calculating the results presented in \secref{sec:Galaxy_formation:Modelling_galform}, we use the {CDM}{} remapping relationships appropriate for each parametrization of reionization to adjust the {\sc galform}{}-produced absolute magnitudes of dwarf satellite galaxies. To check the remapping technique, we plot the corrected `Monte Carlo merger trees + {\sc galform}{}' satellite luminosity function as a green curve in \figref{fig:Appendix:PCH-Tree_discrepancies:LFs}. By construction, the mean satellite count matches the \mbox{{\sc COCO}}{} predictions. More importantly, the \percent{68} scatter (represented by the green shaded region) is also in good agreement with the \mbox{\textit{N}--body}{} results despite this not having been calibrated. \begin{figure}% \centering% \includegraphics[width=0.5\textwidth]{zcut7_vcut30_mv_remapping.pdf}% \vspace{-17pt}% \caption{Functions to remap the \MV{} values of Monte Carlo {\sc galform}{} satellite galaxies to new values that are consistent with the luminosity functions of {\sc galform}{} applied to the \mbox{{\sc COCO}}{} suite. Only the functions for our fiducial {\sc galform}{} model are shown; those for the other parametrizations considered in this study are similar. The dashed lines represent the remapping functions for the \keV[3.3] thermal relic {WDM}{} model used in \mbox{{\sc COCO-WARM}}{}, and the solid lines show the functions for the {CDM}{} model.
In both cases, the lines are coloured by halo mass bin: ${\MNFW{}=\Msun[{\left[0.5,\, 1.0\right]\times10^{12}}]}$ (blue), \Msun[{\left[1.0,\, 1.5\right]\times10^{12}}] (purple) and \Msun[{\left[1.5,\, 2.0\right]\times10^{12}}] (green). The error bars ({CDM}{}) and shaded region ({WDM}{}) indicate the bootstrapped \percent{68} confidence intervals on the remapping relationships in the medium halo mass bin and are representative of the uncertainty in the other bins. The remapping functions are in excellent agreement across halo masses, apart from a small discrepancy at faint magnitudes. }% \label{fig:Appendix:PCH-Tree_discrepancies:MV_remapping}% \vspace{-20pt}% \end{figure}% \section{Thermal relic mass constraints for different Galform results} \label{app:Galform_model_results} Reionization plays an important role in the formation of low-mass dwarf galaxies and shapes the star formation history of the Universe more widely. In {\sc galform}{}, reionization is described in terms of two key variables: the redshift by which reionization has ceased, \zreion{}, and the circular velocity cooling threshold, \vcut{}. After reionization, galaxies and DM haloes with circular velocities below \vcut{} are prevented from accreting cool gas from the intergalactic medium with which they might form more stars. To understand better how reionization affects the constraints on the thermal relic particle mass, we considered nine parameter combinations that span the allowed parameter range given our current observational constraints on reionization and galaxy formation models: $\vcut[{\left[25,\, 30,\, 35\right]}]$ and $\zreion[{\left[6,\,7,\, 8\right]}]$. In the main paper, we showed how the DM particle mass constraints change when varying \vcut{} assuming \zreion[7], and when varying \zreion{} assuming \vcut[30], the results of which are presented in \figrefs{fig:Results:Fiducial_Galform_constraint}{fig:Results:Galform_fixed_vcut_zcut}.
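Schematically, the two parameters act as a simple gate on gas cooling in the semi-analytic model. The sketch below compresses the actual {\sc galform}{} photoionization treatment into a single rule and is an illustration only:

```python
# Schematic only: the actual GALFORM treatment is more detailed; this
# compresses it into the rule stated in the text.

def cooling_allowed(z, v_circ, zreion=7.0, vcut=30.0):
    """True if a halo may accrete cool gas at redshift z.

    Defaults correspond to the fiducial model (zreion = 7, vcut = 30 km/s).
    Before reionization completes (z >= zreion) cooling is unaffected;
    afterwards, haloes with circular velocity below vcut are suppressed.
    """
    if z >= zreion:
        return True
    return v_circ >= vcut

# Raising vcut, or finishing reionization earlier (higher zreion),
# silences more low-mass haloes and so tightens the WDM constraints.
```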
Here, we provide the constraints for parameter combinations assuming \zreion[6] and \zreion[8] (see \figref{fig:Appendix:Galform_constraints:non_fid_z}, left and right panels, respectively). In both cases, we also plot our fiducial constraint as a thicker solid line to facilitate comparison with these results. The dependence of the constraints on \zreion{} and \vcut{} demonstrated in \figref{fig:Results:Galform_fixed_vcut_zcut} also holds for the parameter choices shown here. If reionization finishes later (\figref{fig:Appendix:Galform_constraints:non_fid_z} left panel), the constraints weaken considerably and the choice of \vcut{} becomes significantly more important. In \tabref{tab:Appendix:Galform_constraints:galform_model_constraints}, we provide the particle masses at and below which thermal relic {WDM}{} models are excluded at \percent{95} confidence for each combination of reionization parameters that we consider in this study. \begin{figure}% \centering% \includegraphics[width=\textwidth]{non_fiducial_z_horizontal.pdf}% \vspace{-10pt}% \caption{Constraints on the thermal relic particle mass obtained assuming three values of \vcut{} for \zreion[6] (left panel) and \zreion[8] (right panel) within the {\sc galform}{} galaxy formation model. As in \figref{fig:Results:Fiducial_Galform_constraint}, parameter combinations to the left of and beneath the envelopes are ruled out with \percent{95} confidence. The thicker solid lines indicate the constraint envelope of our fiducial model with \zreion[7] and \vcut[30]. The shaded regions indicate the \percent{68} confidence interval on the mass of the MW halo from \citet{callingham_mass_2019}.
}% \label{fig:Appendix:Galform_constraints:non_fid_z}% \vspace{-10pt}% \end{figure}% \begin{table}% \centering% \caption{Mass thresholds, \ensuremath{m_{\rm th}}{}, at and below which thermal relic models are excluded at \percent{95} confidence, for each {\sc galform}{} model considered in this study.}% \label{tab:Appendix:Galform_constraints:galform_model_constraints}% \begin{tabularx}{\columnwidth}{YYYY}% \hline \vcut{} & \multicolumn{3}{c}{\zreion{}} \\ & $6$ & $7$ & $8$ \\ \hline \kms[25] & \keV[2.86{}] & \keV[3.12{}] & \keV[3.49{}] \\ \kms[30] & \keV[3.37{}] & \keV[3.99{}] & \keV[5.26{}] \\ \kms[35] & \keV[3.52{}] & \keV[4.37{}] & \keV[5.82{}] \\ \hline \end{tabularx}% \end{table}% \section{Introduction} \label{sec:Introduction} \input{Sections/01_Introduction} \section{Methodology} \label{sec:Methods} \input{Sections/02_Methods} \section{Constraints on the thermal relic mass} \label{sec:Constraints_on_mth} \input{Sections/03_Constraints_on_mth} \section{The effects of galaxy formation processes} \label{sec:Galaxy_formation} \input{Sections/04_Galaxy_formation_processes} \section{Discussion} \label{sec:Discussion} \input{Sections/05_Discussion} \section{Conclusions} \label{sec:Conclusions} \input{Sections/06_Conclusions}
\section{Introduction} Theories of connections are playing an increasingly important role in the current description of all fundamental interactions of Nature (including gravity \cite{AA1}). They are also of interest from a purely mathematical viewpoint. In particular, many of the recent advances in the understanding of the topology of low dimensional manifolds have come from these theories. In the standard functional analytic approach, developed in the context of the Yang-Mills theory, one equips the space of connections with the structure of a Hilbert-Riemann manifold (see, e.g., \cite{AM}). This structure is gauge-invariant. However, the construction uses a fixed Riemannian metric on the underlying space-time manifold. For diffeomorphism invariant theories --such as general relativity-- this is, unfortunately, a serious drawback. A second limitation of this approach comes from the fact that, so far, it has led to relatively few examples of interesting, gauge-invariant measures on spaces of connections, and none that is diffeomorphism invariant. Hence, to deal with theories such as quantum general relativity, a gauge and diffeomorphism invariant extension of these standard techniques is needed. For the functional integration part of the theory, such an extension was carried out in a series of papers over the past two years [3-8]. (For earlier work with the same philosophy, see \cite{KK}.) The purpose of this article is to develop differential geometry along the same lines. Our constructions will be generally motivated by certain heuristic results in a non-perturbative approach to quantum gravity based on connections, loops and holonomies \cite{AA3, RS}. Reciprocally, our results will be useful in making this approach rigorous \cite{ALMMT2, AA2} in that they provide the well-defined measures and differential operators that such a treatment requires. There is thus a synergetic exchange of ideas and techniques between the heuristic and rigorous treatments.
As background material, we will first present some physical considerations and then discuss our approach from a mathematical perspective. Fix an n-dimensional manifold $M$ and consider the space ${\cal A}$ of smooth connections on a given principal bundle $B(M, G)$ over $M$. Following the standard terminology, we will refer to $G$ as the structure group and denote the space of smooth vertical automorphisms of $B(M,G)$ by ${\cal G}$. This ${\cal G}$ is the group of {\it local} gauge transformations. If $M$ is taken to be a Cauchy surface in a Lorentzian space-time, the quotient ${\A/{\cal G}}$ serves as the physical configuration space of the classical gauge theory. If $M$ represents the Euclidean space-time, ${\A/{\cal G}}$ is the space of physically distinct classical histories. Because of the presence of an infinite number of degrees of freedom, to go over to quantum field theory, one has to enlarge ${\A/{\cal G}}$ appropriately. Unfortunately, since ${\A/{\cal G}}$ is non-linear, with complicated topology, a canonical mathematical extension is not available. For example, the simple idea of substituting the smooth connections and gauge transformations in ${\A/{\cal G}}$ by distributional ones does not work because the space of distributional connections does not support the action of distributional local gauge transformations. Recently, one such extension was introduced \cite{AI} using the basic representation theory of $C^\star$-algebras. The ideas underlying this approach can be summarized as follows. One first considers the space ${\cal HA}$ of functions on ${\A/{\cal G}}$ obtained by taking finite complex linear combinations of finite products of Wilson loop functions $W_{\a}(A)$ around closed loops $\a$. (Recall that the Wilson loop functions are traces of holonomies of connections around closed loops; $W_{\a}(A) = {\rm Tr}\ {\cal P}\, \exp \oint_{\a} A\, {\rm d}l$. Since they are gauge invariant, they project down unambiguously to ${\A/{\cal G}}$.)
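The gauge invariance of the Wilson loop functions is easy to verify in a finite-dimensional toy model: under a gauge rotation $g_i$ at each vertex of a discretized loop, the ordered product of link holonomies is conjugated by $g_0$, leaving its trace unchanged. The sketch below uses random $SU(2)$ matrices and is an illustration, not part of the construction in the text:

```python
import numpy as np

# Toy check of Wilson-loop gauge invariance on a discretized closed loop
# through vertices 0 -> 1 -> ... -> 4 -> 0, with G = SU(2).

def random_su2(rng):
    """Random SU(2) matrix via a unit quaternion (unitary, det = 1)."""
    a = rng.normal(size=4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[1], a[2] + 1j * a[3]],
                     [-a[2] + 1j * a[3], a[0] - 1j * a[1]]])

def wilson_loop(links):
    """Trace of the path-ordered product of link holonomies."""
    h = np.eye(2, dtype=complex)
    for U in links:
        h = h @ U
    return np.trace(h)

rng = np.random.default_rng(0)
links = [random_su2(rng) for _ in range(5)]  # U_{i,i+1}
g = [random_su2(rng) for _ in range(5)]      # gauge rotation at each vertex
# a link transforms as U_{i,i+1} -> g_i U_{i,i+1} g_{i+1}^{-1};
# around the closed loop the g's cancel up to conjugation by g_0
gauged = [g[i] @ links[i] @ g[(i + 1) % 5].conj().T for i in range(5)]

w0, w1 = wilson_loop(links), wilson_loop(gauged)
```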
${\cal HA}$ can then be completed in a natural fashion to obtain a $C^\star$-algebra $\overline{\cal HA}$. This is the algebra of configuration observables. Hence, to obtain the Hilbert space of physical states, one has to select an appropriate representation of $\overline{\cal HA}$. It turns out that every cyclic representation of $\overline{\cal HA}$ by operators on a Hilbert space is of a specific type \cite{AI}: The Hilbert space is simply $L^2(\B, \mu)$ for some regular, Borel measure $\mu$ on a certain completion $\B$ of ${\A/{\cal G}}$ and, as one might expect of configuration operators, the Wilson loop operators act just by multiplication. Therefore, the space $\B$ is a candidate for the required extension of the classical configuration space. To define physically interesting operators, one needs to develop differential geometry on $\B$. For example, the momentum operators would correspond to suitable vector fields on $\B$ and the kinetic term in the Hamiltonian would be given by a Laplacian. The problem of introducing these operators is coupled to that of finding suitable measures on $\B$ because these operators have to be essentially self-adjoint on the underlying Hilbert space. From a mathematical perspective, $\B$ is just the Gel'fand spectrum of the Abelian $C^\star$-algebra $\overline{\cal HA}$; it is a compact, Hausdorff topological space and, as the notation suggests, ${\A/{\cal G}}$ is densely embedded in it. The basic techniques for exploring the structure of this space were introduced in \cite{AL1}. It was shown that $\B$ is very large: In particular, every connection on {\it every} $G$-bundle over $M$ defines a point in $\B$. (Note incidentally that this implies that $\B$ is independent of the initial choice of the principal bundle $B(M, G)$ made in the construction of the holonomy algebra ${\cal HA}$.)
Furthermore, there are points which do not correspond to {\it any} smooth connection; these are the generalized connections (defined on generalized principal $G$-bundles \cite{Le}) which are relevant only to the quantum theory. Finally, there is a precise sense in which this space provides a ``universal home'' for measures that arise from lattice gauge theories \cite{AMM}. In specific theories, such as Yang-Mills, the support of the relevant measures is likely to be significantly smaller. For diffeomorphism invariant theories, on the other hand, there are indications that it would be essential to use the whole space. In particular, it is known that $\B$ admits large families of measures which are invariant under the induced action of Diff$(M)$ \cite{AL1, B1, B2, ALMMT2,AL2} and which are therefore likely to feature prominently in non-perturbative quantum general relativity \cite{ALMMT2, AA2}. Many of these are {\it faithful}, indicating that all of $\B$ would be relevant to quantum gravity. Thus, the space $\B$ is large enough to be useful in a variety of contexts. Indeed, at first sight, one might be concerned that it is too large to be physically useful. For example, by construction, it has the structure {\it only} of a topological space; it is not even a manifold. How can one then hope to introduce the basic quantum operators on $L^2(\B, \mu)$? In the absence of a well-defined manifold structure on the quantum configuration space, it may seem impossible to introduce vector fields on it, let alone the Laplacian or the operators needed in quantum gravity! Is there a danger that $\B$ is so large that it is mathematically uninteresting? Fortunately, it turns out that, although it is large, $\B$ is ``controllable''. The key reason is that the $C^\star$-algebra $\overline{\cal HA}$ is rather special, being generated by the Wilson loop observables.
As a consequence, its spectrum, $\B$, can also be obtained as the projective limit of a projective family of compact, Hausdorff, {\it analytic manifolds} \cite{AL1, B1, B2, MM, AL2}. Standard projective constructions therefore enable us to induce on $\B$ various notions from differential geometry. Thus, it appears that a desired balance is struck: While it is large enough to serve as a ``universal home'' for measures, $\B$ is, at the same time, small enough to be mathematically interesting and physically useful. This is the main message of this paper. The material is organized as follows. In section 2, we recall from \cite{AL2} the essential results from projective techniques. In section 3, we use these results to construct three projective families of compact, Hausdorff, analytic manifolds, and show that $\B$ can be obtained as the projective limit of one of these families. Since the members of the family are all manifolds, each is equipped with the standard differential geometric structure. Using projective techniques, sections 4 and 5 then carry this structure to the projective limits. Thus, the notions of forms, volume forms, vector fields and their Lie-derivatives, and the divergence of vector fields with respect to volume forms can be defined on $\B$. The vector fields which are compatible with the measure (in the sense that their divergence with respect to the measure is well-defined) lead to essentially self-adjoint momentum operators in the quantum theory. In section 6, we turn to Riemannian geometry. Given an additional structure on the underlying manifold $M$ --called an edge-metric-- we define a Laplacian operator on the $C^2$-functions on $\B$ and construct the associated heat kernels as well as the heat kernel measures. In section 7, we point out that $\B$ admits a natural (degenerate) contravariant metric and use it to introduce a Laplace-like operator.
Since this construction does not use any background structure on $M$, the action of the operator respects diffeomorphism invariance. It could thus define a natural observable in diffeomorphism invariant quantum theories. Another example is a third-order differential operator representing the ``volume observable'' in quantum gravity. Section 8 puts the analysis of this paper in the context of earlier work on the subject. A striking aspect of this approach to geometry on $\B$ is that its general spirit is the same as that of non-commutative geometry and quantum groups: even though there is no underlying differentiable manifold, geometrical notions can be developed by exploiting the properties of the {\it algebra} of functions. On the one hand, the situation with respect to $\B$ is simpler because the algebra in question is Abelian. On the other hand, we are dealing with very large, infinite dimensional spaces. As indicated above, a primary motivation for this work comes from the mathematical problems encountered in a non-perturbative approach to quantum gravity \cite{AA3} and our results can be used to solve a number of these problems \cite{ALMMT2, AA2}. However, there are some indications that, to deal satisfactorily with the issue of ``framed loops and graphs'' that may arise in regularization of certain operators, one may have to replace the structure group $SL(2, C)$ with its quantum version $SL(2)_q$. Our algebraic approach is well-suited for an eventual extension along these lines.
That is, it is a set equipped with a relation `$\ge$' such that, for all $\g, \g'$ and $\g''$ in $L$ we have: \begin{equation} \label{2.1a} \g \ge \g\ ;\quad \g \ge \g'\ {\rm and}\ \g'\ge \g \Rightarrow \g =\g'\ ; \quad \g\ge\g'\ {\rm and}\ \ \g'\ge\g''\ \Rightarrow \g\ge\g''\ ; \end{equation} and, given any $\g', \g'' \in L$, there exists $\g \in L$ such that \begin{equation} \label{2.1b} \g \ge \g' \quad {\rm and} \quad \g \ge \g''\ . \end{equation} A {\it projective family} $({\cal X}_\g,p_{\g\g'})_{\g,\g'\in L}$ consists of sets ${\cal X}_\g$ indexed by elements of $L$, together with a family of surjective {\it projections}, \begin{equation} p_{\g\g'}\:\ {\cal X}_{\g'}\rightarrow {\cal X}_{\g},\end{equation} assigned uniquely to pairs $(\g',\g)$ whenever $\g'\ge\g$ such that \begin{equation}\label{2.3} p_{\g\g'}\circ p_{\g'\g''} = p_{\g\g''}.\end{equation} A familiar example of a projective family is the following. Fix a locally convex, topological vector space $V$. Let the label set $L$ consist of finite dimensional subspaces $\g$ of $V^\star$, the topological dual of $V$. This is obviously a partially ordered and directed set. Every $\g$ defines a unique sub-space $\tilde{\g}$ of $V$ via: $\tilde{v} \in \tilde{\g}\ \ {\rm iff}\ \ <v, \tilde{v}> =0 \ \forall v\in \g$. The projective family can now be constructed by setting ${\cal X}_\g = V/\tilde\g$. Each ${\cal X}_\g$ is a finite dimensional vector space and, for $\g'\ge \g$, $p_{\g \g'}$ are the obvious projections. Integration theory over infinite dimensional topological spaces can be developed starting from this projective family \cite{K,DM}. In this paper, we wish to consider projective families which are in a certain sense complementary to this example and which are tailored to the kinematically non-linear spaces of interest. In our case, ${\cal X}_\g$ will all be topological, compact, Hausdorff spaces and the projections $p_{\g\g'}$ will be continuous. 
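These definitions can be made concrete in a finite toy model: take the labels to be finite subsets of $\{0,1,2\}$ ordered by inclusion (a directed set, since any two subsets are contained in their union), let ${\cal X}_\g=\{0,1\}^\g$, and let $p_{\g\g'}$ restrict configurations. The sketch below checks the compatibility condition (\ref{2.3}); all names are illustrative:

```python
from itertools import product

# Finite toy model of a projective family (all names illustrative):
# labels = finite subsets of {0,1,2} ordered by inclusion;
# X_gamma = {0,1}^gamma; p_{gamma gamma'} restricts configurations.

def X(gamma):
    """All configurations on the label gamma, as dicts site -> {0,1}."""
    sites = sorted(gamma)
    return [dict(zip(sites, vals)) for vals in product((0, 1), repeat=len(sites))]

def p(gamma, x_prime):
    """Projection X_{gamma'} -> X_gamma for gamma <= gamma' (restriction)."""
    return {s: x_prime[s] for s in gamma}

# check the compatibility condition p_{g g'} o p_{g' g''} = p_{g g''}
# on a chain g <= g' <= g''
g, gp, gpp = frozenset({0}), frozenset({0, 1}), frozenset({0, 1, 2})
consistent = all(p(g, p(gp, x)) == p(g, x) for x in X(gpp))

# directedness: the union of two labels dominates both
upper = frozenset({0}) | frozenset({1, 2})
```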
The resulting pairs $({\cal X}_\g, p_{\g \g'})_{\g,\g' \in L}$ are said to constitute a {\it compact Hausdorff projective family}. In the application of this framework to gauge theories, the labels $\g$ can be thought of as ``floating'' lattices (i.e., which are not necessarily rectangular) and the members ${\cal X}_{\g}$ of the projective family, as the spaces of configurations/histories associated with these lattices. The continuum theory will be recovered in the (projective) limit as one considers lattices with increasing number of loops of arbitrary complexity. Note that in the projective family there will, in general, be no set ${{\overline \X}}$ which can be regarded as the largest, from which we can project to any of the ${\cal X}_\g$. However, such a set does emerge in an appropriate limit, which we now define. The {\it projective limit} ${\overline \X}$ of a projective family $({\cal X}_\g, p_{\g\g'})_{\g,\g'\in L}$ is the subset of the Cartesian product $\times_{\g\in L}{\cal X}_\g$ that satisfies certain consistency conditions: \begin{equation} \label{2.4} {\overline \X}\ :=\ \{(x_\g)_{\g\in L}\in \times_{\g\in L}{\cal X}_\g\ :\ \g'\ge \g \Rightarrow p_{\g\g'}x_{\g'} = x_\g\}. \end{equation} (In applications to gauge theory, this is the limit that gives us the continuum theory.) One can show that ${\overline \X}$, endowed with the topology that descends from the Cartesian product, {\it is itself a compact, Hausdorff space}. Finally, as expected, one can project from the limit to any member of the family: we have \begin{equation}\label{2.5} p_{\g}\ : \ {\overline \X} \rightarrow {\cal X}_{\g}, \ \ p_{\g} ((x_{\g'})_{\g'\in L}):= x_{\g}\ .\end{equation} Next, we introduce certain function spaces. For each $\g$ consider the space $C^0({\cal X}_\g)$ of the complex valued, continuous functions on ${\cal X}_\g$. In the union $$\bigcup_{\g\in L} C^0({\cal X}_{\g})$$ let us define the following equivalence relation. 
Given $f_{\g_i}\in C^0({\cal X}_{\g_i})$, $i=1,2$, we will say: \begin{equation} \label{2.6} f_{\g_1} \ \sim\ f_{\g_2}\ \ \ {\rm if}\ \ \ p_{\g_1\g_3}^\star \ f_{\g_1}\ =\ p_{\g_2\g_3}^\star\ f_{\g_2}\end{equation} for every $\g_3\ \ge \g_1,\g_2$, where $p^\star_{\g_1\g_3}$ denotes the pull-back map from the space of functions on ${\cal X}_{\g_1}$ to the space of functions on ${\cal X}_{\g_3}$. (Note that to be equivalent, it is in fact sufficient that the equality (\ref{2.6}) holds {\it just for one} $\g_3\ \ge \g_1,\g_2$.) Using the equivalence relation we can introduce the set of {\it cylindrical functions} associated with the projective family $({\cal X}_\g,p_{\g\g'})_{\g,\g'\in L}$, \begin{equation}\label{2.7} {\rm Cyl}^0({\overline \X}) \ := \ \big( \bigcup_{\g\in L}C^0({\cal X}_\g)\ \big) \ /\ \sim.\end{equation} The quotient just gets rid of a redundancy: pull-backs of functions from a smaller set to a larger set are now identified with the functions on the smaller set. Note that an element of ${\rm Cyl}^0({\overline \X})$ determines, through the projections (\ref{2.5}), a function on ${\overline \X}$. Hence, there is a natural embedding $$ {\rm Cyl}^0({\overline \X})\ \rightarrow\ C^0({\overline \X}), $$ which is dense in the sup-norm. Thus, modulo the completion, ${\rm Cyl}^0({\overline \X})$ may be identified with the algebra of continuous functions on ${\overline \X}$ \cite{AL2}. This fact will motivate, in section 3, our definition of $C^n$ functions on the projective completion. Next, let us illustrate how one can introduce interesting structures on the projective limit. Since each ${\cal X}_{\g}$ in our family as well as the projective limit ${\overline \X}$ is a compact, Hausdorff space, we can use the standard machinery of measure theory on each of these spaces. The natural question is: What is the relation between measures on ${\cal X}_{\g}$ and those on ${\overline \X}$? To analyze this issue, let us begin with a definition. 
Let us assign to each $\g \in L$ a regular, Borel, probability (i.e., normalized) measure $\mu_\g$ on ${\cal X}_{\g}$. We will say that this constitutes a {\it consistent family of measures} if \begin{equation}\label{2.8} (p_{\g\g'})_{\star}\ \mu_{\g'}\ =\ \mu_{\g}\ . \end{equation} Using this notion, we can now characterize measures on ${\overline \X}$ \cite{AL2}: \begin{theorem}{\rm :} Let $({\cal X}_\g, p_{\g \g'})_{\g,\g' \in L}$ be a compact, Hausdorff projective family and ${\overline \X}$ be its projective limit; \smallskip \noindent (a) Suppose $\mu$ is a regular Borel, probability measure on ${\overline \X}$. Then $\mu$ defines a consistent family of regular, Borel, probability measures, given by: \begin{equation}\label{2.22} \mu_\g\ :=\ ({p_\g})_\star\ \mu;\end{equation} \smallskip \noindent (b) Suppose $(\mu_\g)_{\g\in L}$ is a consistent family of regular, Borel, probability measures. Then there is a unique regular, Borel, probability measure $\mu$ on ${\overline \X}$ such that $({p_\g})_\star\ \mu = \mu_\g$; \smallskip \noindent(c) $\mu$ is faithful if $\mu_\g\ :=\ ({p_\g})_\star\ \mu$ is faithful for every $\g\in L$. \end{theorem} \noindent This is an illustration of the general strategy we will follow in sections 4-7 to introduce interesting structures on the projective limit; they will correspond to families of {\it consistent} structures on the projective family. \section{Projective families for spaces of connections.} We will now apply the general techniques of section 2 to obtain three projective families, each member of which is a compact, Hausdorff, analytic manifold. Fix an $n$-dimensional, analytic manifold $M$ and a smooth principal fiber bundle $B(M, G)$ with the structure group $G$, which we will assume to be a compact and connected Lie group. Let ${\cal A}$ denote the space of smooth connections on $B$ and ${\cal G}$ the group of smooth vertical automorphisms of $B$ (i.e., the group of local gauge transformations).
The projective limits of the three families will provide us with completions ${\overline \A}$, ${\overline \G}$ and $\B$ of the spaces ${\cal A}$, ${\cal G}$ and ${\A/{\cal G}}$. As the notation suggests, $\B$ will turn out to be naturally isomorphic with the Gel'fand spectrum of the holonomy algebra of \cite{AI}, mentioned in section 1. The label set of all three families will be the same. Section 3.1 introduces this set and sections 3.2--3.4 discuss the three families and their projective limits. The results of this section follow from a rather straightforward combination of the results of \cite{AL1,B1,B2,MM,AL2}. Therefore, we will generally skip the detailed proofs and aim at presenting only the final structure which is used heavily in the subsequent sections. \subsection{Set of labels.} The set $L$ of labels will consist of graphs in $M$. To obtain a precise characterization of this set, let us begin with some definitions. By an unparametrized oriented analytic edge in $M$ we shall mean an equivalence class of maps \begin{equation} e\ :\ [0,1]\ \rightarrow\ M, \end{equation} where two maps $e$ and $e'$ are considered as equivalent if they differ only by a reparametrization, or, more precisely if $e'$ can be written as \begin{equation} e'\ =\ e\circ f,\ \ \ {\rm where} \ \ \ f\ :\ [0,1]\ \rightarrow \ [0,1], \end{equation} is an analytic orientation preserving bijection. We will also consider unoriented edges for which the requirement that $f$ preserve orientation will be dropped. The end points of an edge will be referred to as {\it vertices}. (If the edges are oriented, each $e$ has a well-defined initial and a well-defined final vertex.) 
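The reparametrization invariance that motivates working with unparametrized edges can be sketched numerically. The following toy computation is our illustration, not part of the paper: it uses an abelian group, $G=U(1)$, on a trivial bundle over ${\bf R}^2$, where the holonomy along an edge is the exponential of a line integral, and checks that the integral is unchanged under the orientation preserving analytic reparametrization $t \mapsto t^2$.

```python
import math

# Toy check (our illustration): for an abelian connection A = A_x dx + A_y dy
# on R^2, the holonomy exp(i \int_e A) depends only on the unparametrized
# oriented edge, not on the parametrization chosen for the curve.
def line_integral(A, curve, n=4000):
    """Midpoint-rule approximation of the integral of A along curve: [0,1] -> R^2."""
    total = 0.0
    for k in range(n):
        x0, y0 = curve(k / n)
        x1, y1 = curve((k + 1) / n)
        ax, ay = A(0.5 * (x0 + x1), 0.5 * (y0 + y1))
        total += ax * (x1 - x0) + ay * (y1 - y0)
    return total

A = lambda x, y: (-y, x)                    # a sample 1-form: -y dx + x dy
arc = lambda t: (math.cos(t), math.sin(t))  # unit-circle arc, angle 0..1
e1 = lambda t: arc(t)                       # affine parametrization
e2 = lambda t: arc(t * t)                   # analytic, orientation preserving

h1 = line_integral(A, e1)   # both approximate the same edge integral
h2 = line_integral(A, e2)
```

Along this arc the 1-form reduces to $d\theta$, so both computations return (approximately) the opening angle of the arc.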
An (oriented) {\it graph} $\g$ in $M$ is a finite set of unparametrized (oriented) analytic edges which have the following properties: \begin{enumerate} \item every $e\in\g$ is diffeomorphic with the closed interval $[0,1]$; \item if $e_1,e_2\in\g$, with $e_1\not= e_2$, the intersection $e_1\cap e_2$ is contained in the set of vertices of $e_1,e_2$; \item every $e\in\g$ is connected at both of its vertices with another element of $\g$. \end{enumerate} \noindent (Note that the last condition ensures that each graph is closed.) The set of all the graphs in $M$ will be denoted by $\Gra(M)$. This is our set of labels. As we saw in section 2, the set of labels must be a partially ordered, directed set. On our set of graphs, the partial order, $\ge$, is defined just by the inclusion relation: \begin{equation} \g'\ \ge\ \g \end{equation} whenever each edge of $\g$ can be expressed as a composition of edges of $\g'$ {\it and} each vertex in $\g$ is a vertex of $\g'$. To see that the set is directed, we use the analyticity of edges: it is easy to check that, given any two graphs $\g_1, \g_2\in \Gra(M)$, there exists $\g\in \Gra(M)$ such that \begin{equation} \g\ge\g_1\ \ \ {\rm and}\ \ \ \g\ge\g_2. \end{equation} (In fact, given $\g_1$ and $\g_2$, there exists a {\it minimal} upper bound $\g$.) This property is no longer satisfied if one weakens the definition and only requires that the edges be smooth. \subsection{The projective family for ${\cal A}$.} We are now ready to introduce our first projective family. Fix a graph $\g\in\Gra(M)$. To construct the corresponding space ${\cal A}_\g$ in the projective family, restrict the bundle $B$ to the bundle over $\g$, which we will denote by $B_{\g}$. Clearly, $B_{\g}$ is the union of smooth bundles $B_e$ over the edges of $\g$, $B_{\g}\ =\ \bigcup_{e\in\g} B_{e}$. For every edge $e\in \g$, any connection $A\in{\cal A}$ restricts to a smooth connection $A_e$ on $B_e$.
The collection $(A_e)_{e\in\g} =: A_{|\g}$ will be referred to as the restriction of $A$ to $\g$. Denote by $\widehat{{\cal G}}^\g$ the subgroup of ${\cal G}$ which consists of those vertical automorphisms of $B$ which act as the identity in the fibers of $B$ over the vertices of $\g$. Now, since the restriction map ${\cal A}\ \rightarrow\ {\cal A}_{|\g}$ is equivariant with respect to the action of ${\cal G}$, we can define the required space ${\cal A}_{\g}$ as follows: \begin{equation}\label{quo} {\cal A}_\g\ :=\ ({\cal A}/\widehat{{\cal G}}^{\g})_{\g}. \end{equation} Note that ${\cal A}_\g$ naturally decomposes into the Cartesian product \begin{equation} {\cal A}_\g\ =\ \times_{e\in\g} {\cal A}_e \end{equation} where ${\cal A}_e$ is defined by replacing $\g$ in (\ref{quo}) with a single edge $e$. Next, let us equip ${\cal A}_\g$ with the structure of a differential manifold. Note first that, given an orientation of $e$, a component $A_e$ of $(A_e)_{e\in \g}=A_\g\in{\cal A}_\g$ may be identified with the parallel transport map along the edge $e$ which carries the fiber over its initial vertex into the fiber over its final vertex. Hence, if we fix over each vertex of $\g$ a point in $B$ and orient each edge of $\g$, we have natural maps \begin{equation} \label{diff} \Lambda_e\ :\ {\cal A}_e\ \rightarrow\ G, \quad{\rm and}\quad \Lambda_\g\ :\ {\cal A}_\g \rightarrow\ G^E, \end{equation} where $E$ is the number of edges in $\g$. Each of these maps can easily be shown to be a bijection. We shall refer to $\Lambda_e$ (or $\Lambda_\g$) as a group valued chart for ${\cal A}_e$ (or ${\cal A}_\g$). Now, since $G$ is a compact, connected Lie group, $G^E$ is a compact, Hausdorff, analytic manifold. Hence, the map $\Lambda_\g$ can be used to endow ${\cal A}_\g$ with the same structure. Finally, we introduce the required projection maps.
Note first that, for each $\g\in \Gra(M)$, there is a natural projection map $\pi_{\g}$ \begin{equation} \label{proj2} \pi_{\g}\ :\ {\cal A}\ \rightarrow \ {\cal A}_{\g} \end{equation} defined by (\ref{quo}), which is surjective. We now use this map to define projections $p_{\g\g'}$ between the members of our projective family. Let $\g'\ \ge\ \g$. Then, we set \begin{equation} p_{\g\g'}\ :\ {\cal A}_{\g'}\ \rightarrow {\cal A}_\g \end{equation} to be the map defined by \begin{equation} \label{proj1} \pi_\g\ =\ p_{\g\g'} \circ \pi_{\g'}\ . \end{equation} We now have: \begin{proposition}{\rm :} \noindent (i) The map $\Lambda_\g$ of (\ref{diff}) is bijective and the analytic manifold structure defined on ${\cal A}_\g$ by $\Lambda_\g$ does not depend on the initial choice of points in the fibers of $B$ over the vertices of $\g$ and orientation of the edges made in its definition; \noindent (ii) For every pair of graphs $\g,\g'\in\Gra(M)$ such that $\g'\ge\g$, the map $p_{\g\g'}$ defined by (\ref{proj2}) and (\ref{proj1}) is surjective; \noindent (iii) For any three graphs $\g,\g',\g''\in\Gra(M)$ such that $\g''\ge\g'\ge\g$, \begin{equation} p_{\g\g''}\ =\ p_{\g\g'}\circ p_{\g'\g''}\ ; \end{equation} and, \noindent (iv) The maps $p_{\g\g'}$ are analytic. \end{proposition} The proofs are straightforward. It is worth noting, however, that to show the surjectivity in (ii), one needs the assumption that the structure group $G$ is connected \cite{AL1}. On the other hand, compactness of $G$ is not used directly in proposition 1. Compactness is used, of course, in concluding that ${\cal A}_{\g}$ is compact. Thus, we have introduced a projective family $({\cal A}_\g, p_{\g\g'})_{\g,\g'\in \Gra(M)}$ of compact, Hausdorff, analytic manifolds, labelled by graphs in $M$. We will denote its projective limit by ${\overline \A}$. We will conclude this subsection by presenting a characterization of ${\overline \A}$.
Note first that a connection $A\in{\cal A}$ naturally defines a point $A_\g \in {\cal A}_\g$ for each $\g\in\Gra(M)$ and that the resulting family $(A_\g)_{\g\in\Gra(M)}$ represents a point in ${\overline \A}$. Hence, we have a natural map \begin{equation} {\cal A}\ \rightarrow \ {\overline \A}, \end{equation} which is obviously an injection. There are, however, elements of ${\overline \A}$ which are not in the image of this map. In fact, ``most of'' ${\overline \A}$ lies outside ${\cal A}$. To represent a general element of ${\overline \A}$, we proceed as follows. Consider first a map $I$ which assigns to each oriented edge $e$ in $M$, an isomorphism \begin{equation} I(e)\ :\ B_{e_-}\ \rightarrow \ B_{e_+}, \end{equation} between the fibers $B_{e_\pm}$ over $e_\pm$, the final and the initial end points of $e$. Suppose that this map $I$ satisfies the following two properties: \begin{equation} \label{24} I(e^{-1}) \ =\ [I(e)]^{-1}\quad {\rm and}\quad I(e_2\circ e_1)\ =\ I(e_2)\circ I(e_1)\ , \end{equation} whenever the composed path $e_2\circ e_1$ is again analytic. (Here $e^{-1}$ is the edge obtained from $e$ by inverting its orientation, and, if $e_{1+}=e_{2-}$, $e_2\circ e_1$ is the edge obtained by gluing edges $e_2, e_1$.) Then, we call $I$ a {\it generalized parallel transport} in $B$. Let us denote the space of all these generalized parallel transports by ${\cal P}(B)$. Every element of the projective limit ${\overline \A}$ defines uniquely an element $I_{\bar A}$ of ${\cal P}(B)$. Indeed, let ${\bar A} = (A_\g)_{\g\in\Gra(M)} \in {\overline \A}$. For an oriented edge $e$ in $M$ pick any graph $\g$ which contains $e$ as the product of its edges (for some orientation) and define \begin{equation}\label{I} I_{{\bar A}}(e)\ :=\ H(A_\g, e)\ , \end{equation} where the right hand side stands for the (ordinary) parallel transport defined by $A_\g \in {\cal A}_{\g}$.
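The two defining properties (\ref{24}) are easy to verify in a finite sketch. In the toy model below (ours, not the paper's; a trivial bundle, so that each fiber isomorphism is just an element of $G=SO(2)$), inverting an edge inverts its transport and composing edges multiplies the transports:

```python
import math

# Toy model (our illustration): trivial SO(2) bundle, so each I(e) is just
# a 2x2 rotation matrix.
def rot(t):
    c, s = math.cos(t), math.sin(t)
    return ((c, -s), (s, c))

def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def inv(a):
    # for rotations the inverse is the transpose
    return tuple(tuple(a[j][i] for j in range(2)) for i in range(2))

def close(a, b, tol=1e-12):
    return all(abs(a[i][j] - b[i][j]) < tol for i in range(2) for j in range(2))

I = {"e1": rot(0.4), "e2": rot(1.3)}
I["e1inv"] = inv(I["e1"])          # property I(e^{-1}) = I(e)^{-1}
I["e2e1"] = mul(I["e2"], I["e1"])  # property I(e2 o e1) = I(e2) I(e1)

prop_inverse = close(mul(I["e1inv"], I["e1"]), rot(0.0))
prop_compose = close(I["e2e1"], rot(1.7))   # rotation angles add
```

For a nontrivial bundle one would carry the fiber identifications along as well; the group-multiplication structure of the two properties is unchanged.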
From the definition of the projective limit ${\overline \A}$ it is easy to see that (\ref{I}) gives rise to a well-defined map \begin{equation} \label{Hol} {\overline \A} \ni {\bar A}\ \mapsto\ I_{{\bar A}}\in{\cal P}(B) \ . \end{equation} Furthermore, it is straightforward to show the following properties of this map: \begin{proposition}{\rm :} The map (\ref{Hol}) defines a one-to-one correspondence between the projective limit ${\overline \A}$ and the space ${\cal P}(B)$ of generalized parallel transports in $B$. \end{proposition} This characterization leads us to regard ${\overline \A}$, heuristically, as the configuration space of all possible ``floating'' lattices in $M$, prior to the removal of gauge freedom at the vertices (see (\ref{quo})). \subsection{The projective family for ${\cal G}$.} As we just noted, in the projective family constructed in the last subsection, there is still a remaining gauge freedom: given a graph $\g$, the restrictions of the vertical automorphisms of the bundle $B$ to the vertices of $\g$ still act non-trivially on ${\cal A}_{\g}$. In this subsection, we will construct a projective family $({\cal G}_{\g}, p_{\g\g'})$ from these restricted gauge transformations. In the next subsection, we will use the two families to construct the physically relevant quotient projective family. Given a graph $\g$, the restricted gauge freedom is the image of the following projection \begin{equation} \label{proj3} \tilde{\pi}_\g\ :\ {\cal G}\ \rightarrow\ {\cal G}/\widehat{{\cal G}}^{\g}\ =:\ {\cal G}_\g\ . \end{equation} Clearly, the group ${\cal G}_\g$ has a natural action on ${\cal A}_\g$.
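This action can be made concrete in a small example. In the sketch below (our toy model, with $G=U(1)$ realized as complex phases on a triangle graph), an element of ${\cal G}_\g$, i.e. one phase per vertex, acts on the edge holonomies; the holonomy around a closed loop is then unchanged (for abelian $G$ it is exactly invariant; in general it is invariant up to conjugation). The convention $A_e \mapsto g_{t(e)}\, A_e\, g_{s(e)}^{-1}$ is one common choice, not fixed by the paper.

```python
import cmath

# Toy model (ours): G = U(1) on a triangle graph; edges carry holonomies,
# vertices carry gauge phases.
edges = {"e1": ("v1", "v2"), "e2": ("v2", "v3"), "e3": ("v3", "v1")}  # (source, target)
A = {"e1": cmath.exp(0.3j), "e2": cmath.exp(1.1j), "e3": cmath.exp(-0.2j)}
g = {"v1": cmath.exp(0.5j), "v2": cmath.exp(-1.4j), "v3": cmath.exp(0.9j)}

def gauge_transform(A, g):
    """Action of (g_v) in G_gamma on (A_e) in A_gamma: A_e -> g_t A_e g_s^{-1}."""
    return {e: g[t] * A[e] / g[s] for e, (s, t) in edges.items()}

def loop_holonomy(A):
    """Holonomy around the closed loop e3 o e2 o e1."""
    return A["e3"] * A["e2"] * A["e1"]

A2 = gauge_transform(A, g)
invariant = abs(loop_holonomy(A2) - loop_holonomy(A)) < 1e-12
```

The gauge phases cancel in pairs around the loop, which is the elementary mechanism behind the gauge invariance of the holonomy algebra.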
Since ${\cal G}_\g$ consists essentially of the gauge transformations ``acting at the vertices'' of $\g$, (up to the natural isomorphism) one can write ${\cal G}_\g$ as the cartesian product group \begin{equation} {\cal G}_\g\ =\ \times_{v\in{\rm Ver}(\g)}{\cal G}_v \ , \end{equation} where ${\cal G}_v$ is, as before, the group of automorphisms of the fiber $\pi^{-1}(v)\subset B$ and ${\rm Ver}(\g)$ stands for the set of the vertices of $\g$. Now, each group ${\cal G}_v$ is isomorphic with the structure group $G$. Hence, if we fix a point in the fiber over each vertex of $\g$, we obtain an isomorphism \begin{equation} \label{diff2} \tilde{\Lambda}_\g\ :\ {\cal G}_\g\ \rightarrow\ G^V\ , \end{equation} where $V$ is the number of vertices of $\g$. Finally, given any two graphs $\g'\ge\g$, the map $\tilde{\pi}_{\g}$ of (\ref{proj3}) factors into \begin{equation} \label{proj4} \tilde{\pi}_{\g} \ =\ p_{\g\g'}\circ \tilde{\pi}_{\g'}, \ \ \ p_{\g\g'}\ :\ {\cal G}_{\g'}\ \rightarrow\ {\cal G}_\g, \end{equation} and hence defines the maps $p_{\g\g'}$ uniquely. It is easy to verify that this machinery is sufficient to endow $({\cal G}_\g, p_{\g\g'})_{\g,\g' \in \Gra(M)}$ with the structure of a projective family of compact, connected Lie groups. We have: \begin{proposition}{\rm :} \noindent(i) The family $({\cal G}_\g, p_{\g\g'})_{\g,\g'\in\Gra(M)}$ defined by (\ref{proj3}) and (\ref{proj4}) is a smooth projective family; \noindent(ii) the maps $p_{\g\g'}$ are Lie group homomorphisms; \noindent(iii) The projective limit ${\overline \G}$ of the family is a compact topological group with respect to the pointwise multiplication: let $(g_\g)_{\g\in\Gra(M)}, (h_\delta)_{\delta\in\Gra(M)} \in {\overline \G}$, then \begin{equation} (g_\g)_{\g\in\Gra(M)} (h_\delta)_{\delta\in\Gra(M)}\ :=\ (g_\g h_\g)_{\g\in\Gra(M)}.
\end{equation} \noindent(iv) There is a natural topological group isomorphism \begin{equation} {\overline \G} \ \rightarrow \ \times_{x\in M}{\cal G}_x \end{equation} where the group on the right hand side is equipped with the product topology. \end{proposition} In view of the item (iv) above, we again have the expected embedding: \begin{equation} {\cal G} \ \rightarrow {\overline \G}\ , \end{equation} where the group ${\cal G}$ of automorphisms of $B$ is identified with the subgroup consisting of those families $(g_x)_{x\in M}\in{\overline \G}$ which are smooth in $x$. Let us equip ${\cal G}_\g = G^V$ with the measure $\mu_\g= (\mu_o)^V$, where $\mu_o$ is the Haar measure on $G$. Then it is straightforward to verify that $(\mu_\g)_{\g\in\Gra(M)}$ is a consistent family of measures in the sense of section 2. Hence, it defines a regular, Borel probability measure $\bar{\mu}_o$ on ${\overline \G}$. This is just the Haar measure on ${\overline \G}$. Thus, by enlarging the group ${\cal G}$ to ${\overline \G}$, one can obtain a compact group of generalized gauge transformations whose total volume is {\it finite} (in fact, equal to one). This observation was first made by Baez \cite{B2}. \subsection{The quotient ${\overline \A}/{\overline \G}$ and the projective family for $\B$.} In the last two subsections, we constructed two projective families. Their projective limits, ${\overline \A}$ and ${\overline \G}$, are the completions of the spaces ${\cal A}$ and ${\cal G}$ of smooth connections and gauge transformations. The action of ${\cal G}$ on ${\cal A}$ can be naturally extended to an action of ${\overline \G}$ on ${\overline \A}$. Indeed, let $(g_\g)_{\g\in\Gra(M)} \in {\overline \G}$ and $(A_\delta)_{\delta\in\Gra(M)}\in {\overline \A}$.
Then, we set \begin{equation} (A_\delta)_{\delta\in\Gra(M)}\ \ (g_\g)_{\g\in\Gra(M)} \ :=\ (A_\delta g_\delta)_{\delta\in\Gra(M)}\in{\overline \A}\ , \end{equation} where $(A_\delta, g_\delta)\ \mapsto\ A_\delta g_\delta$ denotes the action of ${\cal G}_\delta$ in ${\cal A}_\delta$. Now, this action of ${\overline \G}$ on ${\overline \A}$ is continuous and ${\overline \G}$ is a compact topological group. Hence, the quotient, ${\overline \A}/{\overline \G}$, is a Hausdorff and compact space. This concludes the first part of this subsection. In the second part, we will examine the spaces ${\cal A}_{\g}/{\cal G}_{\g}$. Note first that the projections $p_{\g\g'}$ defined in (\ref{proj1}), (\ref{proj2}) descend to the projections of the quotients, \begin{equation} \label{proj5} p_{\g\g'}\ :\ {\cal A}_{\g'}/{\cal G}_{\g'} \ \rightarrow \ {\cal A}_{\g}/{\cal G}_{\g}\ . \end{equation} We thus have a new compact, Hausdorff, projective family $({\cal A}_\g/{\cal G}_\g, p_{\g\g'})_ {\g,\g'\in\Gra(M)}$. This family can also be obtained directly from the quotient ${\cal A}/{\cal G}$ by a procedure which is analogous to the one used in subsection 3.2: The space ${\cal A}_{\g}/{\cal G}_{\g}$ assigned to a graph $\g$ is just the image of the restriction map \begin{equation} \pi_\g \ : \ {\cal A}/{\cal G}\ \rightarrow \ ({\cal A}/{\cal G})_{|\g}\ =\ {\cal A}_{\g}/{\cal G}_{\g}. \end{equation} Therefore, it is natural to denote the projective limit of $({\cal A}_\g/{\cal G}_\g, p_{\g\g'})_{\g,\g'\in\Gra(M)}$ by $\overline{{\cal A}/{\cal G}}$. The natural question now is: What is the relation between $\B$ and ${\overline \A}/{\overline \G}$?
Note first that there is a natural map from ${\overline \A}/{\overline \G}$ to $\overline{{\cal A}/{\cal G}}$, namely \begin{equation} \label{iso} {\overline \A}/{\overline \G}\ni[(A_\g)_{\g\in\Gra(M)}] \mapsto\ ([A_\g])_{\g\in\Gra(M)}\in \overline {{\cal A}/{\cal G}}, \end{equation} where the square bracket denotes the operation of taking the orbit with respect to the corresponding group. Using the results of \cite{MM,AL2}, it is straightforward to show that: \begin{proposition} The map (\ref{iso}) defines a homeomorphism with respect to the quotient topology on ${\overline \A}/{\overline \G}$ and the projective limit topology on $\overline{{\cal A}/{\cal G}}$. \end{proposition} Finally, by combining the results of \cite{B1} and \cite{AL2}, one can show that the space $\B$ is naturally isomorphic to the Gel'fand spectrum of the holonomy $C^\star$-algebra $\overline{\cal HA}$ introduced in section 1. (Thus, there is no ambiguity in notation.) The space $\B$ thus serves as the quantum configuration space of the continuum gauge theory. This is the space of direct physical interest. We conclude with a number of remarks. \begin{enumerate} \item Since $\B$ is the Gel'fand spectrum of $\overline{\cal HA}$, it follows \cite{AL1} that there is a natural embedding: \begin{equation} \bigcup_{B'}\ ({\A/{\cal G}})_{B'}\ \rightarrow \B, \end{equation} where $({\A/{\cal G}})_{B'}$ denotes the quotient space of connections on a bundle $B'$ and $B'$ runs through all the $G$-principal fibre bundles over $M$. Thus, although it is not obvious from our construction, $\B$ is independent of the choice of the bundle $B$ we made in the beginning; it is tied only to the underlying manifold $M$. (See \cite{AI,AL1,B1,MM} for the bundle independent definitions.) \item Each member ${\cal A}_\g$ and ${\cal G}_\g$ of the first two projective families we considered is a compact, analytic manifold.
Unfortunately, the same is not true of the quotients ${\cal A}_\g/{\cal G}_\g$ which constitute the third family since the quotient construction introduces kinks and boundaries. Because of this, while discussing differential geometry, we will regard $\B$ as ${\overline \A}/{\overline \G}$ and deal with ${\overline \G}$ invariant structures on ${\overline \A}$. That is, it would be more convenient to work ``upstairs'' on ${\overline \A}$ even though $\B$ is the space of direct physical interest. This point was first emphasized by Baez \cite{B1, B2}. \item In the literature, one often fixes a base point $x_0$ in $M$ and uses the subgroup ${\cal G}_{x_0}$ of ${\cal G}$ consisting of vertical automorphisms which act as the identity on the fiber over $x_0\in M$ as the gauge group. In the present framework, this corresponds to considering the subgroups ${\cal G}_{\g,x_0}\subset{\cal G}_{\g}$ where $\g$ runs through $\Gra(M)_{x_0}$, the space of graphs which have $x_0$ as a vertex. $\B$ can be recovered by taking the quotient of the projective limit of this family by the natural action of the gauge group at the base point. \end{enumerate} \section{Elements of the differential geometry on ${\overline \A}$} We are now ready to discuss differential geometry. We saw in section 2 that one can introduce a measure on the projective limit by specifying a consistent family of measures on the members of the projective family. The idea now is to use this strategy to introduce on the projective limits various structures from differential geometry. The object of our primary interest is $\B$. However, as indicated above, we will first introduce geometric structures on ${\overline \A}$. Those structures which are invariant under the action of ${\overline \G}$ on ${\overline \A}$ will descend to $\B = {\overline \A}/{\overline \G}$ and provide us, in section 5, with differential geometry on the quotient.
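The measure-theoretic strategy recalled above can be seen in a finite toy model (ours, not from the paper): take ${\cal X}_n = \{0,1\}^n$ with truncation maps as projections; the uniform measures then satisfy the consistency condition (\ref{2.8}), since marginalizing the level-$m$ measure reproduces the level-$n$ one, and hence they define a measure on the projective limit.

```python
from itertools import product
from collections import defaultdict

# Finite toy projective family (our illustration): X_n = {0,1}^n,
# with p_{n,m} (m >= n) the truncation of a bit-string to its first n entries.
def uniform(n):
    """Uniform probability measure on X_n, as a dict point -> weight."""
    pts = list(product((0, 1), repeat=n))
    return {x: 1.0 / len(pts) for x in pts}

def pushforward(n, mu_m):
    """(p_{n,m})_* mu_m: marginalize a measure on X_m down to X_n (m >= n)."""
    out = defaultdict(float)
    for x, w in mu_m.items():
        out[x[:n]] += w
    return dict(out)

mu2, mu3 = uniform(2), uniform(3)
marg = pushforward(2, mu3)

# the consistency condition (2.8): the family (uniform(n)) then determines
# a unique probability measure on the projective limit
consistent = all(abs(marg[x] - mu2[x]) < 1e-12 for x in mu2)
```

The same marginalization pattern, with $G^E$ in place of $\{0,1\}^n$ and Haar measure in place of the uniform one, is what underlies the measures on ${\overline \A}$ discussed below.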
In section 4.1, we introduce $C^n$ differential forms on ${\overline \A}$ and, in section 4.2, the $C^n$ volume forms. Section 4.3 is devoted to $C^n$ vector fields and their properties. Finally, in section 4.4, we combine these results to show how vector fields can be used to define ``momentum operators'' in the quantum theory. While we will focus on the projective family introduced in section 3.2, our analysis will go through for {\it any} projective family, the members of which are smooth compact manifolds. Throughout this section $C^n$ could in particular stand for $C^\infty$ or $C^\omega$. \subsection{Differential forms} Let us begin with functions. Results of section 2 imply that the projective limit ${\overline \A}$ of the family $({\cal A}_\g, p_{\g \g'})_{\g,\g'\in\Gra(M)}$ is a compact Hausdorff space. Hence, we have a well-defined algebra $C^0({\overline \A})$ of continuous functions on ${\overline \A}$. We now want to introduce the notion of $C^n$ functions on ${\overline \A}$. The problem is that ${\overline \A}$ does not have a natural manifold structure. Recall however that the algebra $C^0({\overline \A})$ could also be constructed directly from the projective family, without passing to the limit: We saw in section 2 that $C^0({\overline \A})$ is naturally isomorphic with the algebra ${\rm Cyl}^0 ({\overline \A})$ of cylindrical continuous functions. The idea now is to simply define differentiable functions on ${\overline \A}$ as differentiable cylindrical functions on the projective family. This is possible because each member ${\cal A}_\g$ of the family has the structure of an analytic manifold, and the projections $p_{\g\g'}$ are all analytic.
Thus, we can define $C^n$ cylindrical functions ${\rm Cyl}^n({\overline \A})$ to be \begin{equation} {\rm Cyl}^n({\overline \A})\ :=\ \bigcup_{\g\in\Gra(M)}C^n({\cal A}_\g)/\sim, \end{equation} where the equivalence relation is the same as in (\ref{2.6}) of section 2; as before, it removes the redundancy by identifying, if $\g'\ge \g$, the function $f$ on ${\cal A}_\g$ with its pull-back $f'$ on ${\cal A}_{\g'}$. Elements of ${\rm Cyl}^n({\overline \A})$ will serve as the $C^n$ functions on ${\overline \A}$. Note that if a cylindrical function $f\in {\rm Cyl}^0({\overline \A})$ can be represented by a function $f_\g\in C^n({\cal A}_\g)$, then all the representatives of $f$ are of the $C^n$ differentiability class. Next we consider higher order forms. The idea is again to use an equivalence relation $\sim$ to ``glue'' differential forms on $({\cal A}_\g)_{\g\in\Gra(M)}$ and obtain consistent families that can serve as differential forms on ${\overline \A}$. Consider $\bigcup_{\g\in\Gra(M)}\Omega ({\cal A}_\g)$, where $\Omega ({\cal A}_\g)$ denotes the Grassmann algebra of all $C^n$ sections of the bundle of differential forms on ${\cal A}_\g$. Let us introduce the equivalence relation $\sim$ by extending (\ref{2.6}) in an obvious way: \begin{equation} \label{62} \Omega({\cal A}_{\g_1}) \ni \omega_{\g_1}\ \sim\ \omega_{\g_2}\in \Omega ({\cal A}_{\g_2})\ \ \ {\rm iff} \ \ \ p_{\g_1\g'}^\star\omega_{\g_1}\ = \ p_{\g_2\g'}^\star\omega_{\g_2}, \end{equation} for any $\g'\ge\g_1,\g_2$. (Again, if the equality above is true for a particular $\g'$ then it is true for every $\g'\ge \g_1, \g_2$.) The set of differential forms on ${\overline \A}$ we are seeking is now given by: \begin{equation} \Omega({\overline \A})\ :=\ (\bigcup_{\g\in\Gra(M)}\Omega({\cal A}_\g))\ / \sim. \end{equation} Clearly, $\Omega({\overline \A})$ contains well-defined subspaces $\Omega^m({\overline \A})$ of $m$--forms.
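A minimal sketch of the equivalence relation for functions (our toy model, with $G=U(1)$): let $\g$ have a single edge $e$ and let $\g'$ refine it as $e = e_2\circ e_1$, so that $p_{\g\g'}$ composes the two holonomies. The pull-back of a function on ${\cal A}_\g$ is constant on the fibers of $p_{\g\g'}$, while a generic function on ${\cal A}_{\g'}$ is not, and hence defines a genuinely new cylindrical function.

```python
import cmath

# Toy model (ours): A_gamma ~ U(1) (one edge e), A_gamma' ~ U(1)^2 (e = e2 o e1).
def p(h1, h2):
    """p_{gamma gamma'}: A_gamma' -> A_gamma, composing the two holonomies."""
    return h2 * h1

f = lambda h: h.real                  # a smooth function on A_gamma
f_pull = lambda h1, h2: f(p(h1, h2))  # its pull-back to A_gamma'
k = lambda h1, h2: h1.real            # NOT a pull-back from A_gamma

# two points of A_gamma' in the same fiber of p (both compose to exp(0.5j))
pa = (cmath.exp(0.7j), cmath.exp(-0.2j))
pb = (cmath.exp(0.1j), cmath.exp(0.4j))

constant_on_fibers = abs(f_pull(*pa) - f_pull(*pb)) < 1e-12
genuinely_finer = abs(k(*pa) - k(*pb)) > 1e-6
```

In the quotient (\ref{2.7}), $f$ and $f\circ p_{\g\g'}$ are identified, while $k$ represents a new element associated with the finer graph.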
Since the pullbacks $p_{\g\g'}^\star$ commute with the exterior derivatives, there is a natural, well-defined exterior derivative operation $d$ on ${\overline \A}$: \begin{equation} d\ : \Omega^m({\overline \A})\ \rightarrow\ \Omega^{m+1}({\overline \A}). \end{equation} One can use it to define and study the corresponding cohomology groups $H^m({\overline \A})$. Thus, although ${\overline \A}$ does not have a natural manifold structure, using the projective family and an algebraic approach to geometry, we can introduce on it $C^n$ differential forms and exterior calculus. \subsection{Volume forms.} Volume forms require a special treatment because they are not encompassed in the discussion of the previous section. To see this, recall that an element of $\Omega({\overline \A})$ is an assignment of a consistent family of $m$-forms, for some {\it fixed} $m$, to each ${\cal A}_\g$, with $\g\ge\g_o$ for some $\g_o$. On the other hand, since a volume form on ${\overline \A}$ is to enable us to integrate elements of ${\rm Cyl}^0({\overline \A})$, it should correspond to a consistent family of $d_\g$-forms on $({\cal A}_\g)_{\g\in \Gra(M)}$, where $d_\g$ is the dimension of the manifold ${\cal A}_\g$. That is, the rank of the form is no longer fixed but changes with the dimension of ${\cal A}_{\g}$. Thus, volume forms are analogous to the measures discussed in section 2 rather than to the $m$-forms discussed above. The procedure to introduce them is clear from our discussion of measures.
A {\it $C^n$ volume form} on ${\overline \A}$ will be a family $(\nu_\g)_{\g\in\Gra(M)}$, where each $\nu_\g$ is a $C^n$ volume form with strictly positive volume on ${\cal A}_\g$, such that \begin{equation} \label{volcons} C^0({\cal A}_{\g'})\ni f_{\g'}\ \sim\ f_\g\in C^0({\cal A}_{\g})\ \ \Rightarrow \ \ \int_{{\cal A}_\g}\ f_\g\nu_\g =\ \int_{{\cal A}_{\g'}}\ f_{\g'}\nu_{\g'}\ , \end{equation} for all $\g'\ge \g$ and all functions $f_\g$ on ${\cal A}_\g$, where $f_{\g'} = p_{\g \g'}^\star f_\g$. Now, since ${\cal A}_\g$ are all compact, it follows from the discussion of measures in section 2 that this volume form automatically defines a regular Borel measure, say $\nu$, on ${\overline \A}$ and that this measure satisfies: \begin{equation} \int_{\overline \A}\ [f_\g]_\sim\ d\nu\ :=\ \int_{{\cal A}_\g}f_\g\nu_\g. \end{equation} The most natural volume form $\mu_o$ on ${\overline \A}$ is provided by the normalized, left and right invariant (i.e., Haar) volume form $\mu_H$ on the structure group $G$. Use the map $\Lambda_\g:{\cal A}_\g\ \rightarrow G^E$ defined in (\ref{diff}) to pull back to ${\cal A}_\g$ the product volume form $(\mu_H)^E$, induced on $G^E$ by $\mu_H$, to obtain \begin{equation} \label{vol} \mu^H_\g\ :=\ \Lambda_{\g}^{-1}{}_\star\ (\mu_H)^E. \end{equation} We then have \cite{AL1,B2,AL2}: \begin{proposition}{\rm :} \noindent(i) The form $\mu^H_\g$ of (\ref{vol}) is insensitive to the choice of the gauge over the vertices of $\g$, used in the definition (\ref{diff}) of the map $\Lambda_\g$; \noindent(ii) The family of volume forms $(\mu^H_\g)_{\g\in\Gra(M)}$ satisfies the consistency conditions (\ref{volcons}); \noindent(iii) The volume form $\mu_o$ defined on ${\overline \A}$ by $(\mu^H_\g)_{\g\in \Gra(M)}$ is invariant with respect to the action of (all) the automorphisms of the underlying bundle $B(M,G)$. 
\end{proposition} The push-forward $\mu_o'$ of $\mu_o$ under the natural projection from ${\overline \A}$ to $\B$ is the induced Haar measure on $\B$ of \cite{AL1}, chronologically, the first measure introduced in this subject. It is invariant with respect to all the diffeomorphisms of $M$. The measure $\mu_o$ itself was first introduced in \cite{B1}. By now, several infinite families of measures have been introduced on ${\overline \A}$ (which can be pushed forward to $\B$) \cite{B1,B2,ALMMT2,ALMMT1}. These are reviewed in \cite{AL2}. In section 6, using heat kernel methods, we will introduce another infinite family of measures. These, as well as the measures introduced in \cite{AL2}, arise from $C^{n}$ volume forms on ${\overline \A}$. \subsection{Vector fields.} Introduction of the notion of vector fields on ${\overline \A}$ is somewhat more subtle than that of $m$-forms because while one can pull back forms, in general one can push forward only vectors (rather than vector fields). Hence, given $\g'\ge\g$, only certain vector fields on ${\cal A}_{\g'}$ can be pushed forward through $(p_{\g\g'})_\star$. To obtain interesting examples, therefore, we now have to introduce an additional structure: vector fields on ${\overline \A}$ will be associated with a graph. A {\it smooth vector field $X^{(\g_o)}$ on ${\overline \A}$} is a family $(X_\g)_{\g\ge\g_o}$ where $X_\g$ is a smooth vector field on ${\cal A}_\g$ for all $\g\ge \g_o$, which satisfies the following consistency condition: \begin{equation} \label{vecons} (p_{\g\g'})_{\star}\ X_{\g'}\ =\ X_\g,\ \ \ {\rm whenever}\ \ \ \g'\ge\g\ge \g_o. \end{equation} It is natural to define a derivation $D$ on ${\rm Cyl}^n({\overline \A})$, as a linear and star preserving map, $D:\ {\rm Cyl}^n({\overline \A})\ \rightarrow\ {\rm Cyl}^{n-1} ({\overline \A})$, such that for every $f,g\in {\rm Cyl}^n({\overline \A})$, the Leibniz rule holds, i.e., $D(fg)\ =\ D(f)g + D(g)f$.
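The consistency condition (\ref{vecons}) can again be sketched in a $U(1)$ toy model (ours, not the paper's): take ${\cal A}_\g \cong U(1)$ with angle coordinate $\theta$ and a finer ${\cal A}_{\g'} \cong U(1)^2$ with coordinates $(t_1,t_2)$, the projection adding the angles. The field $\partial/\partial t_1$ on the finer space pushes forward to $\partial/\partial\theta$, and acting on a cylindrical function with either member of the consistent family gives the same answer:

```python
import math

# Toy model (ours): A_gamma ~ U(1) (coordinate theta), A_gamma' ~ U(1)^2
# (coordinates t1, t2), projection p(t1, t2) = t1 + t2 (angles add for U(1)).
H = 1e-6

def X_fine(F, t1, t2):
    """Numerical derivative of F along d/dt1 on the finer space."""
    return (F(t1 + H, t2) - F(t1 - H, t2)) / (2 * H)

def X_coarse(f, theta):
    """Numerical derivative of f along d/dtheta, the pushed-forward field."""
    return (f(theta + H) - f(theta - H)) / (2 * H)

f = math.cos                      # a cylindrical function over gamma
F = lambda t1, t2: f(t1 + t2)     # its pull-back to gamma'

t1, t2 = 0.4, 0.9
consistent = abs(X_fine(F, t1, t2) - X_coarse(f, t1 + t2)) < 1e-6
```

By contrast, a field like $\sin(t_2)\,\partial/\partial t_1$ would not push forward to a well-defined vector field on the coarser space, which is why the consistency condition has to be imposed by hand.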
As one might expect, a vector field $X^{(\g_o)}$ defines a derivation, which we will denote also by $X^{(\g_o)}$. Indeed, given $f\in{\rm Cyl}^n({\overline \A})$ there exists $\g\ge\g_o$ such that $f\ =\ [f_\g]_\sim$. We simply set \begin{equation} X^{(\g_o)}(f)\ :=\ [X_\g(f_\g)]_\sim\in{\rm Cyl}^{n-1}({\overline \A}), \end{equation} where $X_\g(f_\g)$ is the action of the vector field $X_\g$ on ${\cal A}_\g$ on the function $f_\g$, and note that the right hand side is independent of the choice of the representative. Finally, given any two vector fields, we can take their commutator. We have: \begin{proposition}{\rm :} Let $X^{(\g_1)} = (X_\g)_{\g\ge\g_1}$, $Y^{(\g_2)} = (Y_\g)_{\g\ge\g_2}$ be two vector fields on ${\overline \A}$. Then, the commutator $[X^{(\g_1)}, Y^{(\g_2)}]$ of the corresponding derivations is the derivation defined by a vector field $Z^{(\g_3)}$ on ${\overline \A}$ (where $\g_3\in \Gra(M)$ is any label satisfying $\g_3 \ge \g_1,\g_2$) given by \begin{equation} Z_\g\ =\ [X_\g,Y_\g], \end{equation} for any $\g\ge \g_3$. \end{proposition} For notational simplicity, from now on, we will drop the superscripts on the vector fields. \bigskip\noindent \subsection{Vector fields as momentum operators} We will first introduce the notion of compatibility between vector fields $X$ and volume forms $\mu$ on ${\overline \A}$ and then use it to define certain essentially self-adjoint operators $P(X)$ on $L^2({\overline \A},\mu)$. Let us begin by recalling that, given a manifold $\Sigma$, a vector field $V$ and a volume form $\nu$ thereon, the divergence ${\rm div}_\nu V$ is a function defined on $\Sigma$ by \begin{equation} L_V\ \nu\ =:\ ({\rm div}_\nu V) \nu, \end{equation} where $L_V$ denotes the standard Lie derivative.
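On a compact manifold this definition yields the familiar integration by parts formula. A numerical sketch (our toy example): on $\Sigma = S^1$ with the normalized volume form $\nu = d\theta/2\pi$ and $V = a(\theta)\,\partial_\theta$, one has ${\rm div}_\nu V = a'(\theta)$, and $\int f\, V(g)\,\nu = -\int \big(V(f) + ({\rm div}_\nu V) f\big)\, g\,\nu$.

```python
import math

# Numerical sketch (ours): Sigma = S^1, nu = dtheta/2pi, V = a(theta) d/dtheta,
# so div_nu V = a'(theta); check  int f V(g) nu = - int (V(f) + a' f) g nu.
N = 20000
a  = lambda t: 1.0 + 0.3 * math.sin(t)   # component of V
ap = lambda t: 0.3 * math.cos(t)         # div_nu V = a'
f, fp = math.cos, lambda t: -math.sin(t)  # test function f and its derivative
g, gp = math.sin, math.cos                # test function g and its derivative

def integrate(h):
    """Riemann sum for (1/2pi) * integral of h over [0, 2pi] on a uniform grid."""
    return sum(h(2 * math.pi * k / N) for k in range(N)) / N

lhs = integrate(lambda t: f(t) * a(t) * gp(t))
rhs = -integrate(lambda t: (a(t) * fp(t) + ap(t) * f(t)) * g(t))
```

For periodic smooth integrands the uniform Riemann sum is extremely accurate, so the two sides agree to high precision; with the choices above both equal $1/2$.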
We will say that a vector field $X=(X_\g)_{\g\ge\g_0}$ on ${\overline \A}$ is {\it compatible with a volume form} $\mu=(\mu_\g)_{\g\in \Gra(M)}$ on ${\overline \A}$ if \begin{equation} \label{div} p_{\g\g'}^\star\ {\rm div}_{\mu_\g} X_{\g}\ =\ {\rm div}_{\mu_{\g'}}\ X_{\g'}, \end{equation} whenever $\g'\ge\g\ge\g_0$. Note, that if (\ref{div}) holds, the divergence ${\rm div}_{\mu_\g}\ X_\g$ is a cylindrical function, \begin{equation} {\rm div}_\mu X\:=\ [{\rm div}_{\mu_\g}\ X_\g]_\sim\in{\rm Cyl}^\infty({\overline \A}). \end{equation} We shall call it {\it the divergence of} $X$ with respect to a volume form $\mu$. The next proposition shows that the divergence of vector fields on ${\overline \A}$ has several of the properties of the usual, finite dimensional divergence. \begin{proposition}{\rm :} \noindent i) Let $X$ be a vector field and $\mu$, a smooth volume form on ${\overline \A}$ such that $X$ is compatible with $\mu$. Then, for every $f,g\in{\rm Cyl}^1({\overline \A})$, \begin{equation} \int_{\overline \A} f X(g)\mu \ =\ -\int_{\overline \A} (X(f)+ ({\rm div}_\mu\ X) f)g\ \mu. \end{equation} \noindent ii) Suppose that $Y$ is another vector field on ${\overline \A}$ which is compatible with $\mu$. Then, the commutator $[X,Y]$ also is compatible with $\mu$, and \begin{equation} {\rm div}_\mu [X, Y]\ =\ X({\rm div}_\mu\ Y)\ -\ Y({\rm div}_\mu\ X). \end{equation} \end{proposition} {\it Proof}{\rm :} The result follows immediately by using the properties of the usual divergence of vector fields $X_\g$ and $Y_\g$ on $({\cal A}_\g,\mu_\g)$ and the consistency conditions satisfied by $X_\g, Y_\g$ and $\mu_\g$. $\Box$ \bigskip We are now ready to introduce the momentum operators. Fix a smooth volume form $\mu \ =\ (\mu_\g)_{\g\in \Gra(M)}$ on ${\overline \A}$. In the Hilbert space $L^2({\overline \A},\mu)$, we define below a quantum representation of the Lie algebra of vector fields compatible with $\mu$. Let $X$ be such a vector field on ${\overline \A}$. 
We assign to $X$ the operator $(P(X), {\rm Cyl}^1({\overline \A}))$ as follows: \begin{equation} \label{P(X)} P(X)\ := \ iX\ +\ {i\over 2} ({\rm div}_\mu\ X)\ . \end{equation} (Here, ${\rm Cyl}^1({\overline \A})$ is the domain of the operator.) Clearly, $(P(X), {\rm Cyl}^1({\overline \A}))$ is a densely-defined operator on the Hilbert space $L^2({\overline \A}, \mu)$. Following the terminology used in quantum mechanics on manifolds, we will refer to $P(X)$ as the momentum operator associated with the vector field $X$. As one might expect from this analogy, the second term in the definition (\ref{P(X)}) of the momentum operator is necessary to ensure that it is symmetric. To examine properties of this operator, we first need some general results. Let us therefore make a brief detour and work in a more general setting. Consider a family of Hilbert spaces $({\cal H}_\g, p_{\g \g'}^\star)_{\g, \g' \in \Gamma}$ where $\Gamma$ is any partially ordered and directed set of labels and \begin{equation} \label{projh} p_{\g \g'}^\star:\ {\cal H}_\g\ \rightarrow\ {\cal H}_{\g'} \end{equation} is an inner-product preserving embedding defined for each ordered pair $\g'\ge\g\in\Gamma$. The maps (\ref{projh}) provide the union $\bigcup_{\g\in\Gamma}{\cal H}_\g$ with an equivalence relation defined as in (\ref{2.6}). The Hermitian inner products $(.,.)_\g$ give rise to a unique Hermitian inner product on the vector space $(\bigcup_{\g\in \Gamma} {\cal H}_\g) /\sim$. For, if $\psi,\phi\in (\bigcup_{\g\in\Gamma} {\cal H}_\g)/\sim$, there exists a common label $\g\in\Gamma$ such that $\psi\ =\ [\psi_\g]_\sim$ and $\phi\ =\ [\phi_\g]_\sim$, with $\psi_\g, \phi_\g\in {\cal H}_\g$, and we can set \begin{equation} (\psi, \phi)\ :=\ (\psi_\g,\phi_\g)_\g. \end{equation} It is easy to check that this inner product is Hermitian. Thus, we have a pre-Hilbert space. Let ${\cal H}$ denote its Cauchy completion: \begin{equation} {\cal H}\ =\ \overline{\bigcup_{\g\in\Gamma}{\cal H}_\g/\sim}.
\end{equation} On this Hilbert space ${\cal H}$, consider an operator given by a family of operators $(O_\g, {\cal D}_{\g}(O_\g))_{\g\in\Gamma(O)}$, where $\Gamma(O)\subset\Gamma$ is a cofinal subset of labels (i.e., for every $\g\in\Gamma$ there is $\g'\in\Gamma(O)$ such that $\g'\ge\g$). We will say that $(O_\g, {\cal D}_{\g}(O_\g))_{\g\in\Gamma(O)}$ is self consistent if the following two conditions are satisfied: \begin{equation} \label{01} p_{\g\g'}^\star{\cal D}_\g(O_\g)\ \subseteq\ {\cal D}_{\g'}(O_{\g'}) \end{equation} \begin{equation} \label{02} O_{\g'}\circ\ p_{\g\g'}^\star \ = \ p_{\g\g'}^\star\circ O_\g \end{equation} for every $\g'\ge\g$ such that $\g',\g\in\Gamma(O).$ Since the label set $\Gamma(O)\subset \Gamma$ is cofinal, a self consistent family of operators $(O_\g, {\cal D}_{\g}(O_\g))_{\g \in\Gamma(O)}$ defines an operator $O$ in ${\cal H}$ via $O(\psi)\ :=\ [O_\g \psi_\g]_\sim$. A general result which we will apply to the momentum operators is the following. \begin{lemma}{\rm :} \label{selfadjointness} Let $(O_\g, {\cal D}_{\g}(O_\g))_{\g\in\Gamma(O)}$ be a self consistent family of operators and $\Gamma(O)$ be cofinal in $\Gamma$. Then, \noindent(i) $(O_\g, {\cal D}_{\g}(O_\g))_{\g\in\Gamma(O)}$ defines uniquely an operator $O$ in ${\cal H}$ acting on a domain ${\cal D}(O) \:=\ \bigcup_{\g\in\Gamma(O)}{{\cal D}_\g(O_\g)}/\sim$ and such that for every $f_\g\in{\cal D}_\g(O_\g)$ \begin{equation} O([f_\g]_\sim)\ =\ [O_\g(f_\g)]_\sim; \end{equation} \noindent(ii) If $(O_\g, {\cal D}_\g(O_\g))$ is essentially self-adjoint in ${\cal H}_\g$ for every $\g\in \Gamma(O)$, then the resulting operator $(O, {\cal D}(O))$ defined in (i) is also essentially self-adjoint; \noindent(iii) If $(O_\g, {\cal D}_\g(O_\g))$ is essentially self-adjoint in ${\cal H}_\g$ for every $\g\in \Gamma(O)$, then the family of self-adjoint extensions $({\tilde O}_\g, {\cal D}_\g({\tilde O}_\g))_{\g\in\Gamma(O)}$ is self consistent.
\end{lemma} \bigskip {\it Proof}{\rm :} Part (i) is obvious from the above discussion. We will prove (ii) by showing that the ranges of the operators $O+iI$ and $O-iI$, where $I$ is the identity operator, are dense in ${\cal H}$. They are given by \begin{equation} \label{range} (O\pm iI) ( {\cal D}(O))\ =\ \bigcup_{\g\in \Gamma(O)}(O_\g\pm i I)({\cal D}_\g(O_\g))/\sim. \end{equation} But, as follows from the hypothesis, the range of each of the operators $O_\g\pm i I$ is dense in the corresponding ${\cal H}_\g$. Hence, indeed, the right hand side of (\ref{range}) is dense in ${\cal H}$. To show (iii), recall that the self-adjoint extension of an essentially self-adjoint operator is just its closure. Let $\g',\g\in\Gamma(O)$ and $\g'\ge\g$. Via the pullback $p_{\g\g'}^\star$, we may consider ${\cal H}_{\g}$ as a subspace of ${\cal H}_{\g'}$. Since $(O_{\g'},{\cal D}_{\g'}(O_{\g'}))$ is an extension of $(O_\g, {\cal D}_{\g}(O_{\g}))$, the closure $({\tilde O}_{\g'}, {\cal D}_{\g'} ({\tilde O}_{\g'}))$ is still an extension of $({\tilde O}_{\g}, {\cal D}_{\g}({\tilde O}_{\g}))$. This concludes the proof of the Lemma. $\Box$ \bigskip We can now return to the momentum operators $(P(X), {\rm Cyl}^1({\overline \A}))$ on $L^2({\overline \A}, \mu)$. \begin{theorem}{\rm :} \noindent (i) Let $X =(X_\g)_{\g\ge\g_0}$ and $\mu=(\mu_\g)_{\g\in\Gra(M)}$ be a smooth vector field and volume form on the projective limit ${\overline \A}$. Suppose $X$ is compatible with $\mu$; then, the operator $(P(X), {\rm Cyl}^1({\overline \A}))$ of (\ref{P(X)}) is essentially self-adjoint on $L^2({\overline \A},\mu)$; \noindent (ii) Let $Y$ be another smooth vector field on the projective limit, also compatible with the measure $\mu$. Then, the vector field $[X,Y]$ also is compatible with $\mu$ and \begin{equation} P([X,Y])\ = \ i [P(X),P(Y)].
\end{equation} \end{theorem} {\it Proof}{\rm:} Part $(i)$ of the Theorem follows trivially from Lemma 1; we only have to substitute $L^2({\overline \A},\mu)$ for ${\cal H}$, $L^2({\cal A}_\g, \mu_\g)$ for ${\cal H}_\g$, $\Gra(M)$ for $\Gamma$ and $((i(X_\g+{1\over 2}{\rm div}_{\mu_\g}X_\g))_{\g \ge\g_0}, \\ {\rm Cyl}^1({\overline \A}))$ for $(O_\g, {\cal D}_{\g}(O_\g))_{\g\in\Gamma(O)}$. Finally, part $(ii)$ can be shown by a simple calculation using Proposition 7. $\Box$ \bigskip This concludes our discussion of the momentum operators. Most of the results of this section concern the case when a vector field is compatible with a volume form. In the next section we shall see that a natural symmetry condition implies that a vector field on ${\overline \A}$ is necessarily compatible with the Haar volume form. \section{Elements of differential geometry on $\B$} We now turn to $\B$, the space that we are directly interested in. \subsection{Forms and volume forms} Let us begin with functions. We know from section 3.4 that \begin{equation} \overline{{\cal A}/{\cal G}}\ =\ {\overline \A}/{\overline \G}\ . \end{equation} Therefore, we can drop the distinction between functions on $\overline{{\cal A}/{\cal G}}$ and ${\overline \G}$-invariant functions defined on ${\overline \A}$. In particular, we can identify the $C^\star$-algebra $C^0(\B)$ of continuous functions on $\B$ with the $C^\star$-subalgebra of ${\overline \G}$ invariant elements of $C^0({\overline \A})$. This suggests that we adopt the same strategy towards differentiable functions and forms. Therefore, we will let ${\rm Cyl}^n(\B)$ be the $\star$-subalgebra of ${\overline \G}$-invariant elements of ${\rm Cyl}^n({\overline \A})$, and $\Omega(\B)$ be the subalgebra consisting of ${\overline \G}$-invariant elements of the Grassmann algebra $\Omega({\overline \A})$. The operations of taking the exterior product and the exterior derivative are well-defined on $\Omega(\B)$.
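The identification of functions on $\B$ with invariant functions can be made concrete in the simplest case: for a graph consisting of a single closed loop based at a point $x$, ${\cal A}_\g$ is a copy of $G$ and the gauge transformation at $x$ acts by conjugation, so the invariant cylindrical functions are the class functions, e.g., the trace of the holonomy (the Wilson loop). A quick numerical illustration for $G = SU(2)$ (the sampling scheme is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su2():
    """An SU(2) matrix built from a normalized quaternion; only the
    invariance below is used, not any particular distribution."""
    q = rng.normal(size=4)
    q /= np.linalg.norm(q)
    a, b, c, d = q
    return np.array([[a + 1j * b, c + 1j * d],
                     [-c + 1j * d, a - 1j * b]])

# For a single closed loop, A_gamma ~ G and a gauge transformation at the
# base point acts by conjugation g -> a g a^{-1}; the trace is invariant.
g = random_su2()
for _ in range(5):
    a = random_su2()
    conj = a @ g @ np.conj(a.T)          # a^{-1} = a^dagger for unitary a
    assert abs(np.trace(conj) - np.trace(g)) < 1e-12
print("trace is conjugation invariant")
```

Class functions of loop holonomies are exactly the gauge-invariant cylindrical functions one obtains from such graphs.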
Similarly, by a volume form on $\B$, we shall mean a ${\overline \G}$-invariant volume form on ${\overline \A}$. As noted in section 4.2, the induced Haar form on ${\overline \A}$ is ${\overline \G}$ invariant and provides us with a natural measure on $\B$. Furthermore, since ${\overline \G}$ is compact, we can extract the ${\overline \G}$-invariant part of {\it any} volume form $\nu$ on ${\overline \A}$ by an averaging procedure (see also \cite{B1}). \begin{proposition}{\rm :} Suppose $\nu\ =\ (\nu_\g)_{\g\in \Gra(M)}$ is a volume form on ${\overline \A}$. Then, if $R_{\bar g}$ denotes the action of $\bar{g}\in {\overline \G}$ on ${\overline \A}$, and $d\bar g$ denotes the Haar measure on ${\overline \G}$, \begin{equation} \overline {\nu}\ :=\ \int_{\overline \G}\ ({\rm R}_{\bar g})_\star\,\nu\ d{\bar g}, \end{equation} is a ${\overline \G}$ invariant volume form on ${\overline \A}$ such that for every ${\overline \G}$ invariant function $f\in C^0({\overline \A})$ \begin{equation} \int_{\overline \A}\ f\ \nu\ =\ \int_{\overline \A}\ f\ \overline{\nu}\ . \end{equation} \end{proposition} In terms of the projective family, we can write out the averaged volume form more explicitly. Let $\nu\ =\ (\nu_{\g})_{\g\in \Gra(M)}$. Then, the averaged volume form is $\overline {\nu}\ =\ (\overline{\nu_\g})_{\g\in\Gra(M)}$, where \begin{equation} \overline{\nu_\g}\ =\ \int_{G^V}\ ({\rm R}_{(g_1,...,g_V)})_\star\nu_\g \ \ dg_1\wedge...\wedge dg_V\ , \end{equation} where ${\rm R}_{(g_1,...,g_V)}$ denotes the action of $(g_1,...,g_V)\in {\cal G}_\g $ on ${\cal A}_\g$ (${\cal G}_\g$ being identified with $G^V$). \subsection{Vector fields on $\B$} The procedure that led us to forms on $\B$ can also be used to define vector fields on $\B$. Furthermore, for vector fields, one can obtain some general results which are directly useful in defining quantum operators.
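Before turning to vector fields, we note that the averaging procedure of the proposition above can be mimicked exactly in a finite toy model, replacing the compact group by a finite one acting on itself by conjugation and the volume form by a positive weight function. The choice of $S_3$ and of the weights below is purely illustrative:

```python
from itertools import permutations

# Finite toy model of group averaging: G = S3 acting on itself by
# conjugation.  A "volume form" is a positive weight w on G; its average
# w_bar(g) = (1/|G|) sum_a w(a g a^{-1}) is conjugation invariant, and
# integrals of invariant (class) functions against w and w_bar agree.
G = list(permutations(range(3)))

def mul(p, q):          # composition: (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    out = [0] * 3
    for i, pi in enumerate(p):
        out[pi] = i
    return tuple(out)

w = {g: 1.0 + i for i, g in enumerate(G)}            # arbitrary weights
w_bar = {g: sum(w[mul(mul(a, g), inv(a))] for a in G) / len(G) for g in G}

# The average is constant on conjugacy classes:
for g in G:
    for a in G:
        assert abs(w_bar[mul(mul(a, g), inv(a))] - w_bar[g]) < 1e-12

# A class function (here: number of fixed points) integrates identically:
f = {g: sum(1 for i in range(3) if g[i] == i) for g in G}
lhs = sum(f[g] * w[g] for g in G)
rhs = sum(f[g] * w_bar[g] for g in G)
print(abs(lhs - rhs) < 1e-12)
```

The equality of the two sums is the finite analog of $\int f\,\nu = \int f\,\overline{\nu}$ for invariant $f$.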
We will establish these results in this subsection and use them to obtain a complete characterization of vector fields on $\B$ in the next subsection. The group ${\overline \G}$ acts on vector fields on ${\overline \A}$ as follows. Let $X=(X_\g)_{\g\ge\g_0}$, and ${\bar g}=(g_\g)_{\g\in\Gra(M)}\in{\overline \G}$. Then \begin{equation} (R_{\bar g})_\star\ X\ :=\ ((R_{g_\g})_\star\ X_\g)_{\g\ge\g_0}\ , \end{equation} where, as before, $R_{g_\g}:{\cal A}_\g\rightarrow{\cal A}_\g$ is the action of ${\cal G}_\g$ on ${\cal A}_\g$. In the first part of this subsection, we will explore the relation between ${\overline \G}$-invariant vector fields and the induced Haar form $\mu_o$ on ${\overline \A}$. Since we are dealing here only with the Haar measure, for simplicity of notation, in this subsection we will drop the (measure-)suffix on the divergence. \begin{theorem}\label{div2}{\rm :} Let $X=(X_\g)_{\g\ge\g_0}$ be a $C^{n+1}$ vector field on ${\overline \A}$. If $X$ is ${\overline \G}$ invariant, then it is compatible with the Haar measure $\mu_o$ on ${\overline \A}$ and ${\rm div}\ X\in{\rm Cyl}^{n}({\overline \A})$ is ${\overline \G}$ invariant. \end{theorem} {\it Proof}{\rm :} We need to show that, if $\g, \g' \ge\g_0$, then \begin{equation} {\rm div}\ X_{\g} \sim {\rm div}\ X_{\g'}\ . \end{equation} Since the family $(X_\g)_{\g\ge\g_0}$ is consistent, it is sufficient to show that if $\g_2\ge \g_1$, then \begin{equation} \label{diveq} p_{\g_1\g_2}^\star\ ({\rm div}\ (p_{\g_1\g_2})_\star\ X_{\g_2}) = {\rm div}\ X_{\g_2}\ . \end{equation} Now, the graph $\g_2$ consists of two types of edges: i) edges ${e}_1,...,{e_{E_3}}$, say, which are contained in $\g_1$, and ii) the remaining edges ${e'}_{E_3+1},...,{e'_{E_2}}$, say. The first set of edges forms a graph ${\g_3}\ge \g_1$ whose image coincides with that of $\g_1$. Therefore, in particular, we have $\g_3\ge {\g_0}$ and $X_{\g_3}$ is well-defined.
Our strategy is to decompose the projection $p_{\g_1\g_2}$ as \begin{equation} p_{\g_1\g_2}\ =\ p_{\g_1\g_3}\circ p_{\g_3\g_2}\ . \end{equation} and prove that each of the two projections on the right hand side satisfies (\ref{diveq}), i.e., that we have \begin{equation} \label{diveq1} p_{\g_1\g_3}^\star\ ({\rm div}\ (p_{\g_1\g_3})_\star\ X_{\g_3}) = {\rm div}\ X_{\g_3}\ , \end{equation} and \begin{equation} \label{diveq2} p_{\g_3\g_2}^\star\ ({\rm div}\ (p_{\g_3\g_2})_\star\ X_{\g_2}) = {\rm div}\ X_{\g_2}\ . \end{equation} These two results will be established in two lemmas which will conclude the proof of the main part of the theorem. Once this part is established, the ${\overline \G}$-invariance of ${\rm div}(X)\in{\rm Cyl}^\infty({\overline \A})$ is obvious from the ${\overline \G}$-invariance of the vector field $X$ and of the measure $\mu_o$. \bigskip \begin{lemma}{\rm :} Let $\g_3\ge \g_1$ be such that the images of $\g_3$ and $\g_1$ in $M$ coincide. Let $X_{\g_3}$ be a ${\cal G}_{\g_3}$-invariant vector field on ${\cal A}_{\g_3}$ and $X_{\g_1}$, a ${\cal G}_{\g_1}$-invariant vector field on ${\cal A}_{\g_1}$ such that $(p_{\g_1\g_3})_\star X_{\g_3}\ =\ X_{\g_1}$. Then, \begin{equation} p_{\g_1\g_3}^\star ({\rm div} X_{\g_1})\ =\ {\rm div} X_{\g_3}. \end{equation} \end{lemma} \medskip {\it Proof}{\rm :} Since $\g_3$ is obtained just by subdividing some of the edges of $\g_1$, it follows that the pull-back $p_{\g_1\g_3}^\star$ is an isomorphism of the $C^\star$-algebra of continuous and ${\cal G}_{\g_1}$ invariant functions on ${\cal A}_{\g_1}$ into the $C^\star$-algebra of continuous and ${\cal G}_{\g_3}$ invariant functions on ${\cal A}_{\g_3}$. Hence it defines an isomorphism between the corresponding Hilbert spaces. The vector fields $X_{\g_1}$ and $X_{\g_3}$ define operators which coincide under this isomorphism.
Hence, their divergences are also equal as operators and, being smooth functions, are equal to each other modulo the pull-back. $\Box$ \medskip Now, we turn to the second part, (\ref{diveq2}), of the proof. \begin{lemma}\label{3}{\rm :} Let $X= (X_\g)_{\g\ge \g_0}$ be a ${\overline \G}$-invariant vector field on ${\overline \A}$. Let $\g_2 \ge\g_3 \ge \g_0$ be such that $\g_2$ is obtained by adding edges to $\g_3$ the images of all of which, except possibly the end points, lie outside the image of $\g_3$. Then, \begin{equation} p_{\g_3\g_2}^\star ({\rm div} X_{\g_3})\ =\ {\rm div} X_{\g_2}\ . \end{equation} \end{lemma} \medskip {\it Proof}{\rm :} Via an appropriate parametrization, we can set \begin{equation} {\cal A}_{\g_2}\ =\ {\cal A}_{\g_3}\times G^{E_2-E_3}, \end{equation} so that the map $p_{\g_3\g_2}$ becomes the obvious projection \begin{equation} p_{\g_3 \g_2}\ :\ \ {\cal A}_{\g_3}\times G^{E_2-E_3}\ \rightarrow\ {\cal A}_{\g_3}\ . \end{equation} Since $X_{\g_2}$ projects unambiguously to $X_{\g_3}$, it follows that we can decompose $X_{\g_2}$ as: \begin{equation} X_{\g_2} = (X_{\g_3}, X_{E_3+1},...,X_{E_2}), \end{equation} where, for each choice of a point on ${\cal A}_{\g_3}$, and of variables $g_{E_3+j}$, $j\not= i$, we can regard $X_{E_3+i}$ as a vector field on $G$. (Here, $i= 1,...,E_2-E_3$, and ${\cal A}_{e_{E_3+i}}$ is identified with $G$.) We will now analyze the properties of these vector fields. Let us fix an edge $e_{E_3+i}$. We will now show that $X_{E_3+i}$ does not change as we vary $g_{E_3+1},..., g_{E_3+i-1}, g_{E_3+i+1},..., g_{E_2}$. Let us suppose that there exist some edges $e_{E_3+j}$ which are {\it removable} in the sense that one can obtain a closed graph $\g_4\ge \g_3$ after removing them. Clearly, $\g_2\ge\g_4 \ge \g_0$. Hence, there is a vector field $X_{\g_4}$ on ${\cal A}_{\g_4}$ such that $X_{\g_3}, X_{\g_4}, X_{\g_2}$ are all consistent. This implies that $X_{E_3 +i}$ does not change if we vary $g_{E_3+j}$.
Now let us consider the case when the edges $e_{E_3+j}$ are not removable. Then, we can construct a closed graph $\g_5$ \begin{equation} {\g_5} := {\g_2}\cup\{e_+,e_-\}\ , \end{equation} by adding two new edges $e_{\pm}$ to join the vertices of $e_{E_3 +i}$ to any two vertices of $\g_3$. Then, $\g_5\ge \g_2\ge \g_3$ and we have consistency of the vector fields $X_{\g_5}, X_{\g_2}, X_{\g_3}$. Clearly, the $X_{E_3+i}$ component of $X_{\g_5}$ coincides with the $X_{E_3 +i}$ component of $X_{\g_2}$. But in $X_{\g_5}$, all the edges $e_{E_3+ j}$ with $j \not= i$ are removable. Hence, $X_{E_3+i}$ does not depend on $g_{E_3+j}$ if $i\not=j$. Thus, we have shown that $X_{E_3 +i}$ is independent of $g_{E_3+j}$ if $i\not=j$. So far, in this lemma, we have only used the consistency of $(X_\g)_{\g\ge \g_0}$. We now use the ${\overline \G}$-invariance of $X$ to show that (for each $g_{\g_3} \in {\cal A}_{\g_3}$) $X_{E_3+i}$ is a left invariant vector field on $G$. Let $v$ be a vertex of $e_{E_3+i}$ which is not contained in $\g_3$. For definiteness, let us suppose that it is the final vertex. Then, under the gauge transformation $a$ in the fiber over this vertex, we have: \begin{equation} (a_\star\ X_{\g_2})_{E_3+i}\ =\ (L_{a^{-1}})_\star\ X_{E_3+i}\ , \end{equation} where the left side is the $(E_3+i)$-th component of the vector field in the parenthesis and $L_a$ is the left action of $a$ on $G$. (Here, we have used our earlier result that $X_{E_3+i}$ does not depend on $g_{E_3+j}$ when $i\not= j$.) Now, from ${\cal G}_{\g_2}$ invariance of $X_{\g_2}$, it follows that \begin{equation} X_{E_3+i}\ =\ ({\rm L}_a)_\star X_{E_3+i}. \end{equation} This conclusion applies to any value of $i=1,...,E_2-E_3$. Thus, for each choice of $g_{\g_3}\in {\cal A}_{\g_3}$, the $X_{E_3+i}$ are, in particular, divergence-free vector fields on $G$.
We now collect these results to compute the divergence of $X_{\g_2}$: \begin{equation} {\rm div} X_{\g_2}\ =\ {\rm div} X_{\g_3} + {\rm div} X_{E_3+1}\ + ...+ {\rm div} X_{E_2}\ = \ {\rm div} X_{\g_3}\ , \end{equation} where, for simplicity of notation, we have dropped the pull-back symbols. $\Box$ \bigskip Using this result and those of section 4.4, we have the following theorem on the operators on $L^2(\B,\mu_o)$ defined by ${\overline \G}$ invariant vector fields on ${\overline \A}$. \begin{theorem}{\rm :} Let $X$ be a ${\overline \G}$ invariant vector field on ${\overline \A}$. The operator \begin{equation} P(X) \:= i(X + {1\over2}{\rm div} X) \end{equation} with domain ${\rm Cyl}^1(\B)$ is essentially self-adjoint on $L^2(\B, \mu_o)$. Suppose $Y$ is another ${\overline \G}$ invariant vector field on ${\overline \A}$; then, $[X,Y]$ is also a ${\overline \G}$ invariant vector field on ${\overline \A}$ and \begin{equation} P([X,Y])\ =\ i[P(X),P(Y)]. \end{equation} on ${\rm Cyl}^2(\B)$. \end{theorem} \bigskip\goodbreak \subsection{Characterization of vector fields on $\B$} In the previous subsection we showed that the ${\overline \G}$-invariant vector fields on ${\overline \A}$ have interesting properties. It is therefore of considerable interest to have control over the structure of such vector fields. Can one construct them explicitly? What is the available freedom? To answer such questions, we will now obtain a complete characterization of the ${\overline \G}$-invariant vector fields on ${\overline \A}$ in the case when $G$ is assumed to be semi-simple. Fix a graph $\g_0$. To construct a ${\overline \G}$-invariant vector field $X =(X_\g)_{\g\ge\g_0}$ on ${\overline \A}$, we have, first of all, to specify a ${\cal G}_{\g_0}$-invariant vector field on ${\cal A}_{\g_0}$. We want to analyze the freedom available in extending this vector field to ${\cal A}_\g$ for all $\g\ge \g_0$.
Now the edges of any $\g\ge \g_0$ can be ordered in such a way that: \begin{enumerate} \item the first $n$ edges, ${e}_1,...,e_{n}$, are contained in $\g_0$, for an appropriate $n$; \item the next $m-n$ edges, ${e}_{n+1},...,e_{m}$, begin on $\g_0$ (i.e., have one of their vertices on $\g_0$), for some $m$; and \item the remaining edges, say, $e_{m+1}, ..., e_{k}$, do not intersect $\g_0$ at all. \end{enumerate} Hence, we can decompose ${\cal A}_\g$ as: \begin{equation} {{\cal A}_{\g}}= {\cal A}_{\g_1}\times G^{m-n}\times G^{k-m}\ , \end{equation} where $\g_1\geq\g_0$ is the graph formed by the first $n$ edges. Given $X_{\g_0}$, the projection ${\cal A}_{\g_1} \rightarrow {\cal A}_{\g_{0}}$ determines --via consistency conditions-- the vector field $X_{\g_1}$ modulo a vector field tangent to the fibres of the projection. But the fibres are contained in the orbits of the symmetry group ${\cal G}_{\g_1}$. Hence, from the point of view of operators on cylindrical functions on $\B$, this ambiguity is irrelevant; there is no essential freedom in extending the vector field from ${\cal A}_{\g_0}$ to ${\cal A}_{\g_1}$. Next, consider the last $k-m$ components of $X_{\g}$ corresponding to the last set of edges. Since both vertices of these edges lie outside $\g_0$, the corresponding vector fields on $G$ have to be both right and left invariant. Since we have assumed that the gauge group is semi-simple, this implies that they must vanish. Thus, the essential freedom in extending $X_{\g_0}$ to $X_{\g}$ lies in the $m-n$ components of $X_\g$ associated with the second set of edges. Hence, using the notation of Lemma \ref{3}, we can express $X_\g$ as: \begin{equation} X_\g\ :=\ (X_{\g_0}, F([e_{n+1}]),..., F([e_{m}]),0,...,0)\ .
\end{equation} Here, $F([e_i])$ with $i =n+1, ..., m$ is a function $$F([e_i])\ :\ {\cal A}_{\g_1}\ \rightarrow \ \Gamma(T({\cal A}_{e_i})) $$ whose values are ${\cal G}_{v_i}$ invariant vector fields on ${\cal A}_{e_i}$, where $v_i$ is the end of the edge $e_i$ which is not contained in $\g_0$. The ${\overline \G}$ invariance of $X$ implies that $F([e_i])$ should have certain transformation properties under the action of the groups ${\cal G}_v$ which act on the fibers over the vertices $v$ of $\g_0$. Given $a_v\in {\cal G}_v$, we need: $F([e_i]) \circ a_v = (a_v)^{-1}_\star {F} ([e_i])$ if $v$ is the vertex of $e_i$ and $F([e_i])\circ a_v = F([e_i])$ otherwise. We can now summarize the information that is necessary and sufficient to define a ${\overline \G}$-invariant vector field $X$ on ${\overline \A}$. First, we need a graph $\g_0$. For each $x\in \g_0$, let ${\cal P}_x$ be the set of germs of edges (i.e., the data at $x$ that is necessary and sufficient to specify edges) which do not overlap with any of the edges of $\g_0$ that pass through $x$. (Recall that edges are all analytic.) Let ${\cal P}_{\g_0}$ be the sheaf of germs of transversal edges over $\g_0$, divided by reparametrizations, i.e., set \begin{equation} {\cal P}_{\g_0}\ \ :=\ \ \bigcup_{x\in\g_0}{\cal P}_x \ . \end{equation} Next, given a point $x$ on $\g_0$, let $\g_x\ge \g_0$ be the graph obtained by cutting the edge on which $x$ lies into two at $x$. (If $x$ is a vertex of $\g_0$, then $\g_x =\g_0$.) Finally, choose a point in the fiber of the underlying bundle $B(M,G)$ over each point of every edge on $\g_0$. Up to this freedom, the group ${\cal G}_x$ is then identified with $G$.
Then, the necessary and sufficient data for constructing a ${\overline \G}$-invariant vector field $X= (X_\g)_{\g\ge \g_0}$ (regarded as an operator on ${\overline \G}$-invariant functions on ${\overline \A}$) is the following \begin{enumerate} \item A ${\cal G}_{\g_0}$-invariant vector field $X_{\g_0}$ on ${\cal A}_{\g_0}$; \item For every graph $\g_1$ obtained from $\g_0$ simply by subdividing the edges, a vector field $X_{\g_1}$ such that $p_{\g_0\g_1*}X_{\g_1} = X_{\g_0}$; \item A map from the set of germs of transversal edges ${\cal P}_{\g_0}$ into the Lie algebra (of left invariant vector fields on $G$) $LG$-valued functions on ${\cal A}_{\g_x}$, \begin{equation} F: \ \ {\cal P}_{\g_0} \rightarrow C^{n}({\cal A}_{\g_x})\otimes LG; \end{equation} which has the following transformation properties with respect to the group ${\cal G}_{\g_x}$: \begin{equation} F([e]_x)\circ a_v= \cases {F([e]_x), &if $v\not= x$\cr a^{-1}F([e]_x)a, & if $v=x$\cr} \end{equation} for every vertex $v$ of $\g_x$. \end{enumerate} Then, given an edge $e$ intersecting $\g_0$ at $x$ and the corresponding space ${\cal A}_{e}$, a value of $F([e])$ defines unambiguously a ${\cal G}_{v'}$ invariant vector field on ${\cal A}_e$, $v'$ standing for the other end of $e$ (because the remaining freedom of a gauge for ${\cal A}_e$ is covered by the action of ${\cal G}_{v'}$). Thus, there is a rich variety of ${\overline \G}$ invariant vector fields on ${\overline \A}$. If the co-metric tensor and the 1-form are ${\overline \G}$ invariant, so is the vector field. \medskip We will conclude this discussion of vector fields by pointing out that, if one is interested only in the action of the vector fields on cylindrical functions, a priori there appears to be some freedom in one's choice of the initial definition itself. There are at least three ways of modifying the definition we used. \begin{enumerate} \item First, we could have chosen another set of labels.
Our definition used graphs $\g\ge\g_0$ for some $\g_0$. Instead, we could have labelled the vector fields by {\it any} cofinal subset of $\Gra(M)$. However, it is not clear if all our results would go through in this more general setting. In particular, it is not obvious that the vector fields would then form a Lie algebra. \item Another possibility is to use the same labels ($\g\ge\g_0$ for some $\g_0$) but to weaken the consistency conditions slightly. Since we only want these vector fields $(X_\g)_{\g\ge\g_0}$ to act on functions which are ${\overline \G}$ invariant, it would suffice to require only that the consistency conditions are satisfied ``modulo the gauge directions''. That is, one might require only that each $X_\g$ is ${\cal G}_\g$ invariant and $p_{\g\g'\star}X_{\g'} = X_\g$ if $\g'\ge \g$, {\it both modulo the directions tangent to the orbits of the group ${\cal G}_\g$}. However, then it is no longer clear that the notion of divergence of $X$ is well-defined. Further work is needed. \item Finally, throughout this paper, we have considered projective families labelled by graphs. Alternatively, one can also consider projective families labelled by subgroups of the group of equivalence classes of closed, based loops in $M$, where two loops are equivalent if the holonomy of any connection around them, evaluated at the base point, is the same. (See \cite{MM,AL2}.) One can define ${\overline \G}$-invariant vector fields in this setting as well and the resulting momentum operators on $L^2(\B, \mu_o)$ are essentially the same as those introduced here. However, the proofs are more complicated since they essentially involve decomposing loops into the graphs used here.
\end{enumerate} \section{Laplacians, heat equations and heat kernel measures on $\B$ associated with edge-metrics.} In the last two sections, we saw that, although ${\overline \A}$ and $\B$ initially arise only as compact topological spaces, using graphs on $M$ and the geometry of the Lie group $G$, one can introduce on them, quite successfully, structures normally associated with manifolds. Therefore, a natural question now arises: can one exploit the invariant Riemannian geometry on $G$ to define new structures on $\B$? In this section, we will show that the answer is in the affirmative. \subsection{A Laplace operator.} Let us fix a (left and right) invariant metric tensor $k$ on $G$. The obvious strategy --which, e.g., successfully led us to the Haar volume form in section 4.2-- would be to use the fact that ${\cal A}_\g$ can be identified with $G^E$, to endow it with the product metric ${k}'_\g$, and let ${\Delta}'_\g$ be the associated Laplacian operator. (Here, as before, $E$ is the number of edges in the graph $\g$.) Unfortunately, this strategy fails: the resulting family of operators $({\Delta}'_\g)_{\g\in\Gra(M)}$ fails to be self consistent. (This is why we have used the prime in $k'$ and ${\Delta}'$.) This is a good illustration of the subtlety of the consistency conditions and brings out the non-triviality of the fact that the families that led us to forms, vector fields and the Haar measure turned out to be consistent. The ``minimal'' modification that leads to a Laplacian requires an additional ingredient: a metric on the space of edges on $M$. An {\it edge-metric on $M$} will be a map which assigns to each edge (i.e., finite, analytic curve) $e$ in $M$ a non-negative number $l(e)$, which is independent of the orientation of $e$ and additive, i.e., satisfies: \begin{equation} l(e^{-1})\ =\ l(e)\ \ \ {\rm and}\ \ \ l(e_1\circ e_2)= l(e_1) +l(e_2)\ .
\end{equation} \medskip\noindent $l$ can be thought of as a generalized ``length'' function on the space of edges. The technique of using such an ``additive weight'' was suggested by certain methods employed by Kondracki and Klimek \cite{KK} in the context of 2-dimensional Yang-Mills theory. It is not difficult to construct edge-metrics explicitly. Two simple examples of such constructions are: \begin{enumerate} \item Introduce a Riemannian metric $g$ on $M$ and let $l(e)$ be the length of $e$. \item Fix a collection $s$ of analytic surfaces in $M$ and define $l(e)$ to be the number of isolated points of intersection between $e$ and $s$. \end{enumerate} \bigskip Given an edge-metric $l$, for each graph $\g$ we define on ${\cal A}_\g(=G^E)$ the following ``weighted'' Laplacian, \begin{equation} \Delta_{\g,(l)}\ :=\ l(e_1)\Delta_{e_1}+...+ l(e_E)\Delta_{e_E}\ , \end{equation} where $\Delta_{e_i}$ is an operator which, applied to a function $f_\g(g_{e_1},...,g_{e_E})$, acts (only) on the $G$-variable $g_{e_i}$ as the Laplacian of the metric $k$ on $G$. It is not obvious that this $\Delta_{\g,(l)}$ is well-defined, since the isomorphism between ${\cal A}_\g$ and $G^E$ used in the above construction is not unique. However, two such isomorphisms are in essence related by an element of ${\cal G}_\g$ and, since the Laplacian on $G$ is left and right invariant, $\Delta_{\g,(l)}$ is well-defined; it is insensitive to the ambiguity in the choice of $(g_{e_1},...,g_{e_E})$ that label the points of ${\cal A}_\g$.
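The role of the additivity condition $l(e)=l(e_1)+l(e_2)$ in the consistency proved next can be checked in the $U(1)$ model, where the Laplacian on each copy of the group is simply $d^2/d\theta^2$ and composing two edges adds the holonomy angles. The specific function and weights below are illustrative choices:

```python
import math

# U(1) toy check of the subdivision consistency of the weighted Laplacian:
# for f on G pulled back through g = g2*g1 (angles add), one needs
#   (l1 * d^2/dt1^2 + l2 * d^2/dt2^2) f(t1 + t2) = (l1 + l2) f''(t1 + t2),
# which is exactly the additivity condition l(e) = l(e1) + l(e2).
l1, l2 = 0.4, 1.3
t1, t2 = 0.5, 1.2

def f(t):                      # a smooth function on U(1)
    return math.cos(2 * t) + 0.5 * math.sin(3 * t)

def second(fun, x, h=1e-4):
    """Central second difference."""
    return (fun(x + h) - 2 * fun(x) + fun(x - h)) / h**2

lhs = (l1 * second(lambda s: f(s + t2), t1)
       + l2 * second(lambda s: f(t1 + s), t2))
rhs = (l1 + l2) * second(f, t1 + t2)
print(abs(lhs - rhs) < 1e-5)
```

Each partial Laplacian of the pulled-back function reduces to the same second derivative of $f$, so only the sum of the weights matters; this is the abelian shadow of the projections ``of the third kind'' discussed in the proof below.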
Furthermore, we have: \begin{theorem}{\rm :} \label{delta} \noindent $(i)$ The family of operators $(\Delta_{\g,(l)}, C^2({\cal A}_\g))_{\g\in \Gra(M)}$ is self consistent (in the sense of (\ref{01}) and (\ref{02})); \noindent $(ii)$ The operator $(\Delta_{(l)},{\rm Cyl}^2(\overline{{\cal A}}))$ defined in the Hilbert space $L^2({\overline \A}, \mu_o)$, where $\mu_o$ is the Haar volume form on ${\overline \A}$, by \begin{equation} \Delta_{(l)}\ ([f_\g]_\sim):=[\Delta_{\g,(l)}\ (f_\g)]_\sim \end{equation} is essentially self-adjoint; \noindent $(iii)$ The operator $\Delta_{(l)}$ is ${\overline \G}$ invariant and thus defines an essentially self-adjoint operator $(\Delta_{(l)},{\rm Cyl}^2(\B))$ in $L^2(\B,\mu'_o)$, where $\mu'_o$ is the push-forward of $\mu_o$ to $\B$. \end{theorem} {\it Proof}{\rm:} Note first that, for every $\g'\ge\g$, the projection $p_{\g\g'}:{\cal A}_{\g'}\rightarrow {\cal A}_\g$ can be written as a composition of projections, \begin{equation} p_{\g\g'}\ =\ p_{\g\g_1}\circ...\circ p_{\g_{n}\g'} \end{equation} each of the terms being of one of the following three kinds: \begin{equation} (g_1,...,g_k, ...)\ \mapsto\ (g_1,...,(g_k)^{-1}, ...),\ \ \ (g_1,...,g_{k-1},g_k, ...)\ \mapsto\ (g_1,...,g_{k-1}, ...), \end{equation} \begin{equation} (g_1,...,g_{k-1},g_k)\ \mapsto\ (g_1,...,g_{k-1}g_k). \end{equation} The operators $\Delta_{\g,(l)}$ are automatically consistent with projections of the second kind above; no conditions need to be imposed on $l(e_i)$. However, to be consistent with a projection of the third kind, the numbers $l(e)$ have to satisfy the following necessary and sufficient condition. Let $f$ be a function on $G$. Whenever we divide an edge $e$ into $e=e_2\circ e_1$, then \begin{equation} (l(e_1)\Delta_{e_1} \ +\ l(e_2)\Delta_{e_2})f(g_2, g_1)\ = \ (l(e)\Delta f)(g_2 g_1)\ , \end{equation} where $f(g_2, g_1) = f(g_2 g_1)$.
A short calculation shows that the necessary condition for this to hold is precisely our second restriction on $l(e)$: $l(e)=l(e_1)+l(e_2)$. Similarly, our first restriction, $l(e^{-1})=l(e)$, is necessary and sufficient to ensure consistency with respect to the projections of the first type. This establishes part $(i)$ of the theorem. Part $(ii)$ follows easily from Lemma 1 and from the essential self-adjointness of the operators $(\Delta^{(l)}_\g, C^2({\cal A}_\g))$ defined in $L^2({\cal A}_\g, (\mu_H)^E)$. Part $(iii)$ is essentially obvious.
\bigskip
{\it Remark:} If $l(e)$ is non-zero for every non-trivial edge --as is the case for our first example of $l(e)$-- we can introduce on each ${\cal A}_\g=G^E$ a block diagonal metric tensor $k_\g\ :=\ ({1\over l(e_1)}k,...,{1\over l(e_E)}k)$. The operator $\Delta^{(l)}_\g$ is just the Laplacian on ${\cal A}_\g$ defined by $k_\g$.
\subsection{The heat equation and the associated volume forms.}
Given an edge-metric $l$, we have a Laplacian on ${\overline \A}$. It is now natural to use these Laplacians to formulate heat equations on ${\overline \A}$, to find the corresponding heat kernels and to construct the associated heat-kernel measures on ${\overline \A}$. These would be natural analogs of the usual Gaussian measures on topological vector spaces. As usual, heat equations will involve a parameter $t>0$ and a Laplacian $\Delta_{(l)}$ on ${\overline \A}$. A 1-parameter family of functions $f_t\in{\rm Cyl}^2({\overline \A})$ will be said to be a solution to the heat equation on ${\overline \A}$ if it satisfies:
\begin{equation}
{d\over dt}f_t\ =\ \Delta_{(l)}\ f_t \ .
\end{equation}
Our main interest lies in defining a heat kernel for this equation. Recall first that the heat kernel $\rho_t$ on $G$ is the solution to the heat equation on $G$ satisfying the initial condition $\rho_{t=0} = \delta(g, I_G)$, where $I_G$ is the identity element of $G$ and $\delta$ is the Dirac distribution.
As is well-known \cite{S}, for each $t$, $\rho_t$ is a positive, smooth function on $G$. The next step is to use $\rho_t$ to find a heat kernel $\rho_{\g,t}$ on $({\cal A}_{\g}, \Delta_{\g, (l)})$. This $\rho_{\g,t}$ is a function on ${\cal A}_\g \times {\cal A}_\g$. Using a coordinatization of ${\cal A}_\g$ induced by a group valued chart ${\cal A}_\g = G^E$, we can express $\rho_{\g,t}$ as:
\begin{equation} \label{k1}
\rho_{\g, t}(A_\g,B_\g)\ =\ \rho_{s_1}(b_1^{-1}a_1)\ \ ...\ \ \rho_{s_E}(b_E^{-1}a_E),
\end{equation}
where $A_\g=(a_1,...,a_E)$, $B_\g=(b_1,...,b_E)$ and where $s_i$ with $i= 1, ..., E$ are positive numbers given by $s_i= l(e_i)t$. (Note that, since the heat kernel $\rho_t$ on $G$ is ${\rm Ad}(G)$ invariant, the above formula is independent of the choice of coordinates.) The family $(\rho_{\g,t})_{\g\in\Gra(M)}$ does not define a cylindrical function on ${\overline \A}$. However, we can consider the convolution of the heat kernel $\rho_{\g, t}$ with a function $f_\g\in C^0({\cal A}_\g)$. In the coordinatization used above, this reads:
\begin{equation} \label{k2}
(\rho_{t,\g}\star f_\g)(A_\g)\ =\ \int_{{\cal A}_\g}\rho_{t,\g}(A_\g,B_\g) f_\g(B_\g)\, d\mu_H(B_\g)\ ,
\end{equation}
where $\mu_H$ is the Haar measure on ${\cal A}_\g$. Now, the key point is that these convolutions do satisfy the consistency conditions:
\begin{equation}
f_{\g_1}\ \sim\ f_{\g_2}\quad \Rightarrow\quad \rho_{t,\g_1}\star f_{\g_1}\ \sim\ \rho_{t,\g_2}\star f_{\g_2}\ .
\end{equation}
Therefore, we {\it can} define a convolution map $\rho_t \star$ on ${\overline \A}$.
\begin{theorem}{\rm :} \label{heat}
Let $\Delta_{(l)}=(\Delta_{\g, (l)})_{\g\in\Gra(M)}$ be the Laplace operator on ${\overline \A}$ given by the edge-metric $l$ and let $(\rho_{\g, t})_{\g\in \Gra(M)}$ be the corresponding family of heat kernels, with $t\ge 0$.
Then,
\noindent (i) there exists a linear map $\rho_t\star\ :\ C^0({\overline \A})\ \rightarrow\ C^0({\overline \A})$ defined by
\begin{equation} \label{convolution}
\rho_t\star f\ = \ [\rho_{t,\g}\star f_\g]_\sim\ \ ,
\end{equation}
which is continuous with respect to the sup-norm;
\noindent (ii) if $f\in{\rm Cyl}^2({\overline \A})$ then
\begin{equation}
f_t\ :=\ \rho_t\star f
\end{equation}
solves the heat equation with the initial value $f_{t=0} =f$;
\noindent (iii) the map $\rho_t\star$ carries $C^0(\B)$ (the ${\overline \G}$ invariant elements of $C^0({\overline \A})$) into $C^0(\B)$.
\end{theorem}
{\it Proof}{\rm :} To establish $(i)$, it is sufficient to prove that the right hand side of (\ref{convolution}) is well defined. This would imply that $\rho_t\star$ is well-defined on $C^0({\overline \A})$. Continuity follows from the explicit formula for the convolution. Let $f=[f_{\g_1}]_\sim = [f_{\g_2}]_\sim\in{\rm Cyl}^2({\overline \A})$. Choose any $\g'\ge\g_1,\g_2$. Using the consistency conditions satisfied by the family $(\Delta_{\g, (l)})_{\g\in\Gra(M)}$, which are guaranteed by Theorem \ref{delta}, and setting $i=1,2$, we have:
\begin{equation}
{d\over dt}\ p_{\g_i\g'}^\star\ (\rho_{t,\g_i}\star f_{\g_i})= p_{\g_i\g'}^\star\ (\Delta_{\g_i,(l)}\ \rho_{\g_i, t}\star f_{\g_i}) = \Delta_{\g', (l)}\ p_{\g_i\g'}^\star\ (\rho_{\g_i, t}\star f_{\g_i})\ ,
\end{equation}
on ${\cal A}_{\g'}$. Thus, we conclude that the pull-backs of the two convolutions satisfy the heat equation on ${\cal A}_{\g'}$ with initial values $p_{\g_1\g'}^\star\ f_{\g_1}$ and $p_{\g_2\g'}^\star f_{\g_2}$ respectively. However, since $f_{\g_1}\sim f_{\g_2}$, the two initial values are equal. Hence, the two pulled back convolutions on ${\cal A}_{\g'}$ are equal.
Finally, since
\begin{equation}
p_{\g_1\g'}^\star\ (\rho_{t,\g_1}\star f_{\g_1})\ =\ \rho_{t,\g'}\star p_{\g_1\g'}^\star f_{\g_1}\ =\ p_{\g_2\g'}^\star\ \rho_{t,\g_2}\star f_{\g_2} \ ,
\end{equation}
we conclude that the right side of (\ref{convolution}) is well-defined. Part $(ii)$ is now obvious and part $(iii)$ follows from the ${\overline \G}$ symmetry of $\Delta^{(l)}$. $\Box$
\bigskip
We can now define the heat-kernel volume forms on ${\overline \A}$ associated to the Laplace operator $\Delta_{(l)}$. These are the natural analogs of the Gaussian measures on Hilbert spaces, which can also be obtained as a projective limit of Gaussian volume forms on finite dimensional subspaces. Now, on a finite dimensional Hilbert space, the natural Gaussian functions $g(\vec{x})$ are given by the heat kernel, $g(\vec{x}) = \rho_{t}(o,\vec{x})$, and the Gaussian volume form is just the product of the translation invariant volume form by $g(\vec{x})$. The heat-kernel forms on ${\cal A}_{\g}$ will be obtained similarly, by multiplying the Haar volume form by the heat kernel. Unlike a Hilbert space, however, ${\cal A}_{\g}$ does not have a preferred origin. Let us therefore first fix a point $\bar{A}_0=(A_{0\g})_{\g\in \Gra(M)}\in{\overline \A}$. Then, on each ${\cal A}_\g$, we define a volume form $\nu_{t,\g}$ which, when evaluated at the point $A_\g\in {\cal A}_\g$, is given by
\begin{equation} \label{gaussian}
\nu_{t,\g}(A_\g)\ :=\ \rho_{t,\g}(A_{0\g}, A_\g)\mu^H_\g(A_\g)
\end{equation}
where $\mu^H_\g(A_\g)$ denotes the Haar form on ${\cal A}_\g$ at $A_\g$. We have
\begin{theorem}{\rm:}
\noindent (i) The family $(\nu_{t,\g})_{\g\in\Gra(M)}$ of volume forms defined by (\ref{gaussian}) is consistent;
\noindent (ii) the measure $\nu_t\ :=\ (\nu_{t,\g})_{\g\in\Gra(M)}$ defined on ${\overline \A}$ by these volume forms is faithful iff, for every non-trivial edge $e$ in $M$, $l(e)>0$.
\end{theorem}
The proof follows from Theorem \ref{heat} and from the nonvanishing of the heat kernel on $G$.
\medskip
Thus, to define a heat-kernel volume form $\nu_t$ we have used the following input: an invariant metric tensor on $G$, an edge-metric $l$, a point $\bar{A}_0\in{\overline \A}$ and a `time' parameter $t>0$. In terms of the heat kernel of $\Delta^{(l)}$, the integral of a function $C^0({\overline \A})\ni f=[f_\g]_\sim$ with respect to $\nu_t$ is given by the formula
\begin{equation} \label{gmeasure}
\int_{{\overline \A}}f\ =\ (\rho_{t}\star f)(\bar{A}_0) ,
\end{equation}
which is completely analogous to the standard formula for integrals of functions on Hilbert spaces with respect to the Gaussian measures. Finally, let us consider the quotient $\B\ =\ {\overline \A}/{\overline \G}$. According to Theorem \ref{heat}, the heat kernel $\rho_t$ naturally projects to this quotient and hence we have a well-defined Gaussian measure (\ref{gmeasure}) on $\B$. To have a ${\overline \G}$ invariant volume form on ${\overline \A}$ corresponding to this measure, we just average the Gaussian volume form on ${\overline \A}$, following the procedure of section 5.1,
\begin{equation}
\overline {\nu_t}\ =\ \int_{\overline \G} (R_{\bar g\star}\ \nu_t) d{\bar g}\ ,
\end{equation}
where $d\bar{g}$, as before, is the Haar form on ${\overline \G}$. The resulting volume form $\overline {\nu_t}$ shall be referred to as a Gaussian volume form on $\B$. Finally, on $\B$, there is a natural origin $[\bar{A}_0]$, whence the freedom in the choice of $\bar{A}_0$ can be eliminated. To see this, recall first that, while constructing a heat kernel measure on a group, one picks the identity element as the fiducial point. The obvious analog in the present case would be to choose $\bar{A}_0\in{\overline \A}$ such that the parallel transport given by $\bar{A}_0$ (see Proposition \ref{Hol}) along any closed loop in $M$ is the identity.
This does not single out $\bar{A}_0$ uniquely in ${\overline \A}$. However, all points $\bar{A}_0\in {\overline \A}$ with this property project down to a single point in the quotient $\B={\overline \A}/{\overline \G}$.
\bigskip
The techniques introduced in this section can be extended in several directions. In the next subsection, we present one such extension, where heat kernel measures are defined on the projective limit of the family associated with {\it non-compact} structure groups $G$. Another extension has been carried out in \cite{ALMMT3}, where diffeomorphism invariant heat kernels are introduced using, in place of the induced Haar measure on ${\overline \A}$, the diffeomorphism invariant measures introduced by Baez \cite{B1,B2}.
\subsection{Non-compact structure groups.}
We now wish to relax the assumption that the structure group be compact and let $G$ be any connected Lie group. We will see that we {\it can} extend the construction of the induced Haar measure as well as the heat kernel measures to this case, although there are now certain important subtleties. To begin with, the construction of the space $\B$ given in \cite{AI} does not go through, since the Wilson loop functions are now unbounded and one can not endow them with the structure of a ${\cal C}^\star$-algebra in a straightforward fashion. However, following Mour\~ao and Marolf \cite{MM}, we can still introduce the projective family $({\cal A}_\g, {\cal G}_\g,p_{\g\g'})_{\g, \g'\in\Gra(M)}$ of analytic manifolds of section 3. As in section 3, the members ${\cal A}_{\g}$ of this family have the manifold structure of $G^{E}$ (where $E$ is the number of edges in $\g$) and are therefore no longer compact. Nonetheless, the projective limit is well-defined and the notion of self-consistent families of measures $(\mu_{\g})_{\g\in\Gra(M)}$ is still valid. Each such family enables us to integrate cylindrical functions.
However, since the projective limit is no longer compact, the proof \cite{AL2} that each self-consistent family defines a regular Borel measure $\mu$ on the limit does not go through. Thus, in general, the family only provides us with cylindrical measures on the projective limit. Nonetheless, the construction of these measures is non-trivial since the consistency conditions have to be solved and the result is both structurally interesting and practically useful. Let us first construct some natural measures on ${\overline \A}$. In the previous constructions, we began with the Haar measure on $G$. However, since $G$ is now non-compact, its Haar measure can not be normalized and we need to use another measure as our starting point. Fix any probability measure $d\mu_1$ on $G$ which is ${\rm Ad}(G)$ invariant and has the following two properties: \begin{equation} \int_{G^2} f(gh)d\mu_1(g)d\mu_1(h) \ =\ \int_G f(g)d\mu_1(g),\ \ {\rm and} \ \ d\mu_1(g)\ =\ d\mu_1(g^{-1}) \end{equation} for every $f\in C^0_0(G)$ (the space of continuous functions on $G$ with compact support). Then, the procedure introduced in \cite{AL1} provides us with a family of measures $(\mu_\g)_{\g\in\Gra(M)}$, each $\mu_{\g}$ being the product measure on ${\cal A}_\g=G^E$. In spite of the fact that ${\cal A}_{\g}$ are non-compact, results of \cite{AL1} required to ensure the self consistency of the family are still applicable. We thus have an integration theory on ${\overline \A}$. The resulting cylindrical measure is again faithful and invariant under the induced action of Diff$(M)$. Another possibility is to use the Baez construction \cite{B1} which leads to a diffeomorphism invariant integration on $\B$ from almost any measure on $G$. The resulting cylindrical measures, however, fail to be faithful. A third possibility is to repeat the procedure of gluing measures on ${\cal A}_{\g}$ starting, however, from an appropriately generalized heat kernel measure on $G$. 
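For the simplest compact case $G=U(1)$, the normalized Haar measure itself is an example of an admissible $\mu_1$. The following numerical sketch is our illustration only (it assumes the `numpy` package); it checks the two defining conditions on a uniform grid:

```python
# Toy numerical check for G = U(1): the normalized Haar measure
# d(mu_1) = d(theta)/(2*pi) satisfies both conditions required of mu_1.
import numpy as np

n = 512
theta = np.linspace(0.0, 2*np.pi, n, endpoint=False)

def f(x):
    # an arbitrary continuous test function on U(1)
    return 1.0 + np.cos(x) + 0.3*np.sin(2*x)

# First condition: integrating f(g h) over both arguments equals
# integrating f(g) over one argument.
double_integral = f(theta[:, None] + theta[None, :]).mean()
single_integral = f(theta).mean()
assert abs(double_integral - single_integral) < 1e-9

# Second condition: invariance under g -> g^{-1}, i.e. theta -> -theta.
assert abs(f(-theta).mean() - f(theta).mean()) < 1e-9
```

Of course, for a non-compact $G$ the Haar measure is not available as a $\mu_1$, which is precisely why a different choice has to be fixed there.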
Given a measure $\mu$ on $G$, let us define an integral kernel $\rho_t$ via:
\begin{equation}
(\rho_t\star f)(g)\ :=\ \int_G f(gh^{-1})d\mu(h) \ .
\end{equation}
A 1-parameter family of measures $\mu_t$ on $G$ will be said to constitute a {\it generalized heat kernel measure} if the resulting integral kernels $\rho_t$ satisfy the following conditions:
\begin{equation}
\rho_r\star \rho_s\star f\ =\ \rho_{r+s}\star f, \ \ \rho_t\star R_{g\star}f\ =\ R_{g\star}\rho_t\star f,\ \ \rho_t\star I_{\star}f\ =\ I_{\star}\rho_t\star f \ ,
\end{equation}
where $R$ and $I$ denote, respectively, the right action of $G$ in $G$ and the inversion map $g\mapsto g^{-1}$, and where $f\in C^0_0(G)$. Given an edge-metric $l$ on $M$, we can now repeat the construction of the previous subsection and obtain a self consistent family of measures $(\mu_{\g})_{\g\in{\Gra(M)}}$. Thus, for a graph $\g$, define the following ``generalized heat evolution''
\begin{equation} \label{he}
(\rho_{t,\g}\star f_\g)(A_\g)\ =\ \int_{{\cal A}_\g} f_\g(a_1 b_1^{-1}, ...,a_E b_E^{-1})\ d\mu_{s_1}(b_1)...d\mu_{s_E}(b_E)
\end{equation}
where $s_i = l(e_i)t$ and where we have set $A_\g=(a_1,...,a_E)$, $B_\g=(b_1,...,b_E)$ using an identification ${\cal A}_\g=G^E$. It is then straightforward to establish the following result.
\begin{proposition}{\rm :}
The family of heat evolutions (\ref{he}) is self consistent, i.e., if $f$ is a cylindrical function, $f=[f_{\g_1}]_\sim= [f_{\g_2}]_\sim$, then
\begin{equation}
[\rho_{t,\g_1}\star f_{\g_1}]_\sim\ =\ [\rho_{t,\g_2}\star f_{\g_2}]_\sim.
\end{equation}
\end{proposition}
Using the resulting heat evolution for ${\overline \A}$ we can just define the corresponding heat kernel integral to be
\begin{equation}
\int_{\overline \A} f\ =\ (\rho_t\star f)(\bar{A}_0)
\end{equation}
as before. If the group $G$ is compact, this formula defines a regular Borel measure on ${\overline \A}$. In the general case, we only have a cylindrical measure.
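For $G=U(1)$, the ordinary heat kernel, written as the Fourier series $\rho_t(\theta)=\frac{1}{2\pi}\sum_{n\in{\rm Z}}e^{-n^2t}e^{in\theta}$, furnishes a concrete generalized heat kernel measure. The semigroup condition can then be verified at the level of the kernels themselves; the sketch below is our illustration only (it assumes `numpy`, with the Fourier series truncated at `nmax`):

```python
# Toy check for G = U(1): the ordinary heat kernel satisfies the
# semigroup law rho_r * rho_s = rho_{r+s} required of a generalized
# heat kernel measure.
import numpy as np

def rho(t, theta, nmax=40):
    """Heat kernel on U(1): (1/2pi) * sum_n exp(-n^2 t) exp(i n theta)."""
    n = np.arange(1, nmax + 1)
    series = (np.exp(-n**2 * t)[:, None]
              * np.cos(np.outer(n, np.atleast_1d(theta)))).sum(axis=0)
    return (1.0 + 2.0*series) / (2*np.pi)

m = 1024
phi = np.linspace(0.0, 2*np.pi, m, endpoint=False)
dphi = 2*np.pi / m
r, s = 0.3, 0.5

# (rho_r * rho_s)(theta) computed by quadrature at a few sample points.
samples = phi[::128]
conv = np.array([(rho(r, th - phi) * rho(s, phi)).sum() * dphi
                 for th in samples])
assert np.allclose(conv, rho(r + s, samples), atol=1e-10)
```

The right-invariance and inversion conditions hold here for the trivial reason that $U(1)$ is Abelian and $\rho_t$ is even in $\theta$.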
A case of special interest is when $G$ is the complexification of a compact, connected Lie group. It is discussed in detail in \cite{ALMMT3}.
\bigskip\goodbreak
\section{Natural geometry on $\B$.}
Let us begin by spelling out what we mean by ``natural structures'' on ${\overline \A}$ and on $\B$. A natural structure on $\overline{\A}$ will be taken to be a structure which is invariant under the induced action of the automorphism group of the underlying bundle $B(M,G)$. (Note that {\it all} automorphisms are now included; not just the vertical ones.) In the context of the quotient, $\B$, a natural structure will have to be invariant under the induced action of the group Diff($M$) of diffeomorphisms on $M$. The action of Diff($M$) on $\B$ may be viewed as follows. Let $\Phi\in {\rm Diff}(M)$. Given a graph $\g=\{e_1,...,e_E\}$, choose any orientation of the edges and consider the graph $\Phi(\g) = \{\Phi(e_1),...,\Phi(e_E)\}$ with the corresponding orientation. Pick any trivialization of $B(M,G)$ over the vertices of $\g$ and $\Phi(\g)$ and define
\begin{equation}
\Phi_\g\ :\ {\cal A}_\g/{\cal G}_\g\ \rightarrow\ {\cal A}_{\Phi(\g)}/{\cal G}_{\Phi(\g)},\ \ \ \Phi_\g\ :=\ \Lambda_{\Phi(\g)}^{-1} \circ \Lambda_\g,
\end{equation}
where $\Lambda_\g$ is the group valued chart defined in (\ref{diff}). It is easy to see that the family of maps $\Phi_\g$ defines uniquely a map
\begin{equation}
\overline{\Phi}\ :\ \B\ \rightarrow\ \B.
\end{equation}
This is the action of ${\rm Diff}(M)$ on $\B$.
\bigskip
In this section we will introduce a natural contravariant, second rank, symmetric tensor field $k_o$ on $\B$ and, using it, define a natural second order differential operator. In the conventional approaches, by contrast, the introduction of such Laplace-type operators on (completions of) ${\A/{\cal G}}$ always involves, to our knowledge, the use of a background metric on $M$ (see, e.g., \cite{AM}).
\goodbreak
\subsection{Co-metric tensors.}
Let us suppose that we are given, at each point $A_\g$ of ${\cal A}_\g$, a real, bilinear, symmetric, contravariant tensor $k_\g$:
\begin{equation}
k_\g \ :\ \bigwedge{}_{A_\g}\times \bigwedge{}_{A_\g}\ \rightarrow \ {\rm I\kern -.2em R},
\end{equation}
for all graphs $\g$, where $\bigwedge_{A_\g}$ denotes the cotangent space of ${\cal A}_\g$ at $A_\g$. Such a family, $(k_\g)_{\g\in L}$, will be called a {\it co-metric tensor} on $\overline{\A}$ if it satisfies the following consistency condition,
\begin{equation}
k_{\g'}(p_{\g\g'}^*\omega_\g\wedge p_{\g\g'}^*\nu_\g)\ =\ k_{\g}(\omega_\g\wedge\nu_\g),
\end{equation}
for every $\omega_\g,\nu_\g\in\bigwedge_{A_\g}({\cal A}_\g)$, whenever $\g'\ge\g$.
\bigskip
\noindent {\it Example.} Recall from the Remark at the end of section 6.1 that, given an edge-metric $l$ and an edge $e$, one can introduce on ${\cal A}_e$ a contravariant metric $l(e)k_e$, where $k_e$ is the contravariant metric induced on ${\cal A}_e$ by a fixed Killing form on $G$. Then, given a graph $\g$, ${\cal A}_\g=\times_{e\in\g}{\cal A}_e$ is equipped with the product contravariant metric
$$k^{(l)}_\g\ :=\ \sum_{e\in\g} l(e)k_e.$$
It is not hard to check that $(k^{(l)}_\g)_{\g\in L}$ is a co-metric $k^{(l)}$ on ${\overline \A}$. Notice that, if the edge-metric vanishes for some edge $e_0$, the co-metric is degenerate but continues to be well defined.
\bigskip
Given a co-metric $k=(k_\g)_{\g\in L}$ and a differential 1-form $\omega = [\omega_\g]_\sim \in \Omega^1({\overline \A})$, we define for each $\omega_\g$ the vector field $k_\g(\omega_\g)$ on ${\cal A}_\g$ determined by the condition that, for any 1-form $\nu$ on ${\cal A}_\g$,
$$ \nu(k_\g(\omega_\g))\ =\ k_\g(\nu, \omega_\g) $$
at every $A_\g\in{\cal A}_\g$. It is easy to see that the family of vector fields $(k_{\g'}(\omega_{\g'}))_{\g'\ge \g}$ $=: k(\omega)$ defines a vector field on ${\overline \A}$ in the sense of section 4.
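For $G=U(1)$ the consistency condition can be verified symbolically for the co-metric $k^{(l)}$ of the Example: under the subdivision map $p(\theta_1,\theta_2)=\theta_1+\theta_2$, the pairing of the pulled-back 1-forms with weights $l_1,l_2$ reproduces the pairing on the coarser graph with weight $l_1+l_2$. A toy `sympy` sketch, our illustration only:

```python
# Toy check of the co-metric consistency condition for G = U(1):
# k_{gamma'}(p* omega, p* nu) = k_gamma(omega, nu) under edge subdivision.
import sympy as sp

t1, t2, l1, l2 = sp.symbols('theta1 theta2 l1 l2', real=True)
F, H = sp.Function('F'), sp.Function('H')

# Pull-backs of F, H along p(theta1, theta2) = theta1 + theta2.
pF, pH = F(t1 + t2), H(t1 + t2)

# Coarse-graph pairing with weight l(e) = l1 + l2: the 1-forms dF, dH on
# the one-edge space U(1) have single components F', H', evaluated here
# at the composed angle (the theta1-derivative of the pull-back).
k_small = (l1 + l2) * sp.diff(pF, t1) * sp.diff(pH, t1)

# Fine-graph pairing: components of the pulled-back forms along
# d(theta1) and d(theta2), weighted by l1 and l2 respectively.
k_big = (l1 * sp.diff(pF, t1) * sp.diff(pH, t1)
         + l2 * sp.diff(pF, t2) * sp.diff(pH, t2))

assert sp.simplify(k_big - k_small) == 0
```

The check succeeds for exactly the same reason as the additivity condition on the edge-metric: both components of a pulled-back form are equal, so the weights simply add.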
Let us suppose that we are given a co-metric tensor $k$ and a 1-form $\omega$ on $\overline{\A}$ both of which are ${\overline \G}$ invariant. Then so is the vector field $k(\omega)$. Hence, we can apply the results of section 5 to obtain \begin{proposition}{\rm :} \label{d*} Suppose $k$ is a ${\overline \G}$ invariant co-metric tensor on ${\overline \A}$ and $\omega\in\Omega^1(\B)$. Then the vector field $k(\omega)$ is compatible with the natural Haar volume form $\mu_o$ on $\overline{\A}$. \end{proposition} Hence, assuming the necessary differentiability, $k(\omega)$ has a well defined divergence with respect to the Haar volume form $\mu_o$ on $\overline{\A}$, ${\rm div}_{\mu_o} k(\omega) = [{\rm div} k_\g(\omega_\g)]_\sim$. Consequently, with every ${\overline \G}$ invariant co-metric on ${\overline \A}$ we may associate a second-order differential operator \begin{equation} \label{d*d} \Delta^{(k)}\ :\ {\rm Cyl}^2(\B)\ \mapsto {\rm Cyl}^0(\B);\ \ \ \Delta^{(k)}\ f = {\rm div}_{\mu_o}[k(df)] \end{equation} with a well-defined action on the space of ${\overline \G}$ invariant cylindrical functions. If $k_\g$ is non-degenerate, $\Delta^{(k)}$ is the Laplacian it defines. (In particular, the operator defined by the co-metric $k^{(l)}$ via (\ref{d*d}) is precisely the Laplacian constructed from the edge-metric $l$ in section 6.) Hence, for simplicity of notation, we will refer to $\Delta^{(k)}$ as the {\it Laplace operator corresponding to $k$} even in the degenerate case. \goodbreak \subsection{A natural co-metric tensor.} We will now show that $\B$ admits a {\it natural} (non-trivial) co-metric. Let us begin by fixing an invariant contravariant metric on the gauge group $G$ (i.e., an invariant scalar product in the cotangent space at each point of $G$). This is the only ``input'' that is needed. The idea is to use this $k$ to assign a scalar product $k_v$ to each vertex $v$ of a graph $\g$ and set $k_\g= \sum_v k_v$. 
Given a vertex $v$, choose an orientation of $\g$ such that all the edges at $v$ are outgoing. Use a group valued chart $\Lambda_\g$ (see (\ref{diff})) to define for each edge $e$ in $\g$ a map
\begin{equation}
G\ \rightarrow\ {\cal A}_e\ \rightarrow\ \times_{e'\in\g}{\cal A}_{e'}.
\end{equation}
Through the corresponding pull-back map every $\omega\in\bigwedge_{A_\g}$ defines a 1-form $\omega_e$ on $G$ for every edge $e$. Let $k_e$ and $k_{ee'}$ be two bilinear forms in $\bigwedge_{A_\g}$ defined by
\begin{equation}
k_e(\omega,\nu)\ =\ k(\omega_e, \nu_{e}),\ \ \ k_{ee'}(\omega,\nu)\ =\ k(\omega_e, \nu_{e'})\ .
\end{equation}
(Note that $k_e$ is as in the example in the last subsection.) Both $k_e$ and $k_{ee'}$ are insensitive to a change of a group valued chart, since the field $k$ on $G$ is left and right invariant. Now, for $k_v$ we set
\begin{equation} \label{sum}
k_v\ :=\ {1\over 2} \sum_e k_e\ + {1\over 2} \sum_{ee'} k_{ee'}
\end{equation}
where the first sum is carried over all the edges passing through $v$ and, in the second, $e,e'$ range through all the pairs of edges which form an analytic arc at $v$ (i.e., such that $e'\circ e^{-1}$ is analytic at $v$). Finally, we define $k_\g$ simply by summing over all the vertices $v$ of $\g$:
\begin{equation} \label{k}
k_\g\ :=\ \sum_v k_v \ .
\end{equation}
We then have
\begin{proposition}{\rm :} \label{naturalk}
The family $k_o = (k_\g)_{\g\in L}$ is a natural co-metric tensor on $\overline{\A}$; $k_o$ is ${\overline \G}$ invariant, and defines a natural co-metric on $\B$.
\end{proposition}
The proof consists just of checking the consistency conditions and proceeds as in the previous proofs of this property. Note that consistency holds only because we have added the terms $k_{ee'}$, assigned to each pair of edges which constitute a single edge of a smaller graph. (Unfortunately, however, this is also the source of the potential degeneracy of $k_o$.) The diffeomorphism invariance of $k_o$ is obvious.
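The role of the cross terms $k_{ee'}$ can be made explicit for $G=U(1)$: at a bivalent vertex whose two outgoing edges form an analytic arc, a function pulled back from the coarser graph depends only on the combination $g_{e'}g_e^{-1}$, i.e. on the angle difference, and the cross terms then cancel the diagonal ones. A toy `sympy` sketch, our illustration only (both orderings $(e,e')$ and $(e',e)$ are counted in the second sum):

```python
# Toy check for G = U(1): at a bivalent vertex whose two (outgoing)
# edges e, e' form an analytic arc, the cross terms k_{ee'} cancel the
# diagonal terms k_e on functions pulled back from the coarser graph.
import sympy as sp

t, tp = sp.symbols('theta theta_p', real=True)
h = sp.Function('h')

# f is the pull-back of a function of the composed edge's holonomy
# g_{e'} g_e^{-1}, i.e. of the angle difference for U(1).
f = h(tp - t)

we, wep = sp.diff(f, t), sp.diff(f, tp)   # components of df along e and e'

k_v = (sp.Rational(1, 2) * (we**2 + wep**2)      # the (1/2) sum_e k_e terms
       + sp.Rational(1, 2) * (we*wep + wep*we))  # the (1/2) sum_{ee'} terms

assert sp.simplify(k_v) == 0
```

The vanishing of $k_v$ on such pull-backs is exactly what the consistency of the family $(k_\g)$ under edge subdivision requires.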
\goodbreak \subsection{A natural Laplacian} We can now use the natural co-metric to define a natural Laplacian. That is, using Propositions \ref{d*} and \ref{naturalk} and Eq. (\ref{d*d}), we can assign to $k_o$ the operator $(\Delta{\!}^o \equiv \Delta^{(k_o)},\ {\rm Cyl}^2(\B))$. In this subsection, we will write out its detailed expression and discuss some of its properties. Let us fix a group valued chart on ${\cal A}_\g$. The operator $\Delta{\!}^o_{\g}$ representing $\Delta{\!}^o$ in $C^2({\cal A}_\g/{\cal G}_\g)$ is given by the sum \begin{equation}\label{D1} \Delta{\!}^o_{\g}\ =\ \sum_v \Delta_v \end{equation} where each $\Delta_v$ will involve products of derivatives on the copies ${\cal A}_e$ corresponding to edges incident at $v$. To calculate $\Delta_v$, we will orient $\g$ such that the edges at $v$ are all outgoing. Let $R_{i}$, $i=1,...,n$ be a basis of left invariant vector fields on $G$, $\theta^i$ the dual basis and let $k^{ij}=k(\theta^i,\theta^j)$ denote the components of $k$ in this basis. Let $e$ be an edge with vertex $v$. Denote by $R_{ei}$ the vector field on ${\cal A}_e$ corresponding to $R_i$ via the group valued chart. Next, identify $R_{ei}$ with the vector field $(0,...,0,R_{ei},0,...,0)$ on ${\cal A}_\g=\times_{e'\in\g}{\cal A}_{e'}$. Then, the expression of $\Delta_v$ reads \begin{equation} \label{D2} \Delta_v\ =\ {1\over 2} \sum_e k^{ij} R_{ei}R_{ej} +{1\over 2}\sum_{ee'} k^{ij} R_{ei}R_{e'j} \ , \end{equation} where the sums are as in the expression (\ref{sum}) of $k_v$. The consistency of the family of operators $(\Delta_{\g}{\!}^o, {\rm Cyl}^2 ({\cal A}_\g/{\cal G}_\g))$ defined by (\ref{D1}) and (\ref{D2}) is assured by Theorem \ref{div2}. However, one can also verify the consistency directly from these two equations. 
Indeed, if a vertex $v$ belongs to only two edges, say $e$ and $e'$, which form an analytic arc (and are oriented to be outgoing), then
\begin{equation}
R_{ei}f_\g\ =\ -R_{e'i}f_\g
\end{equation}
so that $\Delta_v=0$ at this vertex $v$. This ensures
\begin{equation} \label{c}
p_{\g\g'}{}_\star \Delta{\!}^o_{\g'}\ =\ \Delta{\!}^o_\g,
\end{equation}
when $\g'$ is obtained from $\g$ by splitting an edge. On the other hand, if $\g'$ is constructed by adding an extra edge, say $e''$, to $\g$, then the projection kills every term containing $R_{e''i}$. The remaining terms coincide with those of $\Delta{\!}^o_\g$. Furthermore, in this argument, we need not restrict ourselves to ${\cal G}_\g$ invariant functions. This shows that the vector field $k_o(df)$ is compatible with the natural volume form $\mu_o$ on ${\overline \A}$ for {\it every} $f\in{\rm Cyl}^2(\overline{\A})$. Hence, the operator
\begin{equation} \label{Dext}
\Delta{\!}^o\ :\ {\rm Cyl}^2(\overline{\A})\ \mapsto\ {\rm Cyl}(\overline{\A}); \ \ \Delta{\!}^o f = {\rm div}_{\mu_o}[k_o(df)]
\end{equation}
is also well defined. As in section 6, the Laplace operator $\Delta{\!}^o$ can be used to define a semi-group of transformations $\rho_{t\star}:{\rm Cyl}(\overline{\A}) \rightarrow {\rm Cyl}(\overline{\A})$ such that $f_t:=\rho_{t\star} f$ solves the corresponding heat equation. In this case, $\rho_{t\star}$ is given by the family $(\rho_\g)_{\g\in L}$ of certain generalized heat kernels. The family and the transformations $\rho_{t\star}$ coincide with those introduced in \cite{ALMMT3}. In fact, the constructions of this subsection were motivated directly by the results of \cite{ALMMT3}.
\bigskip
We will conclude with two remarks.
\begin{enumerate}
\item If a vector field $X$ on $\overline{\A}$ is compatible with a volume form $\mu$, then so is $Y= hX$ for any $h\in{\rm Cyl}^1(\overline{\A})$ (and ${\rm div}(Y)=h{\rm div}(X) +X(h)$).
From this and from the existence of the operator (\ref{Dext}), we have:
\begin{proposition}{\rm :}
For every (differentiable) 1-form $\omega\in\Omega^1(\overline{\A})$, the vector field $k_o(\omega)$ is compatible with the Haar volume form $\mu_o$.
\end{proposition}
\noindent The map
$$ d^\star\ :\ \Omega^1(\overline{\A})\ \mapsto\ \Omega^0(\overline{\A}),\ \ d^\star \omega = {\rm div}_{\mu_o} [k_o(\omega)] $$
can be thought of as a co-derivative defined on $\overline{\A}$ by the geometry $(k_o,\mu_o)$. It is possible to extend $d^\star$ to act on $\Omega^n(\overline{\A})$.
\item Consider the special case when the structure group $G$ is $SU(2)$ and $M$ is an oriented 3-dimensional manifold. There exists on $\overline{\A}$ a natural {\it third order} (up to the square root) differential operator $({q}, {\rm Cyl}^3({\overline \A}))$ defined by a consistent family of operators $(q_\g, C^3({\cal A}_\g))_{\g\in L}$ on $C^3({\cal A}_\g)$. To obtain $q_\g$, we begin as before, by defining operators $q_v$ associated with the vertices of $\g$. Given a vertex $v$ of a graph $\g$ and a triplet of edges $(e,e',e'')$ incident at this vertex, let $\epsilon(e,e',e'')$ be $0$ whenever their tangents at $v$ are linearly dependent, and $\pm 1$ otherwise, depending on the orientation of the ordered triplet. To the vertex $v$, let us assign an operator acting on $C^3({\cal A}_\g)$ as
\begin{equation}
q_v\ =\ \textstyle{1\over (3!)^2}\ \ \mu_H(R_i,R_j,R_k) \displaystyle{\sum_{(e,e',e'')}} \ \epsilon(e,e', e'')\ R_{ei}R_{e'j}R_{e''k}
\end{equation}
where we use the same orientation, notation and group valued charts as in (\ref{D2}), and where the summation ranges over all the ordered triples $(e,e',e'')$ of edges incident at $v$. In terms of these operators, we can now define $q_\g$. Set
\begin{equation}
q_\g\ :=\ \sum_v |q_v|^{1\over 2}
\end{equation}
where $v$ runs through all the vertices of $\g$ (at which three or more edges have to be incident to contribute).
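In a chart in which the tangent vectors of the three edges at $v$ are given numerically, the sign factor $\epsilon(e,e',e'')$ is just the sign of the determinant of the matrix of tangents. A small sketch, our illustration only (it assumes `numpy`, and the function name is ours):

```python
# Sketch: the sign factor epsilon(e, e', e'') for an oriented
# 3-dimensional M, computed from the tangent vectors at the vertex.
import numpy as np

def epsilon(v1, v2, v3, tol=1e-12):
    """0 if the tangents are linearly dependent, else the sign (+/-1)
    of the orientation of the ordered triplet."""
    d = np.linalg.det(np.column_stack([v1, v2, v3]))
    if abs(d) < tol:
        return 0
    return 1 if d > 0 else -1

e1, e2, e3 = np.array([1., 0, 0]), np.array([0, 1., 0]), np.array([0, 0, 1.])
assert epsilon(e1, e2, e3) == 1       # right-handed triplet
assert epsilon(e2, e1, e3) == -1      # an odd permutation flips the sign
assert epsilon(e1, e2, e1 + e2) == 0  # coplanar tangents contribute 0
```

Total antisymmetry of $\epsilon$ in its three arguments is what makes the sum in $q_v$ vanish at vertices where fewer than three linearly independent tangent directions meet.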
We then have
\begin{proposition}{\rm :}
The family $(q_\g, C^3({\cal A}_\g))_{\g\in L}$ of operators is self consistent. The operator $({q}, {\rm Cyl}^3(\overline{\A}))$ is natural, ${\overline \G}$ invariant and defines a natural operator on ${\rm Cyl}^3(\B)$.
\end{proposition}
\noindent This operator is closely related to the total volume (of $M$) operator of Riemannian quantum gravity \cite{RS,ALMMT2}. The detailed derivation of this operator, as well as the local area operators and the analytic formulae for the Hamiltonian operators, will be discussed in a forthcoming paper \cite{AL4}.
\end{enumerate}
\section{Discussion}
In more conventional approaches to gauge theories, one begins with ${\A/{\cal G}}$, introduces suitable Sobolev norms and completes it to obtain a Hilbert-Riemann manifold (see, e.g., \cite{AM}). However, since the Sobolev norms use a metric on the underlying manifold $M$, the resulting Hilbert-Riemann geometry fails to be invariant with respect to the induced action of Diff$(M)$. Hence, to deal with diffeomorphism invariant theories such as general relativity, a new approach is needed, an approach in which the basic constructions are not tied to a background structure on $M$. Such a framework was introduced in \cite{AI,AL1} and extended in [5,8] to a theory of diffeomorphism invariant measures. Here, the basic assumption is that the algebra of Wilson loop variables should be faithfully represented in the quantum theory, and it leads one directly to the completion $\B$. Neither the construction of $\B$ nor the introduction of a large family of measures on it requires a metric on $M$. This is a key strength of the approach. The physical concepts it leads to are also significantly different from those of the perturbative treatments of the more conventional approaches: loopy excitations and flux lines are brought to the forefront rather than wave-like excitations and the notion of particles.
The framework {\it has been} successful in dealing with Yang-Mills theory in 2 space-time dimensions \cite{ALMMT1} without significant new input. This success may, however, be primarily due to the fact that in 2 dimensions, the Yang-Mills theory is invariant under all volume preserving diffeomorphisms. In higher dimensions, it remains to be seen whether the Yang-Mills theory admits interesting phases in which the algebra of the Wilson loop operators is faithfully represented. If it does, they would be captured on $\B$, e.g., through Laplacians and measures which depend on an edge-metric $l(e)$, which itself would be constructed from a metric on $M$. We expect, however, that the key applications of the framework would be to diffeomorphism invariant theories. A central mathematical problem for such theories is that of developing differential geometry on the quantum configuration space, again without reference to a background structure on $M$. This task was undertaken in the last four sections. In particular, we have shown that, although $\B$ initially arises only as a compact Hausdorff topological space, because it can be recovered as a projective limit of a family of {\it analytic manifolds}, one can use algebraic methods to induce on it a rich geometric structure. There are strong indications that this structure will play a key role in non-perturbative quantum gravity. Specifically, it appears that the results of this paper can be used to give a rigorous meaning to various operators that have played an important role in heuristic treatments \cite{AA3,RS}. We will conclude by indicating how the results of this paper fit in the general picture provided by the available literature on the structure of $\B$.
As mentioned in the Introduction, $\B$ first arose as the Gel'fand spectrum of an Abelian $C^\star$ algebra $\overline{\cal HA}$ constructed from the so-called Wilson loop functions on the space ${\A/{\cal G}}$ of smooth connections modulo gauge transformations \cite{AI}. A complete and useful characterization of this space \cite{AL1} can be summarized as follows. Fix a base point $x_o$ on the underlying manifold $M$ and consider piecewise analytic, closed loops beginning and ending at $x_o$. Consider two loops as equivalent if the holonomies of any smooth connection around them, evaluated at $x_o$, are equal. Call each equivalence class a {\it hoop} (a holonomic loop). The space ${\cal HG}{}$ of hoops has the structure of a group, which is called the {\it hoop group}. (It turns out that in the piecewise analytic case, ${\cal HG}{}$ is largely independent of the structure group $G$. More precisely, there are only two hoop groups; one for the case when $G$ is Abelian and the other when it is non-Abelian.) The characterization of $\B$ can now be stated quite simply, in purely algebraic terms: $\B$ is the space of all homomorphisms from ${\cal HG}{}$ to $G$. Using subgroups of ${\cal HG}{}$ which are generated by a finite number of independent hoops, one can then introduce the notion of cylindrical functions on $\B$. (The $C^\star$-algebra of all continuous cylindrical functions turns out to be isomorphic with the holonomy $C^\star$-algebra with which we began.) Using these functions, one can define cylindrical measures. The Haar measure on $G$ provides a natural cylindrical measure, which can then be shown to be a regular Borel measure on $\B$ \cite{AL1}. Marolf and Mour\~ao \cite{MM} obtained another characterization of $\B$: using various techniques employed in \cite{AL1}, they introduced a projective family of compact Hausdorff spaces and showed that its projective limit is naturally isomorphic to $\B$.
This result influenced the further developments significantly. Indeed, as we saw in this paper, it is a projective limit picture of $\B$ that naturally leads to differential geometry. The label set of the family they used is, however, different from the one we used in this paper: it is the set of all finitely generated subgroups of the hoop group. At a sufficiently general level, the two families are equivalent, the subgroups of the hoop group being recovered in the second picture as the fundamental groups of graphs. For a discussion of measures and integration theory, the family labelled by subgroups of the hoop group is just as convenient as the one we used. Indeed, it was employed successfully to investigate the support of the measure $\mu_0$ in \cite{MM} and to considerably simplify the proofs of the two characterizations of $\B$, discussed above, in \cite{AL2}. For introducing differential geometry, however, this projective family appears to be less convenient. The shift from hoops to graphs was suggested and explored by Baez \cite{B1,B2}. Independently, the graphs were introduced to analyze the so-called ``strip-operators'' (which serve as gauge invariant momentum operators) as a consistent family of vector fields in \cite{Le2}. Baez also pointed out that, even while dealing with gauge-invariant structures, it is technically more convenient to work ``upstairs'', in the full space of connections, rather than in the space of gauge equivalent ones. Both these shifts of emphasis led to key simplifications in the various constructions of this paper. Baez's main motivation, however, came from integration theory. His main result was twofold: he discovered powerful theorems that simplify the task of obtaining measures and, using them, obtained a large class of diffeomorphism invariant measures on $\B$. In his discussion, it turned out to be convenient to de-emphasize $\B$ itself and focus instead on the space ${\cal A}$ of smooth connections.
In particular, he regarded measures on ${\overline \A}$ primarily as defining ``generalized measures'' on ${\cal A}$. At first, this change of focus appears to simplify matters considerably since one seems to be dealing only with the familiar space of {\it smooth} connections. One may thus be tempted to ignore $\B$ altogether! This impression is, however, misleading for a number of reasons. First, as Marolf and Mour\~ao \cite{MM} have shown, the support of the natural measure $\mu_o$ is concentrated precisely on ${\overline \A}$--${\cal A}$; the space ${\cal A}$ is contained in a set of zero $\mu_o$-measure. The situation is likely to be the same for other interesting measures on $\B$. Thus, the extension from ${\cal A}$ to ${\overline \A}$ is not an irrelevant technicality. Second, without recourse to ${\overline \A}$, it is difficult to have a control on just how general the class of ``generalized measures'' on ${\cal A}$ is. Is it perhaps ``too general'' to be relevant to physics? A degree of control comes precisely from the fact that this class corresponds to regular Borel measures on ${\overline \A}$. One can thus rephrase the question of relevance of these measures in more manageable terms: Is ${\overline \A}$ ``too large'' to be physically useful and mathematically interesting? We saw in this paper that, with this rephrasing, the question can be analyzed quite effectively. The answer turned out to be in the negative; although ${\overline \A}$ is very big, it is small enough to admit a rich geometry. Finally, all our geometrical structures naturally reside on the projective limit ${\overline \A}$ of our family of compact analytic manifolds. It would have been difficult and awkward to analyze them directly as generalized structures on ${\cal A}$. Thus, there is no easy way out of making the completions ${\overline \A}$ and $\B$.
To summarize, for diffeomorphism invariant theories, there is no easy substitute for the extended spaces ${\overline \A}$ and $\B$; one has to learn to deal directly with {\it quantum} configuration spaces. Fortunately, this task is made manageable because there are three characterizations of $\B$: one as the Gel'fand spectrum of an Abelian $C^\star$-algebra and two as projective limits. The three ways of constructing $\B$ are complementary, and together they show that $\B$ has a surprisingly rich structure. In particular, the differential geometry developed in this paper makes it feasible to use $\B$ to analyze concrete mathematical problems of diffeomorphism invariant theories. \bigskip\goodbreak {\bf Acknowledgment}: We would like to thank John Baez, David Groisser, Piotr Hajac, Witold Kondracki, Donald Marolf, Jose Mourao, Tomasz Mrowka, Alan Rendall, Carlo Rovelli, Lee Smolin, Clifford Taubes, and Thomas Thiemann for discussions. Jerzy Lewandowski is grateful to the Center for Gravitational Physics at Penn State and the Erwin Schr\"odinger Institute for Mathematical Physics in Vienna, where most of this work was completed, for warm hospitality. This work was supported in part by the NSF grants 93-96246 and PHY91-07007, the Eberly Research Fund of Penn State University, the Isaac Newton Institute, the Erwin Schr\"odinger Institute and by the KBN grant 2-P302 11207. \bigskip \goodbreak
Q: Creating column names using paste0() inside an lapply() in R

I have some data:

    myData <- mtcars %>%
      group_by(cyl) %>%
      group_map(~ head(.x, 5L))

which I am transforming using an lapply() call:

    myDataNew <- lapply(myData, function(x) {
      x <- transform(x, ratio = hp/gear)
      x <- filter(x, !carb == 2)
    })

This all works fine and as intended. However, I would like to give different column names, such that every newly created column name carries a "flag":

    flag <- "calculation1"
    myDataNew <- lapply(myData, function(x, flag) {
      x <- transform(x, paste0("ratio-", flag) = hp/gear)
      x <- filter(x, !carb == 2)
    })

Here I have learned that a paste0() call cannot be used as an argument name inside transform(). How can I achieve the intended outcome:

    myDataNew
    [[1]]
       mpg  disp hp drat   wt  qsec vs am gear carb ratio-calculation1
    1 22.8 108.0 93 3.85 2.32 18.61  1  1    4    1              23.25
    2 32.4  78.7 66 4.08 2.20 19.47  1  1    4    1              16.50

    [[2]]
       mpg  disp  hp drat    wt  qsec vs am gear carb ratio-calculation1
    1 21.0 160.0 110 3.90 2.620 16.46  0  1    4    4           27.50000
    2 21.0 160.0 110 3.90 2.875 17.02  0  1    4    4           27.50000
    3 21.4 258.0 110 3.08 3.215 19.44  1  0    3    1           36.66667
    4 18.1 225.0 105 2.76 3.460 20.22  1  0    3    1           35.00000
    5 19.2 167.6 123 3.92 3.440 18.30  1  0    4    4           30.75000

    [[3]]
       mpg  disp  hp drat   wt  qsec vs am gear carb ratio-calculation1
    1 14.3 360.0 245 3.21 3.57 15.84  0  0    3    4           81.66667
    2 16.4 275.8 180 3.07 4.07 17.40  0  0    3    3           60.00000
    3 17.3 275.8 180 3.07 3.73 17.60  0  0    3    3           60.00000
    4 15.2 275.8 180 3.07 3.78 18.00  0  0    3    3           60.00000

A: You are mixing base R and dplyr. To do this in dplyr, use:

    library(dplyr)
    library(rlang)
    lapply(myData, function(x) {
      x %>%
        mutate(!!paste0("ratio-", flag) := hp/gear) %>%
        filter(carb != 2)
    })

Or to keep it in base R:

    lapply(myData, function(x) {
      x[[paste0("ratio-", flag)]] <- x$hp/x$gear
      subset(x, carb != 2)
    })
\section{Introduction} \label{intro} In laparoscopic surgery, the patient's abdomen is visualized by a camera inserted into the body through small incisions. High quality of the captured video is necessary to maintain clear visualization for the operating surgeons as well as for navigation systems~\citep{stoyanov2012surgical, andrea2018validation}. However, the perceptual quality can be significantly degraded by smoke caused by tools such as laser ablation. The surgeons' visibility is inevitably impacted by this degradation. Furthermore, computer-vision-based surgical navigation systems are mainly designed for clear videos~\citep{andrea2018validation,wang2018liver}, so smoke would degrade their performance. Therefore, in order to maintain a clear operative field, it becomes necessary to remove the smoke from laparoscopic images by smoke evacuation techniques~\citep{lawrentschuk2010laparoscopic} and by computer vision algorithms~\citep{wang2018variational}. In particular, a real-time, automatic, image-processing-based method is desired which would not introduce any extra hardware into the surgical procedure. Moreover, such an algorithm can easily be embedded into a computer-assisted surgical navigation workflow. \par The majority of existing smoke removal algorithms are based on simplified physical models or assumptions about the input image data, which limit their practical application. In this paper, our goal is to avoid any such assumptions or simplified models and to perform real-time smoke removal end-to-end. This became possible due to the development of deep learning algorithms for conditional image generation. These algorithms utilize an adversarial process to learn a mapping between two image domains in a supervised manner. In the case of the smoke removal task, these two domains are images with and without smoke, respectively.
\par Thus, in our work, we analyze image generation errors and propose a new loss function which allows image quality to be optimized during training and produces data without noise and artifacts. We utilize a set of smoke-free images cast with synthetic smoke to train the network, and further evaluate its performance on real-world data. \par The remainder of this paper is structured as follows. In Section~\ref{sec:related work}, we review the related work on smoke removal and image generation using GANs. Next, in Section~\ref{sec:method}, we describe our proposed method. Section~\ref{sec:results} presents the training of the network and discusses the experimental results. Finally, conclusions are drawn in Section~\ref{sec:conclusions}. \section{Related works} \label{sec:related work} \subsection{Laparoscopic Smoke Removal} In this part, we group the smoke removal methods into traditional approaches and deep learning approaches. \paragraph{Traditional approaches:} As the dehazing and desmoking problems share some similarity, traditional desmoking approaches~\citep{wang2018variational,kotwal2016joint,baid2017joint,tchakaa2017chromaticity,luo2017vision} follow a strategy similar to dehazing approaches. In this literature, the atmospheric scattering model~\citep{narasimhan2002vision} described in Equation (\ref{eq:scattering}) is widely used. \begin{equation} \label{eq:scattering} \mathbf{I}(x,y)=\mathbf{J}(x,y)t(x,y)+\mathbf{A}(1-t(x,y)), \end{equation} where $\mathbf{I}$ is the observed haze image, $\mathbf{J}$ represents the haze-free image, $\mathbf{A}$ is the global atmospheric light and $t$ is the medium transmission map. In~\citep{kotwal2016joint}, the joint desmoking and denoising problem is formulated as a probabilistic graphical model, and this formulation is extended in~\citep{baid2017joint} to desmoking, denoising and specular-highlight removal.
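The forward model of Equation (\ref{eq:scattering}) is straightforward to simulate. The following numpy sketch (not from the paper; the transmission and airlight values are invented for illustration) casts synthetic smoke on a clean image:

```python
import numpy as np

def cast_smoke(J, t, A=1.0):
    """Atmospheric scattering model I = J*t + A*(1 - t).

    J -- clean (smoke-free) image with values in [0, 1]
    t -- per-pixel medium transmission in [0, 1] (1 means no smoke)
    A -- global atmospheric light (a scalar here; an assumption for the demo)
    """
    return J * t + A * (1.0 - t)

# toy example: uniform smoke (t = 0.6) cast over a small gradient image
J = np.linspace(0.0, 1.0, 5).reshape(1, 5)
I = cast_smoke(J, np.full_like(J, 0.6))
print(I)  # every pixel is pulled toward the airlight A = 1.0
```

With a spatially varying $t$ (e.g. rendered clouds), the same two lines produce heterogeneous smoke of any density.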
In~\citep{tchakaa2017chromaticity}, the dark channel prior dehazing method originally proposed in~\citep{he2011single} is modified for smoke removal purposes. In~\citep{luo2017vision}, Luo \textit{et al.} propose to estimate the atmospheric veil ($\textbf{A}(1-t(x,y))$) directly instead of calculating $t$. In~\citep{wang2018variational}, Wang \textit{et al.} present a variational method to estimate the atmospheric veil. The approaches proposed in~\citep{luo2017vision,wang2018variational} show promising results, but their performance degrades for dense and heterogeneous smoke. Moreover, all these methods rely on certain assumptions, and their performance therefore degenerates when the assumptions do not hold. \paragraph{Deep Learning approaches:} In~\citep{sabri2018}, Bolkar \textit{et al.} propose the first deep learning desmoking approach. A synthetic dataset created with Perlin noise is generated and used for fine-tuning AODNet~\citep{li2017aod}. Later, in~\citep{chen2018unsupervised}, a conditional Generative Adversarial Network is trained on a Blender\footnote{ \url{https://www.blender.org/}}-generated synthetic dataset for desmoking. These deep learning based methods point in a promising direction for developing real-time smoke removal algorithms. \subsection{Image generation using GANs} The deconvolutional (``transposed convolutional'') layers of a CNN (Convolutional Neural Network) have made it possible to generate an output of the same size as the input image. However, an L1 or L2 loss used as the similarity metric leads to the prediction of blurred images. Generative Adversarial Networks (GANs)~\citep{goodfellow2014generative} solve this issue by adding an adversarial loss, implemented as a separate CNN with a binary output (the discriminator), that allows achieving photo-realistic quality of the synthesis.
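The adversarial scheme just described fits in a few lines. Below is a schematic numpy sketch (an illustration of the standard binary cross-entropy form of the GAN objective on scalar discriminator scores, not the implementation used in any of the cited papers):

```python
import numpy as np

def bce(p, target):
    # binary cross-entropy for a single predicted probability p
    eps = 1e-12
    return -(target * np.log(p + eps) + (1 - target) * np.log(1 - p + eps))

def d_loss(d_real, d_fake):
    # the discriminator wants real images scored 1 and generated ones 0
    return bce(d_real, 1.0) + bce(d_fake, 0.0)

def g_loss(d_fake):
    # non-saturating generator loss: fool D into scoring fakes as real
    return bce(d_fake, 1.0)

print(d_loss(0.9, 0.1))  # small: the discriminator separates well
print(g_loss(0.1))       # large: the generator is being caught
```

Training alternates gradient steps on these two losses; the generator improves exactly when it drives its own loss down by raising `d_fake`.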
Recently, GANs demonstrated state-of-the-art performance in numerous computer vision tasks, such as image mapping~\citep{isola2017image}, video generation~\citep{wang2018video}, segmentation~\citep{luc2016semantic}, inpainting~\citep{iizuka2017globally}, \emph{etc}. Furthermore, a few GAN-based models were proposed recently for image dehazing~\citep{bharath2018single, li2018single}. \par Isola \textit{et al.}~\citep{isola2017image} first demonstrated the great potential of supervised image-to-image translation with the algorithm called pix2pix. It was successfully applied to image colorization, segmentation, and the generation of images from edges and segmented labels. Pix2pix combines the L1 loss of a U-Net-shaped~\citep{ronneberger2015u} generator with the adversarial loss of a discriminator. Similarly, the Perceptual Adversarial Network (PAN)~\citep{wang2018perceptual} and pix2pixHD~\citep{wang2017high} apply the same technique complemented by a ``perceptual loss''. The latter was first presented by Johnson~\textit{et al.}~\citep{johnson2016perceptual} with the objective of covering not only pixel-wise similarity but also the similarity of high-level features. The perceptual loss is created from a concatenation of activations extracted from different layers of a pretrained VGG-16 (pix2pixHD) or directly from the discriminator (PAN). Nevertheless, despite being called ``perceptual'', this metric has no relation to the human visual system, as we discuss further in the text. \par While pix2pix and PAN learn the mapping in a supervised manner, algorithms like CycleGAN~\citep{zhu2017unpaired}, DiscoGAN~\citep{kim2017learning}, and UNIT~\citep{liu2017unsupervised} utilize a cycle loss, which allows the use of unpaired data. This approach is not covered in the scope of this work, but it has high potential to be applied to smoke removal when trained using unrelated batches of images with and without smoke. \section{Methodology} \label{sec:method} \begin{figure}[b!]
\floatconts {fig:artifacts} {\caption{(a) input image; (b) raw PAN output; (c) a magnitude spectrum of the Fourier transform of (b); (d) output generated by PAN with added MS-SSIM loss.}} {\subfigure[]{\includegraphics[width=0.45\linewidth]{fig1a}} \quad \subfigure[]{\includegraphics[width=0.45\linewidth]{fig1b}} \subfigure[]{\includegraphics[width=0.45\linewidth]{fig1c}} \quad \subfigure[]{\includegraphics[width=0.45\linewidth]{fig1d}} } \end{figure} \subsection{Model architecture and loss function} \label{subsec:metod} As a baseline for the implementation of our approach, we used the PAN proposed by Wang \textit{et al.}~\citep{wang2018perceptual}. It has an architecture similar to pix2pix, except that it employs a more efficient loss function. Fig.~\ref{fig:artifacts} (b) illustrates the main drawback of the results obtained when PAN is applied to desmoking directly: strongly pronounced periodic noise (a grid structure). It also manifests itself in the Fourier spectrum of the images (Fig.~\ref{fig:artifacts} (c)). Considering that the standard approach to removing spatial periodic noise is to filter the image in the frequency domain, we designed a mask which maximally covers the undesired peaks. Nevertheless, this approach did not improve the image quality significantly, even with various mask shapes tested. The failure of classical image processing in the Fourier domain and the desire to keep the model end-to-end motivated us to improve the original PAN algorithm. Since the main issue is the quality of the output images (structural artifacts), the logical step was to add a perceptual image quality metric to the loss function and minimize it during training. Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM)~\citep{wang2004image} are the two most commonly used full-reference image quality metrics. Their significant advantage over less popular image quality metrics is their differentiability, which allows them to be used for gradient computation.
However, PSNR (Eq. (\ref{eq:psnr})) was proven~\citep{zhang2012comprehensive} not to correlate well with human perception. Moreover, it is highly related to the L2 metric (RMSE), which is not suitable for image translation since it produces blurry results~\citep{isola2017image}. \begin{equation} \label{eq:psnr} PSNR=20\log_{10}(\max(I))-10\log_{10}MSE \end{equation} SSIM is a perceptual image similarity metric which was proposed as an alternative to MSE (Mean Square Error) and PSNR in order to increase correlation with subjective evaluation. For original and reconstructed images $I$ and $J$, SSIM is defined as: \begin{equation} SSIM(I,J)=\frac{(2\mu _{I}\mu_{J}+c_{1})(2\sigma _{IJ}+c_{2})}{(\mu_{I}^{2}+\mu_{J}^{2}+c_{1})(\sigma_{I}^{2}+\sigma_{J}^{2}+c_{2})}, \end{equation} where $\mu_I$, $\mu_J$ and $\sigma_I$, $\sigma_J$ are the means and standard deviations of the images $I$ and $J$ correspondingly, while $\sigma _{IJ}$ is the covariance of the images. Therefore, the corresponding loss function is defined as: \begin{equation} L_{SSIM}=-mean(SSIM(I,J)). \end{equation} Since SSIM changes in the range 0 $\sim$ 1, where a higher value corresponds to higher similarity, it has to be inverted in order to join it with the other losses, which are minimized during optimization. \begin{figure}[t!] \floatconts {fig:model} {\caption{PAN Framework complemented by MS-SSIM image quality loss. ``nAsB'' denotes A filters of stride B. The detailed description of the layers is in the text.}} {\includegraphics[width=1\linewidth]{fig2}} \end{figure} While SSIM processes windows of a specified size $n\times n$, its extension Multi-Scale SSIM (MS-SSIM)~\citep{zhang2012comprehensive} takes windows of different sizes into account and covers a wider range of spatial frequencies, which leads to better results. So, in addition to SSIM, we also used an MS-SSIM loss, which is computed in an analogous way. The complete framework of the algorithm is illustrated in Figure~\ref{fig:model}.
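As a concrete reference for the formulas above, here is a minimal numpy version of PSNR and a single-window SSIM. This is a simplification: real SSIM is computed over local $n\times n$ windows and averaged, and the constants follow the common $c_1=(0.01L)^2$, $c_2=(0.03L)^2$ convention, which is an assumption here rather than a value stated in the paper.

```python
import numpy as np

def psnr(I, J, L=1.0):
    """PSNR = 20*log10(max signal value) - 10*log10(MSE), in dB."""
    mse = np.mean((I - J) ** 2)
    return 20 * np.log10(L) - 10 * np.log10(mse)

def ssim_global(I, J, L=1.0):
    """Single-window SSIM over the whole image (no local windowing)."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_i, mu_j = I.mean(), J.mean()
    var_i, var_j = I.var(), J.var()
    cov = ((I - mu_i) * (J - mu_j)).mean()
    return ((2 * mu_i * mu_j + c1) * (2 * cov + c2)) / \
           ((mu_i ** 2 + mu_j ** 2 + c1) * (var_i + var_j + c2))

def ssim_loss(I, J):
    # L_SSIM = -mean(SSIM); minimizing it drives the output toward the target
    return -ssim_global(I, J)

rng = np.random.default_rng(0)
a = rng.random((16, 16))
print(ssim_global(a, a))  # identical images give SSIM = 1
print(ssim_loss(a, a))    # so the loss attains its minimum, -1
print(ssim_global(a, np.clip(a + 0.3, 0, 1)) < 1)  # True: a distorted copy scores lower
```

MS-SSIM repeats this computation on successively downsampled copies of the pair and combines the per-scale scores.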
The generator $G$ is a U-Net-like convolutional network with skip connections. The discriminator $D$ is a conventional CNN-based binary classifier. The encoding layers of both $G$ and $D$ consist of a $Convolution$ layer followed by $BatchNorm$ and $LeakyReLU$. The size of the filters in the $Convolution$ layer is $3\times3$, whereas their number and stride are marked in Figure~\ref{fig:model}. The decoding layers follow the same template but with a $DeConvolution$ of size $4\times4$ instead of a $Convolution$. The composite loss function is constructed as a linear combination of the SSIM (MS-SSIM) loss, the ``perceptual'' loss extracted from the discriminator's layers, and an adversarial loss. \subsection{Implementation Details} The model is implemented in Pytorch 0.4 and its code is publicly accessible\footnote{ \url{https://github.com/acecreamu/ssim-pan}}. The training was performed using a single 4GB GTX980 GPU. Original images were re-sized and zero-padded in order to match the $256\times256$ input size. In our experiments we used the ADAM optimizer with a learning rate of 0.0002 and momentum 0.5, a batch size of 4, and 50 training epochs. All the other parameter values can be found in the code provided. \section{Results and discussion} \label{sec:results} \subsection{Dataset} No labeled datasets exist for desmoking. Therefore, the synthetic dataset from~\citep{wang2019multiscale}~\footnote{ \url{http://hamlyn.doc.ic.ac.uk/vision/}} is used to train our network. Manually selected smoke-free images from~\citep{chen2018unsupervised} are used as the ground-truth images; Adobe Photoshop\footnote{ \url{https://www.adobe.com/products/photoshop.html}} is then used to render clouds that simulate the appearance of smoke. As a result, 7,500 smoke-free images with smoke of three different densities produced 22,500 synthetic image pairs which were used for training.
The evaluation, however, was performed using both types of images: with synthetic smoke (ground truth available) and with real smoke (no ground truth)\footnote{ The data with real smoke have been captured dynamically during surgery.}. \begin{table}[b!] \floatconts {table:1} {\caption{Quantitative evaluation results.}} { \begin{tabular}{l p{1cm} c c p{1cm} c c} \hline\hline & \multicolumn{1}{c}{} & \multicolumn{2}{c}{CIEDE2000} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{RMSE} \\ & & mean & std & & mean & std \\ \hline RDCP~\citep{tchakaa2017chromaticity} & & 12.9 & 1.32 & & 35.0 & \textbf{4.55} \\ DCP~\citep{he2011single} & & 11.5 & 2.13 & & 40.6 & 6.98 \\ VAR~\citep{wang2018variational} & & 10.8 & 2.18 & & 38.1 & 8.00 \\ EVID~\citep{galdran2015enhanced} & & 7.41 & 1.40 & & 24.2 & 4.81 \\ Proposed & & \textbf{3.89} & \textbf{1.15} & & \textbf{4.6} & 4.71 \\ \hline\hline \end{tabular} } \end{table} \subsection{Experimental results} Based on the availability of source code and its suitability for smoke removal, we compare the proposed method with the following four methods: the physical-model-based dark channel prior (DCP)~\citep{he2011single} and refined dark channel prior (R-DCP)~\citep{tchakaa2017chromaticity} methods, the variational approach EVID~\citep{galdran2015enhanced}, which is based on mild physical constraints, and the recently proposed desmoking approach VAR~\citep{wang2018variational}. 300 real images with added synthetic smoke have been used for quantitative evaluation due to the direct availability of ground truth. As shown in Table~\ref{table:1}, we report results in terms of the colorimetric difference CIEDE2000~\citep{doi:10.1002/col.1049} (which describes the accuracy of color reconstruction for the human visual system) as well as the RMSE of pixel values (which is important for computational algorithms). Our proposed method outperforms the other approaches in terms of both CIEDE2000 and mean RMSE. \begin{figure*}[t!]
\floatconts {fig:result_pan_qualitative} {\caption{Qualitative comparison. (a) Input smoke images and desmoked ones by: (b) DCP~\citep{he2011single}, (c) VAR~\citep{wang2018variational}, (d) EVID~\citep{galdran2015enhanced}, (e) proposed method. Zoom is required.}} {\includegraphics[width=1\linewidth]{fig3}} \end{figure*} \begin{figure}[hb!] \floatconts {fig:votes} {\caption{The results of perceptual evaluation by real surgeons.}} {\includegraphics[width=1.05\linewidth]{fig4}} \end{figure} Qualitative results obtained from real smoke images are illustrated in Fig.~\ref{fig:result_pan_qualitative}. It can be clearly seen that the proposed approach outperforms all the other techniques and produces a significant visibility enhancement even in cases with very dense smoke. Another beneficial property is the preservation of color information close to the original. Additional improvement of the results is expected if a larger and more diverse dataset is used for training. \paragraph{Perceptual evaluation.} The real data do not contain ground-truth smoke-free images, which makes a quantitative comparison of the results troublesome. However, since our primary goal is to enhance the visual data used by clinicians to simplify the surgical process, the most relevant evaluation possible is a perceptual experiment with real doctors. This motivated us to gather subjective responses and determine which algorithm is more likely to be chosen for real application. Surgeons' time is expensive, so we designed our experiment as a short online survey for the sake of easier accessibility and reaching a larger number of participants. The survey is available online\footnote{ \url{https://www.surveymonkey.com/r/2XL9JPH}} and can be used to find more outputs and evaluate them personally. The participant base consisted of 45 surgeons who kindly followed an email invitation.
Each trial presented 10 image-choice items split between two questions: \emph{``Which image do you prefer?''} and \emph{``Which image would be the most useful during a surgery?''}. Figure~\ref{fig:votes} illustrates that our approach earned the largest number of votes in both tasks, even though the EVID method of \citep{galdran2015enhanced} is a strong competitor. \begin{figure}[t!] \floatconts {fig:result_other} {\caption{The comparison of original and modified versions of PAN illustrated on other datasets.}} {\includegraphics[width=1\linewidth]{fig5}} \end{figure} \subsection{Discussion} In this paper, we show how image quality metrics can improve the results of supervised image-to-image translation in the medical domain. This is not a unique case. The use of image quality metrics in deep learning was first discussed by Dosovitskiy \textit{et al.}~\citep{dosovitskiy2016generating}. Subsequent works~\citep{snell2017learning,zhao2017loss} illustrated its application to image restoration and super-resolution. However, it had never been applied to GANs, where it is especially useful. From another side, Odena \textit{et al.}~\citep{odena2016deconvolution} relate the above-mentioned artifacts (discussed in~\ref{subsec:metod}) to the uneven overlap of deconvolutional filters and propose to solve it by preliminary resizing of the image. Isola\footnote{https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/78\#issuecomment-322908732} stated that this operation, applied to pix2pix, can increase training time by up to 4 times. In contrast, our solution increases the time of one training epoch by only 13\%, which is offset by faster convergence in fewer epochs. The additional results of applying the proposed method to datasets from other domains (edge2shoes~\citep{isola2017image} and SRD~\citep{qu2017deshadownet}) are demonstrated in Fig.~\ref{fig:result_other} and compared to the output of the original PAN at the same epoch.
As can be seen, the proposed approach achieves significantly better perceptual image quality regardless of the domain of application. \section{Conclusions} \label{sec:conclusions} In this work, we present a novel GAN-based approach for smoke removal from laparoscopic images. We show how an end-to-end model trained on synthetic data can demonstrate remarkable performance on real-world images. We used a standard pix2pix-like architecture complemented by a ``perceptual'' loss and an MS-SSIM loss to obtain an effective image enhancement method which is useful for clinicians performing surgery as well as for computer vision algorithms. Further development may include gathering a larger real-world dataset, modifying the code for real-time application, and applying similar unsupervised methods (CycleGAN, UNIT, \emph{etc.}).
# Is there a Bayesian theory of deterministic signal? Prequel and motivation for my previous question

(physics.stackexchange.com question: https://physics.stackexchange.com/questions/251503/is-there-a-bayesian-theory-of-deterministic-signal-prequel-and-motivation-for-m)

This is a prequel to my question:

What's the probability distribution of a deterministic signal or how to marginalize dynamical systems?

Clearly my question looks at the same time fairly elementary but completely unexpected, even crazy, out of nowhere. That's probably the reason why I did not get any answer nor comment after more than 200 views, only one upvote (thanks!).

Therefore I'd like to explain what the motivation behind it is and why it might be an important and fundamental question.

Indeed, the starting point is to acknowledge that the theory of signal we know well is in fact the theory of random signal, as developed by Shannon and many others, but that the/a theory of deterministic signal might not be so well-known. Even, it is not yet clear, at least to myself, whether such a theory exists or not!

To see it, let's consider a basic, practical problem in signal processing taken from J. D. Scargle, Bayesian estimation of time series lags and structure:

http://arxiv.org/abs/math/0111127

Consider the problem of estimating the time delay between two discrete-time signals corrupted by additive noise.

Hence we record some samples from a first signal ${X_m}$ corrupted by additive noise ${B_X}$

${X_m} = {S_m} + {B_X}$

where ${S_m}$ is the theoretical signal, and a second, time-delayed signal ${Y_m}$ also corrupted by additive noise ${B_Y}$

${Y_m} = {S_{m - \tau }} + {B_Y}$

and we want to estimate the time delay $\tau$ between both theoretical signals.

Let $D$ be our experimental data. Assuming both noises to be zero-mean Gaussian with standard deviations ${\sigma ^X}$ and ${\sigma ^Y}$, Bayes' rule writes

$p\left( {\left. {\tau ,{S_m},{\sigma ^X},{\sigma ^Y}} \right|D} \right) \propto p\left( {\tau ,{S_m},{\sigma ^X},{\sigma ^Y}} \right)p\left( {\left. D \right|\tau ,{S_m},{\sigma ^X},{\sigma ^Y}} \right)$

so that $\tau$ has marginal posterior probability distribution

$p\left( {\left. \tau \right|D} \right) = \int\limits_{{S_m}} {\int\limits_{{\sigma ^X}} {\int\limits_{{\sigma ^Y}} {p\left( {\left. {\tau ,{S_m},{\sigma ^X},{\sigma ^Y}} \right|D} \right){\text{d}}{\sigma ^X}{\text{d}}{\sigma ^Y}{{\text{d}}^M}{S_m}} } }$

from which we can get standard Bayesian estimators such as the maximum a posteriori estimator (MAP), since $\tau$ is discrete.

So it remains to assign the prior probability distribution

$p\left( {\tau ,{S_m},{\sigma ^X},{\sigma ^Y}} \right) = p\left( \tau \right)p\left( {{S_m}} \right)p\left( {{\sigma ^X}} \right)p\left( {{\sigma ^Y}} \right)$

in particular the prior probability distribution for the samples of the theoretical signal

$p\left( {{S_m}} \right)$

Assigning, for instance with Scargle, a uniform prior distribution on ${\mathbb{R}^M}$ or on some interval ${\left[ {{S_0},{S_1}} \right]^M}$ (eq. 26), we finally prove (probabilistically) that the classical cross-correlation function

${\gamma _{X,Y}}\left( \tau \right) = \sum\limits_{m = 1}^M {{x_m}{y_{m + \tau }}}$ (eq. 70)

is a sufficient statistic for the problem of estimating the time delay/lag between two random signals corrupted by Gaussian noises.

Now, consider the very same problem but with a deterministic theoretical signal $S\left( m \right)$ instead of a random one.

To my mind, a sufficient statistic for this second problem is not expected, before proceeding to its calculation, to be the usual cross-correlation function, because the cross-correlation or the covariance between two samples from two deterministic signals and, a fortiori, the cross-correlation function or the cross-covariance function between those samples do not make much sense for deterministic signals, for a simple and good reason I believe.

Indeed, the cross-correlation ${\gamma _{X,Y}}\left( 0 \right)$ of two samples ${x_1},{x_2},...,{x_M}$ and ${y_1},{y_2},...,{y_M}$

${\gamma _{X,Y}}\left( 0 \right) = \sum\limits_{m = 1}^M {{x_m}{y_m}}$

is, by definition, invariant under permutation of the time points/indices $m$: for any permutation $\sigma$ over $\left\{ {1,2,...,M} \right\}$, we have

$\sum\limits_{m = 1}^M {{x_m}{y_m}} = \sum\limits_{m = 1}^M {{x_{\sigma \left( m \right)}}{y_{\sigma \left( m \right)}}}$

Hence the order of the samples does not matter at all, as expected if they were assumed to be i.i.d. in the frequentist framework or De Finetti-exchangeable in the Bayesian framework, as Scargle did.

But obviously, for deterministic signals the order of the signals' samples does and should matter: they define the chronological order, i.e. the time. And without time/chronological order, there is no deterministic signal. So, time is not expected to disappear in our statistics (more precisely, for a given delay $\tau$ time disappears in the cross-correlation or covariance since they are invariant under permutation. But then it reappears in the cross-correlation or the covariance functions since they are functions of the time!?).

Hence, for a deterministic signal, the sufficient statistic for our time delay estimation problem, a hypothetical "deterministic cross-correlation function", is expected to be something quite different from the classical cross-correlation function. In particular, it is not expected to be invariant under permutation of the time points for a given delay $\tau$.

Moreover, it is well-known that cross-correlation or covariance (functions) are in general not suitable statistics for quantifying dependencies between (nonlinear) deterministic signals: in some cases they can be completely blind to some nonlinear effects. More suitable statistics do exist (e.g. nonlinear dependencies) but, as far as I know, they may lack rock-solid theoretical foundations and are not derived from probability theory.

It is worth observing that standard mathematical notations precisely handle the difference between both problems:

• If the signal is random, then time plays essentially no role. So we have a stochastic process, i.e. a collection of random variables indexed by time that we denote ${S_m}$;

• If the signal is deterministic, then it is a function of the time that we denote $S\left( m \right)$.

So, consider $M$ evenly sampled samples from a discrete-time real deterministic signal

$s\left( {1} \right),s\left( {2} \right),...,s\left( {M} \right)$

By the standard definition of a discrete-time deterministic dynamical system, there exists:

• a phase space $\Gamma$, e.g.
$\\Gamma \\subset \\mathbb{R} {^d}$\n\u2022 an initial condition $z\\left( 1 \\right)\\in \\Gamma$\n\u2022 a state-space equation $f:\\Gamma \\to \\Gamma$ such as $z\\left( {m + 1} \\right) = f\\left[ {z\\left( m \\right)} \\right]$\n\u2022 an output or observation equation $g:\\Gamma \\to \\mathbb{R}$ such as $s\\left( m \\right) = g\\left[ {z\\left( m \\right)} \\right]$\n\nHence, by definition we have\n\n$\\left[ {s\\left( {1} \\right),s\\left( {2} \\right),...,s\\left( {M} \\right)} \\right] = \\left\\{ {g\\left( {{z_1}} \\right),g\\left[ {f\\left( {{z_1}} \\right)} \\right],...,g\\left[ {{f^{M - 1}}\\left( {{z_1}} \\right)} \\right]} \\right\\}$\n\nor, in probabilistic notations\n\n$p\\left[ {\\left. {s\\left( {1} \\right),s\\left( {2} \\right),...,s\\left( {M} \\right)} \\right|{z_1},f,g} \\right] = \\prod\\limits_{m = 1}^M {\\delta \\left\\{ {g\\left[ {{f^{m - 1}}\\left( {{z_1}} \\right)} \\right] - s\\left( {m} \\right)} \\right\\}}$\n\nTherefore, by total probability and the product rule, the marginal prior probability distribution of $M$ samples from a deterministic signal, should it ever exists, is formally given by\n\n$p\\left[ {s\\left( 1 \\right),s\\left( 2 \\right),...,s\\left( M \\right)} \\right] = \\int\\limits_{{\\mathbb{R}^\\Gamma }} {\\int\\limits_{{\\Gamma ^\\Gamma }} {\\int\\limits_\\Gamma {{\\text{D}}g{\\text{D}}f{{\\text{d}}^d}{z_1}\\prod\\limits_{m = 1}^M {\\delta \\left\\{ {g\\left[ {{f^{m - 1}}\\left( {{z_1}} \\right)} \\right] - s\\left( m \\right)} \\right\\}p\\left( {{z_1},f,g} \\right)} } } }$\n\n\"Of course\", should it be unknown a priori, we may also need to marginalize the phase space $\\Gamma$ itself! 
But I told to myself that marginalizing the dynamical system\/state-space equation $f$ and the output\/observation equation $g$ was enough in a first step!\n\nShould we be able to define and compute this joint prior probability, we could derive, at least in principle, our \"deterministic cross-correlation function\" for our time delay estimation problem by applying the rules of probability theory.\n\nTo sum up,\n\n\u2022 either such marginal prior probability distributions are usual, i.i.d. or exchangeable, joint probability distributions such as Scargle's uniform distribution. In this case, the classical theory of signal would work for both random and deterministic signals;\n\n\u2022 or such marginal prior probability distributions, once computed from the joint prior distribution $p\\left( {{z_1},f,g,\\Gamma } \\right)$, are something quite different from usual joint probability distributions because time still plays an essential role. In this case, there would exist two different theories of signal, one for random signals, which we know well, and another one for deterministic signals waiting to be developed, to the best of my knowledge, if we can ever define and compute those unusual functional integrals.\n\nSo, the following questions arise:\n\n\u2022 (Conditionally on phase space $\\Gamma$,) Can we define functional probability distributions over the set of all dynamical systems\/state-space equations\/functions $f$ acting on $\\Gamma$? If true, how to integrate over\/marginalize them?\n\n\u2022 (Conditionally on phase space $\\Gamma$,) Can we define functional probability distributions over the set of all output\/observation equations\/functions $g$ from $\\Gamma$ to e.g. $\\mathbb{R}$? 
If true, how to integrate over\/marginalize them?\n\n\u2022 (Conditionally on phase space $\\Gamma$ and both previous questions,) Can we compute the marginal prior probability distribution of $M$ samples from a discrete-time deterministic signal from default, basic joint prior probability distributions $p\\left( {{z_1},f,g } \\right)$ such as the uniform distribution on $\\Gamma \\times {\\Gamma ^\\Gamma } \\times {\\Gamma ^\\mathbb{R}}$?\n\n\u2022 Can we define probability distributions over the set of all phase spaces in, say, the set of all Cartesian powers of $\\mathbb{R}$?","date":"2019-08-18 17:58:07","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9068192839622498, \"perplexity\": 506.4619064367605}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-35\/segments\/1566027313987.32\/warc\/CC-MAIN-20190818165510-20190818191510-00001.warc.gz\"}"}
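To make the permutation-invariance point concrete, here is a small numerical illustration (my own sketch, with a made-up logistic-map signal, noise level and seed; none of this is in Scargle's paper): the lag estimate built from the classical cross-correlation statistic recovers the delay, and yet, at any fixed lag, the statistic is completely blind to a reordering of the time points.

```python
import numpy as np

rng = np.random.default_rng(0)
M, tau_true = 200, 7

# A toy deterministic signal s(m): logistic-map dynamical system
# (phase space Gamma = [0,1], f(z) = 3.9 z (1-z), observation g = identity).
z, s = 0.3, []
for _ in range(M + 50):
    z = 3.9 * z * (1.0 - z)
    s.append(z)
s = np.array(s)

x = s[50:50 + M] + 0.01 * rng.normal(size=M)                        # X_m = S_m + B_X
y = s[50 - tau_true:50 - tau_true + M] + 0.01 * rng.normal(size=M)  # Y_m = S_{m-tau} + B_Y
x, y = x - x.mean(), y - y.mean()    # remove the DC component before correlating

# Classical cross-correlation statistic gamma_{X,Y}(tau) and the lag maximizing it.
def gamma(x, y, tau):
    return float(np.sum(x[:M - tau] * y[tau:]))

lags = np.arange(0, 20)
tau_hat = int(lags[np.argmax([gamma(x, y, t) for t in lags])])

# Permutation invariance at fixed lag: gamma_{X,Y}(0) ignores the time order.
perm = rng.permutation(M)
g0, g0_perm = float(np.sum(x * y)), float(np.sum(x[perm] * y[perm]))
print(tau_hat, np.isclose(g0, g0_perm))
```

The same scrambling that leaves $\gamma_{X,Y}(0)$ untouched would of course destroy the deterministic signal itself, which is exactly the tension described above.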
\section{Introduction} \label{sec:intr} The properties of phase-transitions in a pure system can be modified due to quenched disorder. This problem has been studied in most details at a second-order transition point\cite{harris,ccfs}, but much less is known when the transition is of first order\cite{Cardy99}. When the disorder is coupled to the local energy density, such as for bond disorder, there is a general tendency that the latent heat at the transition point is reduced\cite{imrywortis}. In two-dimensional systems with nearest-neighbour (or short-range (SR)) interactions any amount of bond disorder is enough to turn the transition into second order\cite{aizenmanwehr}. The new universality class of the problem, however, remains unknown and numerical investigations are needed to identify the properties of the emergent random fixed point\cite{pottsmc,pottstm,jacobsenpicco,ai03,long2d}. In three- and higher dimensional SR systems, however, weak disorder is generally irrelevant, thus the phase-transition stays discontinuous and only for strong enough disorder will it turn to a second-order one. This type of problem has been numerically studied for the $q$-state Potts model with $q>2$\cite{uzelac,pottssite,pottsbond,mai05,mai06}. In particular a mapping between the random-field Ising model (RFIM) and the Potts model in the $q \to \infty$ limit has been used to predict some tricritical exponents of the latter random model\cite{pottstm,mai06}. Homogeneous, i.e. nonrandom systems with long-range (LR) interactions could have an ordered phase\cite{dyson} and a first-order transition, too, even if the system is one-dimensional. This happens, among others for the $q$-state Potts chain\cite{Wu} with power-law interactions \begin{equation} J(r) \approx J r^{-(1+\sigma)}\;, \label{J(r)} \end{equation} where $r$ is the distance between the sites and the exponent is $\sigma>0$ to have extensive total energy (for $\sigma<0$ one should divide $J$ by $L^\sigma$). 
According to numerical results\cite{pure} the transition in the LR Potts chain is of first order for sufficiently large values of $q$, where the limiting value $q_c=q_c(\sigma)$ is an increasing function of $\sigma$. On the other hand the transition for $q<q_c(\sigma)$ is of second order. Low-dimensional LR models with power-law interactions became the subject of intensive research recently, after it was noticed that the decay exponent $\sigma$ plays the role of a kind of effective dimensionality of the analogous SR model. Among the classical problems studied so far we mention the non-random Ising model in one and two dimensions\cite{fisher_me,sak,bloete,picco,parisi}, the non-random Potts chain\cite{pure}, the Ising spin-glass model\cite{kotliar,cecile_sg,cecile_sg1,cecile_sg2} and the RFIM in one dimension\cite{bray,weir,rodgers,aizenman,cassandro,monthus11,leuzzi,dewenter}. For quantum models we mention investigations of the transverse-field Ising model both with pure\cite{porras,deng,hauke,peter,nebendahl,wall,cannas,dutta,dalmonte,koffel,hauke1} and random couplings\cite{jki,cecile,jki1,kji} and the Anderson localization problem\cite{cecile_anderson}; for reaction-diffusion type models we mention the contact process and similar models with\cite{jki1} and without\cite{mollison,janssen,howard,ginelli,fiore,linder,grassberger,ginelli1,adamek,hinrichsen} quenched disorder. The critical properties of LR models are often unusual. Here we mention that the classical Ising chain for $\sigma=1$, as well as other one-dimensional discrete spin models with LR interactions, have a so-called mixed-order (MO) phase transition\cite{anderson1969exact,thouless1969long,dyson1971ising,cardy1981one,aizenman1988discontinuity,slurink1983roughening,bar2014mixed}, at which point the order-parameter has a jump, but at the same time the correlation length is divergent.
We note that recently MO transitions have been observed in other problems, too\cite{PS1966,fisher1966effect,blossey1995diverging,fisher1984walks,gross1985mean,toninelli2006jamming,toninelli2007toninelli,schwarz2006onset,liu2012core,liu2012extraordinary,zia2012extraordinary,tian2012nature,bizhani2012discontinuous,sheinman2014discontinuous,bar2014mixed1,kji}. In the present paper we consider LR models having a first-order transition in their non-random version, and study the effect of quenched disorder on the phase-transition properties of the system. To be specific, we consider the LR Potts model in one dimension for large values of $q$ (actually we consider the $q \to \infty$ limit), when the transition of the pure model is of first order for all values of the decay exponent, $\sigma > 0$. We have random nearest-neighbour couplings with a variance $\Delta^2$, but the long-range forces are non-random and follow the behaviour in Eq.(\ref{J(r)}). We study the phase transition of the system for different values of the effective dimensionality ($\sigma$) and the strength of disorder ($\Delta$). The free energy and the magnetization of a given random sample are calculated exactly by a computer algorithm, which works in polynomial time\cite{aips02}. We follow the temperature dependence of the average magnetization in relatively large finite samples, and the location of the phase-transition point and its properties are analyzed by finite-size extrapolation methods. The structure of the paper is the following. The model and some results are summarized in Sec.\ref{sec:model}. Numerical results at different points of the phase diagram are presented in Sec.\ref{sec:numerical} and analyzed by finite-size scaling. We close our paper with a discussion in Sec.\ref{sec:discussion}.
\section{Model and some results}
\label{sec:model}
We consider the ferromagnetic $q$-state Potts model\cite{Wu} in a one-dimensional periodic lattice with long-range interactions defined by the Hamiltonian:
\begin{equation}
{\cal H}=-\sum_i J_i \delta(s_i,s_{i+1})-\sum_{i+1 < j} J_{ij} \delta(s_i,s_{j})\;.
\label{hamiltonian}
\end{equation}
Here $s_i=1,2,\dots,q$ is a Potts-spin variable at site $i=1,2,\dots,L$ and the long-range interaction, $J_{ij}$, has a power-law dependence as in Eq.(\ref{J(r)}) with $r=\min\left(L-\left|i-j\right|,\left|i-j\right|\right)$. The nearest-neighbour couplings, $J_i \equiv J_{i,i+1}$, are random variables. For simplicity we take $J_i$ from a bimodal distribution, being either $J_-=J-\Delta$ or $J_+=J+\Delta$ with equal probability. In the following we set the energy scale to $J=1$ and restrict ourselves to $0<\Delta<1$.
\subsection{The large-$q$ limit}
\label{sec:large-q}
In this paper we consider the $q \to \infty$ limit of the model, when the reduced free-energy in the Fortuin-Kasteleyn representation\cite{kasteleyn} is dominated by a single graph\cite{JRI01}, the so-called optimal graph, $G$, and given by:
\begin{equation}
-\beta f L= {\rm max}_G W(G),\quad W(G)= \left[c(G) + \beta \sum_{ij \in G} J_{ij} \right]\;.
\label{max}
\end{equation}
Here $c(G)$ stands for the number of connected components of $G$ and $\beta=1/(T \ln q)$, with the temperature $T$. In the \textit{homogeneous nonrandom model} with $\Delta=0$ there are only trivial optimal graphs, as shown in the appendix, for any $\sigma$. In the low-temperature phase, $T < T_c$, it is the fully connected graph with $W(G_c)=-\beta L f_{\rm hom}=1+L \beta {\cal Z}_L(1+\sigma)$, with
\begin{eqnarray}
{\cal Z}_L(1+\sigma)&=&\frac{1}{L}\sum_{i=1}^{L/2}(L+1-i)i^{-(1+\sigma)} \nonumber \\
&=&\left(1+\frac{1}{L}\right) \zeta_{L/2}(1+\sigma)-\frac{1}{L}\zeta_{L/2}(\sigma)\;,
\label{zeta}
\end{eqnarray}
where we have assumed that $L$ is \textit{even}.
Here $\zeta_{L/2}(\alpha)=\sum_{i=1}^{L/2} i^{-\alpha}$ and for $L \to \infty$ we have the Riemann zeta function, $\zeta(\alpha)$. In the high-temperature phase, $T > T_c$, the optimal graph is the empty graph with $W(G_e)=-\beta L f_{\rm hom}=L$. The phase-transition point in the thermodynamic limit is given by $\beta_c=1/\zeta(1+\sigma)$, where the phase transition is of first order, having the maximal jump in the magnetization. In the limit where $\sigma$ goes to infinity one recovers the disordered SR Potts chain. In that case and for finite size $L$, there are nontrivial optimal sets. But in the thermodynamic limit, the magnetization still jumps from zero to one for the bimodal distribution. This is shown in the appendix.
\subsection{Stability analysis of the random model}
\label{sec:stability}
Here we start with weak disorder, $\Delta \ll 1$, and estimate the characteristic function of non-homogeneous optimal graphs. First let us consider an island of $l +1 \le \frac{L}{2}$ consecutive sites, which is fully connected, within a sea of isolated points. The corresponding characteristic function is given by:
\begin{equation}
W(G_1)=L-l+\beta [(l+1) \zeta_l(1+\sigma)-\zeta_l(\sigma)] + \beta \Delta \epsilon(l)\;,
\end{equation}
where $\epsilon(n)$ is the sum of $n$ random numbers with mean zero and variance unity, thus $\epsilon(n) \sim \sqrt{n}$ for large $n$. At the transition point of the pure system, $\beta=\beta_c=1/\zeta(1+\sigma)$, the new diagram is the optimal set, i.e. $W(G_1)>W(G_e)$, provided $\Delta > [l (\zeta(1+\sigma)-\zeta_{l-1}(1+\sigma))+\zeta_l(\sigma)]/\epsilon(l-1)$. For large $l$ the r.h.s. of this inequality scales as $l^{1-\sigma}/l^{1/2} \sim l^{1/2-\sigma}$, thus we have the condition
\begin{equation}
\Delta > C l^{1/2-\sigma},\quad l \gg 1\;.
\label{Delta}
\end{equation}
Consequently for a decay exponent $\sigma > 1/2$ there is a new, non-homogeneous optimal set and the (phase-transition) properties of the system are modified by any small amount of disorder, at least in the thermodynamic limit. On the contrary, for $\sigma < 1/2$ the transition, at least for small $\Delta$, stays of first order and could be changed only by strong enough disorder, i.e. for large $\Delta$. Next we study the stability of the fully connected graph $G_c$ and consider a diagram, $G_2$, in which $l$ disconnected sites sit within a fully connected sea of points. Its characteristic function is given by:
\begin{eqnarray}
&W(G_2)=l+1+\beta L {\cal Z}_L(1+\sigma) \\
&-\beta[ l (2\zeta_{L/2}(1+\sigma)-\zeta_{l}(1+\sigma))+\zeta_l(\sigma)] + \beta \Delta \epsilon(l+1) \nonumber \;.
\end{eqnarray}
At $\beta=\beta_c$ we have $W(G_2)<W(G_c)$, at least for weak disorder, for any value of $\sigma > 0$. This means that, considering the stability of the two trivial optimal sets of the pure system at $\beta=\beta_c$, these are not symmetric. For $\sigma>1/2$ the empty diagram is unstable, while the fully connected graph is stable for weak disorder. We note that in the SR model both graphs become unstable at the same value of the dimensionality: $d \le 2$. In the LR model, in the modified transition regime, $\sigma>1/2$, we can define a breaking-up length:
\begin{equation}
l^{*} \sim \Delta^{1/(1/2-\sigma)},\quad \sigma>1/2\;,
\end{equation}
which is the typical size of connected clusters. This means that in a finite system one should have $L > l^{*}$ in order to be able to observe the new type of transition; otherwise there is a pseudo-first-order transition in the finite system.
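The finite-size quantities entering the two trivial characteristic functions are easy to check numerically. The following sketch (added here for illustration only; it is not the optimization algorithm of Ref.~\cite{aips02}) verifies the two forms of ${\cal Z}_L(1+\sigma)$ in Eq.~(\ref{zeta}) and the convergence of the finite-size crossing point of $W(G_c)$ and $W(G_e)$ to $\beta_c=1/\zeta(1+\sigma)$:

```python
import numpy as np

def zeta_partial(n, a):
    """Partial sum zeta_n(a) = sum_{i=1}^{n} i^(-a)."""
    i = np.arange(1, n + 1)
    return float(np.sum(i ** (-a)))

def calZ(L, sigma):
    """Z_L(1+sigma) from its defining sum (first line of the eqnarray)."""
    i = np.arange(1, L // 2 + 1)
    return float(np.sum((L + 1 - i) * i ** (-(1.0 + sigma))) / L)

def calZ_zeta_form(L, sigma):
    """The same quantity via partial zeta sums (second line of the eqnarray)."""
    return (1 + 1 / L) * zeta_partial(L // 2, 1 + sigma) - zeta_partial(L // 2, sigma) / L

L, sigma = 1024, 0.75
assert abs(calZ(L, sigma) - calZ_zeta_form(L, sigma)) < 1e-12

# Finite-size transition point: W(G_c) = 1 + L*beta*Z_L equals W(G_e) = L at
# beta_c(L) = (L-1)/(L*Z_L), which approaches 1/zeta(1+sigma) as L grows.
beta_c_L = (L - 1) / (L * calZ(L, sigma))
beta_c_inf = 1.0 / zeta_partial(10**6, 1.0 + sigma)  # numerical zeta(1+sigma)
print(beta_c_L, beta_c_inf)
```

The slow, $O(L^{-\sigma})$, approach of $\beta_c(L)$ to $\beta_c$ visible here is the same finite-size shift exploited later in the analysis of the magnetization profiles.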
\subsection{Relation with the RFIM}
\label{sec:random}
The previous stability analysis is based on the properties of an interface separating the two trivial optimal graphs, and analogous reasoning due to Imry and Ma\cite{imry_ma} works for the RFIM, in which case the interface separates the ordered and disordered regions of the model. This mapping has been observed by Cardy and Jacobsen\cite{pottstm} and can be generalized to LR interactions, in which case the RFIM in a one-dimensional lattice is defined by the Hamiltonian:
\begin{equation}
{\cal H}_{RFIM}=-\sum_i B_i S_i-\sum_{i < j} J_{ij} S_iS_{j}\;,
\label{RFIM}
\end{equation}
in terms of $S_i=\pm 1$. Here $B_i$ is a random variable with zero mean and variance $\Delta^2$ and $J_{ij}$ is of the same form as in Eq.(\ref{J(r)}). The critical behavior of ${\cal H}_{RFIM}$ has been studied in the literature\cite{bray,weir,rodgers,aizenman,cassandro,monthus11,leuzzi,dewenter} and $\sigma$-dependent properties are found, which are summarized in the following. There is a ferromagnetic ordered phase in the system for $0<\sigma<1/2$ (which corresponds to phase coexistence, i.e. a first-order transition, in the random-bond Potts model (RBPM)) and there is no spontaneous ordering for $\sigma > 1/2$ (which is analogous to the absence of a first-order transition in the RBPM). The transition to the ferromagnetic ordered phase is mean-field (MF) type in the region $0<\sigma<1/3$, where the critical exponents are the MF ones: $\alpha_{RF}=0$, $\beta_{RF}=1/2$, $\gamma_{RF}=1$ and $\nu_{RF}=1/\sigma$.
On the contrary, for $1/3<\sigma<1/2$ the transition is non-MF: the critical exponent $\nu_{RF}$ is not known exactly, but we have the relations:
\begin{equation}
\frac{2-\alpha_{RF}}{\nu_{RF}}=1-\sigma,\quad\frac{\beta_{RF}}{\nu_{RF}}=\frac{1}{2}-\sigma,\quad \frac{\gamma_{RF}}{\nu_{RF}}=\sigma
\end{equation}
Cardy and Jacobsen\cite{pottstm} have conjectured relations between the magnetization exponents of the RFIM and the tricritical exponents in the energy sector of the RBPM, at least for SR models. If we assume the validity of these relations for LR interactions, too, we have for the correlation-length exponent of the RBPM at the tricritical point:
\begin{equation}
\nu=\frac{\nu_{RF}}{\beta_{RF}+\gamma_{RF}}\;.
\end{equation}
Thus the conjectured results are $\nu=\dfrac{2}{3\sigma}$ and $\nu=2$ in the MF region and in the non-MF region, respectively.
\section{Numerical calculation}
\label{sec:numerical}
\subsection{Preliminaries}
\label{sec:preliminaries}
As usual for systems with quenched disorder, one should perform two averages: first, the thermal average for a given realization of the disorder and second, the average over the disorder realizations. For a given random sample of length $L$ the thermal average is obtained through the solution of the optimization problem given in Eq.(\ref{max}). Having the optimal graph of the sample, we have the free energy as well as the structure of connected clusters in this graph. The magnetization of the sample, $m$, is given by the number of sites in the largest cluster, $N_{\rm max}$, as $m=N_{\rm max}/L$. The optimization problem for a given sample is solved exactly by a combinatorial optimization algorithm which works in polynomial time\cite{aips02}. This makes it possible to treat relatively large samples up to $L=1024$ and in some cases up to $L=2048$. In the latter case the typical computational time for a sample in the complete temperature range is about 6-7 hours on a 2.4 GHz processor.
A drawback of the calculation is that the underlying graph in the present problem is fully connected, having $L(L-1)/2$ possible edges, so the algorithm needs correspondingly many iterations, which increases the computational time accordingly. In the second step of the averaging process we have considered several independent random samples, their typical number being a few times $10^4$, and for $L=1024$ a few times $10^3$.
\subsection{Magnetization profiles}
\label{sec:profiles}
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{diagFI_pointsJC.pdf}
\end{center}
\caption{Schematic phase-diagram of the LR Potts chain with random nearest neighbour couplings in the $q \to \infty$ limit together with points of the phase diagram studied numerically. The border of the strongly $1^{\rm st}$ order regime in a finite system (dashed line) is calculated with $L=256$ and with $N^{\#}=600$ samples, see text. The plus sign refers to long range first order, the cross sign refers to mixed order, the circle to short range first order, and the square to second order transitions, respectively. }
\label{fig:phasediag}
\end{figure}
Before studying the phase diagram of the system in detail, we have made a rough estimate of the domain in which the transition is very strongly first order. For this purpose we have analysed the phase transition of $N^{\#}=600$ samples of length $L=256$. In the shaded area of the $\sigma - \Delta$ phase diagram in Fig.\ref{fig:phasediag}, in all samples the transition is between the fully connected graph and the empty graph, thus the transition is maximally $1^{\rm st}$ order, as in the homogeneous system. Then we have chosen several points outside the strongly first-order regime, which are indicated in Fig.\ref{fig:phasediag}. The selected points can be divided into two groups: a set of points with relatively weak disorder, $\Delta=0.2$ and $\Delta=0.5$, and another set with quite strong disorder, $\Delta=0.75$.
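Once the optimal graph of a sample is known as an edge list, extracting the magnetization $m=N_{\rm max}/L$ defined above reduces to a connected-components computation. A minimal sketch (a generic union-find pass added for illustration; the exact optimization algorithm of Ref.~\cite{aips02} is not reproduced here):

```python
def magnetization(L, edges):
    """m = N_max / L, where N_max is the size of the largest connected
    component of the optimal graph given as a list of (i, j) edges."""
    parent = list(range(L))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    sizes = {}
    for site in range(L):
        r = find(site)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / L

# Example: an 8-site chain whose optimal graph is one 5-site island
# plus three isolated sites.
print(magnetization(8, [(0, 1), (1, 2), (2, 3), (3, 4)]))  # -> 0.625
```

In the fully connected low-temperature graph this gives $m=1$, and in the empty graph $m=1/L$, reproducing the two trivial limits.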
At each point the calculation of the optimal graph is performed in the complete temperature range: we have monitored the temperature dependence of the magnetization and focused on its possible singular behaviour. These calculations are performed in finite systems with $L=64,128,\dots 1024$, and the actual properties of the singularity, and thus the form of the phase transition, are analyzed by finite-size extrapolation.
\subsubsection{Weak and intermediate disorder regimes: $\Delta=0.2$ and $0.5$}
\label{sec:weaker}
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{{M_D_0.200_s_1.000}.pdf}
\end{center}
\caption{Second-order transition at $\Delta=0.2$ and $\sigma=1$}
\label{fig:M_0.2_1}
\end{figure}
For weak disorder with $\Delta=0.2$ we have studied the point of the phase diagram with $\sigma=1$ (square on Fig.\ref{fig:phasediag}), i.e. at the border of the LR regime; the magnetization profiles are shown in Fig.\ref{fig:M_0.2_1}. It is seen that due to disorder the first-order transition of the pure system is rounded: the jump in the magnetization decreases with increasing size, and in the thermodynamic limit the jump is expected to disappear as $\Delta m(L) \sim L^{-\beta/\nu}$, so that the limiting curve $\lim_{L \to \infty} m(L,T)=m(T)$ is continuous. However, its derivative at $T=T_c$ is expected to be divergent, so that $m(T)-m(T_c) \sim |T-T_c|^{\beta}$. The finite-size transition points are shifted as $T_c-T_c(L) \sim L^{-1/\nu}$. Note that for $T<T_c$ ($T>T_c$) the profiles satisfy $m(L_1,T)>m(L_2,T)$ ($m(L_1,T)<m(L_2,T)$) for $L_1<L_2$. We have also studied the temperature dependence of the average energy density, which is shown in Fig.\ref{fig:E_0.2_1}. At the transition point in small finite systems there is a discontinuity of the energy density, which seems to disappear in the thermodynamic limit, but its first derivative, the specific heat, is divergent: $C(T) \sim |T-T_c|^{-\alpha}$.
Consequently, according to the Ehrenfest classification, the transition is of second order.
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{{E_D_0.200_s_1.000}.pdf}
\end{center}
\caption{Energy for $\Delta=0.2$ and $\sigma=1$}
\label{fig:E_0.2_1}
\end{figure}
For intermediate disorder, $\Delta=0.5$, two points are considered, with $\sigma=0.75$ and $\sigma=1.0$; the calculated average magnetization profiles are presented in Figs. \ref{fig:M_0.5_0.75} and \ref{fig:M_0.5_1}. In both cases the transition seems to be of second order, which is in agreement with the temperature dependence of the energy densities.
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{{M_D_0.500_s_0.750}.pdf}
\end{center}
\caption{Second-order transition at $\Delta=0.5$ and $\sigma=0.75$}
\label{fig:M_0.5_0.75}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{{M_D_0.500_s_1.000}.pdf}
\end{center}
\caption{Magnetisation for $\Delta=0.5$ and at the border $\sigma=1$}
\label{fig:M_0.5_1}
\end{figure}
\subsubsection{Strong disorder regime: $\Delta=0.75$}
\label{sec:strong}
At the disorder parameter $\Delta=0.75$ we have studied different regimes by varying the decay exponent $\sigma$.
\paragraph{\underline{$\sigma \lesssim 0.5$: LR first-order transitions}}
\label{sec:LR first}
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{{M_D_0.750_s_0.400}.pdf}
\end{center}
\caption{First-order transition due to LR forces at $\Delta=0.75$ and $\sigma=0.4$.}
\label{fig:M_0.75_0.4}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{{M_D_0.750_s_0.500}.pdf}
\end{center}
\caption{The jump in the magnetisation is rounded due to disorder at $\Delta=0.75$ and $\sigma=0.5$.}
\label{fig:M_0.75_0.5}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{{M_D_0.750_s_0.625}.pdf}
\end{center}
\caption{Mixed-order transition at $\Delta=0.75$ and $\sigma=0.625$}
\label{fig:M_0.75_0.625}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{{M_D_0.750_s_0.750}.pdf}
\end{center}
\caption{Mixed-order transition at $\Delta=0.75$ and $\sigma=0.75$}
\label{fig:M_0.75_0.75}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{{M_D_0.750_s_0.875}.pdf}
\end{center}
\caption{Mixed-order transition at $\Delta=0.75$ and $\sigma=0.875$}
\label{fig:M_0.75_0.875}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=9cm]{{M_D_0.750_s_1.000}.pdf}
\end{center}
\caption{Magnetisation for $\Delta=0.75$ and at the border $\sigma=1$}
\label{fig:M_0.75_1}
\end{figure}
(plus signs on Fig.\ref{fig:phasediag}) At the point $\sigma=0.4$ in Fig.\ref{fig:M_0.75_0.4} the average magnetization has a finite jump of $\Delta m \approx 1$ for all finite systems. The finite-size transition points, which are identified with the position of the jump, $T_c(L)$, are shifted such that $T_c(L_1)<T_c(L_2)$ for $L_1<L_2$. Furthermore the distance from the true transition point is well described by the asymptotic behaviour of the non-random system, $\Delta T_c=T_c-T_c(L) \sim L^{-\sigma}$, since $T_c(L) \propto \zeta_L(1+\sigma)$. Thus the scaling exponent associated with lengths is $\nu \approx 1/\sigma$.
At this point, and more generally in the regime $\sigma \lesssim 0.5$, there is a \textit{random first-order transition} due to LR forces. At the borderline value of $\sigma=0.5$ the magnetization profiles in Fig.\ref{fig:M_0.75_0.5} still show a jump, at least for smaller finite systems. With increasing $L$, however, the jump in the magnetization is going to be rounded, so that the transition could be continuous in the thermodynamic limit. With the finite-size results at hand we cannot discriminate between these scenarios. The shift of the finite-size transition points is characterized by an exponent $\nu \approx 2 = 1/\sigma$ in this case, too.
\paragraph{\underline{$0.5 < \sigma \le 1.0$: Mixed-order transitions}}
\label{sec:mixed}
(crosses in Fig.\ref{fig:phasediag}) In this regime we have a series of points with $\sigma=0.625,~0.75,~0.875$ and $1.0$, and the corresponding profiles are shown in Figs.\ref{fig:M_0.75_0.625}-\ref{fig:M_0.75_1}. The new feature of the profiles is that for different sizes they cross each other, so that for $T<T_c$ ($T>T_c$) the profiles satisfy $m(L_1,T)<m(L_2,T)$ ($m(L_1,T)>m(L_2,T)$) for $L_1<L_2$. Furthermore at the transition point in the thermodynamic limit the magnetization has a finite limiting value, $\lim_{L \to \infty} m(L,T_c^-)=m^->0$, which is different from the limit $\lim_{L \to \infty} m(L,T_c^+)=m^+$. Consequently at the transition point there is a jump in the magnetization: $\Delta m=m^--m^+$. We also expect that the actual value of $m^+$ is (close to) zero for strong disorder (large $\Delta$) and increases for smaller values of $\Delta$. In the thermodynamic limit for $T<T_c$ the magnetization is expected to follow a singular temperature dependence: $m(T)-m^- \sim (T_c-T)^{\beta}$. This can be checked in finite systems by defining finite-size transition points as the crossing points of the profiles $m(L_1,T)$ and $m(L_2,T)$: $m[L_1,T_c(L_1,L_2)]=m[L_2,T_c(L_1,L_2)] \equiv m^-(L_1,L_2)$.
According to scaling theory the differences should behave asymptotically as $T_c(L_1,L_2)-T_c \sim (L_1 L_2)^{-\frac{1}{2 \nu}}$ and $m^--m^-(L_1,L_2) \sim (L_1 L_2)^{-\frac{\beta}{2 \nu}}$. Due to strong finite-size corrections we could make an estimate for the critical exponents only in the case $\sigma=0.875$, with the result $1/\nu \approx 1.27$ and $\beta/\nu \approx 0.78$. This means that at this point, or more generally in the $0.5 < \sigma \le 1.0$ part of the phase diagram (with $\Delta=0.75$), there is a mixed-order phase transition in the system: the magnetization has a jump at the transition point, but the correlation length is divergent at $T_c$. Comparing the magnetization profiles at different values of $\sigma$, one can notice that the limiting value $m^-$, and thus the jump $\Delta m$, is an increasing function of $\sigma$ in the given range. Increasing $\sigma$ over the upper limit, $\sigma=1$, the form of the singularity changes once more.
\paragraph{\underline{$\sigma > 1.0$: SR first-order transitions}} \label{sec:SR first}
\begin{figure} \begin{center} \includegraphics[width=9cm]{{M_D_0.750_s_1.250}.pdf} \end{center} \caption{First-order transition due to SR forces at $\Delta=0.75$ and $\sigma=1.25$.} \label{fig:M_0.75_1.25} \end{figure}
\begin{figure} \begin{center} \includegraphics[width=9cm]{{M_D_0.750_s_1.500}.pdf} \end{center} \caption{First-order transition due to SR forces at $\Delta=0.75$ and $\sigma=1.5$.} \label{fig:M_0.75_1.5} \end{figure}
(circles in Fig.\ref{fig:phasediag}) The magnetization profiles at $\sigma=1.25$ and $1.5$ in Figs.\ref{fig:M_0.75_1.25} and \ref{fig:M_0.75_1.5} show similar features: a jump develops for large $L$, whose asymptotic position is at $T_c(\sigma)/\zeta(1+\sigma)<1$; this ratio decreases with increasing $\sigma$, and in the true SR model with $\sigma \to \infty$ it is just $1-\Delta$. Thus in this region the transition is of first order due to SR interactions.
Comparing the finite-size transition temperatures, $T_c(L)$, defined as the inflection points of the profiles, we observe the asymptotic behaviour $T_c(L)-T_c \sim L^{-1}$, characteristic of SR forces. We note that spontaneous order in the LR Potts chain for $\sigma > 1$ can be observed only in the $q \to \infty$ limit. For any finite value of $q$, due to thermal fluctuations, there is no ordered phase; thus the SR first-order transition regime is absent.
\section{Discussion} \label{sec:discussion}
We have studied numerically the phase diagram of the ferromagnetic LR Potts chain with random nearest-neighbour couplings in the $q \to \infty$ limit. Depending on the strength of disorder, $\Delta$, and the decay exponent, $\sigma$, different types of phase transitions are found: first-order due to LR interactions, first-order due to SR interactions, second-order, and mixed-order transitions. A schematic phase diagram is depicted in Fig.\ref{fig:phasediag}. For small values of $\sigma < \sigma_c(\Delta) \le 0.5$ the long-range interactions are dominant over quenched disorder and the transition is of first order, as in the non-random system. For large values of $\sigma > 1$ the transition is also of first order, however now due to short-range interactions. We note that for finite values of $q$ in this region there is no ferromagnetic order in the system. For intermediate values of the decay exponent, $\sigma_c(\Delta) < \sigma < 1$, quenched disorder changes the order of the transition. For weaker disorder the transition turns second-order, which is manifested by a divergent specific heat and a divergent correlation length; the magnetization at the critical point, however, is continuous and has a finite value. For strong disorder the transition becomes mixed-order. At the transition point the correlation length is divergent, but there is a finite jump in the magnetization, as well as in the energy density.
The finite-size scaling behaviour of the magnetization profiles is also different at the SO and the MO transitions. The different types of transitions are connected with the geometric properties of the optimal graphs. At first-order transitions the optimal graphs are different at the two sides of the transition point: in the ferromagnetic phase there is a giant cluster, whereas in the high-temperature phase the clusters have finite mass and extent. At the second-order transition there is a giant cluster at both sides; however, at the transition a hole develops in this giant cluster, whose size as well as mass is divergent. This hole at the SO transition point is a fractal, therefore the average magnetization is continuous. A similar process takes place at a mixed-order transition, too, with the difference that in this case the ``hole'' in the high-temperature phase is a compact object having a finite mass density. This leads to a jump in the magnetization in the thermodynamic limit. For large enough $\Delta$ this hole disconnects the giant cluster, so that the mass density of the giant cluster, which is the magnetization, vanishes in the thermodynamic limit. We expect that the results summarized in the phase diagram in Fig.\ref{fig:phasediag} remain qualitatively correct for other, more general models, too. First we mention that the LR forces in Eq.(\ref{hamiltonian}) can be (weakly) random, too, which means that in Eq.(\ref{J(r)}) the prefactor is modified as $J \to J_i$, where the $J_i > 0$ are random variables. Another set of models is obtained if the parameter $q$ is large but finite. As noted before, this model has no ordered phase for $\sigma > 1$; however, a similar phase diagram is expected to hold in the regime $0 < \sigma < 1$.
This conjecture is based on the known results for the SR models, in which the properties of the phase transitions in different dimensions are found to be a smooth function of $q$, so that the $q \to \infty$ limit is not singular\cite{jacobsenpicco,ai03,long2d}. Further numerical work is needed to clarify whether a similar relation holds also for the LR model.
\section*{Appendix} \label{appendix}
In this appendix we give the solution of the optimal cooperation problem on the two lines $\Delta=0$ and $\sigma \rightarrow \infty$ of the phase diagram. Let us recall that an optimal set is a set of edges which maximizes the objective function \begin{equation*} f(S;\beta) = c(S) + \beta \sum_{e \in S} J(e) \end{equation*} where $c(S)$ is the number of connected components of $S$ and $\beta$ is the inverse temperature. For any sample, the optimal set at zero temperature is the set of all the bonds, while at high temperature the optimal set is empty. Between these two limits the optimal set changes at a finite number $n_T$ of temperatures ($n_T<L$). We call these temperatures breaking temperatures. If there is only one breaking temperature ($n_T=1$) the model is maximally first order, since the magnetization jumps from zero to one. Let us first consider the case $\Delta = 0$, that is, a non-disordered model. We show below that for any decreasing weight function of the distance (as for example $d^{-(1+\sigma)}$) there is a single breaking temperature for any size $L$. Note first that if a bond of length $d$ belongs to an optimal set, then there is an optimal set in which {\it all} bonds of length $d$ are present. Indeed, the cyclic permutation of the sites $i \rightarrow i+1$ preserves the lengths of the bonds, and therefore any bond of length $d$ belongs to some optimal set. Since the union of two optimal sets is also an optimal set, we deduce that there is an optimal set to which all bonds of length $d$ belong.
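For very small rings, the maximization of $f(S;\beta)$ can be carried out by brute force over all edge subsets, which makes the single-breaking-temperature property at $\Delta=0$ easy to check directly. A sketch (the ring size, decay exponent, and $\beta$ grid are illustrative choices):

```python
from itertools import combinations, product

L, sigma = 6, 0.4   # a small non-prime ring; sigma is illustrative
sites = range(L)
edges = list(combinations(sites, 2))
# circle distance d and LR weight J(d) = d^-(1+sigma), Delta = 0
weight = {(i, j): min(abs(i - j), L - abs(i - j)) ** -(1 + sigma)
          for i, j in edges}

def components(edge_subset):
    """Number of connected components, computed with union-find."""
    parent = list(sites)
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in edge_subset:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
    return sum(1 for x in sites if find(x) == x)

# tabulate c(S) and the total weight of S for every edge subset S
table = []
for mask in product([0, 1], repeat=len(edges)):
    S = tuple(e for e, keep in zip(edges, mask) if keep)
    table.append((components(S), sum(weight[e] for e in S), S))

optimal = set()
for k in range(1, 41):               # scan the inverse temperature
    beta = 0.1 * k
    optimal.add(max(table, key=lambda t: t[0] + beta * t[1])[2])
print(len(optimal))  # 2: only the empty and the full edge set are optimal
```

For $L=6$ (a non-prime size) and a decreasing $J(d)$ the scan indeed finds only two optimal sets, i.e. a single breaking temperature, in agreement with the argument given in the text.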
Suppose now that the bond between site 0 and site $d$ belongs to the optimal set. Then the bond between the sites $d$ and $2d$ also belongs to the optimal set, and consequently the sites $0$, $d$, $2d$ belong to the same cluster. More generally, all the bonds between $\alpha d$ and $(\alpha+1)d$ also belong to the optimal set, and consequently all sites $\alpha d$, where the product is taken modulo $L$ and $\alpha$ is an arbitrary integer, belong to the same cluster. If $L$ is a prime number, then all the sites are attained, and therefore the optimal set contains all bonds if it contains any one, which proves the result. Note that in this special case of $L$ being prime we did not use the fact that the weight function is decreasing. To sketch the result in the case where $L=ln$ is not a prime number, we introduce the sets of edges $C_{n,l}(k)$ induced by the vertex sets $\left\{k,l+k,2l+k,\cdots,\left(n-1\right)l+k\right\}$. It is clear that every optimal set is of the form $R(n)=\bigcup_{k=0}^{l-1}C_{n,l}(k)$ and is therefore characterized by a divisor of $L$. Showing that the transition is maximally first order amounts to showing that the optimal set is characterized by either $n=1$ or $n=L$. To this end, let us introduce the sets of edges $\Gamma_{n}(k)$ induced by the sets of vertices $\left\{k,k+1,\cdots,k+\left(n-1\right)\right\}$. A union $S(n)=\bigcup_{k=0}^{l-1}\Gamma_{n}(nk)$ is in general not an optimal set. However, comparing the objective function for $S(n)$ and $R(n)$ and using the fact that $J$ is a decreasing function of the distance, we find that only $S(1)$ and $S(L)$ can be optimal. This proves that the model is maximally first order also when $L$ is not prime.
\medskip
Now we turn to the case $\sigma \rightarrow\infty$, {\it i.e.} when only the short-range disordered bonds are present.
In the general case the coupling constants, sorted in increasing order, take the values $0 < J_0 \le J_1 \le \cdots \le J_{L-1}$, and the breaking temperatures are given by $T_k = \frac{1}{k-1}\sum_{i=0}^{k-1}J_i$ for $2 \le k \le L$. Using this relation in the case of a bimodal distribution with an equal number of strong ($1+\Delta$) and weak ($1-\Delta$) bonds, one gets $1-\Delta = J_0 = \cdots =J_{\frac{L}{2}-1} < J_{\frac{L}{2}}=\cdots=J_{L-1}=1+\Delta$, from which the $T_k$ are easily deduced. After some algebra one gets that if $\Delta \le \frac{1}{L-1}$ the model is maximally first order with a breaking temperature $\frac{L}{L-1}$, while if $\frac{1}{L-1}<\Delta$ there are two breaking temperatures, $T_1 = \frac{L}{L-2}(1-\Delta)$ and $T_2 = 1+\Delta$. In the intermediate regime $T_1 \le T \le T_2$ the free energy is $f(T,L)=\frac{1}{2} +\frac{1}{2} \frac{1+\Delta}{T}$, and we have numerically observed that the magnetization scales as $L^{-0.82}$. So in the thermodynamic limit the magnetization jumps from 0 to 1 at $T_2$. Note finally that {\it all} realizations have exactly the same behavior. Therefore, in some sense, the model is not disordered.
\begin{acknowledgments} This work was supported by the Hungarian Scientific Research Fund under grant No. K109577 and K115959. J-Ch Ad'A extends thanks to the ``Theoretical Physics Workshop'' and FI to the Universit\'e Joseph Fourier for supporting their visits to Budapest and Grenoble, respectively. \end{acknowledgments}
# Homework Help: N=1 SUSY massive supermultiplets

1. Nov 2, 2008

### thatsnotfunny

N=1 SUSY with massive supermultiplets; therefore we have two pairs of raising/lowering operators.
Define a1 dagger as raising j3 by half and a2 dagger as lowering j3 by half.
Take a vacuum state with j, j3, where j3 has 2j+1 possible values.
If we act with just a1 dagger we get a superposition of states, both with j3+1/2, but one has j+1/2 and one has j-1/2.
If we act just with a2 dagger we get a similar situation but with j3-1/2.
But if we act with both a1 and a2 dagger we only get a state with j, j3. I understand that j3 is additive, but surely we should have states with j plus or minus one as well? It seems to be postulated that the spin can only go up or down by half but not by one?
\section{Introduction} \label{sec: intro} The vast majority of exoplanets discovered so far lies at less than 3 au from the central star, with super-Earths abundant well inside 1 au \citep[e.g.,][]{petigura13}. Planets of up to a few Earth masses could form in situ \citep[e.g.,][]{hansen12}, while giant planets most probably migrate inward from beyond a few au \citep[e.g.,][]{KleyNelson12}. Whether exoplanets form or migrate where they are detected, the effect of the structure and evolution of protoplanetary disks at 0.1--10\,au is expected by all models to be fundamental in shaping planetary systems. First measurements on the evolution of inner disks came from spatially unresolved observations of spectral energy distributions (SEDs), which in some disks show a lower infrared (IR) flux that was attributed to a deficit of hot inner material \citep{strom1989}. This deficit has been interpreted as due to inner holes (``transitional'' disks) or gaps (``pre-transitional'' disks), depending on the level of near-IR flux related to an inner dust belt inside the cavity \citep{espaillat2007}. Modern imaging techniques, probing disk radii of $\gtrsim 5$\,au, have confirmed the existence of $> 10$\,au wide cavities in several disks, depleted of dust particles of up to centimeter sizes \citep[e.g.,][]{andrews2011}. A leading theory for the origin of these cavities is disk clearing by planets, exo-Jupiters, but possibly also super-Earths \citep[e.g.,][]{pinilla12,fung17}. Inner disks can be dispersed by winds as well, although observations cannot yet be fully reconciled in any photoevaporative or planet-disk interaction models \citep{owen16}. In disks around young intermediate-mass stars (Herbig Ae/Be stars), the far-IR excess of the SED, tracing colder material at larger radii, has been adopted by \citet{meeus01} to classify disks into Group I (GI, with high excess) and Group II (GII, with moderate excess). 
The current understanding of this empirical classification is that it reflects a different disk structure, with GI having a large disk cavity that allows the irradiation of the disk at $\gtrsim 10$ au, and GII having no or at most small inner cavities \citep{maask13,menu15,honda15,garufi17}. While near-IR and millimeter imaging now reveal increasing detail in global disk structures \citep[e.g.,][]{alma15,benisty2015}, information on the inner disk structure at $\lesssim 5$ au can only be obtained from optical and near-IR spectroscopy and near-IR interferometry. Recently, independent studies have found new evidence in dust and gas tracers pointing in the direction of depletion processes in these inner disk regions \citep{kama15,bp15}. By combining these independent datasets (Sect.\,\ref{sec: data}), we report in this work on the discovery of a linked behavior between observables of CO gas and dust in inner disks (Sect.\,\ref{sec: correlations}). This behavior demonstrates a strong link between molecular gas and dust, providing an important framework for better understanding the evolving structure of planet-forming regions at $\lesssim 10$ au (Sect.\,\ref{sec:disc}).
\begin{figure*} \centering \includegraphics[width=1\textwidth]{Figure1.pdf} \caption{Linked behavior between the datasets combined in this work. \textbf{(a)}: Fe/H vs R$_{\rm co}$. \textbf{(b)}: Fe/H vs $\rm F_{NIR}$. \textbf{(c)}: $\rm F_{NIR}$ vs R$_{\rm co}$. \textbf{(d)}: $v2/v1$ vs $\rm F_{NIR}$. The red curve shows a parametric model of the decrease of $\rm F_{NIR}$ with increasing size of an inner cavity (see text for details). The three disk categories identified in the multi-dimensional parameter space are illustrated at the bottom. Dust is shown in yellow, and we mark the approximate location of infrared CO emission. Dust depletion is shown as a thinner yellow layer of residual dust, and dust layers that dominate the observed $\rm F_{NIR}$ are marked in red.
GII disks are shown as magenta squares, high-NIR GI disks as green triangles, low-NIR GI disks as cyan (empty) triangles.} \label{fig: correlations} \end{figure*}
\begin{figure*}[ht] \centering \includegraphics[width=1\textwidth]{histos.pdf} \caption{Histograms of normalized distributions of the sample parameters included in Table \ref{tab: sample}. Median values and median absolute deviations for each category are plotted at the top of each histogram. GII disks are shown in magenta, high-NIR GI disks in green, and low-NIR GI disks in cyan.} \label{fig: histos} \end{figure*}
\section{Sample and Measurements} \label{sec: data}
The three independent datasets combined in this analysis are the orbital radius and excitation of rovibrational CO emission (Sect. \ref{sec: CO}), the iron abundance Fe/H as measured on stellar photospheres (Sect. \ref{sec: Fe}), and the near-IR excess over the stellar flux in the SED (Sect. \ref{sec: NIR}). The sample includes the majority of well-known Herbig Ae/Be stars within 200 pc, together with some at larger distances: 26 late-B, A, and F stars with temperatures $T_{\rm eff}$ between 6,500 K and 11,000 K, masses between 1.5 M$_{\odot}$ and 4 M$_{\odot}$, and ages between 1 and 10 Myr. The sample is composed evenly of GI and GII disks (13 each), and we adopt the classification criterion from \citet{garufi17}, where the flux ratio at 30 and 13 $\mu$m is used to separate GI disks (with $F30/F13 > 2.2$) from GII disks (with $F30/F13 < 2.2$, where 2.2 is the ratio for a flat SED). All three datasets are available for 16 of these objects, while only two are available for the other 10 (Table \ref{tab: sample}). \subsection{Rovibrational CO emission} \label{sec: CO} Spectrally resolved emission line profiles provide information on gas kinematics in protoplanetary disks.
Rovibrational CO emission at 4.7--4.8\,$\mu$m was used to estimate a characteristic orbital radius of hot/warm ($\sim$300--1500 K) CO gas in Keplerian rotation in a disk, as $\rm{R}_{co} = (2\, \rm sin \,\textit{i}\, /\, FWHM_{\rm co})^2 \, G\,M_{\star}$, where M$_{\star}$ is the stellar mass and $i$ the disk inclination. As a characteristic gas velocity, we took the CO line velocity at the half width at half maximum (FWHM$_{\rm co}/2$), as in \citet{bp15}. For the disk inclinations, we adopted values from the near-IR interferometric survey by \citet{lazareff17}, which probes inner disks at $< 10$\,au. The 1$\sigma$ errors on R$_{\rm co}$ were propagated from the uncertainties on stellar masses, CO line widths, and disk inclinations, and they are dominated by the disk inclination uncertainties. Most CO spectra have been published previously, except for four disks that were newly observed with IRTF-ISHELL (see Appendix \ref{sec:ishellspec}). Another measured property of CO emission is the flux ratio between rovibrational lines from the second and first vibrational levels. The ratio $v2/v1$ is a sensitive tracer of the type of CO excitation \citep[e.g.,][]{britt03,britt07,thi13}: UV pumping populates high vibrational states first (producing higher $v2/v1$), while IR pumping and collisional excitation populate low states first (producing lower $v2/v1$).
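As an illustration of the $\rm{R}_{co}$ estimator above, a short numerical sketch in SI units (the stellar mass, inclination, and line width are made-up example values, not measurements from the sample):

```python
import math

G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30       # solar mass [kg]
AU = 1.496e11          # astronomical unit [m]

def R_co(fwhm, incl_deg, mstar):
    """Characteristic CO radius from the line FWHM [m/s],
    disk inclination [deg], and stellar mass [kg]."""
    return (2.0 * math.sin(math.radians(incl_deg)) / fwhm) ** 2 * G * mstar

# example: 2 solar masses, i = 45 deg, FWHM = 30 km/s
r = R_co(30e3, 45.0, 2.0 * M_SUN)
print(r / AU)  # a few au
```

By construction, the projected Keplerian velocity at $\rm{R}_{co}$, $\sqrt{G M_{\star}/\rm{R}_{co}}\,\sin i$, equals FWHM$_{\rm co}/2$, which provides a simple consistency check.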
The Fe/H measurements in Herbig~Ae/Be stars have recently been found to correlate with the presence or absence of dust cavities detected by millimeter interferometry imaging, suggesting that the stellar photospheres keep an imprint of the dust/gas ratio of their inner disks through the accreted material \citep{kama15}. Two stars in the sample have $\rm T_{eff}$ lower than 7500 K, HD142527 and HD135344B; these stars likely mix the accreted material more efficiently than the rest of the sample. \subsection{Near-infrared excess} \label{sec: NIR} A traditional probe of hot dust in inner disks is the near-IR excess \citep[e.g.,][]{dominik03}. We estimated the fractional $\rm F_{NIR}$ = F(NIR)/F$_\star$ for our entire sample to ensure a homogeneous procedure, and found values consistent with those estimated in previous work. We collected the BVRJHK photometry, along with the WISE fluxes at 3.6 $\mu$m and 4.5 $\mu$m, and dereddened them by means of the extinction $A_{\rm V}$ available from the literature \citep[$A_{\rm V} < 0.5$\,mag for most of this sample;][]{meeus12}. A PHOENIX model of the stellar photosphere \citep{Hauschildt1999} with the literature stellar temperature and metallicity and a surface gravity log$(g)=4.0$ was employed for each source and scaled to the dereddened V magnitude. The near-IR excess $\rm F_{NIR}$ was measured by integrating the observed flux exceeding the stellar flux between 1.2 $\mu$m and 4.5 $\mu$m. These values were then divided by the total stellar flux F$_\star$ from the model. The 1$\sigma$ errors on $\rm F_{NIR}$, propagated from the uncertainties on $\rm T_{eff}$ and an assumed uncertainty of 0.2 mag in $A_{\rm V}$, are typically 13\% (median value), and always $\lesssim 20\%$ (Table \ref{tab: sample}).
\section{Linked behavior between the datasets} \label{sec: correlations}
The datasets combined in this work show a linked behavior in the multi-dimensional parameter space illustrated in the four panels of Figure \ref{fig: correlations}. The correlation between Fe/H and R$_{\rm co}$ demonstrates a link between two observables that could in principle be independent of each other: iron is depleted from the stellar photospheres as R$_{\rm co}$ recedes to larger radii in their inner disks. GII disks have, on average, smaller R$_{\rm co}$ and higher Fe/H, while GI disks have the opposite, larger R$_{\rm co}$ and lower Fe/H. A group of GI disks overlaps with GII disks at intermediate values of R$_{\rm co}$. $\rm F_{NIR}$ values are found to lie between 5\% and 34\%, with only one disk showing a value as low as 0.08\% (HD141569, see Sect.\,\ref{sec: environment}). All GII disks show $\rm F_{NIR}$ in the narrow range between 14\% and 19\%. GI disks instead span a wider range of $\rm F_{NIR}$, some with lower values than the GII disks (5--11\%) and some with higher values \citep[22--34\%, see also][]{garufi17}. Overall, disks with larger R$_{\rm co}$ have lower $\rm F_{NIR}$ and Fe/H, although without a simple monotonic relation between R$_{\rm co}$ and $\rm F_{NIR}$. In addition, there is a strong linear anticorrelation between $\rm F_{NIR}$ and the CO vibrational ratio $v2/v1$ (Fig. \ref{fig: correlations}d). Three categories of disks can be identified based on inner disk observables (Figure \ref{fig: correlations} and Table \ref{tab: categories}). Beyond nomenclature or classification purposes, we can understand the evolving properties of inner disks more comprehensively by identifying which observable properties separate out in this multi-dimensional space.
Interestingly, the GI disks in the sample are split into two very distinct parts of the parameter space, and we refer to them as ``high-NIR'' and ``low-NIR'' GI disks to identify the different behavior of inner disk observables. This dichotomy in the $\rm F_{NIR}$ values of GI disks is strong, with the Kolmogorov-Smirnov (KS) two-sided test rejecting the hypothesis that they are drawn from the same parent distribution (probability $< 1\%$). The three categories are instead not separated in terms of stellar temperature, mass, luminosity, or age, nor in mass accretion rates (see Figure \ref{fig: histos}), and no correlation is found between these parameters and R$_{\rm co}$, $\rm F_{NIR}$, or Fe/H, suggesting that the measured behavior of inner disk observables cannot be attributed to these parameters.
\begin{table} \caption[]{Median values for the three disk categories as in Fig.\,\ref{fig: histos}.} \label{tab: categories} $$ \begin{tabular}{cccccc} \hline \hline \noalign{\smallskip} log(Fe/H) & $\rm R_{co}$ & $\rm F_{NIR}$ & $v2/v1$ & $F30/F13$ & Cavity? \\ \hline \noalign{\smallskip} $-4.4$ & $3$ au & $16\%$ & $0.12$ & $1.2$ & no/small \\ \noalign{\smallskip} $-4.6$ & $5$ au & $27\%$ & $0.05$ & $5$ & ``high-NIR'' \\ \noalign{\smallskip} $-5.2$ & $18$ au & $8\%$ & $0.27$ & $5$ & ``low-NIR'' \\ \noalign{\smallskip} \hline \end{tabular} $$ \end{table}
The linked behavior between inner disk tracers of CO and dust may be produced by a scenario where, as dust is depleted (as shown by the decrease in $\rm F_{NIR}$ and Fe/H), CO gas is depleted as well (CO emission at high velocity decreases, FWHM$_{\rm co}$ shrinks, and $\rm R_{co}$ moves to larger disk radii).
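The segregation of the two GI sub-groups in $\rm F_{NIR}$ mentioned above can be quantified with a two-sample KS statistic; a self-contained sketch with made-up illustrative samples (not the measured values of Table \ref{tab: sample}):

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: maximum distance
    between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    xs = sorted(set(a) | set(b))
    def cdf(s, x):
        return sum(v <= x for v in s) / len(s)
    return max(abs(cdf(a, x) - cdf(b, x)) for x in xs)

# illustrative F_NIR fractions for the two GI sub-groups
high_nir = [0.22, 0.25, 0.27, 0.30, 0.34]
low_nir = [0.05, 0.07, 0.08, 0.10, 0.11]
D = ks_statistic(high_nir, low_nir)
print(D)  # 1.0: the two empirical distributions do not overlap
```

For non-overlapping samples the statistic reaches its maximum value of 1, corresponding to a very small p-value for samples of this size.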
As a simple test of whether a linked depletion of CO gas and dust from the inner disk may globally reproduce the observed trends, we adopted a parametric description of an inner disk structure in hydrostatic equilibrium, based on work by \citet{kama09} and summarized in Appendix \ref{sec:model}. The disk temperature decreases with disk radius as $r^{-0.5}$, and we explored how $\rm F_{NIR}$ decreases as the radius of the innermost dust (R$_{\rm dust}$) was sequentially increased to mimic the formation of an inner hole. A decrease of $\rm F_{NIR}$ with increasing hole size (red curve in Fig.\,\ref{fig: correlations}c) is naturally produced by a decreasing temperature of the emitting dust, as hotter dust at smaller radii is removed. In this scenario, R$_{\rm co}$ could trace the size of an inner dust-depleted disk region, as suggested from previous observations \citep{britt03,britt07}. When Keplerian line profiles are modeled assuming a power-law radial brightness \citep[e.g.,][]{salyk11}, R$_{\rm co}$ as defined here can be several times larger than R$_{\rm dust}$ (taken as the innermost radius where CO gas can also exist). The model curve in Fig.\,\ref{fig: correlations}c shows R$_{\rm dust}$ multiplied by 7, adopting the median scaling factor between the R$_{\rm co}$ values and the R$_{\rm dust}$ values from \citet{lazareff17} for this sample. By assuming fully depleted inner holes, this simple model provides only a trend for the lower boundary of the data points in the figure. Reproducing the high near-IR excess of some Herbig disks has been a long-standing modeling challenge that is still unsolved \citep[see, e.g.,][]{dullemond01,vinko06}. New modeling efforts could be devoted to studying how $\rm F_{NIR}$ can be increased toward the values measured in the high-NIR GI disks (25--35\%) by keeping an inner residual dust component while inner cavities are forming at larger disk radii. Such a structure is supported by recent observations of these disks, as discussed below.
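The qualitative behavior of the red model curve in Fig.\,\ref{fig: correlations}c can be sketched by integrating blackbody emission from concentric rings with $T \propto r^{-0.5}$ over the 1.2--4.5 $\mu$m band; the temperatures, radii, and resolution below are illustrative assumptions, not the parametric model of Appendix \ref{sec:model}:

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck(lam, T):
    """Planck spectral radiance B_lambda [W m^-3 sr^-1]."""
    return (2.0 * H * C ** 2 / lam ** 5) / math.expm1(H * C / (lam * KB * T))

def band_flux(r_in, r_out=10.0, t1=475.0, n=200):
    """Relative 1.2-4.5 um flux from rings between r_in and r_out [au],
    with T(r) = t1 * r^-0.5 [K] (t1 chosen so T ~ 1500 K at 0.1 au)."""
    lams = [1.2e-6 + i * (4.5e-6 - 1.2e-6) / 99 for i in range(100)]
    dlam = lams[1] - lams[0]
    dr = (r_out - r_in) / n
    total = 0.0
    for i in range(n):
        r = r_in + (i + 0.5) * dr
        total += 2.0 * math.pi * r * dr * sum(planck(l, t1 * r ** -0.5)
                                              for l in lams) * dlam
    return total

fluxes = [band_flux(r_in) for r_in in (0.1, 0.5, 1.0, 3.0, 9.0)]
print(fluxes)  # monotonically decreasing: larger holes give less NIR flux
```

Removing the hottest inner rings suppresses the band-integrated near-IR flux, reproducing the trend of decreasing $\rm F_{NIR}$ with increasing hole size described in the text.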
\section{Discussion} \label{sec:disc} \subsection{Dust and gas depletion in the inner disk regions} \label{sec: gas_dust_depl}
The behavior of inner disk observables presented in this work can be affected by processes that involve different modifications to the radial distributions of dust and gas. A quantification of these processes is beyond the scope of this work and requires explorations with sophisticated thermo-chemical models of disks, which have only recently started to be applied to disks with inner cavities \citep[e.g.,][]{simon13}. We advocate in this work strong empirical evidence for a link in the evolution of dust and molecular gas in planet-forming regions, extending to inner disk radii smaller than can be spatially resolved by ALMA. Sub-solar Fe/H (and other refractory abundances) on radiative stellar photospheres has long been suggested to be linked to dust-depleted material accreted onto the star \citep[e.g.,][]{venn90}, but it was unclear why only some Herbig Ae/Be stars exhibit a depletion of refractories \citep{folsom12}. \citet{kama15} showed that sub-solar refractory abundances in Herbig stars correlate with the presence of a large ($>$ 10 au) dust cavity in their disks, pointing out that the depletion of dust grains in the inner regions of transitional disks would naturally explain a lack of refractories in the accreting material. The correlation between Fe/H and $\rm F_{NIR}$ shown here demonstrates that a link exists between refractory abundances on stellar photospheres and the properties of dusty inner disks at $<10$ au. To decrease both Fe/H and $\rm F_{NIR}$, dust must be depleted from both the accretion flow \textit{\textup{and}} the inner disk.
Potential processes include 1) dust grain growth/inclusion into solids $\gg 1$ mm in size (pebbles and planetesimals) that decouple from the accreted gas and emit less efficiently in the infrared, and 2) physical decoupling of inner and outer disk, where the resupply of inner disk dust by inward drift from the outer disk is inhibited \citep[see, e.g., discussions in][]{andrews2011,kama15}. The survival of CO gas in a dust-depleted cavity of Herbig disks was specifically investigated by \citet{simon13}. The study showed that as long as inner disks are still gas rich (N(H$_2$) $\approx10^{27}$cm$^{-2}$ at <1\,au), the radial distribution of CO gas does not depend on the presence of dust, as the CO column density is well above that needed to self-shield from UV photodissociation \citep{visser09}. When instead the \textit{\textup{total}} gas content in the inner disk is decreased by at least four orders of magnitude, reaching column densities where UV photodissociation of CO becomes relevant, the survival of CO is closely linked to the presence of shielding dust. Specifically, in a dust- \textit{\textup{and}} gas-depleted cavity, the CO column density decreases by orders of magnitude through UV photodissociation when a residual inner dust belt is removed. In this situation, R$_{\rm co}$ will shift to larger radii as the dust is depleted from the innermost disk radii. In the framework of this modeling, the linked behavior between dust and CO gas observables suggests that low-NIR GI disks with large R$_{\rm co}$ may be in a \textit{\textup{gas-depleted}} regime. Gas depletion factors of 10$^2$--10$^4$ have been supported by recent analyses of millimeter CO emission in some of these disks, as imaged by ALMA \citep{vdmarel16}. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Fnir_F30.pdf} \caption{$\rm F_{NIR}$ and $F30/F13$ for this sample of Herbig disks, highlighting the three classes as in Figure \ref{fig: correlations}.
Following \citet{meeus01}, we mark disks where the 10\,$\mu$m silicate emission has been detected (``IIa'' and ``Ia'') or not (``Ib''); we keep the color-coding for the Ia/Ib labels from \citet{maask13} to facilitate comparison to that work. \citet{maask13} showed that silicate emission is tightly related to the $F30/F13$ ratio; instead, $\rm F_{NIR}$ is unrelated (Sect. \ref{sec: environment}).} \label{fig: Fnir_F30} \end{figure} \subsection{Dichotomy of inner disk cavities} \label{sec: environment} The net separation between high-NIR and low-NIR GI disks suggests that large inner disk cavities have two possible structures. Their properties are at the opposite extremes of those of GII disks with no/small cavities (Figures \ref{fig: correlations} and \ref{fig: Fnir_F30}). This dichotomy once again suggests a strong link between dust and gas, consistent with the inner disk structure discussed above in thermo-chemical modeling work: a residual inner dust component (high $\rm F_{NIR}$) may allow for UV-shielded and vibrationally colder CO gas to survive in a dust-free cavity at larger radii ($v2/v1 < 0.1$ in high-NIR disks), while its removal would allow for an efficient UV photodissociation of CO gas inside the dust cavity and for UV pumping at the cavity wall ($v2/v1 \sim 0.3$ in low-NIR disks). We note that $\rm F_{NIR}$ does not show a monotonic relation with $F30/F13$ (Figure \ref{fig: Fnir_F30}) and that the 10\,$\mu$m silicate emission is found in disks with cavities regardless of the measured level of $\rm F_{NIR}$. While $F30/F13$ traces the depletion of intermediate disk regions where the warm ($\sim$\,200--400\,K) silicate emission is produced \citep{maask13}, $\rm F_{NIR}$ in the high-NIR disks must be produced by a hotter disk region closer to the star. 
We also note that no Herbig disk in this sample has $\rm F_{NIR} \sim 0,$ except for HD141569, which has been proposed to be globally dispersing its gas toward the debris disk phase \citep[e.g.,][]{White2016}. This suggests that inner cavities in Herbig disks are depleted but never completely devoid of material, consistent with residual dust detected at the sublimation radius by NIR interferometry \citep{lazareff17} and with significant accretion rates measured even in low-NIR disks. The moderate/high accretion rates measured in some stars with large inner disk cavities currently provide one of the main open problems in understanding the structure and origin of disk cavities \citep[e.g.,][]{owen16,EP17}; interestingly, CO gas depletion in the disk has recently been proposed as a potential solution \citep{ercolano17}. The nature of a hot inner dust component in high-NIR cavities remains to be determined. Studies have investigated how $\rm F_{NIR}$ can be increased by i) increasing the inner rim scale-height by thermal or magnetic processes \citep{dullemond01,flock17}, ii) a dusty wind launched close to the dust sublimation radius \citep[e.g., HD31293 as modeled by][]{bans12}, iii) misaligned inner disks with warps induced by a gap-opening companion \citep{owen17}. All these scenarios invoke vertically extended inner dusty structures that will have effects on CO gas excitation, and still need to be investigated by thermo-chemical models. Intriguingly, the presence of misaligned inner disks has been linked to the presence of shadows and spirals at larger disk radii \citep[e.g.,][]{montesinos2016, min17}, which have been imaged in \textit{\textup{all}} high-NIR GI disks \citep[][]{benisty2015,stolker16,avenh17,benisty17, tang2017}. 
We highlight that the dichotomy of inner disk cavities presented in this work has not been captured by the classic definition of a ``pre-transitional'' disk, as low-/high-NIR GI disks have been indistinguishably classified ``pre-transitional'' \citep{espaillat2014}. A combination of multiple tracers of dust and gas in inner and outer disk regions is needed to refine previous concepts of ``(pre-)transitional'' disks into a better understanding of their structure and evolutionary phase. While some GII disks are already too small in radius and low in mass to ever show the properties of GI disks \citep[e.g., HD150193 and HD145263; see][]{garufi17}, it is possible that others are still overall large and massive enough to do so, if they eventually form a large inner cavity (e.g., the GII disk HD163296; Fig. \ref{fig: cartoon}). As argued in previous work, this suggests that at least two paths of disk evolution may be sampled by current observations. The observed dichotomy of GI cavities may instead suggest a potential ``on/off'' behavior of the hot inner dust component, where, as soon as an inner dust belt/warp is removed, R$_{\rm co}$ abruptly recedes to $\gtrsim 10$\,au due to the destruction of residual CO gas by UV photodissociation in a dust- and gas-depleted cavity (Sect. \ref{sec: gas_dust_depl}).

\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Cartoon_new.pdf}
\caption{Evolutionary paths that appear possible by combining this work and the analysis of \citet{garufi17}.
The dichotomy in inner disk cavities is shown at the center: high-NIR cavities have a residual inner dust component, possibly a belt/warp, that shields UV radiation and enables CO (vibrationally cold) to survive into the cavity; low-NIR cavities are instead dust- and gas-depleted, and CO (here vibrationally hot) is detected only close to the cavity wall.} \label{fig: cartoon} \end{figure} \section{Conclusions} \label{sec: conclusions} From the combination of three independent tracers of dust and CO gas in the inner disks of intermediate-mass stars, we conclude that \begin{itemize} \item the recession of NIR CO emission to larger disk radii traces dust depletion in inner disks at $\approx$\,0.1--10\,au, providing key measurements of disk evolution in inner regions beyond reach of direct-imaging techniques; based on recent thermo-chemical modeling \citep{simon13}, this behavior seems to imply that these disk cavities may also be gas depleted; \item the multi-dimensional space of the several observables now available suggests that large cavities in Herbig disks form with either low ($\sim$\,5--10\,\%) or high ($\sim$\,20--35\,\%) NIR excess, a dichotomy that was not captured by the classic definition of ``pre-transitional'' disks; high-NIR GI disks seem to have residual inner dust belts/warps that have recently been inferred also from shadows and spirals at larger disk radii. \end{itemize} \begin{acknowledgements} We thank A. Carmona for providing CO data of HD169142, and acknowledge helpful feedback and discussions with A. Bosman, G. Dipierro, U. Gorti, G. Mulders, P. Pinilla, and E. van Dishoeck. We also thank the anonymous referee for suggestions that helped improve the presentation of this work. M.B. acknowledges funding from ANR of France under contract number ANR-16-CE31-0013 (Planet Forming Disks). 
This work is partly based on observations obtained at the Infrared Telescope Facility, which is operated by the University of Hawaii under contract NNH14CK55B with the National Aeronautics and Space Administration. This research has made use of the VizieR catalogue, CDS, Strasbourg, France (A\&AS 143, 23). \end{acknowledgements}
Jesuit Sophomores Shine in Their Play, TOC TOC!

Sergio Lopez-Elizondo, '21

For the past month and a half, the Theatre Arts class has been practicing and preparing to perform their comedic play TOC TOC. Before the play began, the director, Mr. Acevedo, addressed the crowd and gave a little background on the play's name. He mentioned that there exists a mental disorder called Obsessive-Compulsive Disorder, or OCD. In Spanish, OCD translates to Trastorno Obsesivo Compulsivo, or TOC, therefore giving the play its name, TOC TOC.

Six patients suffering from OCD have their patience tested in a waiting room. They are waiting for Dr. Cooper, one of the most renowned specialists in treating OCD, to arrive for their appointments. As they each begin to trickle into the waiting room, the audience begins to learn who they are and what their compulsions are. The characters are:

Fred (Rowen Maguire): A man who has Tourette's, which is very similar to OCD. Fred simply can't control his own voice and randomly yells out insults mid-conversation. He greets another character, Camilo, and his mid-conversation insults almost cause a fight.

Camilo (Julian Garcia): A man who cannot stop counting because of his calculation OCD. His compulsion becomes apparent when he opens the door to the waiting room as he finishes counting the 88 steps he had to climb. He says "5 floors, 88 steps," then says, "I have been waiting for this appointment for 5 years. That's 60 months, 1,826 days, 43,824 hours, 2,629,440 minutes, or 157,766,400 seconds that I have been waiting."

Lance (Luke Sullivan): A man suffering from nosophobia, an irrational fear of contracting diseases. He enters the waiting room rather quietly and with a big briefcase. Before he sits down, he pulls out a cylinder of Clorox wipes and does the obvious: he wipes down his seat.

John (Jacob Meyer): A man who has a verification OCD.
He cannot stop checking and verifying the simplest of things. Multiple times throughout the play he stands up and begins to yell about how he "forgot to turn the tap off" or "left the lights on, I just know it," and don't forget the 1,000 times he checks that he has his keys.

Willie (Gabe Tan): A man who suffers from both echolalia and palilalia, which means that he not only repeats back what others say, he also repeats words, phrases, or syllables that he himself has said. This OCD sets up a hilarious scene where he breaks down in front of the other characters and goes on a long rant, and just when the audience thinks it is over, he begins the rant again.

Bob (Miles Dikun): A man who has a phobia of treading on lines and disrupting order and symmetry. He opens the door to the waiting room only to hesitate to enter, as he finds that the floor of the room is made up of tiles with dividing lines. He can't step on the lines, he just can't. So he jumps all across the room in order to sit down.

Miles Dikun, an actor in the play, commented, "I have most enjoyed the bond that I've created with my fellow cast members. It's kinda weird how even though we are all different, this theatre class has given us a common ground to help and understand each other and become close with people you would have never thought you had anything in common with." Rowen Maguire also stated that he really liked the "practices when we just joked around and had fun. It really brought us together."

The play starts out with Fred alone in the waiting room. As other characters enter, his OCD is revealed. With time, the audience learns of each character's traits and compulsions through dialogue. When all six patients are finally seated, the audience gets the chance to really understand them as they converse and comfort each other during the seemingly never-ending wait for Dr. Cooper.
Many of the patients try to leave throughout the play because of the long wait, but the other characters always manage to convince them to stay. After the wait has become too long, Bob subtly brings up the topic of group therapy and how it has helped him in the past. Although hesitant at first, the other patients finally agree to try out this notion of group therapy. The group attempts to fix their disorders and fails miserably. They are disheartened by their failure, but they soon begin to realize that by helping one another try to overcome OCD, they have unconsciously overcome their own. They rejoice in their realizations and finally decide that the time has come to leave. They hug, high-five, say goodbye, and leave the waiting room one by one. Dr. Cooper never arrives; he wasn't needed, because the patients only needed themselves.

That sums up TOC TOC, by Rowen Maguire, Julian Garcia, Luke Sullivan, Jacob Meyer, Gabe Tan, Miles Dikun, and Mr. Acevedo. Stay tuned to The Roundup for more Jesuit theater news!
Now showing items 1-11 of 11

- 1645 - Analytic Number Theory
  [OWR-2016-53] (2016) - (06 Nov - 12 Nov 2016)
  Analytic number theory has flourished over the past few years, and this workshop brought together world leaders and young talent to discuss developments in various branches of the subject

- 1641 - Arbeitsgemeinschaft: Diophantine Approximation, Fractal Geometry and Dynamics
  [OWR-2016-48] (2016) - (09 Oct - 14 Oct 2016)
  Recently conjectures by Schmidt and by Davenport were solved in papers by Dzmitry Badziahin, Andrew Pollington and Sanju Velani, and by Victor Beresnevich. The methods which they used are the source for a growing number ...

- 1614 - Arbeitsgemeinschaft: The Geometric Langlands Conjecture
  [OWR-2016-20] (2016) - (03 Apr - 09 Apr 2016)
  The Langlands program is a vast, loosely connected, collection of theorems and conjectures. At quite different ends, there is the geometric Langlands program, which deals with perverse sheaves on the stack of $G$-bundles ...

- 1632 - Arithmetic Geometry
  [OWR-2016-38] (2016) - (07 Aug - 13 Aug 2016)
  Arithmetic geometry is at the interface between algebraic geometry and number theory, and studies schemes over the ring of integers of number fields, or their $p$-adic completions. An emphasis of the workshop was on p-adic ...

- 1631 - Computational Group Theory
  [OWR-2016-37] (2016) - (31 Jul - 06 Aug 2016)
  This was the seventh workshop on Computational Group Theory. It showed that Computational Group Theory has significantly expanded its range of activities. For example, symbolic computations with groups and their representations ...

- 1643 - Definability and Decidability Problems in Number Theory
  [OWR-2016-49] (2016) - (23 Oct - 29 Oct 2016)
  This workshop brought together experts working on variations of Hilbert's Tenth Problem and more general decidability issues for structures other than the ring of integers arising naturally in number theory and algebraic geometry.

- 1615 - Diophantische Approximationen
  [OWR-2016-21] (2016) - (10 Apr - 16 Apr 2016)
  This number theoretic conference was focused on a broad variety of subjects in (or closely related to) Diophantine approximation, including the following: metric Diophantine approximation, Mahler's method in transcendence, ...

- 1646 - Large Scale Stochastic Dynamics
  [OWR-2016-54] (2016) - (13 Nov - 19 Nov 2016)
  The goal of this workshop was to explore the recent advances in the mathematical understanding of the macroscopic properties which emerge on large space-time scales from interacting microscopic particle systems. There were ...

- 1603 - Lattices and Applications in Number Theory
  [OWR-2016-3] (2016) - (17 Jan - 23 Jan 2016)
  This is a report on the workshop on Lattices and Applications in Number Theory held in Oberwolfach, from January 17 to January 23, 2016. The workshop brought together people working in various areas related to the field: ...

- 1644c - Mini-Workshop: Computations in the Cohomology of Arithmetic Groups
  [OWR-2016-52] (2016) - (30 Oct - 05 Nov 2016)
  Explicit calculations play an important role in the theoretical development of the cohomology of groups and its applications. It is becoming more common for such calculations to be derived with the aid of a computer. This ...

- 1617 - Moduli spaces and Modular forms
  [OWR-2016-23] (2016) - (24 Apr - 30 Apr 2016)
  The roots of both moduli spaces and modular forms go back to the theory of elliptic curves in the 19th century. Both topics have seen an enormous growth in the second half of the 20th century, but the interaction between ...
Q: rails: loop within loop error

Can anybody tell why this code works?

<% @products.each do |p| %>
  <%= link_to p.name, product_path(p.id), :class => "title" %>
<% end %>

And this doesn't?

<% @products.in_groups_of(2).each do |product_array| %>
  <% product_array.each do |p| %>
    <%= link_to p.name, product_path(p.id), :class => "title" %>
  <% end %>
<% end %>

The code gives the error undefined method `name' for nil:NilClass. I have no clue why. Can anybody help?

A: in_groups_of appends nil if there are not enough records; that is why you get the nil:NilClass error.

>> %w(1 2 3).in_groups_of(2)
# => [["1", "2"], ["3", nil]]
>> %w(1 2 3).in_groups_of(2, false)
# => [["1", "2"], ["3"]]

See in_groups_of(number, fill_with = nil).

<% @products.in_groups_of(2, false).each do |product_array| %>
  <% product_array.each do |p| %>
    <%= link_to p.name, product_path(p.id), :class => "title" %>
  <% end %>
<% end %>

A: It sounds like you have an odd number of products. If you try to split an odd number into groups of two, the final slot in the last group will be nil. You can just guard against it:

<% unless p.nil? %>
  <%= link_to p.name, product_path(p.id), :class => "title" %>
<% end %>

to ensure that you have an instance.
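The padding behaviour described above is the whole story: `in_groups_of(n)` fills the short final group with `nil` unless you pass `false`. A small Python sketch of that semantics (`in_groups_of` below is a hypothetical helper written only to mirror the Rails method, not part of any library):

```python
# Sketch of Rails' Array#in_groups_of(number, fill_with = nil) semantics.
# Hypothetical helper for illustration; not Rails' actual implementation.
def in_groups_of(items, size, fill_with=None, pad=True):
    # Slice the list into consecutive chunks of `size` elements.
    groups = [items[i:i + size] for i in range(0, len(items), size)]
    if pad and groups and len(groups[-1]) < size:
        # Like Rails, pad the short final group with the fill value --
        # this is exactly where the nil (None) entries come from.
        groups[-1] = groups[-1] + [fill_with] * (size - len(groups[-1]))
    return groups

print(in_groups_of(["1", "2", "3"], 2))             # [['1', '2'], ['3', None]]
print(in_groups_of(["1", "2", "3"], 2, pad=False))  # [['1', '2'], ['3']]
```

Iterating over the padded version without a nil guard is what triggers the undefined method `name' for nil:NilClass error in the question.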
Captain Vijayakanth is CM front-runner for TN in 2016

October 27, 2014, by newswirehq

Captain Vijayakanth made his ambitions clear in his speech to his party workers over the last weekend. He asserted that with Superstar Rajinikanth refusing to be drawn onto the BJP's bandwagon, he was the front-runner for the chief ministership of the state. Captain will settle for nothing less than CM in 2016!

Elections in TN will be held sometime in 2016, unless there is a drastic change in plans in which the present ADMK government calls for dissolution of the present assembly and snap polls. Some political observers feel that this is a possibility, provided the ousted CM Jayalalitha comes out of her legal wrangles in the disproportionate-assets case, which has convicted her along with Sasikala and a few others. Incidentally, Jaya's appeal is to be heard in the Bangalore High Court today. However, that could be just the beginning of a fairly long-drawn process.

Some party workers at the venue where Vijayakanth laid out his plans felt, however, that if the BJP agrees to ally DMDK holding the CM chair, it would be Premalatha Vijayakanth who would be the party's choice, and even Vijayakanth would abide by it. The BJP camp in Tamil Nadu is amused by these developments; while the leadership of the state unit would take the actor ally's speech with a pinch of salt, they will also be left wondering if the DMDK is trying pressure tactics for a larger chunk of seat sharing. However, political observers feel that the 2016 elections in TN are a long way off, and Vijayakanth's utterances are premature. As the classic adage goes, two years is a long, long time in politics.

Modi may attend Maha swearing-in… but quandary over protocol?!
October 26, 2014, by newswirehq

PM Narendra Modi is likely to attend the swearing-in ceremony of the BJP or BJP-led government in Maharashtra, as and when it happens. However, officials of the state government's protocol wing are in a quandary over how the seating arrangements are to be made, just in case Modi does attend the grand swearing-in at the precincts of the Mantralaya. Leaders of the state BJP seem to be of the opinion that state officials should keep a watch on the swearing-in ceremony of the M. L. Khattar-led government today, to see how PM Modi is seated at that function. "We suggested them that they should watch where PM sits during the swearing-in ceremony of Haryana government today and accordingly make seating arrangements here," one leader stated. It looks like the Haryana government officials did not ponder the protocol challenge that much, as this video of today's swearing-in shows.

PM Modi meets press at BJP HQ – more of a mainline media 'selfie' opportunity!

October 25, 2014, by newswirehq

Prime Minister Narendra Modi took time off from his hectic schedule to meet a host of media people at the BJP party headquarters this afternoon (25th Oct). Most of the leading lights of prominent TV channels and print media were present. Party president Amit Shah welcomed the gathering, after which PM Modi addressed the media; his brief speech focused on his Clean India campaign and the constructive role the media was playing in informing people about it and propelling them into action. After the speech, he mixed with those present for informal interactions. While there was a promise from the PM of more such interactions in the coming times, the event was marked by the lack of any serious attempt by Shri Modi to take up questions on weightier issues.
For most in the media, it was a good opportunity to greet the PM in person and get a now-in-vogue 'selfie' shot. And, as per seasoned observers, none of them were complaining about the content.
# 5.8. Filenames and separate compilation

This section describes what files GHC expects to find, what files it creates, where these files are stored, and what options affect this behaviour.

Pathname conventions vary from system to system. In particular, the directory separator is "/" on Unix systems and "\" on Windows systems. In the sections that follow, we shall consistently use "/" as the directory separator; substitute this for the appropriate character for your system.

## 5.8.1. Haskell source files

Each Haskell source module should be placed in a file on its own.

Usually, the file should be named after the module name, replacing dots in the module name by directory separators. For example, on a Unix system, the module A.B.C should be placed in the file A/B/C.hs, relative to some base directory. If the module is not going to be imported by another module (Main, for example), then you are free to use any filename for it.

GHC assumes that source files are ASCII or UTF-8 only; other encodings are not recognised. However, invalid UTF-8 sequences will be ignored in comments, so it is possible to use other encodings such as Latin-1, as long as the non-comment source code is ASCII only.

## 5.8.2. Output files

When asked to compile a source file, GHC normally generates two files: an object file, and an interface file.

The object file, which normally ends in a .o suffix, contains the compiled code for the module.

The interface file, which normally ends in a .hi suffix, contains the information that GHC needs in order to compile further modules that depend on this module. It contains things like the types of exported functions, definitions of data types, and so on. It is stored in a binary format, so don't try to read one; use the --show-iface ⟨file⟩ option instead (see Other options related to interface files).

You should think of the object file and the interface file as a pair, since the interface file is in a sense a compiler-readable description of the contents of the object file. If the interface file and object file get out of sync for any reason, then the compiler may end up making assumptions about the object file that aren't true; trouble will almost certainly follow. For this reason, we recommend keeping object files and interface files in the same place (GHC does this by default, but it is possible to override the defaults as we'll explain shortly).

Every module has a module name defined in its source code (module A.B.C where ...).

The name of the object file generated by GHC is derived according to the following rules, where ⟨osuf⟩ is the object-file suffix (this can be changed with the -osuf option).

- If there is no -odir option (the default), then the object filename is derived from the source filename (ignoring the module name) by replacing the suffix with ⟨osuf⟩.
- If -odir ⟨dir⟩ has been specified, then the object filename is ⟨dir⟩/⟨mod⟩.⟨osuf⟩, where ⟨mod⟩ is the module name with dots replaced by slashes. GHC will silently create the necessary directory structure underneath ⟨dir⟩, if it does not already exist.

The name of the interface file is derived using the same rules, except that the suffix is ⟨hisuf⟩ (.hi by default) instead of ⟨osuf⟩, and the relevant options are -hidir ⟨dir⟩ and -hisuf ⟨suffix⟩ instead of -odir ⟨dir⟩ and -osuf ⟨suffix⟩ respectively.

For example, if GHC compiles the module A.B.C in the file src/A/B/C.hs, with no -odir or -hidir flags, the interface file will be put in src/A/B/C.hi and the object file in src/A/B/C.o.

For any module that is imported, GHC requires that the name of the module in the import statement exactly matches the name of the module in the interface file (or source file) found using the strategy specified in The search path. This means that for most modules, the source file name should match the module name.

However, note that it is reasonable to have a module Main in a file named foo.hs, but this only works because GHC never needs to search for the interface for module Main (because it is never imported). It is therefore possible to have several Main modules in separate source files in the same directory, and GHC will not get confused.

In batch compilation mode, the name of the object file can also be overridden using the -o ⟨file⟩ option, and the name of the interface file can be specified directly using the -ohi ⟨file⟩ option.

## 5.8.3. The search path

In your program, you import a module Foo by saying import Foo. In --make mode or GHCi, GHC will look for a source file for Foo and arrange to compile it first. Without --make, GHC will look for the interface file for Foo, which should have been created by an earlier compilation of Foo.

The strategy for looking for source files is as follows: GHC keeps a list of directories called the search path. For each of these directories, it tries appending ⟨basename⟩.⟨extension⟩ to the directory, and checks whether the file exists. The value of ⟨basename⟩ is the module name with dots replaced by the directory separator ("/" or "\", depending on the system), and ⟨extension⟩ is a source extension (hs, lhs) if we are in --make mode or GHCi.

When looking for interface files in -c mode, we look for interface files in the -hidir, if it's set. Otherwise the same strategy as for source files is used to try to locate the interface file.

For example, suppose the search path contains directories d1, d2, and d3, and we are in --make mode looking for the source file for a module A.B.C. GHC will look in d1/A/B/C.hs, d1/A/B/C.lhs, d2/A/B/C.hs, and so on.

The search path by default contains a single directory: "." (i.e. the current directory). The following options can be used to add to or change the contents of the search path:

-i⟨dir⟩[:⟨dir⟩]*
    This flag appends a colon-separated list of dirs to the search path.

-i
    resets the search path back to nothing.

This isn't the whole story: GHC also looks for modules in pre-compiled libraries, known as packages. See the section on packages (Packages) for details.

## 5.8.4. Redirecting the compilation output(s)

-o ⟨file⟩
    GHC's compiled output normally goes into a .hc, .o, etc., file, depending on the last-run compilation phase. The option -o file re-directs the output of that last-run phase to ⟨file⟩.

    Note: This "feature" can be counterintuitive: ghc -C -o foo.o foo.hs will put the intermediate C code in the file foo.o, name notwithstanding!

    This option is most often used when creating an executable file, to set the filename of the executable. For example:

        ghc -o prog --make Main

    will compile the program starting with module Main and put the executable in the file prog.

    Note: on Windows, if the result is an executable file, the extension ".exe" is added if the specified filename does not already have an extension. Thus

        ghc -o foo Main.hs

    will compile and link the module Main.hs, and put the resulting executable in foo.exe (not foo).

    If you use ghc --make and you don't use the -o, the name GHC will choose for the executable will be based on the name of the file containing the module Main. Note that with GHC the Main module doesn't have to be put in file Main.hs. Thus both

        ghc --make Prog

    and

        ghc --make Prog.hs

    will produce Prog (or Prog.exe if you are on Windows).

-dyno ⟨file⟩
    When using -dynamic-too, option -dyno ⟨file⟩ is the counterpart of -o. It redirects the dynamic output to ⟨file⟩.

-odir ⟨dir⟩
    Redirects object files to directory ⟨dir⟩. For example:

        $ ghc -c parse/Foo.hs parse/Bar.hs gurgle/Bumble.hs -odir `uname -m`

    The object files, Foo.o, Bar.o, and Bumble.o would be put into a subdirectory named after the architecture of the executing machine (x86, mips, etc).

    Note that the -odir option does not affect where the interface files are put; use the -hidir option for that. In the above example, they would still be put in parse/Foo.hi, parse/Bar.hi, and gurgle/Bumble.hi.

    Please also note that when doing incremental compilation, this directory is where GHC looks into to find object files from previous builds.

-ohi ⟨file⟩
    The interface output may be directed to another file bar2/Wurble.iface with the option -ohi bar2/Wurble.iface (not recommended).

    Warning: If you redirect the interface file somewhere that GHC can't find it, then the recompilation checker may get confused (at the least, you won't get any recompilation avoidance). We recommend using a combination of -hidir and -hisuf options instead, if possible.

    To avoid generating an interface at all, you could use this option to redirect the interface into the bit bucket: -ohi /dev/null, for example.

-dynohi ⟨file⟩
    When using -dynamic-too, option -dynohi ⟨file⟩ is the counterpart of -ohi. It redirects the dynamic interface output to ⟨file⟩.

-hidir ⟨dir⟩
    Redirects all generated interface files into ⟨dir⟩, instead of the default. Please also note that when doing incremental compilation (by ghc --make or ghc -c), this directory is where GHC looks into to find interface files.

-hiedir ⟨dir⟩
    Redirects all generated extended interface files into ⟨dir⟩, instead of the default. Please also note that when doing incremental compilation (by ghc --make or ghc -c), this directory is where GHC looks into to find extended interface files.

-stubdir ⟨dir⟩
    Redirects all generated FFI stub files into ⟨dir⟩. Stub files are generated when the Haskell source contains a foreign export or foreign import "&wrapper" declaration (see Using foreign export and foreign import ccall "wrapper" with GHC). The -stubdir option behaves in exactly the same way as -odir and -hidir with respect to hierarchical modules.

-dumpdir ⟨dir⟩
    Redirects all dump files into ⟨dir⟩. Dump files are generated when -ddump-to-file is used with other -ddump-* flags.

-outputdir ⟨dir⟩
    The -outputdir option is shorthand for the combination of -odir ⟨dir⟩, -hidir ⟨dir⟩, -hiedir ⟨dir⟩, -stubdir ⟨dir⟩ and -dumpdir ⟨dir⟩.

-osuf ⟨suffix⟩
    The -osuf ⟨suffix⟩ will change the .o file suffix for object files to whatever you specify. We use this when compiling libraries, so that objects for the profiling versions of the libraries don't clobber the normal ones.
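Taken together, the -odir/-hidir and -osuf/-hisuf rules described in this section amount to a small, deterministic path computation. The sketch below (in Python, purely as an illustration of the documented rules, not GHC's implementation; `output_paths` is a made-up name) derives where the object and interface files land:

```python
import os.path

def output_paths(module, source_file,
                 odir=None, hidir=None, osuf="o", hisuf="hi"):
    """Illustrative sketch of GHC's output-file naming rules (not GHC code)."""
    mod_path = module.replace(".", "/")        # A.B.C -> A/B/C
    stem, _ = os.path.splitext(source_file)    # strip .hs / .lhs

    # No -odir: the object name comes from the source filename, suffix
    # replaced. With -odir: <dir>/<mod>.<osuf>, dots become slashes.
    obj = f"{odir}/{mod_path}.{osuf}" if odir else f"{stem}.{osuf}"

    # Interface files follow the same rules, driven by -hidir/-hisuf.
    hi = f"{hidir}/{mod_path}.{hisuf}" if hidir else f"{stem}.{hisuf}"
    return obj, hi

print(output_paths("A.B.C", "src/A/B/C.hs"))
# ('src/A/B/C.o', 'src/A/B/C.hi')
print(output_paths("A.B.C", "src/A/B/C.hs", odir="build"))
# ('build/A/B/C.o', 'src/A/B/C.hi')
print(output_paths("Main", "foo.hs", osuf="prof.o", hisuf="prof.hi"))
# ('foo.prof.o', 'foo.prof.hi')
```

The last call mirrors the -osuf prof.o -hisuf prof.hi trick this section describes for keeping profiled and ordinary builds side by side in the same directory.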
-dynosuf ⟨suffix⟩

When using -dynamic-too, option -dynosuf ⟨suffix⟩ is the counterpart of -osuf. It changes the .dyn_o file suffix for dynamic object files.

-hisuf ⟨suffix⟩

Similarly, the -hisuf ⟨suffix⟩ option will change the .hi file suffix for non-system interface files (see Other options related to interface files).

The -hisuf/-osuf game is particularly useful if you want to compile a program both with and without profiling, in the same directory. You can say:

```
ghc ...
```

to get the ordinary version, and

```
ghc ... -osuf prof.o -hisuf prof.hi -prof -fprof-auto
```

to get the profiled version.

-dynhisuf ⟨suffix⟩

When using -dynamic-too, option -dynhisuf ⟨suffix⟩ is the counterpart of -hisuf. It changes the .dyn_hi file suffix for dynamic interface files.

-hiesuf ⟨suffix⟩

The -hiesuf ⟨suffix⟩ option will change the .hie file suffix for extended interface files to whatever you specify.

-hcsuf ⟨suffix⟩

Finally, the option -hcsuf ⟨suffix⟩ will change the .hc file suffix for compiler-generated intermediate C files.

## 5.8.5. Keeping Intermediate Files

The following options are useful for keeping (or not keeping) certain intermediate files around, when normally GHC would throw these away after compilation:

-keep-hc-file
-keep-hc-files

Keep intermediate .hc files when doing .hs-to-.o compilations via C (Note: .hc files are only generated by unregisterised compilers).

-keep-hi-files

Keep intermediate .hi files. This is the default. You may use -no-keep-hi-files if you are not interested in the .hi files.

-keep-hscpp-file
-keep-hscpp-files

Keep the output of the CPP pre-processor phase as .hscpp files. A .hscpp file is only created if a module gets compiled and uses the C pre-processor.
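As an illustrative sketch (the pattern rules and variable names are hypothetical, and GNU make is assumed), the two-flavour build above could be automated so that both sets of outputs coexist in one directory:

```make
# Build ordinary and profiled objects side by side:
# Foo.hs -> Foo.o / Foo.hi  and  Foo.prof.o / Foo.prof.hi
%.o : %.hs
	ghc -c $< $(HC_OPTS)

%.prof.o : %.hs
	ghc -c $< $(HC_OPTS) -osuf prof.o -hisuf prof.hi -prof -fprof-auto
```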
-keep-llvm-file
-keep-llvm-files

Implies: -fllvm

Keep intermediate .ll files when doing .hs-to-.o compilations via LLVM (Note: .ll files aren't generated when using the native code generator; you may need to use -fllvm to force them to be produced).

-keep-o-files

Keep intermediate .o files. This is the default. You may use -no-keep-o-files if you are not interested in the .o files.

-keep-s-file
-keep-s-files

Keep intermediate .s files.

-keep-tmp-files

Instructs the GHC driver not to delete any of its temporary files, which it normally keeps in /tmp (or possibly elsewhere; see Redirecting temporary files). Running GHC with -v will show you what temporary files were generated along the way.

## 5.8.6. Redirecting temporary files

-tmpdir ⟨dir⟩

If you have trouble because of running out of space in /tmp (or wherever your installation thinks temporary files should go), you may use the -tmpdir ⟨dir⟩ option to specify an alternate directory. For example, -tmpdir . says to put temporary files in the current working directory.

Alternatively, use your TMPDIR environment variable. Set it to the name of the directory where temporary files should be put. GCC and other programs will honour the TMPDIR variable as well.

## 5.8.9. The recompilation checker

-fforce-recomp

Turn off recompilation checking (which is on by default). Recompilation checking normally stops compilation early, leaving an existing .o file in place, if it can be determined that the module does not need to be recompiled.

-fignore-optim-changes

Ignore changes to optimisation flags when deciding whether a module needs to be recompiled.

-fignore-hpc-changes

Ignore changes to HPC (code coverage) flags when deciding whether a module needs to be recompiled.

In the olden days, GHC compared the newly-generated .hi file with the previous version; if they were identical, it left the old one alone and didn't change its modification date. In consequence, importers of a module with an unchanged output .hi file were not recompiled.

This doesn't work any more. Suppose module C imports module B, and B imports module A.
So changes to module A might require module C to be recompiled, and hence when A.hi changes we should check whether C should be recompiled. However, the dependencies of C will only list B.hi, not A.hi, and some changes to A (changing the definition of a function that appears in an inlining of a function exported by B, say) may conceivably not change B.hi one jot.

So now… GHC calculates a fingerprint (in fact an MD5 hash) of each interface file, and of each declaration within the interface file. It also keeps in every interface file a list of the fingerprints of everything it used when it last compiled the file. If the MD5 hash of the source file stored in the .hi file hasn't changed, the .o file's modification date is greater than or equal to that of the .hi file, and recompilation checking is on, then GHC will be clever. It compares the fingerprints on the things it needs this time with the fingerprints on the things it needed last time (gleaned from the interface file of the module being compiled); if they are all the same it stops compiling early in the process saying "Compilation IS NOT required". What a beautiful sight!

You can read about how all this works in the GHC commentary.

### 5.8.9.1. Recompilation for Template Haskell and Plugins

Recompilation checking gets a bit more complicated when using Template Haskell or plugins. Both these features execute code at compile time, and so if any of the executed code changes then it's necessary to recompile the module. Consider the top-level splice:

```haskell
main = $(foo bar [| () |])
```

When the module is compiled, foo bar [| () |] will be evaluated and the resulting code placed into the program. The dependencies of the expression are calculated and stored during module compilation. When the interface file is written, additional dependencies are created on the object file dependencies of the expression.
For instance, if foo is from module A and bar is from module B, the module will now depend on A.o and B.o; if either of these change then the module will be recompiled.

## 5.8.10. How to compile mutually recursive modules

GHC supports the compilation of mutually recursive modules. This section explains how.

Every cycle in the module import graph must be broken by a hs-boot file. Suppose that modules A.hs and B.hs are Haskell source files, thus:

```haskell
module A where
import B( TB(..) )

newtype TA = MkTA Int

f :: TB -> TA
f (MkTB x) = MkTA x
```

```haskell
module B where
import {-# SOURCE #-} A( TA(..) )

data TB = MkTB !Int

g :: TA -> TB
g (MkTA x) = MkTB x
```

Here A imports B, but B imports A with a {-# SOURCE #-} pragma, which breaks the circular dependency. Every loop in the module import graph must be broken by a {-# SOURCE #-} import; or, equivalently, the module import graph must be acyclic if {-# SOURCE #-} imports are ignored.

For every module A.hs that is {-# SOURCE #-}-imported in this way there must exist a source file A.hs-boot. This file contains an abbreviated version of A.hs, thus:

```haskell
module A where
newtype TA = MkTA Int
```

To compile these three files, issue the following commands:

```
ghc -c A.hs-boot    -- Produces A.hi-boot, A.o-boot
ghc -c B.hs         -- Consumes A.hi-boot, produces B.hi, B.o
ghc -c A.hs         -- Consumes B.hi, produces A.hi, A.o
ghc -o foo A.o B.o  -- Linking the program
```

There are several points to note here:

• The file A.hs-boot is a programmer-written source file. It must live in the same directory as its parent source file A.hs. Currently, if you use a literate source file A.lhs you must also use a literate boot file, A.lhs-boot; and vice versa.

• A hs-boot file is compiled by GHC, just like a hs file:

```
ghc -c A.hs-boot
```

When a hs-boot file A.hs-boot is compiled, it is checked for scope and type errors.
When its parent module A.hs is compiled, the two are compared, and an error is reported if the two are inconsistent.

• Just as compiling A.hs produces an interface file A.hi and an object file A.o, so compiling A.hs-boot produces an interface file A.hi-boot and a pseudo-object file A.o-boot:

  • The pseudo-object file A.o-boot is empty (don't link it!), but it is very useful when using a Makefile, to record when the A.hi-boot was last brought up to date (see Using make).
  • The hi-boot generated by compiling a hs-boot file is in the same machine-generated binary format as any other GHC-generated interface file (e.g. B.hi). You can display its contents with ghc --show-iface. If you specify a directory for interface files with the -hidir flag, then that affects hi-boot files too.

• If hs-boot files are considered distinct from their parent source files, and if a {-# SOURCE #-} import is considered to refer to the hs-boot file, then the module import graph must have no cycles. The command ghc -M will report an error if a cycle is found.

• A module M that is {-# SOURCE #-}-imported in a program will usually also be ordinarily imported elsewhere. If not, ghc --make automatically adds M to the set of modules it tries to compile and link, to ensure that M's implementation is included in the final program.

A hs-boot file need only contain the bare minimum of information needed to get the bootstrapping process started. For example, it doesn't need to contain declarations for everything that module A exports, only the things required by the module(s) that import A recursively.

A hs-boot file is written in a subset of Haskell:

• The module header (including the export list), and import statements, are exactly as in Haskell, and so are the scoping rules. Hence, to mention a non-Prelude type or class, you must import it.

• There must be no value declarations, but there can be type signatures for values.
For example:

```haskell
double :: Int -> Int
```

• Fixity declarations are exactly as in Haskell.

• Vanilla type synonym declarations are exactly as in Haskell.

• Open type and data family declarations are exactly as in Haskell.

• A closed type family may optionally omit its equations, as in the following example:

```haskell
type family ClosedFam a where ..
```

The .. is meant literally – you should write two dots in your file. Note that the where clause is still necessary to distinguish closed families from open ones. If you give any equations of a closed family, you must give all of them, in the same order as they appear in the accompanying Haskell file.

• A data type declaration can either be given in full, exactly as in Haskell, or it can be given abstractly, by omitting the '=' sign and everything that follows. For example:

```haskell
data T a b
```

In a source program this would declare T to have no constructors (a GHC extension: see Data types with no constructors), but in an hi-boot file it means "I don't know or care what the constructors are". This is the most common form of data type declaration, because it's easy to get right. You can also write out the constructors but, if you do so, you must write them out precisely as in the real definition.

If you do not write out the constructors, you may need to give a kind annotation (Explicitly-kinded quantification), to tell GHC the kind of the type variable, if it is not "*". (In source files, this is worked out from the way the type variable is used in the constructors.) For example:

```haskell
data R (x :: * -> *) y
```

You cannot use deriving on a data type declaration; write an instance declaration instead.

• Class declarations can either be given in full, exactly as in Haskell, or they can be given abstractly by omitting everything other than the class head: no superclasses, no class methods, no associated types.
However, if the class has any FunctionalDependencies, those given in the hs-boot file must be the same.

If the class declaration is given in full, the entire class declaration must be identical, up to a renaming of the type variables bound by the class head. This means:

• The class head must be the same.
• The class context must be the same, up to simplification of constraints.
• If there are any FunctionalDependencies, these must be the same.
• The order, names, and types of the class methods must be the same.
• The arity and kinds of any associated types must be the same.
• Default methods as well as default signatures (see DefaultSignatures) must be provided for the same methods, and must be the same.
• Default declarations for associated types must be provided for the same types, and must be the same.

To declare a class with no methods in an hs-boot file, it must have a superclass. If the class has no superclass constraints, add an empty one, e.g.

```haskell
class () => C a
```

This is a full class declaration, not an abstract declaration in which the methods were omitted.

• You can include instance declarations just as in Haskell; but omit the "where" part.

• The default role for abstract datatype parameters is now representational. (An abstract datatype is one with no constructors listed.) To get another role, use a role annotation. (See Roles.)

## 5.8.11. Module signatures

GHC 8.2 supports module signatures (hsig files), which allow you to write a signature in place of a module implementation, deferring the choice of implementation until a later point in time. This feature is not intended to be used without Cabal; this manual entry will focus on the syntax and semantics of signatures.

To start with an example, suppose you had a module A which made use of some string operations.
Using normal module imports, you would only be able to pick a particular implementation of strings:

```haskell
module Str where
type Str = String

empty :: Str
empty = ""

toString :: Str -> String
toString s = s
```

```haskell
module A where
import Str
z = toString empty
```

By replacing Str.hs with a signature Str.hsig, A (and any other modules in this package) are now parametrized by a string implementation:

```haskell
signature Str where
data Str
empty :: Str
toString :: Str -> String
```

We can typecheck A against this signature, or we can instantiate Str with a module that provides the following declarations. Refer to Cabal's documentation for a more in-depth discussion on how to instantiate signatures.

Module signatures actually consist of two closely related features:

• The ability to define an hsig file, containing type definitions and type signatures for values which can be used by modules that import the signature, and must be provided by the eventual implementing module, and
• The ability to inherit required signatures from packages we depend upon, combining the signatures into a single merged signature which reflects the requirements of any locally defined signature, as well as the requirements of our dependencies.

A signature file is denoted by an hsig file; every required signature must have an hsig file (even if it is an empty one), including required signatures inherited from dependencies. Signatures can be imported using an ordinary import Sig declaration.

hsig files are written in a variant of Haskell similar to hs-boot files, but with some slight changes:

• The header of a signature is signature A where ... (instead of the usual module A where ...).

• Import statements and scoping rules are exactly as in Haskell.
To mention a non-Prelude type or class, you must import it.

• Unlike regular modules, the defined entities of a signature include not only those written in the local hsig file, but also those from inherited signatures (as inferred from the -package-id ⟨unit-id⟩ flags). These entities are not considered in scope when typechecking the local hsig file, but are available for import by any module or signature which imports the signature. The one exception to this rule is the export list, described below.

If a declaration occurs in multiple inherited signatures, they will be merged together. For values, we require that the types from both signatures match exactly; however, other declarations may merge in more interesting ways. The merging operation in these cases has the effect of textually replacing all occurrences of the old name with a reference to the new, merged declaration. For example, if we have the following two signatures:

```haskell
signature A where
data T
f :: T -> T
```

```haskell
signature A where
data T = MkT
g :: T
```

the resulting merged signature would be:

```haskell
signature A where
data T = MkT
f :: T -> T
g :: T
```

• If no export list is provided for a signature, the exports of a signature are all of its defined entities merged with the exports of all inherited signatures.

If you want to reexport an entity from a signature, you must also include a module SigName export, so that all of the entities defined in the signature are exported.
For example, the following module exports both f and Int from Prelude:

```haskell
signature A(module A, Int) where
import Prelude (Int)
f :: Int
```

Reexports merge with local declarations; thus, the signature above would successfully merge with:

```haskell
signature A where
data Int
```

The only permissible implementation of such a signature is a module which reexports precisely the same entity:

```haskell
module A (f, Int) where
import Prelude (Int)
f = 2 :: Int
```

Conversely, any entity requested by a signature can be provided by a reexport from the implementing module. This is different from hs-boot files, which require every entity to be defined locally in the implementing module.

• GHC has experimental support for signature thinning, which is used when a signature has an explicit export list without a module export of the signature itself. In this case, the export list applies to the final export list after merging; in particular, you may refer to entities which are not declared in the body of the local hsig file.

The semantics in this case is that the set of required entities is defined exclusively by its exports; if an entity is not mentioned in the export list, it is not required. The motivation behind this feature is to allow a library author to provide an omnibus signature containing the type of every function someone might want to use, while a client thins down the exports to the ones they actually require. For example, supposing that you have inherited a signature for strings, you might write a local signature of this form, listing only the entities that you need:

```haskell
signature Str (Str, empty, append, concat) where
-- empty
```

A few caveats apply here. First, it is illegal to export an entity which refers to a locally defined type which itself is not exported (GHC will report an error in this case).
Second, signatures which come from dependencies which expose modules cannot be thinned in this way (after all, the dependency itself may need the entity); these requirements are unconditionally exported. Finally, any module reexports must refer to modules imported by the local signature (even if an inherited signature exported the module).

We may change the syntax and semantics of this feature in the future.

• The declarations and types from signatures of dependencies that will be merged in are not in scope when type checking an hsig file. To refer to any such type, you must declare it yourself:

```haskell
-- OK, assuming we inherited an A that defines T
signature A (T) where
-- empty

-- Not OK
signature A (T, f) where
f :: T -> T

-- OK
signature A (T, f) where
data T
f :: T -> T
```

• There must be no value declarations, but there can be type signatures for values. For example, we might define the signature:

```haskell
signature A where
double :: Int -> Int
```

A module implementing A would have to export the function double with a type definitionally equal to the signature. Note that this means you can't implement double using a polymorphic function double :: Num a => a -> a.

Note that signature matching does check if fixity matches, so be sure to specify the fixity of ordinary identifiers if you intend to use them with backticks.

• Fixity, type synonym, and open type/data family declarations are permitted as in normal Haskell.

• Closed type family declarations are permitted as in normal Haskell. They can also be given abstractly, as in the following example:

```haskell
type family ClosedFam a where ..
```

The .. is meant literally – you should write two dots in your file. The where clause distinguishes closed families from open ones.

• A data type declaration can either be given in full, exactly as in Haskell, or it can be given abstractly, by omitting the '=' sign and everything that follows.
For example:

```haskell
signature A where
data T a b
```

Abstract data types can be implemented not only with data declarations, but also newtypes and type synonyms (with the restriction that a type synonym must be fully eta-reduced, e.g., type T = ..., to be accepted). For example, the following are all valid implementations of the T above:

```haskell
-- Algebraic data type
data T a b = MkT a b

-- Newtype
newtype T a b = MkT (a, b)

-- Type synonym
data T2 a b = MkT2 a a b b
type T = T2
```

Data type declarations merge only with other data type declarations which match exactly, except abstract data, which can merge with data, newtype or type declarations. Merges with type synonyms are especially useful: suppose you are using a package of strings which has left the type of characters in the string unspecified:

```haskell
signature Str where
data Str
data Elem
```

If you locally define a signature which specifies type Elem = Char, you can now use head from the inherited signature as if it returned a Char.

If you do not write out the constructors, you may need to give a kind annotation to tell GHC what the kinds of the type variables are, if they are not the default *. Unlike regular data type declarations, the return kind of an abstract data declaration can be anything (in which case it probably will be implemented using a type synonym). This can be used to allow compile-time representation polymorphism (as opposed to run-time representation polymorphism), as in this example:

```haskell
signature Number where
import GHC.Types
data Rep :: RuntimeRep
data Number :: TYPE Rep
plus :: Number -> Number -> Number
```

Roles of type parameters are subject to the subtyping relation phantom < representational < nominal: for example, an abstract type with a nominal type parameter can be implemented using a concrete type with a representational type parameter. Merging respects this subtyping relation (e.g., nominal merged with representational is representational).
Roles in signatures default to nominal, which gives maximum flexibility on the implementor's side. You should only need to give an explicit role annotation if a client of the signature would like to coerce the abstract type in a type parameter (in which case you should specify representational explicitly). Unlike regular data types, we do not assume that abstract data types are representationally injective: if we have Coercible (T a) (T b), and T has role nominal, this does not imply that a ~ b.

• A class declaration can either be abstract or concrete. An abstract class is one with no superclasses or class methods:

```haskell
signature A where
class Key k
```

It can be implemented in any way, with any set of superclasses and methods; however, modules depending on an abstract class are not permitted to define instances (as of GHC 8.2, this restriction is not checked; see #13086). These declarations can be implemented by type synonyms of kind Constraint; this can be useful if you want to parametrize over a constraint in functions. For example, with the ConstraintKinds extension, this type synonym is a valid implementation of the signature above:

```haskell
module A where
type Key = Eq
```

A concrete class specifies its superclasses, methods, default method signatures (but not their implementations) and a MINIMAL pragma. Unlike regular Haskell classes, you don't have to explicitly declare a default for a method to make it optional vis-a-vis the MINIMAL pragma.

When merging class declarations, we require that the superclasses and methods match exactly; however, MINIMAL pragmas are logically ORed together, and a method with a default signature will merge successfully against one that does not have one.

• You can include instance declarations as in Haskell; just omit the "where" part. An instance declaration need not be implemented directly; if an instance can be derived based on instances in the environment, it is considered implemented.
For example, the following signature:

```haskell
signature A where
data Str
instance Eq Str
```

is considered implemented by the following module, since there are instances of Eq for [] and Char which can be combined to form an instance Eq [Char]:

```haskell
module A where
type Str = [Char]
```

Unlike other declarations, for which only the entities declared in a signature file are brought into scope, instances from the implementation are always brought into scope, even if they were not declared in the signature file. This means that a module may typecheck against a signature, but not against a matching implementation. You can avoid situations like this by never defining orphan instances inside a package that has signatures.

Instance declarations are only merged if their heads are exactly the same, so it is possible to get into a situation where GHC thinks that instances in a signature are overlapping, even if they are implemented in a non-overlapping way. If this is giving you problems, give us a shout.

• Any orphan instances which are brought into scope by an import from a signature are unconditionally considered in scope, even if the eventual implementing module doesn't actually import the same orphans.

Known limitations:

• Pattern synonyms are not supported.
• Algebraic data types specified in a signature cannot be implemented using pattern synonyms. See #12717.

## 5.8.12. Using make

It is reasonably straightforward to set up a Makefile to use with GHC, assuming you name your source files the same as your modules.
Thus:

```make
HC      = ghc
HC_OPTS = -cpp $(EXTRA_HC_OPTS)

SRCS = Main.lhs Foo.lhs Bar.lhs
OBJS = Main.o   Foo.o   Bar.o

.SUFFIXES : .o .hs .hi .lhs .hc .s

cool_pgm : $(OBJS)
	rm -f $@
	$(HC) -o $@ $(HC_OPTS) $(OBJS)

# Standard suffix rules
.o.hi:
	@:

.lhs.o:
	$(HC) -c $< $(HC_OPTS)

.hs.o:
	$(HC) -c $< $(HC_OPTS)

.o-boot.hi-boot:
	@:

.lhs-boot.o-boot:
	$(HC) -c $< $(HC_OPTS)

.hs-boot.o-boot:
	$(HC) -c $< $(HC_OPTS)

# Inter-module dependencies
Foo.o Foo.hc Foo.s    : Baz.hi          # Foo imports Baz
Main.o Main.hc Main.s : Foo.hi Baz.hi   # Main imports Foo and Baz
```

Note: Sophisticated make variants may achieve some of the above more elegantly. Notably, gmake's pattern rules let you write the more comprehensible:

```make
%.o : %.lhs
	$(HC) -c $< $(HC_OPTS)
```

What we've shown should work with any make.

Note the cheesy .o.hi rule: it records the dependency of the interface (.hi) file on the source. The rule says a .hi file can be made from a .o file by doing… nothing. Which is true.

Note that the suffix rules are all repeated twice, once for normal Haskell source files, and once for hs-boot files (see How to compile mutually recursive modules).

Note also the inter-module dependencies at the end of the Makefile, which take the form

```make
Foo.o Foo.hc Foo.s : Baz.hi   # Foo imports Baz
```

They tell make that if any of Foo.o, Foo.hc or Foo.s have an earlier modification date than Baz.hi, then the out-of-date file must be brought up to date. To bring it up to date, make looks for a rule to do so; one of the preceding suffix rules does the job nicely. These dependencies can be generated automatically by ghc; see Dependency generation.

## 5.8.13. Dependency generation

Putting inter-dependencies of the form Foo.o : Bar.hi into your Makefile by hand is rather error-prone. Don't worry, GHC has support for automatically generating the required dependencies.
Add the following to your Makefile:

```make
depend :
	ghc -M $(HC_OPTS) $(SRCS)
```

Now, before you start compiling, and any time you change the imports in your program, do make depend before you do make cool_pgm. The command ghc -M will append the needed dependencies to your Makefile.

In general, ghc -M Foo does the following. For each module M in the set Foo plus all its imports (transitively), it adds to the Makefile:

• A line recording the dependence of the object file on the source file.

```make
M.o : M.hs
```

(or M.lhs if that is the filename you used).

• For each import declaration import X in M, a line recording the dependence of M on X:

```make
M.o : X.hi
```

• For each import declaration import {-# SOURCE #-} X in M, a line recording the dependence of M on X:

```make
M.o : X.hi-boot
```

(See How to compile mutually recursive modules for details of hi-boot style interface files.)

If M imports multiple modules, then there will be multiple lines with M.o as the target.

There is no need to list all of the source files as arguments to the ghc -M command; ghc traces the dependencies, just like ghc --make (a new feature in GHC 6.4).

Note that ghc -M needs to find a source file for each module in the dependency graph, so that it can parse the import declarations and follow dependencies. Any pre-compiled modules without source files must therefore belong to a package [1].

By default, ghc -M generates all the dependencies, and then concatenates them onto the end of makefile (or Makefile if makefile doesn't exist) bracketed by the lines "# DO NOT DELETE: Beginning of Haskell dependencies" and "# DO NOT DELETE: End of Haskell dependencies". If these lines already exist in the makefile, then the old dependencies are deleted first.

Don't forget to use the same -package options on the ghc -M command line as you would when compiling; this enables the dependency generator to locate any imported modules that come from packages.
The package modules won't be included in the dependencies generated, though (but see the -include-pkg-deps option below).

The dependency generation phase of GHC can take some additional options, which you may find useful. The options which affect dependency generation are:

-ddump-mod-cycles

Display a list of the cycles in the module graph. This is useful when trying to eliminate such cycles.

-v2

Print a full list of the module dependencies to stdout. (This is the standard verbosity flag, so the list will also be displayed with -v3 and -v4; see Verbosity options.)

-dep-makefile ⟨file⟩

Use ⟨file⟩ as the makefile, rather than makefile or Makefile. If ⟨file⟩ doesn't exist, mkdependHS creates it. We often use -dep-makefile .depend to put the dependencies in .depend and then include the file .depend into Makefile.

-dep-suffix ⟨suffix⟩

Make dependencies that declare that files with suffix .⟨suf⟩⟨osuf⟩ depend on interface files with suffix .⟨suf⟩hi, or (for {-# SOURCE #-} imports) on .hi-boot. Multiple -dep-suffix flags are permitted. For example, -dep-suffix a_ -dep-suffix b_ will make dependencies for .hs on .hi, .a_hs on .a_hi, and .b_hs on .b_hi. If you do not use this flag then the empty suffix is used.

--exclude-module=⟨file⟩

Regard ⟨file⟩ as "stable"; i.e., exclude it from having dependencies on it.

-include-pkg-deps

Regard modules imported from packages as unstable, i.e., generate dependencies on any imported package modules (including Prelude, and all other standard Haskell libraries). Dependencies are not traced recursively into packages; dependencies are only generated for home-package modules on external-package modules directly imported by the home-package module. This option is normally only used by the various system libraries.

-include-cpp-deps

Output preprocessor dependencies.
This only has an effect when the CPP language extension is enabled. These dependencies are files included with the #include preprocessor directive (as well as transitive includes) and implicitly included files such as standard C preprocessor headers and a GHC version header. One exception to this is that GHC generates a temporary header file (during compilation) containing package version macros. As this is only a temporary file that GHC will always generate, it is not output as a dependency.

## 5.8.14. Orphan modules and instance declarations

Haskell specifies that when compiling module M, any instance declaration in any module "below" M is visible. (Module A is "below" M if A is imported directly by M, or if A is below a module that M imports directly.) In principle, GHC must therefore read the interface files of every module below M, just in case they contain an instance declaration that matters to M. This would be a disaster in practice, so GHC tries to be clever.

In particular, if an instance declaration is in the same module as the definition of any type or class mentioned in the head of the instance declaration (the part after the "=>"; see Instance termination rules), then GHC has to visit that interface file anyway. Example:

```haskell
module A where
instance C a => D (T a) where ...
data T a = ...
```

The instance declaration is only relevant if the type T is in use, and if so, GHC will have visited A's interface file to find T's definition.

The only problem comes when a module contains an instance declaration and GHC has no other reason for visiting the module. Example:

```haskell
module Orphan where
instance C a => D (T a) where ...
class C a where ...
```

Here, neither D nor T is declared in module Orphan. We call such modules "orphan modules". GHC identifies orphan modules, and visits the interface file of every orphan module below the module being compiled.
This is usually wasted work, but there is no avoiding it. You should therefore do your best to have as few orphan modules as possible.\n\nFunctional dependencies complicate matters. Suppose we have:\n\nmodule B where\ninstance E T Int where ...\ndata T = ...\n\nIs this an orphan module? Apparently not, because T is declared in the same module. But suppose class E had a functional dependency:\n\nmodule Lib where\nclass E x y | y -> x where ...\n\nThen in some importing module M, the constraint (E a Int) should be \u201cimproved\u201d by setting a = T, even though there is no explicit mention of T in M.\n\nThese considerations lead to the following definition of an orphan module:\n\n\u2022 An orphan module orphan module contains at least one orphan instance or at least one orphan rule.\n\n\u2022 An instance declaration in a module M is an orphan instance if\n\n\u2022 The class of the instance declaration is not declared in M, and\n\u2022 Either the class has no functional dependencies, and none of the type constructors in the instance head is declared in M; or there is a functional dependency for which none of the type constructors mentioned in the non-determined part of the instance head is defined in M.\n\nOnly the instance head counts. In the example above, it is not good enough for C\u2019s declaration to be in module A; it must be the declaration of D or T.\n\n\u2022 A rewrite rule in a module M is an orphan rule orphan rule if none of the variables, type constructors, or classes that are free in the left hand side of the rule are declared in M.\n\nIf you use the flag -Worphans, GHC will warn you if you are creating an orphan module. Like any warning, you can switch the warning off with -Wno-orphans, and -Werror will make the compilation fail if the warning is issued.\n\nYou can identify an orphan module by looking in its interface file, M.hi, using the --show-iface \u27e8file\u27e9 mode. 
If there is a [orphan module] on the first line, GHC considers it an orphan module.\n\n [1] This is a change in behaviour relative to 6.2 and earlier.","date":"2023-03-22 12:09:18","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.343684583902359, \"perplexity\": 3212.3710989085735}, \"config\": {\"markdown_headings\": true, \"markdown_code\": false, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2023-14\/segments\/1679296943809.76\/warc\/CC-MAIN-20230322114226-20230322144226-00370.warc.gz\"}"}
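As an illustration of how the dependency-generation flags above are typically wired into a Makefile-based build (a hypothetical fragment; the target name, the module name Main.hs, and the .depend file are illustrative choices, not taken from the guide):

```make
# Regenerate Haskell dependencies into .depend (created if it doesn't exist).
depend :
	ghc -M -dep-makefile .depend Main.hs

# Pull the generated dependency rules into this Makefile.
-include .depend
```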
\section{Introduction} \label{sec:intro} The prediction and the discovery of stellar mass black holes certainly were a great achievement of the last century. However, it is still unclear if astrophysical black holes can possess a significant electric charge. The main objection is related to the common observation that there is no charge in the standard astrophysical objects like stars and planets. We know, however, that all these objects have an electric charge which is nonzero. As an example, the charge of the Sun computed following Rosseland (1924) is equivalent to a few hundred coulombs. However, from the point of view of relativistic astrophysics, it is insignificant. There is also a theoretical objection\cite{eardley1975}, related to the fact that a compact object with a net charge greater than $\sim 100 \times (M/M_\odot) \; C$ would be rapidly discharged by interaction with the surrounding plasma. \\ The critical remarks we can address to these two objections are the following: \\ 1) Regarding the observations, we know that they depend on the criteria and precision of the experiment. In fact, new space-based missions will be able to measure the position of stars with a precision of $1 \; \mu as$. This will be sufficient to scrutinize the possible gravitational lensing and evidence the part due to the charge of the black hole.\\ 2) From the theoretical point of view, we know that the Reissner-Nordstr\"om metric allows a maximum charge given by: $ Q_{max}= \sqrt {G}\times M \simeq 1.7 \cdot 10^{20} (M/M_\odot) \; C$. We want to stress that the objection\cite{eardley1975} is not a theoretical limit since it depends on the modelization of the local environment and presupposes that this charge is static. We cannot exclude a peculiar situation when we could dynamically form a charged black hole that could evolve after formation to a complete or partial discharge. Here the key assumption is related to a dynamical approach.
Without this fence of a static charge for an astrophysical object we can explore how a compact object could acquire a net charge. In this article, we propose an astrophysical scenario in order to obtain a charged black hole. Our basis is that we have observed numerous compact objects with a high magnetic field ($B \simeq 10^{13} \; Gs$) and with high rotation velocity ($\Omega \simeq 10^{3} \; rad.s^{-1}$). The aim of this article is to show that there is no reason to believe that the charge of a new-born black hole is negligible. \section{Model description} As it is believed, stellar black holes are born most likely as a result of supernova explosions of massive stars\cite{knigashefa2}. At the later stages of evolution this sort of a star consists of a compact and dense core and an extensive rare envelope. Finally, the core loses stability and collapses into a black hole. The envelope is erupted out and observed as a supernova phenomenon\footnote{Strictly speaking, there are other models of the stellar black hole creation, for instance, with no supernova explosion (so-called silent collapse\cite{knigashefa2}).}. During all the process the collapsing core is surrounded by some plasma which is a very good conductor. Usually, it is considered as a critical argument that the core and, consequently, the new-born black hole cannot have any significant charge. However, it disregards that the core rotates and possesses a strong magnetic field. If the resistance of the substance is negligible, the electric field in the comoving frame of reference must be zero. 
In the static frame of reference the electric field is defined by the Lorentz transform: \begin{align} \vec E &=-\dfrac{\left[\dfrac{\vec \upsilon}{c}\times \vec B\right]}{\sqrt{1-\dfrac{\upsilon^2}{c^2}}}\label{lifsh1} \\ \notag\\ \vec E &=-\left[\dfrac{\vec \upsilon}{c}\times \vec B\right]\qquad \text{in the nonrelativistic limit}\label{lifsh2} \end{align} Generally speaking, the charge density $\rho\sim\mathop{\mathrm{div}}\vec E$ is nonzero. Consequently, the core charge need not be obligatory equal to zero. In order to estimate the charge of the core we use the following procedure: \begin{enumerate} \item Magnetic field calculation around the collapsing core. \item Determination of the surface velocity and electric field calculation in accordance with (\ref{lifsh1}). \item The charge calculation from the Maxwell equation \begin{equation} Q=\dfrac{1}{4\pi}\oint \vec E\, d\!\vec S \label{maxwell} \end{equation} \end{enumerate} We make several assumptions to simplify the solution. First of all, the core and the substance surrounding the core are supposed to have infinite conductivity (see substantiating calculations in the {\it Discussion}). We presume the gravitational field around the core to be the Schwarzschild one and the collapse to be spherically symmetric. Of course, for the rotating core it cannot be precisely correct. However, if the rotation is not very rapid, the deviation from the Schwarzschild metric should not be very big. Acceptability of the supposition will be discussed in the {\it Discussion}. We presume that the magnetic field has only the dipole component during the collapse and the axis of rotation coincides with the axis of the magnetic dipole. We make also some more assumptions, but they will be mentioned during the solution itself. \section{Calculations} Hereafter $t$, $r$, $\theta$, and $\phi$ are the standard Schwarzschild coordinates, $r_g \equiv \dfrac{2 G M }{c^2}$. 
We use the orthonormal tetrad frame \begin{equation} \left( \vec e_0= \dfrac{1}{\sqrt{1-r_g/r}}\;\partial_t,\quad \vec e_1= \sqrt{1-\dfrac{r_g}{r}}\;\partial_r,\quad \vec e_2= \dfrac{1}{r}\;\partial_\theta,\quad \vec e_3= \dfrac{1}{r\sin\theta}\;\partial_\phi \right) \label{frame} \end{equation} The corresponding coframe of differential forms looks like: \begin{equation} \left(\omega^0=\sqrt{1-\dfrac{r_g}{r}}\;dt,\quad \omega^1= \dfrac{dr}{\sqrt{1-r_g/r}},\quad \omega^2= r\;d\theta,\quad \omega^3= r\sin\theta\;d\phi \right) \nonumber \end{equation} With this choice of the frame the metric has the Lorentz form $g_{\alpha\beta}\equiv\eta_{\alpha\beta}$. The quantities $\vec B$, $\vec E$, $\vec F$, and $\vec\upsilon$ are three-vectors; their components are given in the spatial orthonormal triad set of basis three-vectors, corresponding to the chart~(\ref{frame}). $$ \left(\vec e_r= \sqrt{1-\dfrac{r_g}{r}}\;\partial_r,\quad \vec e_\theta= \dfrac{1}{r}\;\partial_\theta,\quad \vec e_\phi= \dfrac{1}{r\sin\theta}\;\partial_\phi \right) $$ \subsection{Magnetic field calculation} The dipole magnetic field of a spherically symmetric massive body is described by the formulae\cite{ginzburg65,anderson70}: \begin{align} B_r &=A \left[ -\ln\left(1- \dfrac{r_g}{r}\right)-\dfrac{r_g}{r} -\dfrac{r^2_g}{2 r^2} \right]\cos\theta \label{magn1} \\ B_\theta &=A \left[ \left(\dfrac{r}{r_g}-1\right)\ln\left(1- \dfrac{r_g}{r}\right)+1-\dfrac{r_g}{2 r} \right] \sqrt{\frac{r^2_g}{r(r-r_g)}}\cdot \sin\theta \label{magn2} \end{align} The equations are valid for all the space out of the core, but only the surface of the collapsing core is the object of our interest, and we mean by $r$ hereafter the radius of the core. $A$ is a coefficient proportional to the dipole momentum of the system. In order to obtain the magnetic field around the collapsing core we assume\cite{anderson70} the collapse to be quasistatic . 
Of course, this supposition is artificial, especially at the last stages of collapse, but it gives us the opportunity to proceed with a strict examination of the problem in the framework of general relativity. If the collapse is quasistatic, the magnetic field around the core is always described by (\ref{magn1},\ref{magn2}) but the coefficient $A$ is no longer constant\cite{anderson70}, it depends upon the radius of the collapsing core $r$. We can deduce the dependence from the infinite conductivity of the star and continuity of the normal component of the magnetic field on the surface of the core. In our case it can be written in the form (see Ref.~\refcite{anderson70}, equation (24) for more details): $B_r r^2 \sin^2\theta=\mbox{\it const}$. Since the collapse is radial, the $\theta$ coordinate of a surface element considered remains constant during it and, consequently, ($B_r r^2$) and ($B_r r^2 / \cos\theta$) are also constant. We define a new quantity $D\equiv B_r \dfrac{r^2}{r^2_g \cos\theta}$ which thus remains constant during the collapse (in contrast to $A$). It should be derived from the initial conditions.
We obtain from (\ref{magn1}): \begin{align} &A\left[ -\dfrac{r^2}{r^2_g}\ln\left(1- \dfrac{r_g}{r}\right)-\dfrac{r}{r_g} -\dfrac12 \right]= D\\ &A=D\left[ -\dfrac{r^2}{r^2_g}\ln\left(1- \dfrac{r_g}{r}\right)-\dfrac{r}{r_g} -\dfrac12 \right]^{-1} \label{a} \end{align} Substituting this $A$ into (\ref{magn1}) and (\ref{magn2}), we get the formulas describing the magnetic field around the collapsing core: \begin{align} B_r &= D\dfrac{r^2_g}{r^2}\cos\theta \\ B_\theta &=D \; \sin\theta\; \sqrt{\frac{r^2_g}{r(r-r_g)}} \left( \dfrac{\left(\dfrac{r}{r_g}-1\right)\ln\left(1- \dfrac{r_g}{r}\right)+1 -\dfrac{r_g}{2 r}}{ -\left(\dfrac{r}{r_g}\right)^2 \ln\left(1- \dfrac{r_g}{r}\right)-\dfrac{r}{r_g} -\dfrac12}\right) \label{magth} \end{align} \subsection{Calculation of the charge created by the collapsing core rotation in the magnetic field} The tangential velocity of a particle falling on a black hole is defined by the angular momentum conservation condition\cite{teorpol}. For a non-relativistic particle falling in the equatorial plane with small angular momentum $\mathfrak{M}$ we have: \begin{equation} \upsilon_{\phi} = \dfrac{\alpha c}{r}\sqrt{\dfrac{r-r_g}{r}} \nonumber \end{equation} Here $\alpha\equiv\dfrac{\mathfrak{M}}{mc}$ is a constant. Since the angular momentum of the collapsing core also remains constant, by analogy we can suggest the tangential velocity of the surface to be \begin{equation} \upsilon_{\phi} = \dfrac{\alpha c}{r}\sqrt{\dfrac{r-r_g}{r}}\: \sin\theta \label{tangen} \end{equation} Here $\sin\theta$ provides solid body rotation of the surface, and $\alpha$ is as before a constant which should be derived from the initial conditions. In order to calculate electric field intensity, we should make some assumptions about the substance surrounding the collapsing core. It is most likely hot but relatively rarefied plasma. As it was already mentioned, we consider it as a perfect conductor. Since the magnetic field of the core is very strong at the late stages of the collapse, we can expect that the plasma is frozen to the magnetic field and corotates. Consequently, its tangential velocity $\upsilon_{\phi}$ (just around the surface of the core) is equal to (\ref{tangen}). The question of $\upsilon_{r}$ and $\upsilon_{\theta}$ components of the plasma velocity is not so clear. We speculate that they are equal to zero. Properly speaking, in the nonrelativistic case they cannot make a contribution to the electric field flux (since $B_{\phi}=0$), but if they are relativistic, they contribute to the $\upsilon^2$ in (\ref{lifsh1}). Hence, in our case the supposition $\upsilon_{r}=0$, $\upsilon_{\theta}=0$ means only that the velocities are much less than the speed of light. Finally, for the substance velocity we have: \begin{equation} \upsilon_{r}=0 \quad \upsilon_{\theta}=0 \quad \upsilon_{\phi} = \dfrac{\alpha c}{r}\sqrt{\dfrac{r-r_g}{r}}\: \sin\theta \label{velocity} \end{equation} Since we presume the rotation of the core to be slow, the ratio $\alpha/r_g$ is small and $\upsilon_{\phi} \ll c$; in any case near the gravitational radius $\upsilon_{\phi} \to 0$. We can therefore use (\ref{lifsh2}) to calculate the electric field.
Combining (\ref{magth}) and (\ref{velocity}), we obtain for the electric field just above the core surface: \begin{equation} E_r =\frac{\alpha D r_g}{r^2} \left( \dfrac{\left(\dfrac{r}{r_g}-1\right)\ln\left(1- \dfrac{r_g}{r}\right)+1 -\dfrac{r_g}{2 r}}{ -\left(\dfrac{r}{r_g}\right)^2 \ln\left(1- \dfrac{r_g}{r}\right)-\dfrac{r}{r_g} -\dfrac12}\right) \sin^2\theta \nonumber \end{equation} Integrating it over the sphere surrounding the core and using equation (\ref{maxwell}), we obtain for the charge: \begin{equation} Q =\frac{\alpha D r_g}{3 \pi} \left( \dfrac{\left(\dfrac{r}{r_g}-1\right)\ln\left(1- \dfrac{r_g}{r}\right)+1 -\dfrac{r_g}{2 r}}{ -\left(\dfrac{r}{r_g}\right)^2 \ln\left(1- \dfrac{r_g}{r}\right)-\dfrac{r}{r_g} -\dfrac12}\right) \label{charge} \end{equation} The charge is positive if the angular velocity of the core and its magnetic moment are codirectional. \section{Discharge processes and the new-born black hole charge estimation} The charge described by (\ref{charge}), reaches its maximum at $r\simeq 1.3 r_g$ and drops to zero as $|\ln((r-r_g)/r_g)|^{-1}$ when $r$ tends to $r_g$. Seemingly, it should mean that the charge of the formed black hole is zero. However, we have not yet taken into account gravitational force and its influence on the discharge processes. Let us consider a resting probe particle of mass $m$ and charge $q$ (let us denote its specific charge by $\mu$). Electric field acting on the particle consists of several components: electric field (\ref{lifsh1}) produced by the plasma rotation in the magnetic field and electrostatic field created by the excess of nonequilibrium charge (with respect to (\ref{charge})) which has not discharged. Magnetic field impedes the discharge (equilibrium quantity of charge, described by (\ref{charge}), is always nonzero). However, since the equilibrium charge approaches zero when $r\to r_g$, we can ignore for simplicity the influence of the magnetic field at the last stages of the collapse. 
The electrostatic force acting on the probe particle can be taken as: \begin{equation} F_e=\dfrac{q Q}{r^2} \end{equation} Gravitational force acting on the particle is\cite{frolovnovikov,zeldnov1}: \begin{equation} F_g=-\dfrac{G M m}{r^2 \sqrt{1-\dfrac{r_g}{r}}} \end{equation} Both the forces directed radially. The discharge (by particles with specific charge $\mu$) will be stopped by the gravitational field, when $F_e+F_g=0$. \begin{align} \dfrac{q Q}{r^2}&=\dfrac{G M m}{r^2 \sqrt{1-\dfrac{r_g}{r}}} \nonumber\\ Q&=\dfrac{r_g c^2}{2\mu}\sqrt{\dfrac{r}{r-r_g}} \label{gcharge} \end{align} The last equation represents the charge which can be held by the gravitational field. It is represented on {\it Fig.}~\ref{fig} by the dot line)\footnote{Actually, the figure represents the left (solid line) and the right (dot line) parts of the equation (\ref{main}), but they are proportional to the charge created by the collapsing core rotation in the magnetic field and to the charge which can be held by the gravitational field, respectively, with the coefficient depending upon the parameters of the system.}. The equilibrium charge described by (\ref{charge}) is represented by the solid line. The graphs have the only crossing point. When it is reached, the discharge is stopped by the gravitational field. Consequently, we can take the charge of the collapsar at this point as an estimation of the charge of new-born black hole. 
We equate expressions (\ref{charge}) and (\ref{gcharge}): \begin{equation} \frac{\alpha D r_g}{3 \pi} \left( \dfrac{\left(\dfrac{r}{r_g}-1\right)\ln\left(1- \dfrac{r_g}{r}\right)+1 -\dfrac{r_g}{2 r}}{ -\left(\dfrac{r}{r_g}\right)^2 \ln\left(1- \dfrac{r_g}{r}\right)-\dfrac{r}{r_g} -\dfrac12}\right) = \dfrac{r_g c^2}{2\mu}\sqrt{\dfrac{r}{r-r_g}}\nonumber \end{equation} Introducing a new dimensionless constant \begin{equation} \mathfrak{Y}=\dfrac{3\pi c^2}{2\alpha \mu D} \label{y} \end{equation} we finally obtain: \begin{equation} \left( \dfrac{\left(\dfrac{r}{r_g}-1\right)\ln\left(1- \dfrac{r_g}{r}\right)+1 -\dfrac{r_g}{2 r}}{ -\left(\dfrac{r}{r_g}\right)^2 \ln\left(1- \dfrac{r_g}{r}\right)-\dfrac{r}{r_g} -\dfrac12}\right)=\mathfrak{Y}\sqrt{\dfrac{r}{r-r_g}} \label{main} \end{equation} In order to find the charge we should solve this equation and substitute the obtained radius $r$ into (\ref{gcharge}) or (\ref{charge}). If $\mathfrak{Y}$ is very small ($-\ln\mathfrak{Y}\gg 1$), then $-\ln(r/r_g-1)\gg 1$, and the equation can be simplified: \begin{equation} \dfrac{1}{2 \mathfrak{Y}}=\sqrt{\frac{r_g}{r-r_g}}\ln\frac{r_g}{r-r_g} \label{mainsimple} \end{equation} \section{Application of the results for calculation of the charge of an astronomical black hole} Let us estimate the electric charge of a real new-born black hole. We have already mentioned that a sun mass black hole is born by the collapse of a massive star central part. As it is believed, neutron stars are also born as a consequence of massive star core collapses, but of lower masses. Consequently, the characteristic physical parameters of the core can be estimated from the parameters of neutron stars. The radius of a neutron star is $10-15$~km, the mass is $1.4-1.8 M_\odot$. Pulsars have the magnetic field $10^{12}-10^{13}$~{Gs}, the shortest known period of revolution\cite{knigashefa1} of a pulsar is $1.5$~ms. We take the mass of the collapsing core $3 M_\odot$ ($r_g=9\cdot 10^5$~cm). 
It is heavier than a neutron star, as required to form a black hole. Hence, we should consider its physical parameters when its radius is bigger. Using the above-mentioned analogy, we presume that when the radius of the collapsing core is $20$~{km} its period of revolution is $10^{-3}$~{s} and the magnetic field intensity on its pole is $10^{13}$~{Gs}. As the specific charge $\mu$ we take the specific charge of the electron (or positron). If the charge $Q$ is negative (the angular velocity of the core and its magnetic moment are oppositely directed), it is obvious. But even in the case when the charge of the core is positive, the temperature at the last stages of the collapse is so high that the positron concentration is sufficient to provide the discharge. From this we calculate $\alpha$, $D$ and $\mathfrak{Y}$. The formula for the period of revolution coincides with the classical one: $T_{\text{\it rev}}=\dfrac{2\pi r}{c \upsilon_{\phi}}$. Substituting $\upsilon_{\phi}$ from (\ref{velocity}), we finally find $\alpha\simeq 11$~km. We find $\mathfrak{Y}=1.4\cdot 10^{-16}$. Since $-\ln\mathfrak{Y}\simeq 36\gg 1$, we can use (\ref{mainsimple}). Solving the equation, we have: \begin{align} \sqrt{\frac{r-r_g}{r_g}}&=1.77\cdot 10^{-14}\label{rad}\\ Q&=1.4\cdot 10^{13}\: \text{Coulombs} \label{q} \end{align} The maximum charge of the core is reached (in accordance with (\ref{charge})), when the radius is $r\simeq 1.3 r_g$. It is equal to $Q_m=4.3\cdot 10^{14}$~{Coulombs}, thirty times higher than the final one. \begin{figure} \centerline{\epsfig{file=5graph1.eps, width=10cm}} \caption{Represents the left (solid line) and the right (dot line) parts of equation (\ref{main}) for the case when $\mathfrak{Y}=0.01$.
They are proportional to the charge created by the collapsing core rotation in the magnetic field and to the charge which can be held by the gravitational field, respectively, with the coefficient depending upon the parameters of the system.} \label{fig} \end{figure} \section{Discussion} The foregoing calculations may provoke several objections. First of all, the radius (\ref{rad}), where the discharge is stopped by the gravitational field, is too close to the Schwarzschild sphere. The disturbance of the sphere produced by the core rotation is much stronger, and the adduced solution is certainly not valid for this region. Moreover, we used a lot of strong simplifying suppositions, such as the calculation of the magnetic field on the assumption of quasistatic collapse. These remarks are correct, but they do not affect significantly the conclusions of the article. The purpose of the work was to show that a new-born black hole can possess a big electric charge. The charge of the core surpasses $10^{14}$~C since the radius of the collapsing core becomes less than $r\simeq 8 r_g$, where the departure of the metric from the Schwarzschild one is not so significant. Moreover, the final charge depends weakly on the parameter (\ref{rad}) and, consequently, on the details of the discharge processes. Of course, the estimation (\ref{q}) can be inaccurate, but nevertheless we have strong reasons to presume the final charge to be bigger than $10^{13}$~C. Initially, the star is not electrically charged. As we have shown, because of the magnetodynamical processes a charge separation appears. Then what happens with the charge separated from the forming black hole? Of course, it is emitted to the substance of the envelope; a further destiny of the charge depends on the envelope evolution. The collapse of a core of a massive star is considered numerically in Ref.~\refcite{ardeljan05}. 
It is shown that the differential rotation of the envelope appears, resulting in magnetic field generation. At some moment magnetohydrodynamical instability arises; it leads to the twisting of the magnetic lines and closed magnetic vortex generation. The magnetic field structure becomes very complex. Finally, magnetic pressure increases so that it ejects the envelope to the outer space producing a supernova explosion. The charge evolved from the core moves along the magnetic lines and is trapped by the forming vortexes. During the supernova explosion the envelope substance is erupted out together with the charge frozen in the magnetic vortexes. In the {\it model description} we presumed that the core and the substance surrounding the core have infinite conductivity. This statement can be proved numerically. Conductivity of the totally ionized plasma depends only on its temperature and does not depend on the density\cite{knigashefa1}: \begin{equation} \sigma=8\cdot 10^{-4} T^\frac32 _e \left[\frac{1}{\text{Ohm}\cdot \text{m}}\right]\nonumber \end{equation} Let us consider a sphere of the radius of $20$~{km} (characteristic for the collapsing core) and with electric charge $1.4\cdot 10^{13}$~C merged into the plasma with the temperature $T_e=10^{6}$~K. Then electric field intensity around the sphere is $6\cdot 10^{14}$~$\text{V}/\text{m}$, current density is $5\cdot 10^{20}\text{A}/\text{m}^2$, and the total current is $2.5\cdot 10^{30}\text{A}$. The sphere would be discharged in $\sim 10^{-17}$~s. Of course, this time does not have a direct physical meaning, but this proves the negligibility of the electrical resistance in the problem considered. It is important to notice that though the obtained charge of an astrophysical black hole is big, the charge to mass ratio is small $Q/(\sqrt{G} M) \sim 10^{-7}$, and it is not sufficient to affect significantly either the gravitational field of the star or the dynamics of its collapse.
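Two of the numerical estimates quoted in the paper — the radius at which the discharge stops, eq. (\ref{rad}), and the $\sim 10^{-17}$~s discharge time — can be cross-checked with a short script (plain Python, SI units; a sketch, not part of the original text):

```python
import math

# Check 1: the simplified balance equation (mainsimple),
#   1/(2Y) = sqrt(r_g/(r - r_g)) * ln(r_g/(r - r_g)).
# With x = sqrt(r_g/(r - r_g)) this reads 2*x*ln(x) = 1/(2Y);
# solve by fixed-point iteration x = C / ln(x).
Y = 1.4e-16                 # dimensionless constant computed in the text
C = 1.0 / (4.0 * Y)
x = C
for _ in range(100):
    x = C / math.log(x)
eps = 1.0 / x               # sqrt((r - r_g)/r_g)
print(f"sqrt((r-r_g)/r_g) ~ {eps:.2e}")   # ~ 1.77e-14, as quoted

# Check 2: the plasma discharge time. From dQ/dt = -sigma*Q/eps0 the
# relaxation time is tau = eps0/sigma, independent of Q and of the radius.
T_e = 1.0e6                         # plasma temperature, K
sigma = 8e-4 * T_e**1.5             # conductivity, 1/(Ohm*m)
eps0 = 8.854e-12                    # vacuum permittivity, F/m
tau = eps0 / sigma
print(f"tau ~ {tau:.1e} s")         # ~ 1e-17 s, as quoted
```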
The numerical collapse of a charged star in the Reissner-Nordstr\"om space-time was computed\cite{ghezzi2004,ghezzi2005}. They considered spherical symmetry with distribution of an electric charge proportional to the distribution of mass. For the numerical computation they assumed a polytropic equation of state with $\gamma = 5/3$. They concluded that for low values of $Q/(\sqrt{G} M) $ no departure from unpolarized collapse was found. In our case $Q/(\sqrt{G} M) \sim 10^{-7}$, which satisfies the criteria\cite{ghezzi2004}, and it also justifies our approach of using a Schwarzschild metric. In the article, we considered a collapsing core with a moderate magnetic field. The electric charge of the new-born black hole turns to be sizable, but its electrostatic field ($\sim 10^{12}$~{Gs}) is lower than the critical one ($4.4\cdot 10^{13}$~{Gs}). However, in the case of magnetars, the magnetic field of the collapsing core can also be extremely strong ($\sim 10^{15}$~{Gs}). Then the charge of the black hole can be so big that the electric field will surpass the critical. In this case a new effect appears\cite{preparata98,ruffini2001}: a very large number of $e^+ e^-$ pairs is created in the region with overcritical electric field, which finally leads to the formation of a strong gamma-ray burst. Interaction of the accreting substance with the energetic $\gamma$-ray emission of a black hole can also induce an electric charge into it\cite{diego2004}. The Compton scattering of $\gamma$-photons kicks out electrons from the accretion flow, while for the protons this mechanism is not so effective because of higher mass-charge ratio. As a result, the black hole should obtain a significant charge. However, one can see\cite{diego2004} that in the system considered the charge induced by this process is approximately 100 times lower than (\ref{q}). Actually, in our case the mechanism is not effective. 
In Ref.~\refcite{lee2001}, processes in the force-free magnetosphere of an already formed black hole are considered. As it was shown, because of accretion the black hole acquires a significant electric charge: \begin{equation} Q\sim 10^{15}\text{C} \left(\dfrac{B}{10^{15}\text{Gs}}\right)\left(\dfrac{M}{M_\odot}\right)^2 \nonumber \end{equation} Here $B$ is the magnetic field intensity in the accreting substance near the black hole. This quantity is big and quite correlates with (\ref{q}). Thus, an astrophysical black hole can possess a big electric charge, either getting it during the birth or owing to the accretion. \section{Acknowledgements} This work was supported by the RFBR (Russian Foundation for Basic Research, Grant 08-02-00856).
Q: Finding equilibrium points for the system $y'=Ay$ Given a matrix $A= \begin{pmatrix} 8 & 4 \\ -10 & -5 \end{pmatrix}$, I am supposed to find all of the equilibrium points for the system $y'=Ay$. I find the eigenvalues are 0 and 3, and the corresponding eigenvectors $(1,-2)^T$ and $(4,-5)^T$. I am not sure what equilibrium points they want; are we particularly choosing $\lambda=0$? A: $y_0$ is an equilibrium point $ \iff Ay_0=0 \iff y_0= t (1,-2)^T$ for some $t \in \mathbb R.$
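A quick numeric check of the answer (plain Python; a sketch, not from the original post). The equilibrium points of $y'=Ay$ are exactly the solutions of $Ay=0$, and the computation below confirms that $(1,-2)^T$ spans the kernel, while $A(4,-5)^T = 3\,(4,-5)^T$:

```python
A = [[8, 4], [-10, -5]]

def matvec(M, v):
    # 2x2 matrix-vector product
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

print(matvec(A, [1, -2]))   # [0, 0]: every t*(1,-2) is an equilibrium
print(matvec(A, [4, -5]))   # [12, -15] = 3*[4, -5]: the eigenvalue-3 direction
```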
Sitting in the tiller cab, the tiller operator cannot physically see the tiller wheels. By adding markers on the top of the trailer directly over the tiller axle, the tiller operator can see where the tiller wheels are located. The markers provide a visual aid or reference point for the tiller operator, allowing them to make decisions on when to make a turn and pivot around obstacles. The tiller wheels sit in front or ahead of the tiller operator. Stop and ask yourself this question: "When sitting in the tiller seat, how many feet are the tiller wheels in front of the tiller cab?" Depending on the manufacturer and the length of your apparatus, the tiller wheels are generally a good 8 to 10 feet ahead of the tiller seat. This information is important, allowing the tiller operator to become more precise in their maneuvering of the trailer. Knowing when to make the correct pivot can only happen if the tiller operator knows the exact location of the tiller wheels. This design option is not standard with most manufacturers and will need to be requested during the spec process. Existing tractor drawn aerials that do not have tiller wheel markers can easily make this modification. There are a variety of different ways to accomplish this function. Departments have become creative when designing markers for the tiller wheels. Here are a couple of examples. When we first recommended this with our manufacturer, we chose to use the markers that the manufacturer uses to line up the turntable. They are extremely easy to retrofit and pose no tripping hazard while walking on the top of the trailer. Brevard County incorporated an amber light into their design, giving them the added benefit of seeing the marker at night. Their design is also low-profile and allows you to stand or walk on it. Of course there is more fabrication involved.
During the design phase, Clackamas Fire District was able to add tiller markers by simply re-positioning their existing scene lights directly over the tiller wheels. The Meriden Fire Department added a piece of blue tape on their ground ladders that is attached to the outside of the trailer on the driver side. The piece of tape is directly over the tiller wheels.
require 'nokogiri' # not used directly in this file
require 'open-uri'
require 'json'

module CodeCitations
  module Plos
    # Searches the PLOS Solr API for articles that mention the given work
    # (by name, by reference, or by URL) and returns the unique DOIs found.
    def self.search(name, urls, authors)
      page = 1
      found = []
      # Build an OR-group from the words of (up to) the first three author names
      authors = "(" + authors[0..2].map { |author| "(" + author.split(' ').join(' OR ') + ")" }.join(' OR ') + ")"
      search_string = "(everything:\"#{name}\") OR "
      search_string += "(reference:" + [name, authors].map { |s| "\"#{s}\"" }.join(' AND ') + ")"
      urls.each do |url|
        search_string += " OR everything:\"#{url}\""
      end
      # Page through the results, 100 rows at a time
      loop do
        start = (page - 1) * 100
        api_url = "http://api.plos.org/search?q=#{search_string}&api_key=7Jne3TIPu6DqFCK=&rows=100&wt=json&start=#{start}"
        response = open(URI.encode(api_url)).read
        results = JSON.parse(response)['response']['docs']
        break if results.empty?
        results.each do |result|
          doi = result['id']
          next unless doi
          found << doi
        end
        page += 1
      end
      found.uniq
    end
  end
end
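The author clause in `Plos.search` packs each of the first three author names into an OR-group of its words. A quick illustration of that one line in isolation (hypothetical author names, not from the repository):

```ruby
# Each of the first three author names becomes an OR-group of its words,
# mirroring the clause-building line in Plos.search above.
authors = ["Jane Doe", "John Q Smith", "A B", "Extra Author"]
clause = "(" + authors[0..2].map { |a| "(" + a.split(' ').join(' OR ') + ")" }.join(' OR ') + ")"
puts clause
# ((Jane OR Doe) OR (John OR Q OR Smith) OR (A OR B))
```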
From Idea to Virtual Reality: ALADIN - The Adult Learning Documentation and Information Network. Report of a CONFINTEA V Workshop and Its Follow-Up. Science for All Cultures: A Collection of Articles from NSTA's Journals.
\section{Introduction} \label{sec:intro} The residual symmetry approach~\cite{Lam:2008sh} is one of the appealing possibilities to explain lepton mixing. According to this approach, the mixing originates from different ways of breaking the flavor symmetry in the neutrino and charged lepton Yukawa interactions. These different ways of breaking lead to different residual symmetries, $G_\nu$ and $G_\ell$, of the neutrino and charged lepton mass matrices~\cite{Altarelli:2010gt,King:2013eh,King:2014nza}. The residual symmetries ensure a certain form of the mass matrices and, consequently, of the mixing matrix. The observed pattern of lepton mixing, which is close to Tri-BiMaximal (TBM) mixing, is hardly connected to the neutrino and charged lepton masses or ratios of masses. For this reason the ``generic'' symmetries $G_\nu$ and $G_\ell$ were used, which exist for arbitrary values of masses (eigenvalues of the mass matrices). In this way symmetry provides complete control over the mixing. For Majorana neutrinos the maximal generic symmetry is given by the Klein group $\mathbf{Z}_2 \times \mathbf{Z}_2$. Depending on the selected residual groups and symmetry assignments for the leptons, different mixing matrices and relations between the mixing matrix elements can be obtained. The general relations~\cite{Hernandez:2012sk,Hernandez:2012ra}, the case of $\mathbf{Z}_2$ symmetry~\cite{Ge} and maximal CP violation~\cite{cpmax} have been discussed. Unfortunately, realizations of this approach in consistent gauge models are rather complicated and not very convincing: apart from a set of new fields (flavons), they contain a number of assumptions, new parameters and additional symmetries with \textit{ad hoc} charge assignments (see~\cite{Altarelli:2010gt,King:2013eh,King:2014nza,Smirnov:2011jv} for reviews). In view of this, the existence of symmetries behind lepton mixing, and in particular of residual symmetries, is still an open issue.
The standard way to obtain predictions for mixing angles consists of model-building, construction of mass matrices, and finally, diagonalization of these matrices. It was realized, however, that predictions for mixing angles can be obtained from knowledge of the residual symmetries immediately, without model-building~\cite{Hernandez:2012sk,Hernandez:2012ra}. The \textit{symmetry group relation} has been derived which includes the mixing matrix, $U_{\rm PMNS}$, and the matrices of transformations of the residual symmetries of neutrinos, $S$, and charged leptons, $T$, in the mass basis~\cite{Hernandez:2012ra}. The symmetry group relation is an efficient tool to explore possible consequences of various symmetries. Once viable relations and corresponding symmetry assignments are realized, one can proceed with model-building. It is widely believed that neutrinos are Majorana particles and that the smallness of the neutrino mass is related somehow to the Majorana nature of neutrinos. Therefore, most of the studies of discrete residual symmetries have been performed for Majorana neutrinos. However, it is possible that neutrinos are Dirac particles and, in fact, there are a number of mechanisms and models which lead to small Dirac neutrino masses~\cite{modelDirac}, \textit{e.g.} Dirac seesaw, Peccei-Quinn symmetry, extra dimensional mechanisms, chiral symmetry, {\it etc}. Consequences of some discrete flavor symmetries for Dirac neutrinos have been studied~\cite{fDirac}. In this paper we will consider applications of the residual symmetry approach to Dirac neutrinos. For this we will use generalizations of the symmetry group relation. The goal is to see whether new relations between the mixing parameters can be obtained in the case of Dirac neutrinos, with the hope that simpler realizations in consistent gauge models are possible. The paper is organized as follows. In sec.~\ref{sec:residual} we consider the symmetry group relation for Dirac neutrinos.
We study the case of $\mathbf{Z}_2$ residual symmetry for charged leptons ($m = 2$) in sec.~\ref{sec:cons}. $\mathbf{Z}_m$ symmetry with $m > 2$ for charged leptons will be explored in sec.~\ref{sec:m>2}. In sec.~\ref{sec:m3} we discuss some further considerations and generalizations. Discussion and conclusions are presented in sec.~\ref{sec:conc}. \section{Symmetry group relations for Dirac neutrinos} \label{sec:residual} We introduce the diagonal mass matrices of charged leptons, $m_\ell$, and Dirac neutrinos, $m_\nu$, as well as the PMNS mixing matrix, $U_{\rm PMNS}$, according to the following Lagrangian \begin{equation} \mathcal{L} = \frac{g}{\sqrt{2}} \bar{\ell}_L U_{\rm PMNS} \gamma^\mu \nu_L W^+_\mu + \bar{E}_R m_\ell \ell_L + \bar{N}_R m_\nu \nu_L + {\rm h.c.}~. \end{equation} Here $\ell_L\!=\!(e_L,\mu_L,\tau_L)^T$, $E_R\!=\!(e_R,\mu_R,\tau_R)^T$, $\nu_L\!=\!(\nu_{1L},\nu_{2L},\nu_{3L})^T$ and $N_R=(\nu_{1R},\nu_{2R},\nu_{3R})^T$. Let $S$ and $T$ be the matrices of transformation of the left-handed (as well as right-handed) components of neutrinos and charged leptons that leave the mass matrices invariant: \begin{equation} S^\dagger m_\nu S = m_\nu \quad , \quad T^\dagger m_\ell T = m_\ell~. \end{equation} So, $S$ and $T$ are the generators of the residual symmetry groups $G_\nu$ and $G_\ell$ in the mass basis. For discrete finite symmetry groups, there should be integers $n$ and $m$ such that \begin{equation}\label{eq:finite} S^n=\mathbb{I} \quad {\rm and} \quad T^m=\mathbb{I}~. \end{equation} Then the symmetry group condition reads~\cite{Hernandez:2012ra} \begin{equation}\label{eq:st} \left[ U_{\rm PMNS}\, S \, U_{\rm PMNS}^\dagger \, T \right]^p = \mathbb{I}~, \end{equation} where $p$ is an integer.
Equivalently the condition can be transformed to~\cite{Hernandez:2012ra} \begin{equation}\label{eq:grouprel} {\rm Tr}\left[U_{\rm PMNS}\,S\,U_{\rm PMNS}^\dagger \,T\right]= a_p~, \end{equation} where \begin{equation}\label{eq:sum} a_p =\sum_{\beta=1}^{3} \lambda_\beta \quad , \quad \left(\lambda_\beta\right)^p = 1~, \end{equation} {\it i.e.}, $\lambda_\beta$ are $p$-th roots of unity. The three selected roots in Eq.~(\ref{eq:sum}) satisfy the condition \begin{equation}\label{eq:prod} \prod_{\beta=1}^{3} \lambda_\beta = 1~, \end{equation} which guarantees that the symmetry group $G_\nu\times G_\ell$ can be embedded in $SU(3)$. The values of $a_p$ for different $p$ which satisfy Eqs.~(\ref{eq:sum}) and (\ref{eq:prod}) are shown in Table~\ref{tab:ap}. Notice that for $p\leq3$ the value of $a_p$ is unique. \begin{table}[htdp] \caption{Values of $a_p$ for different $p$.} \begin{center} \begin{tabular}{|c|c|} \hline $a_2$ & -1 \\ \hline $a_3$ & 0 \\ \hline $a_4$ & 1 , $-1\pm2i$ \\ \hline $a_5$ & \begin{tabular}[c]{@{}c@{}} 1 , $\frac{1\pm\sqrt{5}}{2}$, \\ $\frac{-3-\sqrt{5}}{4} + i \frac{\sqrt{5(5+\sqrt{5})}}{2\sqrt{2}}$, \\ $\frac{-3+\sqrt{5}}{4} - i \frac{\sqrt{5(5-\sqrt{5})}}{2\sqrt{2}}$ \\ \end{tabular} \\ \hline \end{tabular} \end{center} \label{tab:ap} \end{table} The relation in Eq.~(\ref{eq:grouprel}) is general and valid for both Majorana and Dirac neutrinos. Its derivation is completely the same in both cases. In general, the symmetry transformation matrix $T$ can be written as \begin{equation} \label{eq:T} T= {\rm diag} \left( e^{i\phi_e} , e^{i\phi_\mu} , e^{i\phi_\tau} \right)~. \end{equation} The finiteness of the group, Eq.~(\ref{eq:finite}), allows us to parametrize $\phi_\alpha$ as \begin{equation} \phi_\alpha= \frac{2\pi k_\alpha}{m}~, \quad {\rm where} \quad 0\leq k_\alpha < m~, \end{equation} and $k_\alpha$ are integers.
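The entries of Table~\ref{tab:ap} for $p\leq4$ can be checked by direct enumeration; a minimal sketch (assuming, as the table does, that the group element has exact order $p$ and is not proportional to the identity):

```python
import cmath
from itertools import combinations_with_replacement
from math import gcd

def allowed_traces(p):
    """Sums of three p-th roots of unity with unit product, excluding
    scalar (central) elements and elements of order smaller than p."""
    traces = set()
    for trip in combinations_with_replacement(range(p), 3):
        if sum(trip) % p != 0:        # product of the three roots must be 1
            continue
        if len(set(trip)) == 1:       # drop elements proportional to the identity
            continue
        lcm = 1
        for j in trip:                # lcm of the individual root orders
            o = p // gcd(j, p)
            lcm = lcm * o // gcd(lcm, o)
        if lcm != p:                  # keep only elements of exact order p
            continue
        a = sum(cmath.exp(2j * cmath.pi * j / p) for j in trip)
        traces.add(complex(round(a.real, 9), round(a.imag, 9)))
    return traces

print(allowed_traces(2))  # a_2 = -1
print(allowed_traces(3))  # a_3 = 0
print(allowed_traces(4))  # a_4 = 1, -1 + 2i, -1 - 2i
```

The same enumeration for $p=5$ reproduces the golden-ratio entries $(1\pm\sqrt{5})/2$ and the complex values of the table.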
The condition $\det[T]=1$, which means that the symmetry group generated by $T$ can be embedded in $SU(3)$, implies that $k_e+ k_\mu+ k_\tau = mq_k $, where $q_k = 1, 2$, and so for a given $q_k$ just two of the $k_\alpha$ are independent. If one of the $k_\alpha$ is zero, each transformation is determined by a single parameter $k$. For example, for $k_e=0$, we have $$T_{(e)} = {\rm diag} \left(1,e^{2\pi i k/m},e^{2\pi i (m-k)/m}\right)~,$$ where the subscript of $T$ indicates the lepton which does not transform under $T$. In the same way we can introduce the independent symmetry transformations $$T_{(\mu)} = {\rm diag} \left(e^{2\pi i k/m},1,e^{2\pi i (m-k)/m}\right) \quad , \quad T_{(\tau)} = {\rm diag} \left(e^{2\pi i k/m},e^{2\pi i (m-k)/m},1\right)~.$$ In general $m$ and $k$ in $T_{(\mu)}$ and $T_{(\tau)}$ can be different, so that the total symmetry group is $\mathbf{Z}_m \times \mathbf{Z}_{m'} \times \mathbf{Z}_{m''}$. We will keep the general form of $T$, as in Eq.~(\ref{eq:T}), for our calculations. Similarly to (\ref{eq:T}), for Dirac neutrinos, the transformation $S$ is given by \begin{equation}\label{eq:S} S= {\rm diag} \left( e^{i\psi_1}, e^{i\psi_2}, e^{i\psi_3} \right)~. \end{equation} From Eq.~(\ref{eq:finite}), we can parametrize the phases as $\psi_i=2\pi l_i/n$ with $0\leq l_i<n$. The condition $\det[S]=1$ leads to \begin{equation}\label{eq:det1} \sum_{j = 1}^3 l_j = n q_l~, ~~~~ q_l = 1, 2~, \end{equation} so that two of the $l_i$ are independent. In general three independent generators can be introduced and the total neutrino mass matrix symmetry is $\mathbf{Z}_n \times \mathbf{Z}_{n'} \times \mathbf{Z}_{n''}$. For Majorana neutrinos the generic (mass-independent) symmetry exists only for $n = 2$. So, the new possibilities specific to Dirac neutrinos consist of symmetries with $n>2$. Recall that the symmetry group condition stems from the fact that $S$ and $T$ originate from the same finite discrete group.
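As a quick sanity check, the diagonal generators can be built directly from the integer charges; a sketch, with illustrative choices of $(m,k)$ and $(l_1,l_2,l_3)$:

```python
import cmath

def generator(charges, order):
    """Diagonal generator of Z_order with the given integer charges."""
    return [cmath.exp(2j * cmath.pi * q / order) for q in charges]

def is_identity_power(diag, power):
    """Check that the diagonal matrix raised to `power` is the identity."""
    return all(abs(z ** power - 1) < 1e-9 for z in diag)

def det(diag):
    d = 1 + 0j
    for z in diag:
        d *= z
    return d

# T_(e) with m = 3 and k = 1: charges (0, 1, m - 1), so k_e + k_mu + k_tau = m
T_e = generator((0, 1, 2), 3)
# S with n = 4 and charges (1, 1, 2): l_1 + l_2 + l_3 = n q_l with q_l = 1
S = generator((1, 1, 2), 4)

print(is_identity_power(T_e, 3), abs(det(T_e) - 1) < 1e-9)   # True True
print(is_identity_power(S, 4), abs(det(S) - 1) < 1e-9)       # True True
```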
In the flavor basis the charged currents are diagonal and the whole information on mixing is encoded in the mass matrix of neutrinos, which is now non-diagonal: $m_\nu^f = U_{\rm PMNS} m_\nu U_{\rm PMNS}^\dagger$. In this basis the symmetry transformation is given by $S_U = U_{\rm PMNS} S U_{\rm PMNS}^\dagger$. Then, the condition that $S_U$ and $T$ form the same finite group is that their product belongs to the same group and some integer $p$ exists such that $\left( S_{U} T \right)^p = \mathbb{I}$. This condition coincides with the symmetry group relation in Eq.~(\ref{eq:st}). The parameters $n$, $m$ and $p$ in Eqs.~(\ref{eq:finite}) and (\ref{eq:st}) define the von Dyck group $D(n,m,p)$. The case of Majorana neutrinos discussed in~\cite{Hernandez:2012ra} is the special case with $n=2$. The von Dyck group is a finite group if \begin{equation}\label{eq:vondyck} \frac{1}{n} + \frac{1}{m} + \frac{1}{p} > 1~. \end{equation} Introducing the real and imaginary parts, $a_p = a_p^R + ia_p^I$, we obtain from Eqs.~(\ref{eq:grouprel}), (\ref{eq:T}) and (\ref{eq:S}) the following relations \begin{eqnarray}\label{eq:eq} \sum_\alpha \sum _i \left|U_{\alpha i}\right|^2 \cos\left( \phi_\alpha + \psi_i \right) = a_p^R~, \nonumber\\ \sum_\alpha \sum _i \left|U_{\alpha i}\right|^2 \sin\left( \phi_\alpha + \psi_i \right) = a_p^I~. \end{eqnarray} They are generalizations of the equations obtained in~\cite{Hernandez:2012ra} for $n = 2$. Eq.~(\ref{eq:eq}) provides relations between the moduli squared of all the elements of $U_{\rm PMNS}$ for a specific ``symmetry assignment''. In general, all the elements of the PMNS matrix are involved in each equation in (\ref{eq:eq}). However, for some specific values of the phases $\phi_\alpha$ and $\psi_i$, these equations can be reduced to relations between the elements of one row or column of $U_{\rm PMNS}$. This is the case if the phases are $0$ and $\pi$; {\it i.e.}, if $n=2$ or $m=2$.
The two relations in Eq.~(\ref{eq:eq}) between the 4 independent parameters (three angles and the CP phase) allow one, {\it e.g.}, to predict two of them once the two others are fixed by experimental data. If one adds an additional transformation $S$ or $T$, two more relations with different values of $\phi_\alpha$ and $\psi_i$ will appear. If consistent, this can fix all the mixing parameters completely. In what follows we obtain the relations for various symmetry assignments and confront them with experimental results, thus identifying the phenomenologically viable possibilities. The symmetry assignment consists of specifying ({\it i}) the von Dyck group parameters $(n,m,p)$, ({\it ii}) the values of the neutrino charges under the transformation $S$, $(l_1,l_2,l_3)$, ({\it iii}) the values of the charged lepton charges under the transformation $T$, $(k_e,k_\mu,k_\tau)$, and ({\it iv}) the value of $a_p$, for $p\geq4$. Recall that $n = 2$ is the only possibility for Majorana neutrinos and therefore the only case common to both Majorana and Dirac neutrinos; it has been explored in~\cite{Hernandez:2012ra}. In this case $\psi_i=0$ or $\pi$, and the relations in Eq.~(\ref{eq:eq}) are reduced to relations between elements of the \textit{columns} of $U_{\rm PMNS}$. These two relations and the unitarity condition fix the column completely. The column number $i$ is determined by the neutrino mass state $\nu_i$ which is invariant under $S$, {\it i.e.} has $\psi_i=0$. All the relations obtained in~\cite{Hernandez:2012ra} for Majorana neutrinos are valid also for Dirac neutrinos in the case $n = 2$. However, model-building for Dirac neutrinos can be different. In what follows we search for new relations between the mixing parameters which are specific to the Dirac neutrinos.
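As a guide to this search, the candidate triples allowed by the finiteness condition (\ref{eq:vondyck}) can be enumerated; a sketch restricted to $n\geq3$ (the Dirac-specific cases) and orders up to 5:

```python
from fractions import Fraction

def finite_von_dyck(max_order=5):
    """(n, m, p) with n >= 3 (Dirac-specific), m, p >= 2, and 1/n + 1/m + 1/p > 1."""
    return [(n, m, p)
            for n in range(3, max_order + 1)
            for m in range(2, max_order + 1)
            for p in range(2, max_order + 1)
            if Fraction(1, n) + Fraction(1, m) + Fraction(1, p) > 1]

triples = finite_von_dyck()
print(len(triples), triples)   # 13 triples, including (3,2,5), (4,2,3), (5,2,3), (3,3,2)
```

The list contains exactly the groups analyzed below: $D(3,2,5)$, $D(4,2,3)$ and $D(5,2,3)$ with $m=2$, and $D(3,3,2)$, $D(3,4,2)$, $D(3,5,2)$, $D(4,3,2)$, $D(5,3,2)$ with $m>2$.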
\section{Relations between the mixing parameters for $G_\ell = \mathbf{Z}_2$} \label{sec:cons} In the case of $m = 2$ the only symmetry assignments for the charged leptons consistent with the condition $\sum_\alpha k_\alpha = mq_k$ are $k_\alpha = (0,1,1)$, $(1,0,1)$ and $(1,1,0)$. They correspond to one phase being $\phi_\gamma=0$ and the two others being $\phi_{\alpha\neq\gamma}= \pi$. Inserting this set of phases in Eq.~(\ref{eq:eq}) we obtain \begin{eqnarray}\label{eq:eqm2} \sum _i \left( 2 |U_{\gamma i}|^2 -1 \right) \cos \psi_i = a_p^R, \nonumber\\ \sum _i \left( 2|U_{\gamma i}|^2 -1 \right) \sin \psi_i = a_p^I~. \end{eqnarray} Thus, the symmetry group relation, together with the unitarity condition, determines all the elements of the $\gamma$-row of the mixing matrix. Recall that in (\ref{eq:eqm2}) the index $\gamma$ corresponds to the lepton invariant under the transformation $T$: $k_\gamma = 0$. The solution to Eq.~(\ref{eq:eqm2}) is \begin{eqnarray}\label{eq:m2} |U_{\gamma 1}|^2 & = & \frac{a_p^R \cos\frac{\psi_1}{2} + \cos\frac{3\psi_1}{2} - a_p^I \sin\frac{\psi_1}{2}}{4\sin\frac{\psi_{21}}{2} \sin\frac{\psi_{31}}{2}}~, \nonumber\\ |U_{\gamma 2}|^2 & = & \frac{a_p^R \cos\frac{\psi_2}{2} + \cos\frac{3\psi_2}{2} - a_p^I \sin\frac{\psi_2}{2}}{4\sin\frac{\psi_{12}}{2} \sin\frac{\psi_{32}}{2}}~, \nonumber\\ |U_{\gamma 3}|^2 & = & \frac{a_p^R \cos\frac{\psi_3}{2} + \cos\frac{3\psi_3}{2} - a_p^I \sin\frac{\psi_3}{2}}{4\sin\frac{\psi_{13}}{2} \sin\frac{\psi_{23}}{2}}~, \end{eqnarray} where $\psi_{ij}\equiv\psi_i-\psi_j$~\footnote{Eq.~(\ref{eq:m2}) differs from the relations in~\cite{Hernandez:2012ra} by the substitution $\phi_e\to\psi_1$, $\phi_\mu\to\psi_2$ and $\phi_\tau\to\psi_3$.}. The solution is valid for any value of $n$ and $p$. There are interesting properties of Eq.~(\ref{eq:m2}) which we will use in our further considerations.
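The closed-form row of Eq.~(\ref{eq:m2}) can be verified against Eq.~(\ref{eq:eqm2}) numerically; a sketch, using as illustrative input the phases $\psi_i = (0, 2\pi/3, 4\pi/3)$ (charges $(0,1,2)$ under $\mathbf{Z}_3$) and $a_5 = (1-\sqrt{5})/2$, one of the assignments discussed below:

```python
import math

A5_TRACE = (1 - math.sqrt(5)) / 2   # a_5 for the D(3,2,5) assignment below

def row_from_phases(psi, a_re, a_im=0.0):
    """|U_{gamma i}|^2 of the fixed row, implementing Eq. (m2)."""
    def elem(i, j, k):
        num = (a_re * math.cos(psi[i] / 2) + math.cos(3 * psi[i] / 2)
               - a_im * math.sin(psi[i] / 2))
        den = 4 * math.sin((psi[j] - psi[i]) / 2) * math.sin((psi[k] - psi[i]) / 2)
        return num / den
    return (elem(0, 1, 2), elem(1, 0, 2), elem(2, 0, 1))

psi = (0.0, 2 * math.pi / 3, 4 * math.pi / 3)
row = row_from_phases(psi, A5_TRACE)

# the row is normalized and satisfies both relations of Eq. (eqm2)
lhs_re = sum((2 * u - 1) * math.cos(f) for u, f in zip(row, psi))
lhs_im = sum((2 * u - 1) * math.sin(f) for u, f in zip(row, psi))
print(round(sum(row), 6), round(lhs_re - A5_TRACE, 6), round(lhs_im, 6))
```

The resulting row is $((3-\sqrt{5})/6,\,(3+\sqrt{5})/12,\,(3+\sqrt{5})/12)$, anticipating the solution found below.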
If $a_p$ is real ($a_p^I = 0$) and one of the phases $\psi_i=0$, {\it i.e.} the corresponding symmetry parameter $l_i = 0$, the equations in Eq.~(\ref{eq:m2}) are invariant with respect to the permutation of the two other (nonzero) parameters $l_j \leftrightarrow l_k$. So, the assignments $(0, l_2, l_3)$ and $(0, l_3, l_2)$ or $(l_1, l_2, 0)$ and $(l_2, l_1,0)$, {\it etc.}, will lead to the same solution in Eq.~(\ref{eq:m2}). Indeed, $l_2 + l_3 = n$, or $\psi_2 = 2\pi-\psi_3$; therefore the permutation of $l_2$ and $l_3$ is equivalent to $\psi_2 \rightarrow 2\pi- \psi_2$, $\psi_3 \rightarrow 2\pi-\psi_3$. This changes the sign of both the numerator and the denominator, thus leaving the whole expressions in Eq.~(\ref{eq:m2}) invariant. We performed a scan of all possible symmetry assignments satisfying Eq.~(\ref{eq:vondyck}) and identified the phenomenologically viable ones, {\it i.e.}, the symmetry assignments which lead to relations among the mixing parameters in agreement with the experimental data. The latter has been done in the following way: for fixed values of $\theta_{13}$ and $\theta_{12}$, the two symmetry relations in Eq.~(\ref{eq:eqm2}) determine the values of $\theta_{23}$ and $\delta$. We find the regions of $\theta_{23}$ and $\delta$ by varying the angles $\theta_{13}$ and $\theta_{12}$ in the $3\sigma$ allowed ranges from the global fit of the neutrino oscillation data. Then we confront this prediction with the allowed regions in the $(\sin^2\theta_{23},\delta)$ plane. The results are shown in Figure~\ref{fig:allowed}, where the solid, dashed and dot-dashed curves show the allowed regions at $1\sigma$, $2\sigma$ and $3\sigma$ confidence level, respectively, from the global analysis of neutrino oscillation data (including the latest T2K~\cite{Abe:2015awa} and NO$\nu$A~\cite{nova} data) taken from~\cite{marrone}. The black point inside the $1\sigma$ regions shows the best-fit point.
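The permutation property just described can be checked explicitly; a sketch with $n=5$, the swapped assignments $(0,3,2)$ and $(0,2,3)$, and $a_3=0$ (an assignment that reappears below):

```python
import math

def row_from_charges(l, n, a_re, a_im=0.0):
    """Fixed row of U_PMNS from Eq. (m2) for neutrino charges l = (l1, l2, l3) under Z_n."""
    psi = [2 * math.pi * li / n for li in l]
    def elem(i, j, k):
        num = (a_re * math.cos(psi[i] / 2) + math.cos(3 * psi[i] / 2)
               - a_im * math.sin(psi[i] / 2))
        den = 4 * math.sin((psi[j] - psi[i]) / 2) * math.sin((psi[k] - psi[i]) / 2)
        return num / den
    return (elem(0, 1, 2), elem(1, 0, 2), elem(2, 0, 1))

row_a = row_from_charges((0, 3, 2), 5, 0.0)   # a_3 = 0
row_b = row_from_charges((0, 2, 3), 5, 0.0)   # nonzero charges swapped

print(all(abs(x - y) < 1e-9 for x, y in zip(row_a, row_b)))   # True
print(tuple(round(x, 4) for x in row_a))                      # (0.2764, 0.3618, 0.3618)
```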
We find only three symmetry assignments that lead to predictions compatible with the experimental data, at least at the $3\sigma$ level. These are shown by the colored regions in Figure~\ref{fig:allowed}. All of these possibilities have one $l_j = 0$, and the corresponding phase $\psi_j = 0$. In this case the condition in Eq.~(\ref{eq:det1}) is reduced to $ l_i + l_k = n q_l$, $i, k \neq j$, or $\psi_i = - \psi_k + 2 \pi q_l$. We will consider the general case of all nonzero values of $l_j$ in sec.~\ref{sec:m3}. Below we describe the viable possibilities in detail. 1). $(n,m,p)=(3,2,5)$: the neutrino symmetry is $\mathbf{Z}_3$, with the embedding von Dyck group $\mathbf{A}_5$. The neutrino charges equal $(l_1,l_2,l_3)=(0,1,2)$ or $(0,2,1)$, which give the same result, and the trace parameter is $a_5=(1-\sqrt{5})/2$. Then for the charged lepton phase $\phi_\mu = 0$ ($\mu$-row), which we call the $\mu$-solution, or $\phi_\tau = 0$ ($\tau$-row, the $\tau$-solution), we obtain \begin{equation}\label{eq:n3m2} \left( |U_{\mu(\tau)1}|^2,|U_{\mu(\tau)2}|^2,|U_{\mu(\tau)3}|^2 \right) = \left(\frac{3-\sqrt{5}}{6}, \frac{3+\sqrt{5}}{12},\frac{3+\sqrt{5}}{12}\right)~. \end{equation} The case $\phi_e=0$ is inconsistent with the data. The main feature of the solutions in Eq.~(\ref{eq:n3m2}) is that \begin{equation} |U_{\mu(\tau)2}|^2 = |U_{\mu(\tau)3}|^2 \approx 0.436~. \end{equation} In terms of the mixing angles the $\mu$-solution (in the standard parametrization of the PMNS matrix) reads \begin{eqnarray}\label{eq:n3m2mu} |U_{\mu 3}|^2 & = & s_{23}^2 c_{13}^2 = A~, \nonumber\\ |U_{\mu 2}|^2 & = & c_{12}^2c_{23}^2 + s_{12}^2s_{23}^2s_{13}^2 - 2 c_{12}c_{23}s_{12}s_{23}s_{13}\cos\delta = B, \end{eqnarray} where $c_{ij}=\cos\theta_{ij}$, $s_{ij}=\sin\theta_{ij}$ and $A=B=(3+\sqrt{5})/12\approx0.436$. From the first equality in Eq.~(\ref{eq:n3m2mu}) we find that \begin{equation} s_{23}^2 = \frac{A}{c_{13}^2} \approx 0.445~.
\end{equation} Due to the high accuracy of the measurement of the angle $\theta_{13}$, and the smallness of this angle, the value of $\theta_{23}$ is fixed rather precisely. The second relation in Eq.~(\ref{eq:n3m2mu}) gives \begin{equation}\label{eq:delmu} \cos\delta = 2\;\frac{-B+c_{12}^2c_{23}^2+s_{13}^2s_{12}^2s_{23}^2}{\sin2\theta_{12}\,\sin2\theta_{23}\,\sin\theta_{13}}~, \end{equation} where $B=0.436$. The first two terms in the numerator of Eq.~(\ref{eq:delmu}) have comparable values, and the last term, proportional to $s_{13}^2$, is small. Consequently, even a small variation of $c_{12}^2$ leads to a significant change in the value of $\cos\delta$. These features can be seen in Figure~\ref{fig:allowed}: the red region shows the predicted ranges of the parameters $(\sin^2\theta_{23},\delta)$ which are obtained using Eq.~(\ref{eq:n3m2mu}) with the uncertainties of $\theta_{13}$ and $\theta_{12}$ taken into account ({\it i.e.}, varying $\theta_{12}$ and $\theta_{13}$ in their $3\sigma$ allowed ranges). The black cross in the red region shows the prediction from Eq.~(\ref{eq:n3m2mu}) assuming the best-fit values of $\theta_{13}$ and $\theta_{12}$. \begin{figure}[t] \centering \subfloat[]{ \includegraphics[width=0.5\textwidth]{NHn} \label{fig:nh} } \subfloat[]{ \includegraphics[width=0.5\textwidth]{IHn} \label{fig:ih} } \caption{\label{fig:allowed}The prediction of discrete symmetries for Dirac neutrinos in the $(\sin^2\theta_{23},\delta)$ plane, compared with the experimentally allowed region for normal (left panel) and inverted (right panel) hierarchies. The red and green regions correspond to the predictions of the symmetry group $D(3,2,5)$. The blue and brown regions are for $D(4,2,3)$; and the orange region is for $D(5,2,3)$.
The solid, dashed and dot-dashed curves determine the allowed regions from the global analysis of oscillation data, respectively at $1\sigma$, $2\sigma$ and $3\sigma$~\cite{marrone}.} \end{figure} The $\mu$-solution is in good agreement with the data in the first quadrant in the case of normal mass hierarchy. The best-fit value of the phase is $\delta = 1.25 \pi$, and the maximal allowed value, $\delta = 1.4\pi$, is close to the present best experimental fit. The predictions overlap with the allowed region at $\sim1\sigma$. For the $\tau$-solution we have \begin{eqnarray}\label{eq:n3m2tau} |U_{\tau 3}|^2 & = & c_{23}^2 c_{13}^2 = A~, \nonumber \\ |U_{\tau 2}|^2 & = & c_{12}^2s_{23}^2 + s_{12}^2c_{23}^2s_{13}^2 + 2 c_{12}c_{23}s_{12}s_{23}s_{13}\cos\delta = B~, \end{eqnarray} where $A=B=(3+\sqrt{5})/12$. From the first equality we obtain $s_{23}^2 = 0.555$. This solution is related to the $\mu$-solution by $s_{23}^2 \leftrightarrow c_{23}^2$ and a change of the sign of $\cos\delta$. This connection is reflected in Figure~\ref{fig:allowed}, where the green region shows the predicted values in the $\tau$-solution. Since for the $\tau$-solution $s_{23}^2 \leftrightarrow c_{23}^2$, numerically we get the same estimate for $\delta$ up to the change $\delta\to\pi-\delta$, which can be seen in Figure~\ref{fig:allowed}. The black cross in the green region shows the prediction from Eq.~(\ref{eq:n3m2tau}) for the best-fit values of $\theta_{13}$ and $\theta_{12}$. The $\tau$-solution gives good agreement with the data for the inverted mass hierarchy. The predicted best-fit value of the phase is $\delta=1.76\pi$, with the lower bound $\delta\geq1.62\pi$. The solution overlaps with the $\sim2\sigma$ allowed region. Future determination of the octant of $\theta_{23}$ can disentangle the $\mu$- and $\tau$-solutions. 2). $(n,m,p)=(4,2,3)$: the neutrino residual symmetry is $\mathbf{Z}_4$ and the von Dyck group is $\mathbf{S}_4$.
The symmetry assignments $(l_1,l_2,l_3)=(1,3,0)$ or $(3,1,0)$, $\phi_\mu = 0$ ($\phi_\tau =0$) and $a_3=0$ lead to \begin{equation}\label{eq:n4m2} \left( |U_{\mu(\tau)1}|^2,|U_{\mu(\tau)2}|^2,|U_{\mu(\tau)3}|^2 \right) = \left( \frac{1}{4},\frac{1}{4},\frac{1}{2} \right)~. \end{equation} This solution gives relations between the mixing parameters as in Eqs.~(\ref{eq:n3m2mu}) and (\ref{eq:n3m2tau}), with $A=1/2$ and $B=1/4$. The predictions are shown by the blue and brown regions in Figure~\ref{fig:allowed}, respectively for the $\mu$-row and $\tau$-row solutions. For the best-fit values of $\theta_{13}$ and $\theta_{12}$ the relations in Eq.~(\ref{eq:n4m2}) cannot be satisfied. Thus, there are no cross points in these regions. The 2-3 mixing is close to maximal: $s_{23}^2=0.5/c_{13}^2\approx0.511$ for the $\mu$-solution, and $s_{23}^2\approx0.489$ for the $\tau$-solution. The expression for the phase, given by Eq.~(\ref{eq:delmu}) with $B=1/4$, leads to values of $\delta$ close to $0$ or $\pi$. The $\delta-\theta_{23}$ predictions are compatible with the data at best at the $2\sigma$ level. 3). $(n,m,p)=(5,2,3)$: in this case the neutrino residual symmetry is $\mathbf{Z}_5$ and the covering von Dyck group is $\mathbf{A}_5$. The symmetry assignments $(l_1,l_2,l_3)=(0,3,2)$ or $(0,2,3)$, $\phi_\tau = 0$ and $a_3=0$ lead to \begin{equation}\label{eq:n5m2} \left( |U_{\tau1}|^2,|U_{\tau2}|^2,|U_{\tau3}|^2 \right) = \left( \frac{2}{5+\sqrt{5}},\frac{1+\sqrt{5}}{4\sqrt{5}},\frac{1+\sqrt{5}}{4\sqrt{5}} \right)~. \end{equation} Rewriting this relation in terms of the mixing parameters gives relations similar to Eq.~(\ref{eq:n3m2tau}) with $A=B=(1+\sqrt{5})/(4\sqrt{5})\approx0.36$. The prediction from Eq.~(\ref{eq:n5m2}) is shown by the orange region in Figure~\ref{fig:allowed}. According to Eq.~(\ref{eq:n5m2}) \begin{equation} |U_{\tau 2}|^2 = |U_{\tau 3}|^2 =A \approx 0.36~, \end{equation} which deviates significantly from maximal mixing, being at the border of the $3\sigma$ allowed region.
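The numerical content of the $\mu$-row predictions above can be reproduced with a few lines; a sketch, assuming the illustrative best-fit values $\sin^2\theta_{12}=0.304$ and $\sin^2\theta_{13}=0.0219$:

```python
import math

def mu_solution(A, B, s12sq=0.304, s13sq=0.0219):
    """mu-row prediction: |U_mu3|^2 = A fixes theta23, |U_mu2|^2 = B fixes cos(delta).
    Returns (s23sq, cos_delta); |cos_delta| > 1 means no solution at these angles."""
    c13sq = 1.0 - s13sq
    s23sq = A / c13sq
    c23sq = 1.0 - s23sq
    c12sq = 1.0 - s12sq
    num = -B + c12sq * c23sq + s13sq * s12sq * s23sq
    den = (2 * math.sqrt(s12sq * c12sq)      # sin(2 theta12)
           * 2 * math.sqrt(s23sq * c23sq)    # sin(2 theta23)
           * math.sqrt(s13sq))               # sin(theta13)
    return s23sq, 2 * num / den

# case 1), D(3,2,5) mu-solution: A = B = (3 + sqrt(5)) / 12
A = (3 + math.sqrt(5)) / 12
s23sq, cosd = mu_solution(A, A)
print(round(s23sq, 3), round(cosd, 3))   # roughly 0.446 and -0.707, i.e. delta near 1.25*pi

# case 2), D(4,2,3): A = 1/2, B = 1/4 -> |cos delta| > 1 at these best-fit angles
_, cosd2 = mu_solution(0.5, 0.25)
print(cosd2 > 1)
```

The second call illustrates why the $D(4,2,3)$ regions in Figure~\ref{fig:allowed} contain no best-fit cross points.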
Again $\delta$ is close to $\pi$. The $\mu$-solution with $\phi_\mu=0$ is excluded at the $3\sigma$ level. The symmetry of the prediction regions in Figure~\ref{fig:allowed} with respect to $\delta=\pi$ comes from the $\cos\delta$ dependence of the moduli of the mixing elements. \section{The cases with $G_\ell=\mathbf{Z}_m$ and $m>2$} \label{sec:m>2} As we have mentioned in sec.~\ref{sec:residual}, for symmetry groups with $n,m>2$ the relations in Eq.~(\ref{eq:eq}) cannot be reduced to constraints on the elements of a single column or row of the $U_{\rm PMNS}$ matrix. We find that all the symmetry assignments with $m>2$ (which include $D(3,3,2)$, $D(3,4,2)$, $D(3,5,2)$, $D(4,3,2)$ and $D(5,3,2)$) lead to relations between matrix elements which are incompatible with the experimental data at the $3\sigma$ level. To elucidate the reason behind this result we consider one example in detail. $D(3,3,2)\equiv \mathbf{A}_4$: for this assignment the first relation in Eq.~(\ref{eq:eq}) leads to \begin{equation}\label{eq:n3m3first} |U_{\alpha i}|^2 + |U_{\beta j}|^2 + |U_{\gamma k}|^2 = \frac{1}{3}~, \end{equation} where $\alpha$, $\beta$ and $\gamma$ are all different and selected from $(e,\mu,\tau)$, while $i$, $j$ and $k$ are all different and selected from $(1,2,3)$, depending on the various assignments for $(k_e,k_\mu,k_\tau)$ and $(l_1,l_2,l_3)$. The key feature of the equality in Eq.~(\ref{eq:n3m3first}) is the smallness of the right-hand side, which strongly restricts the allowed elements on the left-hand side. Since the elements $|U_{e1}|^2$, $|U_{\mu 3}|^2$ and $|U_{\tau 3}|^2$ alone are bigger than $1/3$, the left-hand side should include $|U_{e3}|^2$ and small elements from the $\mu$ and $\tau$ rows and the 1 and 2 columns. In fact, the only allowed choices of combinations of elements in Eq.~(\ref{eq:n3m3first}) are $|U_{e3}|^2 + |U_{\mu1}|^2 + |U_{\tau2}|^2 =1/3$ and a similar combination with $\mu\leftrightarrow\tau$.
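This selection can be illustrated numerically: building $|U_{\alpha i}|^2$ from illustrative best-fit angles (and an assumed $\delta = 1.25\pi$), only the two pairings that contain $|U_{e3}|^2$ come anywhere near $1/3$:

```python
import cmath
import math
from itertools import permutations

# illustrative best-fit angles (standard parametrization); delta = 1.25*pi is assumed
s12, s13, s23 = math.sqrt(0.304), math.sqrt(0.0219), math.sqrt(0.45)
c12, c13, c23 = math.sqrt(0.696), math.sqrt(0.9781), math.sqrt(0.55)
e = cmath.exp(1.25j * math.pi)

U = [
    [c12 * c13, s12 * c13, s13 * e.conjugate()],
    [-s12 * c23 - c12 * s23 * s13 * e, c12 * c23 - s12 * s23 * s13 * e, s23 * c13],
    [s12 * s23 - c12 * c23 * s13 * e, -c12 * s23 - s12 * c23 * s13 * e, c23 * c13],
]

# sum |U_{alpha, sigma(alpha)}|^2 over the six row -> column bijections
sums = {perm: sum(abs(U[r][perm[r]]) ** 2 for r in range(3))
        for perm in permutations(range(3))}
two_smallest = sorted(sums, key=sums.get)[:2]
print(two_smallest)                     # both assign the electron to column 3 (index 2)
print(round(min(sums.values()), 3))     # ~0.417, still above 1/3 at this point
```

Even the smallest sum exceeds $1/3$ at this illustrative point, consistent with the conclusion below that the $\mathbf{A}_4$ relations are disfavored by the data.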
The second relation in Eq.~(\ref{eq:eq}) for the first combination leads to the equality \begin{equation}\label{eq:n3m3second1} |U_{e1}|^2 + |U_{\mu 3}|^2 + |U_{\tau 2}|^2 = |U_{e2}|^2 + |U_{\mu 1}|^2 + |U_{\tau 3}|^2~, \end{equation} which can be reduced to \begin{equation} c_{13}^2 \left[ \cos2\theta_{12}-\cos2\theta_{23} \right] =0~, \end{equation} and $\theta_{23}=\theta_{12}$. For the second combination the relation can be obtained by the $\mu\leftrightarrow\tau$ permutation, which finally leads to $\theta_{23}=\pi/2-\theta_{12}$. Both relations are excluded by the experimental data. Notice, however, that the current best-fit values are $\sin\theta_{23}=0.67$ and $\sin\theta_{12}=0.56$, so that $\sim10\%$ corrections to the equality $\sin\theta_{23}=\sin\theta_{12}$ can bring these relations into agreement with the data. For the other symmetry groups with $m>2$ similar arguments show the exclusion by the experimental data, which we have also checked by numerical calculation. \section{Further considerations and generalizations} \label{sec:m3} Here we lift some of the conditions imposed in the previous sections. 1. Let us relax the condition that one of the phases $\psi_i$ is zero. Notice that for $m = 2$ the equality $\phi_\alpha = 0$ for one of the phases is unavoidable. It is straightforward to check that the equality $\det[S]=1$ can be satisfied with all nonzero values of $l_i$ only for $n\geq3$. In the following we take $m=2$ and explore the cases with $n\geq3$. For $n\leq5$, the condition $\det[S]=1$ or (\ref{eq:det1}) can be satisfied (with all the $l_i$ being nonzero) only when at least two of the $l_i$ are equal. Indeed, for $n=3$, the possibilities are $l_j = (1, 1, 1)$ ($q_l = 1$) and $l_j = (2, 2, 2)$ ($q_l = 2$). In the case of all equal charges $l_j$, and therefore equal phases $\psi_j = \psi$, the conditions in Eq.~(\ref{eq:eqm2}) are reduced (due to unitarity) to the inconsistent equalities $\sin \psi = \cos \psi = 0$.
For $n=4$ two assignments satisfy condition (\ref{eq:det1}): $l_j =(1, 1, 2)$ ($q_l = 1$) and $l_j =(2,3,3)$ ($q_l = 2$). For $n= 5$ we have $(1,1,3)$, $(1,2,2)$ ($q_l = 1$) and $(4,3,3)$, $(4,4,2)$ ($q_l = 2$) (with all possible permutations of the charges). A general feature of all these assignments is that two charges, and therefore two phases, are equal. Denoting the equal phases by $\psi^\prime$, we get for the remaining phase $\psi_j=2\pi q_l - 2\psi^\prime$. Then the relations in Eq.~(\ref{eq:eqm2}) are reduced to \begin{equation}\label{eq:4-5rel} \left| U_{\gamma j} \right|^2 =\frac{a_p^R+\cos2\psi^\prime}{2\left( \cos2\psi^\prime - \cos\psi^\prime \right)} \quad, ~~~~~ \quad \left| U_{\gamma j} \right|^2 =\frac{-a_p^I+\sin2\psi^\prime}{2\left( \sin2\psi^\prime + \sin\psi^\prime \right)}~, \end{equation} where $\gamma$ corresponds to the charged lepton with zero phase $\phi_\gamma=0$ and $j$ to the neutrino with the unequal phase. Explicitly, for $n = 4$ we have $\psi^\prime= \pi/2$ or $\psi^\prime= 3\pi/2$, and then Eq.~(\ref{eq:4-5rel}) gives $\left| U_{\gamma j} \right|^2 = 0.5 (1 - a_p^R)$ and $\left| U_{\gamma j} \right|^2 = - 0.5 a_p^I$. These two equalities are inconsistent for $a_2$ and $a_3$. For $p=4$ the finiteness condition (\ref{eq:vondyck}) is not satisfied. Assuming that in this case a subgroup can be made finite and using $a_4=1$ or $-1-2i$, we obtain $|U_{\gamma j}|^2=0$ or $1$. The first case may still work for $\gamma = e$ and $j = 3$, \textit{i.e.} for $U_{e3}$, in the first approximation. For $n = 5$, the phase $\psi^\prime$ can be $2\pi/5$, $4\pi/5$, $6\pi/5$ or $8\pi/5$. We have checked that none of these $\psi^\prime$ values, and possible values of $a_p$, lead to a consistent solution of Eq.~(\ref{eq:4-5rel}). For $n\geq6$, there are more assignments that satisfy the condition (\ref{eq:det1}) with all nonzero $l_j$, both when two $l_j$ are equal and when they are all different.
Furthermore, the finiteness of the group, Eq.~(\ref{eq:vondyck}), requires that $p=2$ and consequently $a_2=-1$. When two $l_j$ are equal, the relations in Eq.~(\ref{eq:eqm2}) lead to $\cos2\psi^\prime=\cos\psi^\prime$ and therefore to zeros in the denominators of the relations in Eq.~(\ref{eq:4-5rel}). For the case of all different $l_j$, we have numerically checked that no viable solution of Eq.~(\ref{eq:eqm2}) exists. Thus, generally, when $m=2$, there is no viable symmetry assignment with all nonzero charges $l_j$. \\ 2. We have numerically checked that also for $m>2$ no symmetry group relation compatible with the data and with all nonzero charges $l_j$ can be obtained. \\ 3. In all the examples considered in secs.~\ref{sec:cons} and \ref{sec:m>2} symmetry fixes two out of the four mixing parameters in $U_{\rm PMNS}$. All 4 mixing parameters can be determined by symmetry if an additional symmetry transformation, $S^\prime$ or $T^\prime$, is introduced. This gives a second symmetry group relation, which can lead to two new relations between the matrix elements. Let us consider a second transformation $T^\prime$. We can take $m^\prime=2$ (but with a different assignment for $k_\alpha$ than the assignment for $T$). In this way we can fix the elements of both the $\mu$- and $\tau$-rows. However, as we see in Figure~\ref{fig:allowed}, the $\mu$- and $\tau$-solutions are inconsistent with each other (the corresponding regions do not overlap). In sec.~\ref{sec:m>2} we have shown that for $m^\prime>2$ there is no solution compatible with the data for any $n\geq3$. Let us consider a second neutrino transformation $S^\prime$ and a single $T$ with $m=2$. For $n^\prime\geq3$ the second symmetry group condition will again give one of the viable solutions described above. Again, since any two different obtained solutions are incompatible, adding such an $S^\prime$ will be inconsistent. Finally, let us consider the case of $S^\prime$ with $n=2$ (and $T$ with $m=2$).
In this case the second symmetry group condition is characterized by $n=m=2$ and arbitrary $p$, and the phases of the transformations are $0$ or $\pi$, both in $S^{\prime}$ and $T$. Therefore, the left-hand side of the second equation in (\ref{eq:eq}) is zero, which requires $a_p^I=0$ for consistency. So, in this case only the first equation in (\ref{eq:eq}) gives a non-trivial relation on the mixing elements. From this equation we obtain
\begin{equation}\label{eq:nn}
|U_{\alpha i}|^2 = \frac{1+ a^R_p}{4}~,
\end{equation}
where $\alpha$ and $i$ refer to the charged lepton and the neutrino which are invariant under the transformations. In this way, in principle, one can fix any element of the mixing matrix. Possible values of the right-hand side in Eq.~(\ref{eq:nn}) are $0$ (for $a_2$), $1/4$ (for $a_3$), $1/2$ (for $a_4$), and $0.65$ or $0.08$ (for $a_5$). An interesting possibility would be $|U_{\alpha i}|^2 = 0.65$ for $a_5$, which is close to the experimental result for $|U_{e1}|^2$. However, in the case of a single $T$, the flavor index $\alpha$ of the additional relation is fixed and should be $\mu$ or $\tau$. So, it can only fix one of the elements of the row which was already determined by $S$ and $T$. To fix an element outside the row fixed by the first symmetry group relation, one needs to introduce another transformation $T'$, and therefore to further expand the symmetry group.

\section{Conclusion}
\label{sec:conc}

We have studied the mixing of Dirac neutrinos in the residual symmetries approach. The key difference from the Majorana case (extensively studied before) is that the Dirac mass matrix may have larger generic (mass independent) symmetries: $\mathbf{Z}_{n}$ with $n > 2$, with the maximal symmetry being the product of three such factors. For Majorana neutrinos only $n = 2$ is possible, with the maximal symmetry $\mathbf{Z}_{2} \times \mathbf{Z}_{2}$. Of course, the case $n = 2$ also applies to Dirac neutrinos.
We generalized the symmetry group relations to the case of Dirac neutrinos and used them to explore new patterns of lepton mixing which can be obtained in the Dirac case. The residual neutrino symmetries $\mathbf{Z}_{n}$ with $n \geq 3$ have been explored. We have found all new phenomenologically viable (within the allowed $3\sigma$ region) relations between mixing parameters and the corresponding symmetry assignments. We presented the relations as predictions for the 2-3 mixing angle, $\theta_{23}$, and the CP phase, $\delta$, in terms of the well-measured parameters $\sin^2 \theta_{13}$ and $\sin^2 \theta_{12}$. We find that for $n \geq 3$ viable solutions (relations) exist only if the charged lepton residual symmetry is $G_{\ell} = \mathbf{Z}_{2}$. In this case the symmetry fixes the elements of a single row of the PMNS matrix. Viable solutions (for the mixing angles) exist for $n = 3$ and the group $\mathbf{A}_5$, $n = 4$ and the group $\mathbf{S}_4$, and $n = 5$ and the group $\mathbf{A}_5$. The solutions fix $\theta_{23}$ rather precisely and give a rather large range of values of $\delta$, which is determined by the present uncertainty in $\theta_{12}$. We found that the introduction of a second transformation $S^\prime$ or $T^\prime$, which could fix all the mixing parameters, does not lead to a consistent solution. For a larger residual symmetry of the charged leptons, $G_{\ell} = \mathbf{Z}_{m}$ with $m > 2$, the relations between the mixing matrix elements become more complicated, involving elements of both columns and rows. We checked that no viable solutions exist in this case. At the same time, some interesting relations are realized which can be brought into agreement with the data if relatively small corrections (related to violation of the residual symmetries) are included. In particular, the equality $\theta_{23} = \theta_{12}$ is realized in the $D(3, 3, 2) = \mathbf{A}_{4}$ case. Corrections of the order of $10\%$ can lead to agreement with the data.
The solutions we have found lead to discrete values of $\sin^2\theta_{23}$, determined by the symmetry group parameters. A future precise measurement of $\sin^2\theta_{23}$ with an accuracy of $\sim0.01$ can discriminate among the possibilities. Also, a precise determination of $\sin^2\theta_{12}$ ({\it e.g.} by JUNO) will lead to a precise prediction of the phase $\delta$, and so future measurements of the phase will provide crucial checks of the obtained symmetry relations. On the other hand, the identified viable symmetries and symmetry assignments will be useful for further model building.

\begin{acknowledgements}
The authors would like to express special thanks to the Mainz Institute for Theoretical Physics (MITP) for its hospitality and support. A.~E. thanks the Max-Planck-Institut f\"ur Kernphysik, Heidelberg, for the hospitality and support during the completion of this work.
\end{acknowledgements}
Q: Shared domain logic: .NET Standard or .NET Framework? I am trying to build a demo solution, an "Address Book" (a contacts database), with several applications for different platforms. Here is how I pictured it. The database will be MS SQL; there will be a regular asp.net mvc website (application number one, for those who come in through the browser); separately there will be an asp.net web api that serves plain json to the mobile applications; and separately a couple of apps (one for Android, one for iOS). VS2017 ships a xamarin.forms project template tailor-made for showing a list of contacts, Master Detail; all that remains is to rework MockDataStore from a List into a service that reads from the web api. The whole thing was really started for the sake of the Android app: I wanted to try mobile development. However, something does not add up for me when it comes to wiring such heterogeneous applications together. I created a Domain project (a regular class library), put a Contact class into it, and tried to replace the Item model from the mobile app project with my own model. But even simply moving the Item class with ReSharper into a .NET Framework Class Library produces an error (Domain should reference Assembly:netstandard). Incidentally, the same error appears even when trying to move it into a .NET Standard Class Library project. Forcing the move leads to a pile of compilation errors. What finally breaks the picture is that when I try to add an assembly to the project via References - Add Reference, there are no assemblies with the words net or standard anywhere on the machine. To sum up, it is unclear:
*how to create the domain-logic project: as .NET Framework or as .NET Standard. ASP.NET MVC needs the regular .NET Framework, and it is entirely unclear which one Xamarin applications need
*where to find Assembly:netstandard
*whether, for multi-platform solutions like this, one should abandon the classic asp.net mvc application and replace it with asp.net core (when creating those I saw a choice between .net core and .net framework), so that all projects sit on a single technology stack (one, two)
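One common way to share domain logic across an ASP.NET MVC site (full .NET Framework) and Xamarin apps is a .NET Standard class library, since both platform types can reference it if the chosen standard version is low enough. A minimal sketch of such a project file (the file name `Domain.csproj` is an assumption; netstandard2.0 is consumable from .NET Framework 4.6.1+ and Xamarin):

```xml
<!-- Domain.csproj: SDK-style .NET Standard class library holding Contact etc.
     Both the asp.net mvc site and the xamarin.forms apps add a project
     reference to this library instead of duplicating the models. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```

With this layout, "Assembly:netstandard" is resolved automatically via the SDK-style project reference, so no manual Add Reference to a netstandard assembly should be needed.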
Hamilton can legitimately claim Formula One's greatest Brit title
Oct. 19, 2017 | 12:05 AM
Lewis Hamilton is already the most successful British Formula One driver of all time and by sunset Sunday he could legitimately claim to be among the greatest.

Hamilton slashes Vettel's lead to one point
Lewis Hamilton won his home British Grand Prix for the fourth year in a row Sunday while a penultimate-lap puncture slashed Sebastian Vettel's championship...

Motor racing-Hamilton on pole for home British Grand Prix
Lewis Hamilton took pole position for his home British Grand Prix for the third year in a row on Saturday with a sensational lap that left him one step away...

Triple crown: Monaco or Formula One championship?
If Fernando Alonso were to beat the odds and win the Indianapolis 500 on May 28, the double Formula One world champion will be one step away from the "Triple...

Rosberg a winner among several big champions
Nico Rosberg drew level with Damon Hill in Formula One's all-time list of race winners Sunday and the odds are narrowing on the German matching the Briton...

Rossi's heart in Monaco, mind on Indy 500 win
Alexander Rossi's mind will be focused on the 100th running of the Indianapolis 500 Sunday but his heart may be riding along at the Monaco Grand Prix

Fatalities of Formula One drivers during Grand Prix weekends
Formula One drivers killed during Grand Prix weekends following the death of Jules Bianchi Friday night nine months after a devastating crash at the Japanese...

Happy Hamilton a home winner again
Formula One world champion Lewis Hamilton soaked up the energy of a roaring home crowd to win the British Grand Prix for the second year in a row Sunday and...

Hamilton determined to dominate at British Grand Prix
Lewis Hamilton may not be feeling the years just yet but the double Formula One world champion still senses time is ticking by as he gears up for the ninth...
Double top for the older, wiser and confident Briton
Grand Prix greats come from all sorts of backgrounds but few started life with the odds stacked against them quite as heavily as Lewis Hamilton.

Formula One's top 10 title deciding showdowns
Sunday's season-ending Abu Dhabi Grand Prix will decide the Formula One title, with Mercedes duo Lewis Hamilton and Nico Rosberg fighting for the crown.

Hamilton aims to make it 10 in Texas
Alan Baldwin | Nov. 01, 2014 | 12:12 AM
Lewis Hamilton will be aiming for a big win in Texas this weekend, even if the Formula One starting grid is the smallest he has ever known it with the absence...

Hamilton wins Chinese GP in Mercedes one-two
Abhishek Takle | Apr. 20, 2014 | 02:05 PM
Lewis Hamilton completed a hat-trick of wins by leading a dominant Mercedes one-two in the Chinese Formula One Grand Prix Sunday.

Hamilton takes record sixth British Grand Prix victory
Formula One champion Lewis Hamilton hailed his home fans after celebrating a record sixth British Grand Prix win Sunday and stretching his lead over luckless...
package com.reflectoring;

@Company(name = "AAA", city = "ZZZ")
public class MultiValueAnnotatedEmployee {
}
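The `Company` annotation referenced above is defined elsewhere in the repository. A minimal, self-contained sketch of what such a definition might look like, with element names inferred from the usage (`Employee` and `CompanyAnnotationDemo` are illustrative names, not part of the original code):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME) // keep the annotation visible to reflection at runtime
@Target(ElementType.TYPE)           // applicable to classes, matching the usage above
@interface Company {
    String name();
    String city();
}

@Company(name = "AAA", city = "ZZZ")
class Employee {
}

public class CompanyAnnotationDemo {
    public static void main(String[] args) {
        // Read the annotation values back via reflection.
        Company c = Employee.class.getAnnotation(Company.class);
        System.out.println(c.name() + ", " + c.city()); // prints "AAA, ZZZ"
    }
}
```

Without `RetentionPolicy.RUNTIME`, `getAnnotation` would return null, since the default retention does not survive into the runtime class file view.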
package org.wildfly.extension.gravia.parser;

import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;

import org.jboss.as.controller.persistence.SubsystemMarshallingContext;
import org.jboss.staxmapper.XMLElementWriter;
import org.jboss.staxmapper.XMLExtendedStreamWriter;

/**
 * Write Gravia subsystem configuration.
 *
 * @author Thomas.Diesler@jboss.com
 * @since 23-Aug-2013
 */
final class GraviaSubsystemWriter implements XMLStreamConstants, XMLElementWriter<SubsystemMarshallingContext> {

    static GraviaSubsystemWriter INSTANCE = new GraviaSubsystemWriter();

    // hide ctor
    private GraviaSubsystemWriter() {
    }

    @Override
    public void writeContent(XMLExtendedStreamWriter writer, SubsystemMarshallingContext context) throws XMLStreamException {
        context.startSubsystemElement(Namespace.CURRENT.getUriString(), false);
        writer.writeEndElement();
    }
}
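Since `writeContent` opens the subsystem element and immediately closes it, the marshalled configuration is just an empty subsystem tag. Roughly, the output would look like this (the exact namespace URI comes from `Namespace.CURRENT` and is an assumption here):

```xml
<subsystem xmlns="urn:jboss:domain:gravia:1.0"/>
```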
FOM: From philosophy to `applied' model theory

John Baldwin jbaldwin at math.uic.edu
Wed Jun 21 10:15:58 EDT 2000

This is a further response to the comments of Steve Simpson and Dave Marker on the use of the term `applied model theory'. It arises because these remarks coincided with my writing a review of a paper called `Finite Covers' by Evans, McPherson and Ivanov. This topic shows one example of how `philosophical' questions in model theory have evolved into technical mathematics.

In his thesis, Morley posed the question: Can an aleph-1 categorical theory be finitely axiomatizable?

This question has some philosophical content. As Bill Tait put it: do we know all the ways a single sentence can demand infinity? Discrete linear order, dense linear order, a pairing function. (I think these basically remain the only examples.) (Note that even a requirement of completeness complicates the issue: asserting that a unary function is 1-1 and only one element does not have a predecessor forces an infinite model without dealing with how many finite cycles are permitted. Morley asks more strongly that the sentence be aleph_1 categorical.)

This question split into two parts depending on whether the theory was required to be aleph_0 categorical. If not, Peretyatkin gave a counterexample and in fact has constructed a machine (deriving from Hanf) for converting non-finitely axiomatized aleph_1 categorical theories to finitely axiomatized ones. The question of whether there is a finitely axiomatizable strongly minimal set remains open; conceivably this would require a `new' axiom of infinity.

But the proof (Zilber, Cherlin-Harrington-Lachlan) that there is no finitely axiomatizable theory categorical in all infinite cardinalities gave birth to `geometric stability theory'. The proof required that one analyze strictly minimal sets (every definable subset is finite or cofinite and there is no definable non-trivial equivalence relation) and how the models of a totally categorical theory were constructed from strictly minimal sets.

The analysis of strictly minimal sets was done in two ways. Cherlin and Mills (independently) saw how to get the required result from a part of the classification of finite simple groups. (So here is a use of nontrivial `core' (well, maybe not, according to Bourbaki) mathematics in model theory. This is a better example than the one I mentioned last week. In particular, Pillay pointed out I quoted the wrong application.) In the other direction, Zil'ber's method for this analysis eventually worked and gave a different proof of the classification of finite two-transitive groups. (Don't hold me too tightly to the exact class of groups.)

The study of how the models of a totally categorical theory were built up led to Zil'ber's notion of binding groups and to the Ahlbrandt-Ziegler (and many more) analysis of finite covers. A finite-one surjection $\pi: C \rightarrow W$ is a {\em finite cover} if there is a $0$-definable equivalence relation on $C$, whose classes are the inverse images of points, and any relation on $W^n$ ($C^n$) which is $0$-definable in the two-sorted structure $(C,W,\pi)$ is already $0$-definable in $W$ ($C$).

The study of finite covers can be viewed as a branch of permutation group theory; it uses techniques like Pontriagin duality and cohomology of discrete groups. On the other hand, Shelah notions like strong types and the finite equivalence relation theorem play a central role (and have motivated analogous constructions on the permutation group side). The survey I reviewed referred to over 20 papers analyzing these covers. This work is certainly a blend of model theory and group theory. But the problems grew up within the area, so the description `applied model theory' is strained.

In closing let me point out one further development. Suppose we weaken `finitely axiomatizable' to `axiomatizable by a family of sentences in k variables'. For totally categorical theories, this is just a reformulation of the finite axiomatizability problem. For general aleph_1 categorical theories, Shawn Hedman has done some interesting though, so far, technical work. He conjectures no non-locally modular aleph_1 categorical theory can be finitely variable axiomatizable. This generalizes Abraham Robinson's question of whether the complex field can be so axiomatized.

One could imagine the answers to these questions being characterized as either pure or applied model theory (depending in some cases on which way the questions were answered).
Mount Rainier (4,395 m above sea level) is a stratovolcano located 87 km southeast of Seattle in Washington state in the USA. The mountain lies within Mount Rainier National Park and is the highest peak in the Cascade Range. It was first climbed in 1870 by Hazard Stevens and P.B. Van Trump. The most recent officially recorded eruptions of the volcano occurred between 1820 and 1854. However, the volcano is not classified as extinct and is expected to erupt again in the future.

See also
Mount St. Helens
Mount Baker
Mount Adams
Glacier Peak

External links
US National Parks Service
Mount Rainier hiking descriptions
Photos from Mount Rainier National Park

Volcanoes in Washington
Mountains in Washington
Stratovolcanoes
Pierce County, Washington
Decade Volcanoes
Are you or do you know Margaret A Waterman? Margaret A Waterman is a published author of young adult books. A published credit of Margaret A Waterman is Biological Inquiry: A Workbook of Investigative Case Studies. To edit or update the above biography on Margaret A Waterman, please Log In or Register.
We at Young Pioneer Tours started from humble beginnings as a group of expats living in China brought together by our love of being on the road. By the nature of our own travel we learnt what makes and breaks a good trip, which made starting our tours much easier. Now, years down the line and countless tours behind us, we have tweaked and experimented with our tours to best fulfil what you guys out there are looking for, making a lot of new friends and having some interesting and bizarre experiences along the way. We try to find ways to keep our tours as cheap as possible, to encourage all walks of life to come and see our destinations, a factor that has been key in the growth of YPT. All our group tours come with an experienced YPT guide who gives you the opportunity to do things you couldn't do without them there to make it happen – something which is of utmost importance when off the beaten track. We have a great reputation for awfully fun guides who bring out the travel bug in people while ensuring that everything runs smoothly. There is an attitude to travel commonly associated with experienced backpackers which we fully believe in; going out of your way to help others on the road, share experiences and make friends wherever you can. It's this attitude which has opened doors to us that can't be opened in other ways. Our guides have developed close bonds with the local guides and our regular customers, while maintaining the professionalism required to make everything come together to give people great memories and stories to go home with. Opening doors for people and bringing together like-minded travellers has become our main driving force and always pushes us to get the most out of each destination at a budget price. We love what we do and the community we are involved in and will keep trying to push the envelope for opening tourism in places often forgotten.
Fill in the required details below to begin your enrolment with HDAA for Accreditation to the National Safety and Quality Health Service Standards (NSQHSS) for Dental Practices.

Group Name
If you require the accreditation for more than one dental practice in a group, please enter the group name here. All practices in a group must hold the same ABN.

This password allows your practice to access Dental onTrack once HDAA have verified your enrolment details.

Organisation Name
If your organisation name is different to your trading name, enter it here.

Street Number and Name*
If you have a P.O. Box, enter its details here instead of the street number and name.

Important: The preferred accreditation contact must be a person who can verify and agree to the terms and conditions, and to whom correspondence about accreditation can be directed.

This is the only practice participating in the accreditation scheme I am responsible for.

Choose your Payment Plan from the two options below. 'Pay As You Go' is where you pay the full fee for assessment in one sum at the time you enrol. 'Flexible Fee' is where you pay the fee each year (for the 3 years), with the first instalment due when you enrol.

For Electronic Bank Transfer, please include your Client ID as your "Reference number" in your online banking payment. You will be provided your Client ID upon submitting this enrolment form.

Please review the information you have entered for accuracy, ensuring that at minimum all the compulsory fields are completed. Giving false or misleading information is a serious offence. Full terms and conditions are available on this website.
Payment Information:* I confirm I have permission to act on behalf of the Dental Practice named above, and that I am able to authorise this enrolment with HDAA Australia Pty Ltd for accreditation to the "National Standards for Quality & Safety in Health Care for Dental Accreditation" and to verify the completeness and accuracy of the information provided in this enrolment and associated application documentation (when submitted for assessment). I understand and accept on behalf of the Dental Practice named above the terms and conditions of use of onTrack and conditions of accreditation. I agree that fees will be paid within fourteen (14) days of receipt of invoice and understand the terms of the Payment Plan selected.
Sébastien Josse is a French sailing skipper born in March 1975 in Montereau-Fault-Yonne (Seine-et-Marne). Originally from Nice, he lives in Clohars-Carnoët in Finistère.

Biography

Having earned a B.E.P. M.P.P. (Mécanique Plaisance et Pêche) at the Lycée professionnel Jacques DOLLE in Antibes in 1993, he came to competitive sailing late. Then, thanks to his victory in the 1997 Challenge Espoir Crédit Agricole, he joined the Pôle France Finistère Course au Large, a training centre for French offshore skippers based in Port-la-Forêt. From that day his career was launched, and Sébastien developed a taste for competition. In 2001 he finished second in the Solitaire du Figaro. In 2002 he won the Jules Verne Trophy aboard Orange 1, Bruno Peyron's maxi-catamaran. Also in 2002, Sébastien Josse concluded a partnership with the Vendée-based company VMI to buy Thomas Coville's former monohull Sodébo. The races followed one another: among other results, he finished The Transat in 2004 and took 5th place in the Transat Jacques Vabre with Isabelle Autissier. Above all, in 2004-2005, in his first participation in the Vendée Globe, Sébastien finished 5th in 93 days, 2 minutes and 10 seconds, a feat considering the number of competitors and the age of his VMI (indeed, many participants raced the Vendée Globe on brand-new boats). In 2005 the Dutch bank ABN AMRO contacted Sébastien (while he was still at sea in the Vendée Globe) to offer him the helm of its second boat in the Volvo Ocean Race. The ABN AMRO II crew finished the 2005-2006 Volvo Ocean Race after eight months of racing across the world's oceans. Sébastien Josse was entered aboard Vincent Riou's monohull PRB in the 2007-2008 Barcelona World Race, a double-handed round-the-world race. While leading at the Cape of Good Hope, the two men were forced to retire after their masthead broke.
Sébastien Josse raced the 2008-2009 Vendée Globe on a new boat thanks to the sponsor "BT" and to Offshore Challenge, the team of Britons Ellen MacArthur and Mark Turner. He retired while lying in the leading group, following a series of serious technical problems (a cracked deck and a rudder out of action) after being knocked down by a wave in the Pacific Ocean. In 2009 he was again forced to retire, this time from the Transat Jacques Vabre, and he won the Fastnet Race. In 2010, while the monohull BT raced in Veolia's colours with Roland Jourdain, Sébastien Josse trained at the Pôle France in Saint-Gilles-Croix-de-Vie and competed under the colours of the Vendée in the Solitaire du Figaro, the Cap-Istanbul race and the Solo Quiberon, which count towards the French championship. Sébastien Josse took the start of the 2016-2017 Vendée Globe aboard the Imoca Mono60 Edmond de Rothschild, but was forced to retire south of Australia. In 2017 he became the skipper of the maxi-trimaran Maxi Edmond de Rothschild, a role he held until February 2019. He then joined the Corum L'Épargne team alongside Nicolas Troussel. Since March 2022 he has been part of Team Banque Populaire, working on the Maxi Banque Populaire XI.
Achievements

2017: Transat Jacques-Vabre with Thomas Rouxel on the Ultime Maxi Edmond de Rothschild
2016: Transat New York-Vendée on the Imoca Mono60 Edmond de Rothschild; retired from the Vendée Globe on the Imoca Mono60 Edmond de Rothschild
2015: winner of the Transat Saint-Barth-Port-la-Forêt on the Imoca Mono60 Edmond de Rothschild
2013: winner of the Transat Jacques-Vabre on the MOD70 Groupe Edmond de Rothschild, with Charles Caudrelier
2012: Krys Ocean Race, a transatlantic race between New York and Brest, on the MOD70 Groupe Edmond de Rothschild
2010: Solo Concarneau - portsdefrance.com on the Figaro Vendée
2009: winner of the Fastnet Race on the IMOCA BT; retired from the Transat Jacques Vabre with Jean-François Cuzon on the IMOCA BT
2008: retired from the Vendée Globe with technical problems on the IMOCA BT; retired from the Artemis Transat with technical problems (leading at the time of retirement) on the IMOCA BT
2007: winner of the Rolex Fastnet Race (double-handed) on Vincent Riou's IMOCA PRB; winner of the Calais Round Britain Race (crewed) on Vincent Riou's IMOCA PRB; retired from the Barcelona World Race (double-handed) on Vincent Riou's IMOCA PRB
2006: Volvo Ocean Race, second on three legs, on the VO70 ABN-AMRO TWO; 24-hour distance record for a crewed monohull (562.96 miles covered) on the VO70 ABN-AMRO TWO
2005: 5th in the Vendée Globe aboard VMI; The Transat aboard VMI
2004: The Transat aboard VMI
2003: winner of the Rolex Fastnet Race aboard VMI; Transat Jacques Vabre on VMI with Isabelle Autissier
2002: co-holder of the Jules Verne Trophy aboard the Maxi Catamaran Orange skippered by Bruno Peyron, in 64 days, 8 hours, 37 minutes and 24 seconds; retired after dismasting in the Route du Rhum aboard VMI
2001: 2nd in the Solitaire du Figaro on Créaline; Tour de Bretagne; Generali Méditerranée
2000: winner of the Route du Ponant; French single-handed offshore racing championship
1999: first leg of the Mini Transat; French single-handed offshore racing championship
1998: winner of the Championnat de France Espoir Solitaire; Solitaire du Figaro
1997: winner of the Challenge Espoir Crédit Agricole

Notes and references

See also

Related articles
Vendée Globe
Volvo Ocean Race
IMOCA 60-foot class

External links
Long interview (2h06) from 2020 retracing his career
Official website of Team Gitana

Multimedia
Video presentation 2010
Sébastien Josse surfing in the Roaring Forties
Vendée Globe 2008-2009
Transat Jacques Vabre: rescue of S. Josse and JF. Cuzon
TEAM ABN AMRO into the Volvo Ocean Race 2005-2006
ABN AMRO TWO Drama, Volvo Ocean Race

French skipper
Solo navigator
Team PRB
Born in March 1975
Born in Montereau-Fault-Yonne
Vendée Globe skipper
package com.opengamma.sesame.graph;

import java.util.List;

import com.opengamma.sesame.function.Parameter;

/**
 * Indicates the type of an object injected into a constructor doesn't conform to the required type.
 */
public class IncompatibleTypeException extends InvalidGraphException {

    /**
     * Creates an instance.
     *
     * @param path the path of parameters to the problem, not null
     * @param message the descriptive message, not null
     */
    /* package */ IncompatibleTypeException(List<Parameter> path, String message) {
        super(path, message);
    }
}
Studia Mathematica
2007 | 182 | 1 | 1-27

From restricted type to strong type estimates on quasi-Banach rearrangement invariant spaces

Abstract

Let $X$ be a quasi-Banach rearrangement invariant space and let $T$ be an $(\varepsilon,\delta)$-atomic operator for which a restricted type estimate of the form $\|T\chi_{E}\|_{X} \leq D(|E|)$ for some positive function $D$ and every measurable set $E$ is known. We show that this estimate can be extended to the set of all positive functions $f \in L^1$ such that $\|f\|_{\infty} \leq 1$, in the sense that $\|Tf\|_{X} \leq D(\|f\|_{1})$. This inequality allows us to obtain strong type estimates for $T$ on several classes of spaces as soon as some information about the galb of the space $X$ is known. In this paper we consider the case of weighted Lorentz spaces $X = \Lambda^{q}(w)$ and their weak version.

Published 2007

Authors
• Departament de Matemàtica Aplicada i Anàlisi, Universitat de Barcelona, E-08071 Barcelona, Spain
• Dipartimento di Matematica e Applicazioni, Università di Milano - Bicocca, 20126 Milano, Italy
• Department of Mathematics, University of Western Ontario, N6A 5B7, London, Canada
null
null
\section{Introduction} Quantum chromodynamics (QCD) at low energies is dominated by the non-perturbative phenomena of quark confinement and spontaneous chiral symmetry breaking (SCSB). Presently, a rigorous treatment of them is only possible in a lattice regularization. Many of the important features of non-abelian gauge theories are already present in $SU(2)$, which simplifies theoretical and numerical calculations. The non-perturbative vacuum can be characterized by various kinds of topological gauge field excitations. An established theory of SCSB relies on instantons~\cite{Belavin:1975fg,Actor:1979in,'tHooft:1976fv,Bernard:1979qt}, which are localized in space-time and carry a topological charge of modulus one. Due to the Atiyah-Singer index theorem~\cite{Atiyah:1971rm,Schwarz:1977az,Brown:1977bj,Narayanan:1994gw}, a zeromode of the Dirac operator arises, which is concentrated at the instanton core. In the instanton liquid model~\cite{Ilgenfritz:1980vj,Diakonov:1984vw}, overlapping would-be zeromodes split into low-lying non-zeromodes which create the chiral condensate. On the other hand, there is plenty of evidence~\cite{'tHooft:1977hy,Vinciarelli:1978kp,Yoneya:1978dt,Cornwall:1979hz,Mack:1978rq,Nielsen:1979xu} for the explanation of confinement by center vortices, closed, quantized magnetic flux tubes with values in the center of the gauge group. These flux tubes are the key ingredient of the vortex model of confinement, which is theoretically appealing and was also confirmed by a multitude of numerical calculations~\cite{DelDebbio:1996mh,Kovacs:1998xm}. Lattice simulations indicate that vortices may be responsible for topological charge and SCSB as well~\cite{deForcrand:1999ms,Alexandrou:1999vx,Engelhardt:2002qs,Hollwieser:2008tq}, and thus unify all non-perturbative phenomena in a common framework. However, to date the potential physical mechanism for symmetry breaking through vortices is still unclear.
A similar picture to the instanton liquid model exists insofar as lumps of topological charge arise at the intersection and writhing points of vortices~\cite{Reinhardt:2000ck, Reinhardt:2001kf}. The present numerical investigation concentrates on the topological charge contributions of vortex intersections and their localization by Dirac zeromodes. All measurements were performed on hyper-cubic lattices of even sizes from $12^4$ up to $22^4$-lattices. We start with the construction of thick planar vortex configurations and derive the topological charge contribution of their intersections. Then we introduce the lattice index theorem for various fermion realizations and representations in more detail. Using the overlap and asqtad staggered Dirac operator, we compute fundamental zeromodes in the background of four vortex intersections. By visualizing the probability density, we compare the distribution of the eigenmode density with the position of the vortices and the topological charge density created by intersection points. For interpreting the results, we refer to existing analytical calculations of the zeromodes for flat vortices with gauge potentials living in the Cartan subalgebra of a $SU(2)$ gauge group~\cite{Reinhardt:2002cm}. A comparison reveals systematic discrepancies, which we tentatively attribute to differences in the values of the Polyakov loops between the analytical~\cite{Reinhardt:2002cm} and our numerical calculations. We argue that configurations with the same field strength, but different Polyakov loops do not give rise to the same eigenmode density. Comments on the physical significance of this observation follow further below. Furthermore, we calculate the Dirac eigenmodes in the adjoint fermion representation. Adjoint fermions are of special interest with respect to the constituents of the QCD vacuum with fractional topological charge. 
For configurations with total topological charge $|Q|=1/2$ no fundamental zeromode is necessarily produced; adjoint fermions, however, are able to create a zeromode. Edwards et al.\ presented in~\cite{Heller:1998cf} some evidence for fractional topological charge on the lattice. Garc\'ia-P\'erez et al.~\cite{GarciaPerez:2007ne}, however, attributed this to lattice artifacts, {\it i.e.}, topological objects with sizes of the order of the lattice spacing. Vortex intersections are examples of fractional topological charge contributions $|Q|=1/2$, which can be related to merons~\cite{Reinhardt:2001hb} and calorons~\cite{Bruckmann:2009pa}. Nevertheless, on lattice configurations with periodic (untwisted) boundary conditions, no single vortex intersection can be realized. Therefore we tried to separate adjoint zeromodes on periodic lattices with four vortex intersections in order to find linear combinations of zeromodes which detect one vortex intersection only. We did not find such linear combinations; the scalar density of the zeromodes peaks at least at two intersection points. Finally we combine "thin" and "thick" vortex sheets, which simulate in some sense twisted boundary conditions. In fact, "thin" vortices are not recognized by adjoint fermions. If we intersect such vortex pairs, adjoint fermions detect a single vortex intersection only, and the adjoint index theorem signals the topological charge $Q=1/2$. Nevertheless, the zeromodes do not seem to localize the vortex intersection but rather extend over the whole lattice, avoiding regions with nonvanishing topological charge density and negative Polyakov lines. \section{Plane Vortices}\label{sec:planevort} We investigate planar vortices parallel to two of the coordinate axes in $SU(2)$ lattice gauge theory. Since we use periodic (untwisted) boundary conditions for the links, vortices occur in pairs of parallel sheets, each of which is closed by virtue of the lattice periodicity.
We use two different orientations of vortex sheets, $xy$- and $zt$-planes with nontrivial links within the vortex thickness. These links vary in the $\sigma_3$ subgroup of $SU(2)$, $U_\mu=\exp(i \phi \sigma_3)$. For $xy$-vortices the $\mu=t$ links are nontrivial in one $t$-slice only; for $zt$-vortices we have nontrivial $y$-links in one $y$-slice. Since the $U(1)$ subgroup remains unchanged, the direction of the flux and the orientation of the vortex are determined by the gradient of the angle $\phi$, which we choose as a linear function of the coordinate perpendicular to the vortex. Upon traversing a vortex sheet, the angle $\phi$ increases or decreases by $\pi$ within a finite thickness $2d$ of the vortex, see Fig.~\ref{fig:phis}. Center projection leads to a (thin) P-vortex at half the thickness ($d$)~\cite{DelDebbio:1996mh}. We distinguish parallel and anti-parallel vortices, {\it i.e.}, vortex sheets with the same resp. opposite orientation. Crossing the first vortex sheet, the angle $\phi_i$ changes by $\pi$; at the second sheet $\phi_1$ continues in the same sense, covering a total range of $2\pi$, while $\phi_2$ returns to its initial value. For an $xy$-vortex with vortex sheets at $z_1$ and $z_2$, the $t$-links in one $t$-slice vary with the angle \begin{equation} \phi_1(z) = \begin{cases} 2\pi & 0 < z \leq z_1-d \\ \pi\left[ 2-\frac{z-(z_1-d)}{2d}\right] & z_1-d < z \leq z_1+d \\ \pi & z_1+d < z \leq z_2-d \\ \pi\left[ 1-\frac{z-(z_2-d)}{2d}\right] & z_2-d < z \leq z_2+d \\ 0 & z_2+d < z \leq N \end{cases} \qquad \phi_2(z) = \begin{cases} 0 & 0 < z \leq z_1-d \\ \frac{\pi}{2d}[z-(z_1-d)] & z_1-d < z \leq z_1+d \\ \pi & z_1+d < z \leq z_2-d \\ \pi\left[ 1-\frac{z-(z_2-d)}{2d}\right] & z_2-d < z \leq z_2+d \\ 0 & z_2+d < z \leq N \end{cases} \label{eq:phi-pl0} \end{equation} Since the gradient of $\phi_1$ ($\phi_2$) points in the same (opposite) direction at the two vortex sheets of a pair, their fluxes are (anti-)parallel, and the total flux through the $zt$-plane is therefore $2\pi$ (zero).
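To make the profiles concrete, the following Python sketch (illustrative only, not from the original study; it assumes the branches of $\phi_1$ apply on the same $z$-intervals as those of $\phi_2$) tabulates both link angles and checks the total angle change across the pair.

```python
import numpy as np

def phi_profiles(N, z1, z2, d):
    """Link angles phi_1 (parallel pair) and phi_2 (anti-parallel pair)
    for plane-vortex sheets at z1, z2 with thickness 2d."""
    z = np.arange(1, N + 1, dtype=float)
    conds = [z <= z1 - d,
             (z > z1 - d) & (z <= z1 + d),
             (z > z1 + d) & (z <= z2 - d),
             (z > z2 - d) & (z <= z2 + d),
             z > z2 + d]
    # phi_2: rises 0 -> pi across the first sheet, returns pi -> 0 at the second
    phi2 = np.piecewise(z, conds,
                        [0.0,
                         lambda z: np.pi / (2 * d) * (z - (z1 - d)),
                         np.pi,
                         lambda z: np.pi * (1 - (z - (z2 - d)) / (2 * d)),
                         0.0])
    # phi_1: changes by pi at each sheet in the same sense (total range 2*pi)
    phi1 = np.piecewise(z, conds,
                        [2 * np.pi,
                         lambda z: np.pi * (2 - (z - (z1 - d)) / (2 * d)),
                         np.pi,
                         lambda z: np.pi * (1 - (z - (z2 - d)) / (2 * d)),
                         0.0])
    return phi1, phi2

phi1, phi2 = phi_profiles(N=22, z1=6, z2=16, d=2)
# parallel pair: both sheets carry flux of the same sign -> total change 2*pi
print(abs(phi1[-1] - phi1[0]))   # 2*pi
# anti-parallel pair: the fluxes cancel -> total change 0
print(abs(phi2[-1] - phi2[0]))   # 0
```

The piecewise-linear ramps of width $2d$ reproduce the finite vortex thickness; the sign of the total angle change distinguishes the parallel from the anti-parallel pair.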
\begin{figure}[htb] \centering \psfrag{z1}{\scriptsize $z_1$} \psfrag{z2}{\scriptsize $z_2$} \psfrag{2d}{\scriptsize $2d$} \psfrag{1}{\scriptsize $1$} \psfrag{0}{\scriptsize $0$} \psfrag{p}{\scriptsize $\pi$} \psfrag{2p}{\scriptsize $2\pi$} \psfrag{12}{\scriptsize $N_z$} \psfrag{f}{$\phi_2$} \psfrag{f1}{$\phi_1$} \psfrag{z}{$z$} a)\includegraphics[width=.4\linewidth]{phip}\hspace{5mm}b)\includegraphics[width=.4\linewidth]{phap} \caption{The link angle a) $\phi_1$ of a parallel and b) $\phi_2$ of an anti-parallel vortex pair. The arrows rotate counterclockwise with increasing $\phi_i$. The vertical dashed lines indicate the positions of the P-vortices. In the shaded areas the links have positive, otherwise negative trace.} \label{fig:phis} \end{figure} Next we consider these thick, planar vortices intersecting orthogonally. As shown in~\cite{Engelhardt:1999xw}, each intersection carries a topological charge with modulus $|Q|=1/2$, whose sign depends on the relative orientation of the vortex fluxes. The plaquette definition simply discretizes the continuum (Minkowski) expression of the Pontryagin index to a lattice (Euclidean) version of the topological charge definition: \begin{gather} Q = - \frac{1}{16\pi^2} \int d^4x \, \mbox{tr}[\tilde{\cal F}_{\mu\nu} {\cal F}_{\mu\nu} ] = - \frac{1}{32\pi^2} \int d^4x \, \epsilon_{\mu\nu\alpha\beta} \mbox{tr}[{\cal F}_{\alpha\beta} {\cal F}_{\mu\nu} ] = \frac{1}{4\pi^2} \int d^4x \, \vec E \cdot \vec B \label{eq:qlatq} \end{gather} We can drop color indices since all our links belong to the $\sigma_3$-generated $U(1)$ subgroup of $SU(2)$. Our $xy$-vortices have only nontrivial $zt$-plaquettes, {\it i.e.}, an electric field $E_z$, while $zt$-vortices bear nontrivial $xy$-plaquettes corresponding to a magnetic field $B_z$. The topological charge is then proportional to $E_zB_z$, hence parallel crossings give $Q=1/2$ and anti-parallel crossings give $Q=-1/2$. 
However, due to the finite lattice spacing (or the finite thickness of vortices in units of the lattice spacing) the numerical absolute values of the lattice charge are slightly smaller than the continuum ones. As mentioned above, the lattice periodicity forbids single vortex sheets, and therefore we get at least four intersections for two vortex pairs, summing up to an even-valued topological charge. We combine $xy$- and $zt$-vortices in central $y$- and $t$-slices with vortex centers at $x_{1,2}$ resp. $z_{1,2}$ located symmetrically around the lattice center and varying vortex thickness $d$. In Fig.~\ref{fig:vortcross3d} we present a 3-dimensional view of the intersecting vortices on a $12^4$-lattice together with topological charge distributions in the $xz$-plane at ($y=6,t=6$), which is the intersection plane. All further density plots of Dirac eigenmodes and topological charge, as well as Polyakov loop distributions, will be shown in this plane, except for a few (specially mentioned) plots in orthogonal planes used to analyze localization properties. The four intersection points of two parallel vortices all carry topological charge $Q=+1/2$, whereas for two anti-parallel vortices and parallel-anti-parallel vortex combinations we get two intersections with $Q=+1/2$ and two with $Q=-1/2$. \begin{figure}[htb] \psfrag{x}{$x$} \psfrag{y}{$y$} \psfrag{z}{$z$} \psfrag{0}{\small $0$} \psfrag{+Q}{\small $+Q$} \psfrag{-Q}{\small $-Q$} \begin{tabular}{ccc} Parallel Vortices & Geometry & Anti-parallel Vortices\\ \includegraphics[keepaspectratio,width=0.33\textwidth]{tdplq2} & \includegraphics[keepaspectratio,width=0.24\textwidth]{plq2-vorclu-2} & \;\includegraphics[keepaspectratio,width=0.33\textwidth]{tdplq0} \end{tabular} \caption{A 3-dimensional section (hyperplane) in $xyz$-direction of a $12^4$-lattice at time $t=6$ (center). The horizontal planes are the $xy$-vortices, which exist only at this time.
The vertical lines are the $zt$-vortices, which continue over the whole time axis. The ticks protruding from the vertical lines extend in time direction. The vortices intersect in four points of the $y=t=6$ plane, giving topological charge $Q=2$ for parallel vortices (lhs) or $Q=0$ for anti-parallel vortices (rhs).} \label{fig:vortcross3d} \end{figure} \section{Fermionic zeromodes of the overlap and asqtad staggered Dirac operator for intersecting center vortex fields} We test the lattice index theorem and analyze the scalar density $\rho(x)=\psi^\dag\psi(x)$ of fermionic zeromodes $\psi$ in the background of intersecting plane vortices. As described in~\cite{Hollwieser:2010mj}, the improved staggered operator also produces eigenmodes which can clearly be identified as zeromodes, and all results in this paper show perfect agreement between the two fermion realizations. The fermionic zeromodes are used to measure the topological charge $Q$ via the Atiyah-Singer index theorem~\cite{Atiyah:1971rm,Schwarz:1977az,Brown:1977bj} \begin{equation} \mathrm{ind}\;D[A] = n_- - n_+ = Q, \end{equation} where $n_-$ and $n_+$ are the number of left- and right-handed zeromodes of the Dirac operator $D$. This equation holds for Wilson and overlap fermions in the fundamental representation. The adjoint version of the index theorem reads \begin{equation} \mathrm{ind}\;D[A] = n_- - n_+ = 2NQ = 4Q \end{equation} where $N=2$ is the number of colors and the additional factor $2$ is due to the fact that the fermion is in the real representation; hence the spectrum of the adjoint Dirac operator $iD$ is doubly degenerate. The eigenvalues of the staggered fermion operator have a twofold degeneracy due to a global charge conjugation symmetry in $SU(2)$. We therefore have $\mathrm{ind}\;D[A] = n_- - n_+ = 2Q$ for fundamental and $\mathrm{ind}\;D[A] = n_- - n_+ = 8Q$ for adjoint (asqtad) staggered fermions.
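For these abelian plane-vortex backgrounds the topological charge of Eq.~(\ref{eq:qlatq}) factorizes into the product of the two fluxes and can be checked in a few lines of Python. The sketch below is illustrative and not the analysis code of the study; in particular, the prefactor $Q=\frac{1}{2\pi^2}\sum\theta_{zt}\,\theta_{xy}$ for the plaquette angles $\theta=\Delta\phi$ is our own working assumption, derived from the normalization $U=\exp(i\phi\sigma_3)$.

```python
import numpy as np

def profile(N, z1, z2, d, parallel):
    """Piecewise-linear link angle phi(z) for a plane-vortex pair of
    thickness 2d with sheets at z1 and z2 (cf. the profiles above)."""
    z = np.arange(1, N + 1, dtype=float)
    vals = [2 * np.pi, 2 * np.pi, np.pi, np.pi, 0.0, 0.0] if parallel \
        else [0.0, 0.0, np.pi, np.pi, 0.0, 0.0]
    return np.interp(z, [0, z1 - d, z1 + d, z2 - d, z2 + d, N], vals)

def flux(phi):
    """Total flux = sum of plaquette angles, reduced to (-pi, pi]."""
    dphi = np.roll(phi, -1) - phi            # periodic forward difference
    theta = (dphi + np.pi) % (2 * np.pi) - np.pi
    return theta.sum()

def top_charge(phi_xy, phi_zt):
    """Q of intersecting xy- and zt-vortex pairs: the electric flux E_z
    lives in one t-slice and the magnetic flux B_z in one y-slice, so the
    4d sum of E_z*B_z factorizes into the product of the two fluxes."""
    return flux(phi_xy) * flux(phi_zt) / (2 * np.pi ** 2)

par = profile(22, 6, 16, 2, parallel=True)
anti = profile(22, 6, 16, 2, parallel=False)
print(top_charge(par, par))    # four parallel crossings: Q = 2
print(top_charge(par, anti))   # two Q=+1/2 and two Q=-1/2 crossings: Q = 0
```

Evaluating the plaquette angles directly yields the continuum values exactly; the plaquette-trace definition used on the lattice gives slightly smaller magnitudes, as noted above.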
\subsection{Fundamental zeromodes for intersecting center vortex fields with topological charge $Q=2$}\label{seq:fundzms} We intersect two parallel vortex pairs with $x_1=z_1=6$ and $x_2=z_2=16$ at $y=t=11$ respectively on a $22^4$-lattice. The four intersection points all carry topological charge contributions of $+1/2$ and therefore sum up to a total topological charge $Q=2$. In agreement with the lattice index theorem we get two overlap and four asqtad staggered zeromodes of negative chirality (left-handed) in the fundamental representation. Fig.~\ref{fig:plq2fund} shows the scalar density plots of the fundamental overlap and asqtad staggered zeromodes with periodic boundary conditions together with the sum of Wilson lines in $y$- and $t$-direction (Polyakov loops) in the intersection plane, as well as the scalar density plot of the two overlap zeromodes with the usual antiperiodic boundary conditions in time direction (asqtad staggered modes again distribute similarly). The individual modes all show the same distribution with four distinct maxima, as, trivially, do all their linear combinations. A close look shows that the zeromodes do not exactly peak at the vortex intersections; rather, they avoid regions with negative Polyakov lines and approach the intersections (or the vortex surfaces) from regions with positive Polyakov lines. This behavior was already observed for spherical vortices in~\cite{Jordan:2007ff}. The sensitivity of Dirac eigenmodes to the Polyakov loop can be exhibited clearly using a single parallel vortex pair ({\it e.g.}, two parallel $xy$-vortices), whose sheets are exchanged by a discrete translation $T_z$ in the $z$-direction by half the lattice length $N_z$. The configuration apparently consists of two identical flux lines because the field strength is invariant under the $z$-translation $T_z$.
However, this operation is not a symmetry of the Dirac operator since it would change the Polyakov loop, which is a gauge-invariant quantity. Therefore the distribution of the scalar density of the Dirac eigenfunctions is not invariant under $T_z$. In fact, the antiperiodic boundary conditions in time direction change the time-like Polyakov lines and the zeromode density shows the shift in the $z$-direction by half the lattice size, see Fig.~\ref{fig:plq2fund}d. \begin{figure}[htb] \centering \psfrag{x}{$x$} \psfrag{y}{$z$} \psfrag{z}{$z$} a)\includegraphics[width=.44\linewidth]{plq2minspbc22a}b)\includegraphics[width=.44\linewidth]{plq2staggpbc22a}\\ c)\includegraphics[width=.44\linewidth]{polvort}d)\includegraphics[width=.44\linewidth]{plq2mins22a} \caption{$Q=2$ configuration: Scalar density plots of a) the two overlap zeromodes, b) the four asqtad staggered zeromodes, both with periodic boundary conditions, and d) the two overlap zeromodes with antiperiodic boundary conditions in time direction. The plot titles indicate the plane positions, the chirality (chi=$\pm1$) and numbers (n=1-2/n=1-4) of (overlap/asqtad staggered) zeromodes and the maximum density peak in the plot. c) Sum of Wilson lines in y- and t-direction (Polyakov-loops) in the intersection plane. P-vortices are indicated with red or black lines.} \label{fig:plq2fund} \end{figure} The configuration is also presumably equivalent to the abelian gauge fields whose zeromodes are derived analytically in~\cite{Reinhardt:2002cm}. Nevertheless, the probability distribution in the intersection plane shown in the reference differs significantly from the one we obtained, namely in that the former exhibits exactly the translation symmetry discussed above. This may indicate a relation to the issue of the Polyakov loop because configurations having the same flux distribution may still differ in the values of their Polyakov loops. 
The values of the Polyakov loops in our study and the calculation of \cite{Reinhardt:2002cm} only coincide in the intersection plane. The choice of boundary conditions is another possible origin of the discrepancy. For the flux of two parallel center vortices the transition matrices need to have a winding number one around the boundary. On the lattice, the transition functions are supposed to be incorporated in the links such that all plaquettes agree with the corresponding integrals of the continuum field. Hence it should be sufficient to use periodic boundary conditions for the links as well as for the fermions. However, this allows further zeromodes which cancel each other in the index theorem (non-topological zeromodes), and so we prefer the usual antiperiodic boundaries in time direction, which reproduce only the relevant zeromodes. \subsection{Adjoint zeromodes for intersecting center vortex fields with topological charge $Q=0$ and $Q=2$} Now we try to locate the fractional topological charge contributions with adjoint eigenmodes of the overlap Dirac operator, which are sensitive also to topological charge contributions of $|Q|=1/2$. For asqtad staggered fermions we find the correct (doubled) numbers of zeromodes in all cases, and the scalar density of the sum of all zeromodes always localizes similarly. We do not find zeromodes localized to a single region with nonvanishing topological charge contributions. Therefore we use the inverse participation ratio (IPR)~\cite{Ilgenfritz:2007xu, Polikarpov:2005ey, Aubin:2004mp, Bernard:2005nv} to quantify the localization of eigenmodes. The IPR of a normalized ($\sum_x\rho_i(x)=1$) field $\rho_i(x)$ is defined as \begin{equation} I=N\sum_{x}\rho_i^2(x) \end{equation} where $N$ is the number of lattice sites $x$.
With this definition, $I$ characterizes the inverse fraction of sites contributing significantly to the support of $\rho(x)$, {\it i.e.}, a high IPR indicates that the eigenmode is localized to a few lattice points only. We perform systematic and random IPR maximization procedures for linear combinations of zeromodes in order to get single eigenmode peaks localized to regions with nonvanishing topological charge contribution. {\bf Q=0:} For the configuration with topological charge $Q=0$ we intersect two anti-parallel vortices with the same vortex centers as for the $Q=2$ configuration described above ($x_1=z_1=6$ resp. $x_2=z_2=16$) at $y=t=11$ respectively on a $22^4$-lattice. For this configuration we do not get fundamental zeromodes, but with periodic boundary conditions we find two adjoint overlap (four asqtad staggered) zeromodes of each chirality. Fig.~\ref{fig:plq0adj} shows the scalar density plots of the adjoint overlap zeromodes for the $Q=0$ configuration. The left-handed zeromodes (Fig.~\ref{fig:plq0adj}a) peak at the intersection point $(6,11,6,11)$ of topological charge $Q=1/2$ with a maximum density of $8.595\cdot10^{-6}$, but a second maximum can be found near the intersection plane at $(16,12,6,12)$, with a maximum value of $8.601\cdot10^{-6}$ (Fig.~\ref{fig:plq0adj}b), which is next to the intersection with $Q=-1/2$ at $(16,11,6,11)$. These zeromodes rather localize the $xy$-vortex sheet at $z=6$, emphasizing the intersections, whereas the right-handed zeromodes of Fig.~\ref{fig:plq0adj}c locate the other $xy$-vortex sheet at $z=16$, but with notches at the two intersection points. The individual zeromodes of the same chirality show identical scalar density distributions; therefore it is not possible to find linear combinations of the zeromodes which locate single vortex intersections.
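The behavior of the IPR for extended versus localized densities can be illustrated with a short Python sketch (the densities are artificial and only serve to show the two limiting cases):

```python
import numpy as np

def ipr(rho):
    """Inverse participation ratio I = N * sum_x rho(x)^2 for a
    normalized density rho (sum_x rho(x) = 1)."""
    rho = np.asarray(rho, dtype=float)
    rho = rho / rho.sum()                  # enforce the normalization
    return rho.size * np.sum(rho ** 2)

N = 12 ** 4                                # number of lattice sites
uniform = np.ones(N)                       # maximally extended density
peaked = np.zeros(N)
peaked[0] = 1.0                            # density on a single site

print(ipr(uniform))   # 1: support on all sites
print(ipr(peaked))    # N: support on one site only

# The IPR maximization over a degenerate set of zeromodes psi_i amounts to
# maximizing ipr(|sum_i c_i psi_i|^2) over the coefficients c_i, e.g. by a
# gradient method started from random points, as described in the text.
```

The two limits bracket all intermediate cases: $I=1$ for a uniform mode and $I=N$ for a mode concentrated on a single site.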
\begin{figure}[htb] \centering \psfrag{x}{$x$} \psfrag{z}{$z$} \psfrag{-6}{} \psfrag{10a}{\scriptsize $10^{-6}$} a)\includegraphics[width=.3\linewidth]{plq0madjpbc22a}b)\includegraphics[width=.3\linewidth]{plq0madjpbc22b}c)\includegraphics[width=.3\linewidth]{plq0padjpbc22a} \caption{$Q=0$ configuration: Scalar density plots of the two adjoint overlap zeromodes (n=1-2) with negative chirality (chi=-1) in the $xz$-plane at a) $y=t=11$ (intersection plane, red lines indicate the P-vortices), b) $y=t=12$, and c) the two right-handed (chi=1) adjoint overlap zeromodes at $y=t=11$.} \label{fig:plq0adj} \end{figure} {\bf Q=2:} The configuration with topological charge $Q=2$ on the $22^4$-lattice from above gives 16 adjoint asqtad staggered zeromodes with negative chirality for antiperiodic boundary conditions in time direction. For adjoint overlap fermions we use a $16^4$-lattice with vortex centers $x_1=z_1=4$ and $x_2=z_2=12$ at $y=t=8$ respectively, and we find eight left-handed zeromodes. With periodic boundary conditions we find two right-handed (non-topological) and ten left-handed zeromodes. Fig.~\ref{fig:plq2adj} shows the scalar density plots of adjoint overlap and asqtad staggered zeromodes of the $Q=2$ configuration. The zeromodes locate the $xy$-vortex pairs, but even individual modes do not show single peaks locating one intersection with $Q=1/2$. Linear combinations of the eight zeromodes with negative chirality show six distinct IPR maxima. This was obtained by a systematic study with the eight coefficients of the linear combination varying from $-10$ to $10$ in integer steps. Further, we started from 20,000 random points in the parameter space and determined the nearest maximum by the gradient method. Each of the six maxima was found between 2,000 and 6,000 times, and no other maxima were obtained. In Fig.~\ref{fig:plq2adj}e we plot a 2D cut through the first three IPR maxima in the 8D parameter space of linear combinations.
The scalar density of the linear combination of the eight zeromodes with maximal IPR is presented in Fig.~\ref{fig:plq2adj}f; it still peaks at two vortex intersections. The ten zeromodes for periodic boundary conditions look more symmetric, but again no single mode locates one vortex intersection. We again performed a systematic search with the ten coefficients of the zeromodes varying from $-6$ to $6$ in integer steps, as well as 20,000 random starting points. In this case we found only five maxima, each between 3,000 and 5,000 times. Again, it is not possible to find linear combinations of the zeromodes which localize at single vortex intersections. \begin{figure}[p] \centering \psfrag{x}{$x$} \psfrag{z}{$z$} a)\includegraphics[width=.44\linewidth]{plq2stadj22a}b)\includegraphics[width=.44\linewidth]{plq2madj16a} \textcolor{white}{empty line}\\ c)\includegraphics[width=.44\linewidth]{plq2padjpbc16a}d)\includegraphics[width=.44\linewidth]{plq2madjpbc16a} \psfrag{IPR}[r][t][.8][0]{IPR} e)\includegraphics[width=.52\linewidth]{iprmaxpeaks2}f)\includegraphics[width=.48\linewidth]{plq2madjmaxipr16} \caption{$Q=2$ configuration: Scalar density plots of the a) 16 asqtad staggered and b) eight overlap left-handed adjoint zeromodes for antiperiodic boundary conditions; the modes locate the $xy$-vortex pair. Further, c) two right-handed (non-topological) and d) ten left-handed adjoint overlap zeromodes for periodic boundary conditions. e) Plane through the highest three IPR maxima in the parameter space of linear combinations of the eight zeromodes. The peaks are very broad and easy to identify. f) The scalar density of the linear combination of the eight zeromodes with maximal IPR still peaks at two vortex intersections.} \label{fig:plq2adj} \end{figure} \subsection{Adjoint zeromodes for center vortex fields with a single "thick" intersection and topological charge $Q=1/2$} We analyze a configuration of "thin-thick" vortex intersections apparently having topological charge $|Q|=1/2$.
The profile of a "thin-thick" vortex is plotted in Fig.~\ref{fig:1hlfconfig}a. The thin vortex sheet is defined by the jump of the $y$- or $t$-link from $+1$ to $-1$ at the boundary. The thick vortex is located symmetrically around the center of the lattice with thickness $d$. The thin-thick $xy$- and $zt$-vortices still intersect at four points, but the plaquette or hypercube definitions of topological charge do not recognize the thin vortex sheets and therefore only measure one topological charge contribution $Q=1/2$ of the "thick" vortex intersection, see Fig.~\ref{fig:1hlfconfig}b. \begin{figure}[htb] \centering \psfrag{x}{$x$} \psfrag{z}{$z$} \psfrag{P}[c][c][.8][0]{$P$} \psfrag{profile}[r][c]{\scriptsize $\phi(x)$} \psfrag{polline}[r][c]{\scriptsize tr$U_y(x)$} \psfrag{sigma3}[r][c]{\scriptsize $\sigma_3$-comp.} \psfrag{2d}{\scriptsize $d$} \psfrag{1}{\scriptsize $1$} \psfrag{0}{\scriptsize $0$} \psfrag{3}{\scriptsize $3$} \psfrag{f}{\textcolor{blue}{$\phi$}} \psfrag{12}{\scriptsize $N_z$} \psfrag{p}{$\pi$} \psfrag{-1}{\scriptsize $-1$} \psfrag{t}{\scriptsize \textcolor{red}{${\bf Tr} U_A$}} a)\includegraphics[width=.44\linewidth]{phithinthicks}$\qquad$ b)\includegraphics[width=.4\linewidth]{topdenthinthicks2} \caption{"$Q=1/2$ configuration": a) Link profile of a "thin-thick" plane vortex, the link angle $\phi$ (blue) decreases from $\pi$ to $0$ within a certain vortex thickness $d$. The arrows (links) rotate counterclockwise with decreasing $\phi$. The thin vortex is given by the jump at the boundary. The red dashed line shows the trace of adjoint links ${\bf Tr} U_A$ (see text below) b) Topological charge density in the intersection plane.} \label{fig:1hlfconfig} \end{figure} We compute the Dirac eigenmodes on a $22^4$-lattice with two intersecting "thin-thick" $xy$- and $zt$-vortices with vortex thickness $d=20$ at $t=y=11$, respectively. We get no fundamental zeromodes for the above topological charge "$|Q|=1/2$ configuration". 
But the adjoint Dirac operator does not recognize the thin vortices, just like the field theoretic operators we used to measure topological charge density. We find two adjoint overlap and four adjoint staggered zeromodes with negative chirality, which due to the index theorem again results in $Q=1/2$. For the adjoint fermions this configuration truncates three of the four vortex intersections and in this way simulates a situation related to the one achieved by twisted boundary conditions, namely, a single detectable intersection. Therefore it is possible to have a configuration that looks like having fractional topological charge. The eigenmode density distributions of overlap and asqtad staggered zeromodes are identical, Fig.~\ref{fig:1hlfmodes} shows the former for the intersection plane and for orthogonal planes to it at the intersection point. The modes are clearly sensitive to the traces of the adjoint link $U_A$ (see also Fig.~\ref{fig:1hlfconfig}a) \begin{equation} ({\bf Tr} U)({\bf Tr} U)^\dagger=({\bf Tr} U)^2=1+{\bf Tr} U_A, \label{eq:adjtrace} \end{equation} which in our case define the adjoint Polyakov lines $P_A$ (Wilson lines), since we have only nontrivial $t$-links in one time-slice and nontrivial $y$-links in one $y$-slice. The zeromodes prefer regions of positive Polyakov lines and avoid negative Polyakov lines with respect to the boundary conditions. For the antiperiodic boundary conditions in time direction the signs of the Polyakov lines are exchanged. Hence, the zeromode densities peak at the $xy$-vortex center at $z=t=11$ and avoid the $zt$-vortex center at $x=y=11$, or rather peak at the boundary parallel to the $zt$-vortex at $x=0$ or $x=22$ in the $y=11$-plane. For completeness we should mention that we get exactly the same results if we analyze a "$Q=-1/2$ configuration", which can be realized if we rotate the links of one of the thick vortices in the opposite direction. 
In this case we get two adjoint overlap and four adjoint staggered zeromodes of {\it positive} chirality which show the same behavior. We further vary the vortex thickness $d$, see Fig.~\ref{fig:1hlfmodes2} and do find analogue results: The adjoint zeromodes approach the thick vortex intersection from regions of positive Polyakov lines, but do not localize exactly at the region with topological charge contribution $|Q|=1/2$. They rather spread over the whole lattice avoiding regions of negative traces of adjoint Wilson lines. \begin{figure}[p] \psfrag{x}{$x$} \psfrag{z}{$z$} \psfrag{y}{$y$} \psfrag{t}{$t$} \psfrag{10a}{\scriptsize $10^{-7}$} \centering a)\includegraphics[width=.44\linewidth]{thinthicksmadj2022}b)\includegraphics[width=.44\linewidth]{thinthicksmadj2022yt}\\ \textcolor{white}{emty line}\\ c)\includegraphics[width=.435\linewidth]{thinthicksmadj2022xy}d)\includegraphics[width=.44\linewidth]{thinthicksmadj2022zt}\\ \textcolor{white}{emty line}\\ \caption{"$Q=1/2$ configuration": Scalar eigenmode density of two adjoint overlap (identical to four adjoint asqtad staggered) zeromodes of negative chirality in various planes through the thick-thick vortex intersection: a) $xz$-plane (intersection plane): The zeromodes avoid regions of negative adjoint Polyakov (Wilson) lines ($P_A$, red dots) with respect to boundary conditions (see text above, Eq.~(\ref{eq:adjtrace}) and Fig.~\ref{fig:1hlfconfig}a) and therefore do not peak at the topological charge contribution $Q=1/2$ of the "thick-thick" vortex intersection at $x=z=11$; the "thin-thin" and "thin-thick" intersections are not recognized by the Dirac modes; b) $yt$-plane: The antiperiodic boundary conditions invert the profile in time- compared to the one in $y$-direction, hence the zeromodes prefer the $xy$-vortex at $t=11$ and avoid the $zt$-vortex at $y=11$ (the lines indicate the $xy$ (red) and $zt$ (green) vortex sheets in the $t=11$ resp. 
$y=11$ slices); c) $xy$-plane: The zeromodes reflect the profile of the adjoint $y$-Wilson lines ($P_A$, red dots) of the $zt$-vortex in $x$-direction at $y=11$; d) $zt$-plane: The zeromodes reflect the profile of the adjoint Polyakov lines ($P_A$, red dots) of the $xy$-vortex in $z$-direction at $t=11$, inverted by the antiperiodic boundary conditions in time direction.} \label{fig:1hlfmodes} \end{figure} \begin{figure}[p] \centering \psfrag{x}{$x$} \psfrag{z}{$z$} a)\includegraphics[width=.44\linewidth]{thinthicksmadj1622}b)\includegraphics[width=.44\linewidth]{thinthicksmadj1222} \textcolor{white}{emty line}\\ c)\includegraphics[width=.44\linewidth]{thinthicksmadj822}d)\includegraphics[width=.44\linewidth]{thinthicksmadj422} \textcolor{white}{emty line}\\ e)\includegraphics[width=.44\linewidth]{thinthickstadj20822}f)\includegraphics[width=.44\linewidth]{thinthickstadj8222} \caption{"$Q=1/2$ configuration": Scalar eigenmode density of two adjoint overlap (n=1-2) or four adjoint asqtad staggered (n=1-4) zeromodes of negative chirality in the intersection plane of $zt$- and $xy$-vortices for various vortex thicknesses: a) $d_x=d_z=16$ b) $d_x=d_z=12$ c) $d_x=d_z=8$ d) $d_x=d_z=4$ d) $d_x=20$, $d_z=8$ f) $d_x=8$, $d_z=2$. The adjoint zeromodes approach the thick vortex intersection at $x=z=11$ from regions of positive, adjoint Polyakov lines ($P_A$), but do not localize exactly the topological charge contribution $Q=1/2$ because they strictly avoid regions of negative, adjoint Polyakov (Wilson) lines. We also plot the profile of the adjoint Polyakov (Wilson) lines ($P_A$, red dots).} \label{fig:1hlfmodes2} \end{figure} \section{Dirac modes in the background of single center vortex pairs} Since we found that the Dirac zeromodes seem to be more sensitive to the Polyakov (Wilson) lines than to topological charge contributions we analyze configurations of single vortex pairs apparently without topological charge. 
We use parallel $xy$-vortices with $t$-links in one time-slice ($t_0$) varying in $z$-direction according to Eq.~\ref{eq:phi-pl0}, shown in Fig.~\ref{fig:phis}a). Due to the translation symmetry in $x$ and $y$, {\it i.e.}, parallel to the vortex planes, and the fact that the links vary only within an abelian U(1)-subgroup of $SU(2)$, our configuration corresponds to an abelian 2-dimensional problem which we can compare with the abelian center vortices on $\mathbbm T^2$ considered in~\cite{Reinhardt:2002cm}. The nontrivial transition functions used in~\cite{Reinhardt:2002cm} are incorporated in our nontrivial links, and the analytical result should be equivalent to our configuration with periodic fermion boundary conditions. The occurrence of zeromodes can be related to a U(1) index theorem for $\mathbbm T^2$: for $m_+$ vortices and $m_-$ anti-vortices, Ref.~\cite{Reinhardt:2002cm} finds $\Delta m/2 = |m_+ - m_-|/2$ zeromodes (for more details see also~\cite{Falomir:1996as}). Since we work in four dimensions and with $SU(2)$, we expect to find four times as many zeromodes: we have twice as many color indices and twice as many spinor components. This is indeed our result for periodic boundary conditions in the $x$- or $y$-directions and antiperiodic boundary conditions in the $z$- or $t$-directions, {\it i.e.}, perpendicular to the vortex surfaces~\footnote{For antiperiodic boundary conditions in the $x$- or $y$-directions, {\it i.e.}, parallel to the vortex surfaces, we do not find zeromodes. Because of the translation invariance in $x$ and $y$, the Dirac modes should contain a free wave factor with a wave vector $k=(k_x, k_y)$. But for antiperiodic boundary conditions the smallest allowed modulus of $k$ is $\frac{\pi}{aN} \neq 0$ and the corresponding lowest eigenvalue must be greater than zero.}.
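The momentum counting in the footnote admits a quick numerical check. The following sketch is our own illustration (the lattice extent $N=22$ is an arbitrary choice, not one of the lattices used above): periodic boundary conditions allow momenta $k_n=2\pi n/(aN)$, which include $k=0$, while antiperiodic boundary conditions give $k_n=(2n+1)\pi/(aN)$, whose smallest modulus is $\pi/(aN)$.

```python
import numpy as np

def min_momentum(N, antiperiodic):
    """Smallest |k| allowed on an N-site lattice (lattice spacing a = 1)."""
    n = np.arange(N)
    k = (2*n + 1)*np.pi/N if antiperiodic else 2*np.pi*n/N
    # fold the momenta into the Brillouin zone (-pi, pi]
    k = (k + np.pi) % (2*np.pi) - np.pi
    return np.min(np.abs(k))

N = 22  # arbitrary lattice extent for illustration
assert np.isclose(min_momentum(N, antiperiodic=False), 0.0)
assert np.isclose(min_momentum(N, antiperiodic=True), np.pi/N)
```

Since the free-wave factor contributes $k^2$ to the eigenvalue, the nonzero minimal momentum in the antiperiodic case lifts the would-be zeromodes, as stated in the footnote.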
We find two overlap zeromodes of each chirality\footnote{With the introduction of a little noise, these zeromodes would mix and be lifted to low-lying modes.} spreading over the whole lattice parallel to the vortex surfaces, {\it i.e.}, their scalar density distribution is constant in the $x$- and $y$-directions, strictly avoiding regions with negative Polyakov values. At the time-slice with nontrivial $t$-links ($t_0$) the density distribution clearly follows the profile of these links, which correspond to the Polyakov loops. We confirm these results also for staggered fermions and adjoint representations with adjoint Polyakov traces $P_A=P^2-1$ (see Eq.~\ref{eq:adjtrace}). Fig.~\ref{fig:pl2modes} shows our Polyakov profiles $P$ and $P_A$ together with scalar density plots of the fundamental and adjoint Dirac zeromodes with periodic and antiperiodic boundary conditions in $t$-direction (the latter invert the Polyakov profiles, corresponding to a multiplication by $-1$). The individual zeromodes for both chiralities show equivalent density distributions, but they differ from the analytic results found in~\cite{Reinhardt:2002cm}. Further, the lowest non-zeromodes also propagate parallel to the vortex surfaces, showing similar density distributions but with opposite response to the Polyakov profiles, {\it i.e.}, exchanged boundary conditions.
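The trace relation $P_A=P^2-1$ used above can be verified with a short numerical sketch of our own (assuming, as the relation suggests, that $P$ denotes the full fundamental trace): for any $U\in SU(2)$, the trace of the adjoint representation matrix $R_{ab}=\tfrac{1}{2}\,\mathrm{tr}(\sigma_a U \sigma_b U^{\dagger})$ equals $(\mathrm{tr}\,U)^2-1$.

```python
import numpy as np

# Pauli matrices
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

rng = np.random.default_rng(1)
n = rng.normal(size=3)
n /= np.linalg.norm(n)
theta = rng.uniform(0.0, np.pi)
# random SU(2) element U = cos(theta) 1 + i sin(theta) n.sigma
U = np.cos(theta)*np.eye(2) + 1j*np.sin(theta)*sum(ni*si for ni, si in zip(n, sig))

# adjoint representation R_ab = (1/2) tr(sigma_a U sigma_b U^dagger)
R_adj = np.array([[0.5*np.trace(sa @ U @ sb @ U.conj().T) for sb in sig]
                  for sa in sig])

P = np.trace(U).real        # fundamental trace
P_A = np.trace(R_adj).real  # adjoint trace
assert np.isclose(P_A, P**2 - 1)
```

In particular, $P_A$ stays non-negative wherever $|P|\geq 1$ and reaches $-1$ only where $P=0$, which is why the fundamental and adjoint modes respond differently to the same Polyakov profile.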
\begin{figure}[p] \centering \psfrag{x}{$x$} \psfrag{y}{$y$} \psfrag{z}{$z$} \psfrag{t}{$t$} a)\includegraphics[width=.44\linewidth]{pl2fun}b)\includegraphics[width=.46\linewidth]{profil} \textcolor{white}{emty line}\\ c)\includegraphics[width=.46\linewidth]{pl2funxz}d)\includegraphics[width=.44\linewidth]{pl2funap} \textcolor{white}{emty line}\\ e)\includegraphics[width=.44\linewidth]{pl2adjxz}\; f)\includegraphics[width=.48\linewidth]{pl2adjap} \caption{Scalar density plots of Dirac zeromodes in the background of a single $xy$-vortex pair at $t=6$ and $z_{1,2}=3.5,9.5$ (see Eq.~\ref{eq:phi-pl0}): a,c) two fundamental zeromodes for each chirality (the four individual modes have equivalent density distributions) show the Polyakov profile at $t=6$ and are constant in $x$- ($y$-) direction; b) profile of the (adjoint) Polyakov traces $P$ ($P_A$) for our configuration (indicated with red dots in the density plots); d) two fundamental zeromodes for each chirality with antiperiodic boundary conditions show the inverted Polyakov profile at $t=6$; e,f) four adjoint zeromodes for each chirality (the eight individual modes again have equivalent density distributions) with periodic resp. antiperiodic boundary conditions.} \label{fig:pl2modes} \end{figure} The above configuration consists of two identical flux lines because the field strength is invariant under a $z$-translation $T_z$ by half the lattice size, which exchanges the two vortices. But this symmetry is spoilt by the fact that in the $zt$-plane we need to distinguish two regions which differ by the value of the Polyakov loop $P$. More generally, we could multiply all links $U_t$ at a fixed $t$-coordinate (not necessarily $t_0$) by an arbitrary $SU(2)$-element. While this would not alter any plaquettes ({\it i.e.}, the flux), $P$ would change at a given point $z$ and could be modified to any desired value. Invariably, however, the Polyakov loop varies by a center element at the locations of the vortices. 
This simply follows from Stokes theorem. Two Polyakov loops on either side of the vortex can be deformed into a single curve encircling it. The line integral along this curve equals the center element of the vortex in its interior. Consequently, $T_z$ corresponds to a center transformation switching the sign of $P$. Since the Polyakov loop is a gauge-invariant quantity, the configurations before and after a translation with $T_z$ cannot be gauge-equivalent, and in general the Dirac operator will not be invariant under $T_z$. Since the scalar densities of the zeromodes and the near-zeromodes are asymmetric with respect to the translation $T_z$, and the only effect of $T_z$ is a change of $P$, we are forced to conclude that the Dirac eigenmodes are sensitive to the value of the Polyakov loop. For completeness we mention the anti-parallel vortex (Fig.~\ref{fig:phis}b). Since the total flux vanishes, in accord with the 2-dimensional U(1) index theorem~\cite{Falomir:1996as} we observe no zeromodes. In this case the translation $T_z$ changes both the direction of the fluxes and the value of $P$. Still, the lowest-lying eigenmodes are sensitive to the value of the Polyakov loop with respect to boundary conditions. \section{Conclusions} In this paper we have studied low lying asqtad staggered and overlap Dirac operator eigenmodes in the background of intersecting vortex configurations. Each intersection region carries topological charge $\pm 1/2$. We found that the zeromodes are sensitive to the value of the Polyakov loops (Wilson lines), avoiding regions with negative Polyakov loops. In particular, we studied configurations with two parallel planar vortices, which can be exchanged by a discrete translation $T$ by half the lattice size. While the field strength (and the topological charge distribution obtained from the field strength) is invariant under this translation, the Dirac operator is not, since the translation modifies (shifts) the Polyakov loop.
Therefore, the distribution of the scalar density of the Dirac operator zeromodes is not forced to be invariant under this translation, and we do find that it is not. This is in contrast to the analytical results of Ref.~\cite{Reinhardt:2002cm}, which appear to be invariant under the discrete translations. This happens even though our planar vortex configurations are presumed to be equivalent to the abelian gauge configurations studied there. We think the difference might be related to differences in the Polyakov loop. Configurations with the same flux distribution can differ in the distribution of the Polyakov loop values, which can lead to differences in the scalar density of Dirac operator zeromodes, as those tend to avoid regions of negative Polyakov loops. We find that the zeromodes in the fundamental representation are not quite spread along the center vortex sheets and do not quite peak at the vortex intersection points. Rather, they approach these topological structures from regions with positive Polyakov lines while avoiding regions of negative Polyakov lines. With adjoint fermions we tried to find zeromodes which identify exactly one topological charge contribution $Q=1/2$ of a single vortex intersection. We analyzed linear combinations of zeromodes which maximize the inverse participation ratio (IPR), {\it i.e.}, localize as much as possible. We found that the scalar eigenmode density of adjoint fermions always peaks at least at two intersections. Further, we analyzed a lattice configuration with only one "thick" vortex intersection. Since both the gluonic and (adjoint) fermionic definitions of topological charge fail to detect intersections with at least one thin vortex, both definitions of $Q$ merely signal the value $Q=1/2$.
We find it remarkable that the two corresponding adjoint zeromodes are not localized to the region with nonvanishing topological charge contribution but spread over the whole lattice, avoiding regions of negative traces of adjoint Polyakov (Wilson) lines. Therefore we conclude that the Dirac zeromodes are more sensitive to the Polyakov (Wilson) lines than to the topological charge contributions. Finally we confirm this conjecture with Dirac eigenmodes for configurations with nontrivial Polyakov profiles of single center vortex pairs, apparently without topological charge. In principle, the sensitivity of the Dirac operator to the Polyakov (Wilson) lines with respect to boundary conditions is nothing new. In calorons for example {\it the zeromode hops with the boundary conditions in the compact direction}~\cite{GarciaPerez:1999ux,Bruckmann:2007ru} between the monopole constituents, where the Polyakov loop actually passes through $\mathbbm{1}$ and $-\mathbbm{1}$. Since calorons seem only to be important in the deconfinement phase, {\it i.e.} with finite temporal lattice size, these effects may be interpreted as finite size effects. Similarly, our specially constructed vortex configurations discussed in this paper, lacking a natural size for a correlation length, could be viewed as having large finite size effects. Thus our observations may not completely hold in a realistic vortex vacuum, where the correlations contained in vortices and Dirac zeromodes decay within a finite length, which should ideally be much smaller than the extent of the lattice. Nevertheless, a better understanding of the sensitivity of Dirac operator eigenmodes to Polyakov (Wilson) lines in realistic situations should be pursued, and might shed more light on the mechanism of chiral symmetry breaking. We plan to study this in future work. \acknowledgments{We thank Rob Pisarski and Jeff Greensite for the suggestion to investigate fractional topological charge configurations with adjoint fermions. 
We are grateful to Lorenz von Smekal and Falk Bruckmann for helpful discussions. This research was partially supported by the Austrian Science Fund (``Fonds zur F\"orderung der Wissenschaften'', FWF) under contract P22270-N16 (R.H.).} \bibliographystyle{unsrt}
\section{Introduction} In loop quantum gravity the field variable is a self-dual connection instead of the metric, and this new field variable is called the Ashtekar connection ~\cite{a1, a2}. The Wheeler-DeWitt equation is then reformulated in terms of the traces of the holonomies of the Ashtekar connection ~\cite{b1, b2}. In loop quantum gravity, the area and volume are represented by operators with discrete eigenvalues ~\cite{c1}. This discretization occurs near the Planck scale, and it leads to a modification of the usual energy momentum dispersion relation at the Planck scale ~\cite{lqg1, lqg2}. Even though this modified dispersion relation reduces to the usual dispersion relation in the IR limit, it considerably deviates from the usual energy momentum dispersion relation in the UV limit. This behavior of a modification in the UV limit is also observed in Horava-Lifshitz gravity, which is motivated by a Lifshitz scaling between space and time ~\cite{12a, 12b}. In fact, such a Lifshitz scaling also produces a deformation of the standard energy momentum dispersion relation in the UV limit of the theory ~\cite{12d, 12e}. The Lifshitz deformation of supergravity theories in the UV limit has also been studied ~\cite{lf1, lf2, lf4, lf5}. It is also possible to motivate a different form of the modified dispersion relation ~\cite{6, 7, 6ab, 7ab} using the high energy cosmic ray anomalies ~\cite{1, 2}. The modification of the standard dispersion relation has motivated the construction of double special relativity, where the Planck energy acts as another universal constant ~\cite{4,5}. This theory of double special relativity has been constructed using a non-linear modification of the Lorentz group. Such modifications to the dispersion relation have also been obtained from string field theory ~\cite{7a,kuch,kuch1}.
Thus, along with the phenomenological reasons, there are strong theoretical reasons to modify the energy momentum dispersion relation ~\cite{6, 7, 6ab, 7ab}. It is possible to generalize double special relativity to curved space-time, and the resultant theory is called gravity's rainbow ~\cite{gr, gr12}. In this theory the metric depends on the energy of the probe used to analyze the geometry. The energy dependence is introduced into the metric using rainbow functions. As these rainbow functions depend on the energy of the probe, which in turn depends implicitly on the coordinates, they cannot be removed by rescaling ~\cite{gr14, gr14ab, gr17ab, gr18ab}. In fact, this is expected as gravity's rainbow is related to a Lifshitz deformation of geometries ~\cite{gr14}. Here the energy of the probe is converted into the length scale at which the probe is investigating the geometry, and this in turn has an upper bound. Hence in gravity's rainbow, we cannot probe the space-time below the Planck scale, as it is not possible to obtain an energy greater than the Planck energy. Experimental constraints on the rainbow functions from various experiments have been proposed ~\cite{w1}. The effect of these rainbow functions on the black hole information paradox has also been investigated ~\cite{w2}. Such a modification of a higher curvature gravity by gravity's rainbow has been studied, and used to analyze its quasinormal modes ~\cite{w5}. Gravity's rainbow geometries have been used to study the modification to the physics of neutron stars ~\cite{w7}. In all these systems, gravity's rainbow only changes the UV Planck scale behavior of the system. However, the system behaves as the original un-deformed system in the IR limit. Furthermore, the effect of the rainbow functions on a system depends on the kind of rainbow functions used to deform it ~\cite{w1}.
Thus, in this paper, we will use the rainbow functions obtained from the modification of the energy momentum dispersion relation due to loop quantum gravity ~\cite{y1, y2}. It has been proposed that a naked singularity will not form due to loop quantum gravitational effects, and this was done by analyzing the non-perturbative semi-classical modifications to a collapsing system near the singularity ~\cite{singu}. Here we will demonstrate that these results can be obtained by analyzing the modification of the energy momentum dispersion relation from loop quantum gravity ~\cite{gr, gr12}. To analyze the effect of such loop quantum gravitational modifications on the formation of naked singularities, we will use gravity's rainbow. It has been proposed that naked singularities cannot form due to the weak cosmic censorship conjecture ~\cite{cc12, cc14}. However, several violations of the cosmic censorship conjecture have been studied, and thus it seems that it is possible to form a naked singularity in space ~\cite{cc16, cc18, cc19, cc20}. In fact, it has been argued that the accretion properties of a collapsing system can distinguish between a naked singularity, a wormhole and a black hole ~\cite{si12}. It has been suggested that a naked singularity can form during the critical collapse of a scalar field ~\cite{si14}. The gravitational lensing by a strongly naked null singularity has been investigated ~\cite{si15}, and it has been demonstrated that this divergence is not logarithmic. It has also been suggested that the formation of a naked singularity can be tested using astrophysical observations ~\cite{si17}. The shadow of a naked singularity without a photon sphere has been analyzed ~\cite{si16}. It has been argued that naked singularities cannot form due to quantum gravitational effects ~\cite{si18, si19}. Thus, it becomes important to analyze the effect of loop quantum gravity on the formation of naked singularities.
As such modifications to black hole geometries have already been studied ~\cite{gr14, gr14ab, gr17ab, gr18ab}, we will use gravity's rainbow to analyze the effect of loop quantum gravity on the formation of naked singularities. The effect of rainbow functions on the formation of naked singularities has also been studied, and it was observed that the rainbow deformation can violate the cosmic censorship conjecture ~\cite{cca12}. However, we will argue that this cannot be the case, as due to the rainbow deformation it is not possible to probe space-time below the Planck scale. This prevents the formation of naked singularities. \section{Collapse in gravity's rainbow} To properly analyze the formation of naked singularities, and the weak cosmic censorship conjecture, we will first analyze the deformation of a solution of the Einstein equations by rainbow functions which are consistent with loop quantum gravity ~\cite{lqg1, lqg2}. It has been proposed that the Einstein equations depend on the energy due to the rainbow deformation, $ {G_{\mu \nu } (E / E_p)=\mathcal{R}_{\mu\nu}(E / E_p)-\frac{1}{2}g_{\mu\nu}(E / E _p)\mathcal{R}(E / E_p)=8{\pi T}_{\mu \nu }}$ (with the standard units $G=c=1$). Here $E_p$ is the Planck energy, and $E$ is the energy at which the system is probed. Thus, the geometry depends on the energy used to probe it. However, for $E<< E_p$, we can neglect the rainbow deformation. It is possible to incorporate this rainbow deformation into the spherically symmetric line element in comoving coordinates $(t, \; r,\; \theta, \; \phi)$ as (with (-,+,+,+) as the signature) \begin{align}\label{1} ds^{2}= -\frac{e^{2 \lambda (t,r)}}{f(E)^2} dt^2 +\frac{e^{2 \psi (t,r)}}{g(E)^2} dr^2 +\frac{R(t,r)^{2}}{g(E)^2} d\theta^{2} +\frac{R(t,r)^{2} \sin^{2}{\theta}}{g(E)^2} d\phi^{2} \end{align} where $R(t,r)$ is the physical radius at time $t$ of the shell labeled by $r$. Here $f(E)$ and $g(E)$ are rainbow functions which make the metric depend on the energy of the probe $E$.
As gravity's rainbow has to reduce to usual general relativity in the IR limit, we expect these rainbow functions to satisfy \begin{align} \lim_{E / E _p\to 0} f(E/ E _p) &= 1 & \lim_{E /E _p \to 0} g(E / E_p) & = 1 \end{align} It is possible to obtain a specific form of these rainbow functions from loop quantum gravity ~\cite{lqg1,lqg2}. We will use these specific rainbow functions, and demonstrate that they prevent the formation of naked singularities. However, before that we observe that any rainbow functions will limit the scale to which we can probe the system. It is possible to translate the uncertainty $\Delta p \geq 1/ \Delta x$ into a bound on the energy of the probe $E$, as $E \geq 1/\Delta x$. Here $\Delta x$ corresponds to the scale to which any length in the system can be measured. We cannot take this length to be below the order of the Planck length, as such a bound is obtained from black hole physics ~\cite{uncer1, uncer2,quant}. As there is this minimal length in the system, we have a bound on the maximum energy needed to probe such a minimal length. Now if $E< E_p$ is such a maximum probe energy, then we have to analyze the rainbow modification to the system when $E$ is of the same order as $E_p$ (and can neglect it for $E<< E_p$). The formation of a naked singularity from a collapsing spherically symmetric object has already been studied ~\cite{Singh:1994tb, Singh:1997wa}. Here we will investigate the rainbow deformation of a collapsing spherically symmetric object. To explicitly analyze such a system, we express the energy momentum tensor for a spherically symmetric object in comoving coordinates as $T^{\mu}_{~{\nu}}=\text{diag}(-\rho,\; p_r, \; p_\theta, \;p_\theta), $ where $\rho,\; p_r\; \text{and} \; p_\theta$ are functions of $t$ and $r$.
After solving for different non-zero components of Einstein equations, and suitably deforming it by rainbow functions (using the $OGRE$ package ~\cite{Shoshany:2021iuc}), we get the following set of equations, \begin{align} G^t{}_t &= -\frac{g(E )^2\left(1-e^{-2\psi }\left(R'^2+2R R''-2R R'\psi '\right)\right)+e^{-2\lambda }f(E )^2\left(\dot{R}^2+2 R \dot{R}\dot{\psi }\right)}{R^2} \nonumber \\ & = -8\pi \rho \label{4.1} \\ G^r{}_r &= -\frac{e^{-2\psi }g(E )^2}{R^2}\left(e^{2\psi }-R'^2-2R R'\lambda '\right)-\frac{e^{-2\lambda }f(E )^2}{R^2}\left(\dot{R}^2+2R \ddot{R}-2R \dot{R}\dot{\lambda }\right) \nonumber \\& =8\pi p_r \label{5.2} \\ G^{\theta }{}_{\theta } &= G^{\phi }{}_{\phi }=\frac{e^{-2\psi }g(E )^2}{R}\left(R''+R'\left(\lambda '-\psi '\right)+R\left(\lambda ''+\lambda '^2-\lambda '\psi '\right)\right) \nonumber \\& -\frac{e^{-2\lambda }f(E )^2}{R}\left(\ddot{R}-\dot{R}\left(\dot{\lambda }-\dot{\psi }\right)+R\left(+\ddot{\psi }+\dot{\psi }^2-\dot{\lambda }\dot{\psi }\right)\right)\nonumber \\& = 8\pi p_\theta \label{8} \\ G^r{}_t& = \frac{2e^{-2\psi }g(E )^2\left(\dot{R}\lambda '+R'\dot{\psi }-\dot{R}'\right)}{R}=0\label{7} \end{align} where $dot$ and $prime$ are the derivatives with respect to time and radial coordinates respectively. Now using the definition of Misner–Sharp Mass~$F(t,r)$ as $ 1-(F(t,r)/R)=g^{\mu \nu }\triangledown _{\mu } R \triangledown _{\nu }R$ ~\cite{Bambi:2019xzp}, we can write a rainbow deformation of Misner–Sharp Mass as \begin{align}\label{4} F(t,r)=R\left(1+f(E )^2e^{-2\lambda }\dot{R}^2-g(E )^2e^{-2\psi }R^{\prime ^2}\right) \end{align} It would be useful to define $j=1-g(E)^2$. Using this, we observe $F'=j R'+8\pi \rho R^2R'$ and $\dot{F}=j \dot{R}-8\pi p_rR^2\dot{R}$. 
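As a cross-check of Eq.~(\ref{4}), the rainbow-deformed Misner–Sharp mass can be recomputed symbolically from its definition $1-F/R=g^{\mu\nu}\nabla_\mu R\,\nabla_\nu R$ and the inverse of the metric (\ref{1}). The following sketch is our own verification, not part of the derivation above:

```python
import sympy as sp

t, r = sp.symbols('t r')
f, g = sp.symbols('f g', positive=True)   # rainbow functions f(E), g(E), constant in t and r
lam = sp.Function('lambda')(t, r)         # lambda(t, r)
psi = sp.Function('psi')(t, r)            # psi(t, r)
R = sp.Function('R')(t, r)                # physical radius R(t, r)

# inverse metric of Eq. (1): g^{tt} = -f^2 e^{-2 lambda}, g^{rr} = g^2 e^{-2 psi}
grad2 = (-f**2*sp.exp(-2*lam)*sp.diff(R, t)**2
         + g**2*sp.exp(-2*psi)*sp.diff(R, r)**2)

# definition 1 - F/R = g^{mu nu} grad_mu R grad_nu R  =>  F = R (1 - grad2)
F = R*(1 - grad2)

# Eq. (4) as quoted in the text
F_paper = R*(1 + f**2*sp.exp(-2*lam)*sp.diff(R, t)**2
               - g**2*sp.exp(-2*psi)*sp.diff(R, r)**2)

assert sp.simplify(F - F_paper) == 0
```

In the IR limit $f=g=1$ the expression reduces to the standard Misner–Sharp mass of comoving spherical collapse.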
Now due to the conservation of the energy momentum tensor ${T^{\mu }{}_{\nu ;\mu }=0}$, we obtain $\dot{\rho }+\left(\rho +p_r\right)\dot{\psi }+{2\left(\rho +p_{\theta }\right)\dot{R}}/{R}=0$ and $p_{r}'+\lambda '\left(\rho +p_r\right)+{2\left(p_r-p_{\theta }\right)R'}/{R}=0$. The acceleration equation can be obtained by defining $h(t,r)=1-g(E)^2e^{-2\psi }R^{\prime ^2}$. Here $h(t,r)$ depends on the rainbow function $g(E)$ and is hence an energy-dependent function. Thus, the initial density and velocity profile conditions of the collapsing dust ~\cite{Singh:1994tb} would depend on the maximum energy of the system, and would get deformed in the UV limit. We observe that using $h(t,r)$, we can obtain \begin{align}\label{12} {F(t,r)}/{R}=f(E )^2e^{-2\lambda}\dot{R}^2+h(t,r) \end{align} and ${\dot{h}}/{(1-h)}=- {2\dot{R}\lambda '}/{R'}$. Now we can write the equation for the acceleration as \begin{align}\label{14} \ddot{R}&=\dot{R}\dot{\lambda }+\frac{e^{2\lambda }}{f(E )^2}\left(\frac{j }{2R}-4\pi R p_r-\frac{F}{2R^2}+\frac{(1-h)}{f(E )^2R'}\left(-\frac{p_r'}{\rho +p_r}+\frac{2R'\left(p_{\theta }-p_r\right)}{R\left(\rho +p_r\right)}\right)\right) \end{align} We can define the proper time as ${{d\tau } =\frac{e^{\lambda }}{f(E)}}{dt} $, and use it to obtain \begin{align}\label{16} \frac{d^2R}{d \tau ^2} &= \frac{\left(1-f(E )^2\right)\dot{R}\dot{\lambda }}{e^{2\lambda }}+\frac{1}{f(E )^2}\bigg(\frac{j}{2R}-4\pi R p_r-\frac{F}{2R^2}+\frac{(1-h)}{f(E )^2R'}\bigg(-\frac{p_r'}{\rho +p_r}\nonumber\\&+\frac{2R'\left(p_{\theta }-p_r\right)}{R\left(\rho +p_r\right)}\bigg)\bigg) \end{align} This modified equation depends on the maximum energy of the system due to the rainbow functions $f(E)$ and $g(E)$. So, the conditions for collapse derived from this equation will be implicitly energy dependent. This energy dependence of the collapse conditions can have important consequences for the formation of a naked singularity.
Let us take the example of a perfect fluid where the radial and tangential pressures are equal ($p_r=p_\theta=p$), and write the equation of acceleration for this perfect fluid as \begin{align}\label{17} \frac{d^2R}{d \tau ^2}=\frac{\left(1-f(E )^2\right)\dot{R}\dot{\lambda }}{e^{2\lambda }}+\frac{1}{f(E )^2}\left(\frac{j}{2R}-4\pi R p-\frac{F}{2R^2}-\frac{(1-h)}{f(E )^2R'}\frac{p'}{\rho +p}\right) \end{align} Now by setting the acceleration and velocity equal to zero, we obtain the Oppenheimer–Volkoff equation for hydrostatic equilibrium ~\cite{Oppenheimer:1939ne} \begin{align}\label{18} {-R^2p'=\frac{f(E )^2(\rho +p)}{1-\frac{F}{R}}\left(\frac{F}{2}+4\pi R^3 p-\frac{R(1 - g(E)^2) }{2}\right)} \end{align} where we have used ${h={F}/{R}}$ (after setting the velocity equal to zero). The derivative of the radial pressure appears in the acceleration equation. The fate of the collapse is determined by the radial and tangential pressures. It is noted from Eq.\eqref{16} that positive and negative tangential pressure oppose and support the collapse, respectively. The rainbow functions that appear in Eqs.\eqref{14} and \eqref{16} can reduce the magnitude of the acceleration, but cannot change the overall sign of these equations, because the rainbow functions are always less than or equal to one ($f(E) \leq 1$ and $g(E) \leq 1$). Thus, they can only change the relative magnitude of these forces, and the effect these forces have on the collapsing system. \section{Conditions for collapse from acceleration equation} In this section, we will discuss different cases of the spherical collapse, and the effect of the rainbow functions on them. For the dust case, we set the tangential and radial pressures equal to zero ($p_r=p_\theta=0$). So, let us assume that the collapse starts from rest at $t=0$, and we set the initial conditions as $ R(t,r)\big|_{t=0}=r ~ \text{and} ~ F(t,r)\big|_{t=0}=F_c(r)$.
Using these initial conditions in the expression for $\dot{F}$, the Misner–Sharp mass becomes \begin{align} F(t,r)=F_c(r) + (1-g (E)^2)(R-r) \label{18.1} \end{align} Here $ p_{\theta}= p_{r} =0$ for the dust case, and $\lambda$ is a function of $t$ only. Thus, we can redefine $t$ and set $\lambda=0$. Similarly, for the dust case, $h(t,r)$ will be a function of $r$ only. With these re-definitions of the variables, we can write the rainbow deformed metric for the dust case as \begin{align}\label{20} {{ds}^2=\frac{-{dt}^2}{f(E )^2}+\frac{R^{{\prime 2}}}{1-h(r)}{dr}^2+\frac{R^2}{g(E )^2}{d\Omega }^2} \end{align} where $d\Omega^2=d\theta^2+\sin^2\theta ~ d\phi^2$. Here $h(r)$ is less than $1$, so that the coefficient of ${dr}^2$ remains spacelike. From Eq.(\ref{12}), we find the equation governing the behaviour of dust in the framework of rainbow functions that are consistent with quantum gravity, \begin{align}\label{21} \frac{F_c(r)+ j(R-r)}{R}=f(E)^2\dot{R}^2+h(r) \end{align} Eq.\eqref{21} is discussed in detail in the next section. For collapse to start, the acceleration has to be negative, that is, inward. We will use this fact to derive the condition for collapse in the different cases of tangential pressure, radial pressure and a perfect fluid. \subsection{Tangential Pressure}\label{tangential} Now if the tangential pressure $p_{\theta}$ is non-zero, and the radial pressure $p_{r}$ is zero, a simplification of Eq.(\ref{16}) occurs. This simplified equation of acceleration can be written as \begin{align}\label{22} \frac{d^2R}{d \tau ^2}=\frac{\left(1-f(E )^2\right)\dot{R}\dot{\lambda }}{e^{2\lambda }}+\frac{1}{f(E )^2}\left(\frac{j}{2R}-\frac{F}{2R^2}+\frac{(1-h)}{f(E )^2R'}\frac{2R'p_{\theta }}{\rho R}\right) \end{align} For collapse to begin, the acceleration has to be negative.
Assuming the collapse to start from rest ($\dot{R}=0$), and using ${h=\frac{F_c}{r}}$ from Eq.(\ref{12}), the condition for collapse to begin at $t=0$ turns out to be \begin{align} \frac{F_c}{2r}>\frac{\frac{j }{2}+\frac{2p_{\theta }}{f(E )^2\rho }}{1+\frac{4p_{\theta }}{f(E )^2\rho }}=p_\theta \; \text{condition} \label{23} \end{align} If the condition of Eq.\eqref{23} is satisfied at $t=0$ and holds at all later times, the collapse will proceed. In this case, the singularity will form at $r=0$ if the acceleration remains negative throughout the later time evolution of the system. We also observe from Eq.\eqref{23} that this condition depends on the energy because of the rainbow functions. \begin{figure}[htp] \centering \includegraphics[height=6cm,width=8cm]{pf.pdf} \includegraphics[height=6cm,width=8cm]{pg.pdf} \caption{Plots of the R.H.S. of Eq.\eqref{23} versus $f(E)$ and $g(E)$ for different values of the ratio $p_\theta/\rho$.} \label{pfg} \end{figure} The ratio $p_{\theta}/\rho$ generally evolves with time. However, to examine the dependence of the collapse on the rainbow functions, we assume that the ratio $p_{\theta}/\rho$ remains constant in time. For positive tangential pressure and density, we assume that this ratio lies in the range $0 \leq p_\theta/\rho \leq 1$. In Fig.~\ref{pfg}, we have plotted Eq.\eqref{23}. It follows from both graphs that, as the energy of the system tends towards the Planck scale and $f(E)$ and $g(E)$ tend towards zero, it becomes more difficult for the collapse to happen, which in turn has consequences for the singularity formation. Hence we can conclude that the rainbow functions modify the collapsing system. In the IR limit, $f(E) = g(E)=1$, we get back the same condition as described in ~\cite{Barve:1999ph}.
\begin{align} {\frac{F_c}{r}>\frac{4\left.p_{\theta }\right/\rho }{1+4\left.p_{\theta }\right/\rho }} \end{align} \subsection{Radial Pressure} Here we analyze the effect of a non-zero radial pressure $p_{r}$, with a vanishing tangential pressure $p_{\theta}$. This assumption again leads to a simplification of Eq.(\ref{16}), and the modified equation of acceleration is given by \begin{align}\label{48} \frac{d^2R}{d \tau ^2} &= \frac{\left(1-f(E )^2\right)\dot{R}\dot{\lambda }}{e^{2\lambda }}+\frac{1}{f(E )^2}\bigg(\frac{j}{2R}-4\pi R p_r-\frac{F}{2R^2}-\frac{(1-h)}{f(E )^2R'}\left(\frac{2R'p_r}{R\left(\rho +p_r\right)}+\frac{p_r'}{\rho +p_r}\right)\bigg) \end{align} Now the condition on the radial pressure for the collapse to begin at $t=0$ turns out to be \begin{align}\label{49} \frac{F_c}{2 r}>\frac{\frac{j }{2}-4\pi r^2 p_r-\frac{2p_r+r p_r'}{f(E)^2(\rho +p_r)}}{1-\frac{4p_r+2 r p_r'}{f(E)^2(\rho +p_r)}} \end{align} Again, in the IR limit $f(E) = g(E)=1$, we obtain the condition for collapse which can be derived from general relativity. It depends on the density $\rho$, the radial pressure and its derivative, \begin{align} \frac{F_c}{2 r}>\frac{4\pi r^2 p_r+\frac{2p_r+r p_r'}{\rho +p_r}}{-1+\frac{4p_r+2 r p_r'}{\rho +p_r}} \end{align} \subsection{The Perfect Fluid} In the perfect fluid approximation, the radial and tangential pressures are set equal, $p_r = p_\theta = p$. In this case, Eq.(\ref{16}) becomes \begin{align}\label{24} {\frac{d^2R}{d \tau ^2}=\frac{\left(1-f(E )^2\right)\dot{R}\dot{\lambda }}{e^{2\lambda }}+\frac{1}{f(E )^2}\left(\frac{j}{2R}-4\pi R p-\frac{F}{2R^2}-\frac{(1-h)}{f(E )^2R'}\frac{p'}{\rho +p}\right)} \end{align} For collapse to begin, the acceleration has to be negative.
Now using Eq.(\ref{12}), we obtain ${h=\frac{F_c}{r}}$ and the condition for collapse to begin at $t=0$ turns out to be \begin{align}\label{24.01} {\frac{F_c}{2r}>\frac{\frac{j }{2}-4\pi r^2 p-\frac{r p'}{f(E )^2(\rho +p)}}{1-\frac{2r p'}{f(E )^2(\rho +p)}}} \end{align} Here, in the IR limit $f(E) = g(E)=1$, we obtain the condition which follows from general relativity, \begin{align} \frac{F_c}{2r}>\frac{4\pi r^2 p+\frac{r p'}{\rho +p}}{-1+\frac{2r p'}{\rho +p}} \end{align} We have investigated the dependence of these conditions on the different physical values of the pressure, such as when the tangential or radial pressure is zero, or when both are set equal. We are interested in studying the case of dust and the effect of loop quantum gravitational modifications on the formation of a naked singularity using the gravity's rainbow framework. \section{The Dust Solution}\label{dust} The Tolman-Bondi dust collapse has been investigated in general relativity ~\cite{Singh:1994tb,Barve:1999ph,Singh:1997wa,Gundlach:1997wm}. The results for a marginally bound case and a non-marginally bound case have been analyzed in these studies. However, we will restrict our discussion here to the marginally bound case, $i.e.~ h(r) = 0$. The results for the non-marginally bound case can be derived using the same procedure.
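The energy dependence of the collapse thresholds derived above can also be illustrated numerically. The sketch below is our own, with arbitrary illustrative values of $f(E)$, $g(E)$ and $p_\theta/\rho$; it evaluates the right-hand side of Eq.~(\ref{23}), recovers the general-relativity threshold in the IR limit, and shows that the threshold rises as $g(E)$ decreases towards the Planck scale:

```python
import numpy as np

def p_theta_condition(fE, gE, ratio):
    """R.h.s. of Eq. (23): lower bound on F_c/(2r) for collapse to begin,
    with ratio = p_theta/rho assumed constant in time."""
    j = 1.0 - gE**2
    return (j/2.0 + 2.0*ratio/fE**2) / (1.0 + 4.0*ratio/fE**2)

# IR limit f = g = 1: F_c/(2r) > 2x/(1+4x), i.e. F_c/r > 4x/(1+4x) as quoted above
for x in (0.0, 0.25, 1.0):
    assert np.isclose(p_theta_condition(1.0, 1.0, x), 2*x/(1 + 4*x))

# towards the Planck scale, g(E) -> 0 raises the threshold: collapse becomes harder
assert p_theta_condition(1.0, 0.5, 0.25) > p_theta_condition(1.0, 1.0, 0.25)
```

A larger threshold means a larger initial mass function $F_c$ is required for the collapse to begin, in line with the trend shown in Fig.~\ref{pfg}.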
Now we will explicitly use the rainbow functions motivated by loop quantum gravity ~\cite{y1, Amelino-Camelia:2008aez}
\begin{eqnarray}
f(E )=1, &&\,\,\,\,\,\,\,\,\,\,\,\,\,\, g(E )=\sqrt{1-\eta \frac{E}{E_p}}
\end{eqnarray}
Using these rainbow functions, we can explicitly write the metric as
\begin{align}\label{29}
{ds}^2=-{dt}^2+R^{{\prime 2}}(t,r){dr}^2+\frac{R^2(t,r)}{g(E )^2}{d\Omega }^2
\end{align}
Here $ {\dot{R}}$ will also depend on the energy of the probe
\begin{align}\label{30}
{\dot{R}=-\sqrt{j +\frac{F_c(r)-rj }{R}}}
\end{align}
We note that, along a radial null geodesic, we can write
\begin{align}\label{29.1}
\frac{\partial t}{\partial r}=R'
\end{align}
Solving Eq. \eqref{30} at constant $r$, using the boundary condition $R_{t=0}=r$, we obtain
\begin{align}\label{31}
t&= \frac{\sqrt{r F_c}}{j }-\frac{F_c-rj}{j^{3/2}}\tanh ^{-1}\big(\sqrt{{F_c}{(rj)^{-1}}}\big) -\frac{R}{j }\sqrt{j +\frac{F_c-rj}{R}}\nonumber\\&+\frac{F_c-rj}{j^{3/2}}\tanh ^{-1}\bigg(\sqrt{1+\frac{F_c-rj}{Rj}}\bigg)
\end{align}
Using the standard procedure, we introduce the auxiliary variables $u,X$ ~\cite{Dadhich:2003gw},
\begin{eqnarray}
u=r^\alpha ,\; \alpha>0, && \; X=\frac{R}{u}
\end{eqnarray}
In order for the singularity at $r = 0$ to be naked, radial null geodesics should be able to propagate outwards, starting from the singularity. A necessary and sufficient condition for this to happen is that the area radius $R$ increases along an outgoing geodesic, because $R$ becomes negative in the unphysical region.
Thus, in the limit of approach to the singularity, we write
\begin{align}\label{ghosh}
X_0 &= \underset{R\to 0,u\to 0}{\lim}\frac{R}{u}=\underset{R\to 0,u\to 0}{{\lim}}\frac{{dR}}{{du}} \nonumber \\ & =\underset{R\to 0,r\to 0}{{\lim}}\frac{1}{\alpha r^{\alpha -1}}\frac{{dR}}{{dr}}=\underset{R\to 0,r\to 0}{{\lim}}\frac{1}{\alpha r^{\alpha -1}}\bigg(R'+\frac{\partial t}{\partial r}\dot{R}\bigg)\nonumber\\&=\underset{R\to 0,r\to 0}{{\lim}}\frac{1}{\alpha r^{\alpha -1}}R'\left(1+\dot{R}\right)
\end{align}
We can evaluate $R'$ from Eq. \eqref{31} and, in the resulting expression, substitute $R=Xr^\alpha$. Dividing by $r^{\alpha-1}$, we obtain the following expression
\begin{align}\label{32}
\frac{R'}{r^{\alpha -1}}&=X\big\{-2r^{3/2} A_1 A_2F'_c\sqrt{j } +r \sqrt{F_c} \big(A_2+r^{\alpha /2} A_1\sqrt{X} \sqrt{j} +2A_1A_2\big(\tanh ^{-1}\big(\sqrt{{F_c}{(rj)^{-1} }}\big)\nonumber\\&-\tanh ^{-1}A_1\big) \big) \big(-j +F'_c\big)\big\}\big\{\sqrt{F_c} \big(r A_2j +r^{\alpha /2} A_1\sqrt{X} \big(r-2 r^{\alpha } X\big)j^{3/2} \nonumber\\& +F_c \big(A_2-r^{\alpha /2} A_1\sqrt{X} \sqrt{j } \big)\big)\big\}^{-1}
\end{align}
where $A_1^2= {1-{r^{-\alpha } (j r+F_c)}{(j X)^{-1}}} \; \text{and} \; A_2^2=j (r-r^{\alpha } X )+F_c$. Assuming $u=r^\alpha$ along the radial null geodesic, we can write the following
\begin{align}
\frac{dR}{du}=\frac{1}{{\alpha r}^{\alpha -1}}\frac{{dR}}{{dr}}=\frac{1}{{\alpha r}^{\alpha -1}}\left(R'+\frac{\partial t}{\partial r}\dot{R}\right)
\end{align}
Using the expression from Eq. \eqref{29.1}, the above equation takes the form
\begin{align}\label{33}
{\alpha \frac{{dR}}{{du}}=\left(1-\sqrt{j +\frac{F_c-rj }{R}}\right)\frac{R'}{r^{\alpha -1}}}
\end{align}
Here $\frac{R'}{r^{\alpha -1}}$ is given by Eq. \eqref{32}. The root analysis for this case is carried out numerically, as it is not possible analytically. To proceed further, we consider the power series form of $F_c(r)$ as
\begin{align}\label{seriesF}
F_c(r)=F_0+F_1 r+F_2 r^2+F_3 r^3+ F_4 r^4...
\end{align}

\begin{figure}[H]
\centering
\includegraphics[scale=0.4]{53Xr.pdf}
\includegraphics[scale=0.4]{73Xr.pdf}
\includegraphics[scale=0.4]{93Xr.pdf}
\caption{Plots of $X$ versus $r$ for various values of the energy, $E=0,0.1,10$ and $100$, with the rainbow functions $f(E )=1,\; g(E )=\sqrt{1-\eta \frac{E}{E_p}}$. Here $E_p$ is taken as $10^{19}$ and $\eta=10^{17}$.}
\label{fig:Xr}
\end{figure}

We have plotted the value of $dR/du=R/u$ versus $r$ to see how the plot behaves near $r\to0$. This is done to investigate whether $dR/du=R/u$ has a solution in the limit $r\to0$. As shown by the red curves in Fig. (\ref{fig:Xr}), in general relativity there is a real and positive value of $X$ in the limit $r\to 0$, while in gravity's rainbow the curves depart farther from the vertical axis as the value of $E$ increases, as depicted in the plots. This suggests that no real and positive value of $X$ exists as $r\to 0$ due to the deformation by gravity's rainbow, as opposed to general relativity. In fact, one can check numerically that the equation $dR/du=R/u$ has only complex roots for all values of $r<r_0$ near $r=0$. This can be checked for different values of the energy of the probe $E$. We observe that, as long as $E<E_p$ but of the same order (such that we cannot neglect $E/E_p$), the naked singularity will not form. This has been explicitly demonstrated for $E=0.1,10$ and $100$. The values of all model parameters used in the numerical analysis are mentioned in the plots. It can be concluded from the above analysis that, due to the deformation by the rainbow functions motivated by loop quantum gravity ~\cite{y1, Amelino-Camelia:2008aez}, the naked singularity will not form.

\section{Conclusion}

It is known that the energy-momentum dispersion relation would be modified at the Planck scale due to loop quantum gravitational effects. This loop quantum gravitational modified energy-momentum dispersion relation can be used to obtain suitable rainbow functions.
We have used these rainbow functions to analyze the effect of loop quantum gravity on a collapsing system. We demonstrate that the modifications to the collapsing system from loop quantum gravity prevent the formation of a naked singularity. We comment that this was expected, as in loop quantum gravity the Planck scale structure of space-time is modified, and so we would expect the singularities to be removed due to these effects. This is explicitly demonstrated in this paper using gravity's rainbow. The maximum energy of the system, which acts as a probe, is fixed using the uncertainty principle. Thus, the energy of the rainbow functions is expressed in terms of a distance scale in the collapsing system. It would be interesting to analyze the collapse in different modified theories of gravity using this Planck scale modification. Thus, we can analyze this system in a theory of gravity with higher curvature terms, and then deform that system by gravity's rainbow. We would also like to point out that the deformation by gravity's rainbow depends on the rainbow functions. Here the rainbow functions were obtained using results from loop quantum gravity. However, it is possible to obtain rainbow functions from other motivations. It is expected that the formation of naked singularities would also depend critically on the kind of rainbow functions used to deform the system.

\section{Acknowledgement}

M. V. Akram would like to thank Sukratu Barve for useful discussions. I. A. Bhat would like to thank the Centre for Theoretical Physics, Jamia Millia Islamia, New Delhi for its hospitality, where a major part of this work was carried out.
{ "redpajama_set_name": "RedPajamaArXiv" }
8,215
Agrilus voriseki is a species of insect in the genus Agrilus, family Buprestidae, order Coleoptera. It was described by Jendek in 1995.
{ "redpajama_set_name": "RedPajamaWikipedia" }
764
Q: SOAP response is not displaying in XML Format

I am new to SOAP. I am trying to create a SOAP API which converts a long URL to a short link. Here is my code.

In shortlen_url.php (Client Page):

require_once "nusoap/lib/nusoap.php";
if(isset($_REQUEST['url'])) {
    $longUrl = $_REQUEST['url'];
    $currentPath = $_SERVER['PHP_SELF'];
    $pathInfo = pathinfo($currentPath);
    $hostName = $_SERVER['HTTP_HOST'];
    $protocol = strtolower(substr($_SERVER["SERVER_PROTOCOL"],0,5))=='https://'?'https://':'http://';
    $client = new nusoap_client($protocol.$hostName.$pathInfo['dirname']."/functionalities.php?wsdl");
    $client->soap_defencoding = 'UTF-8';
    $client->decode_utf8 = false;
    $error = $client->getError();
    $doc = new DomDocument('1.0');
    $doc->preserveWhiteSpace = false;
    $doc->formatOutput = true;
    $root = $doc->createElement('root');
    $root = $doc->appendChild($root);
    $result = $client->call("getProd", array("longUrl" => $longUrl));
    $resultScheme = parse_url($result);
    if(isset($resultScheme['scheme'])) {
        if($resultScheme['scheme'] == "http" || $resultScheme['scheme'] == "https") {
            $occ = $doc->createElement('message');
            $occ = $root->appendChild($occ);
            $child = $doc->createElement('shortURL');
            $child = $occ->appendChild($child);
            $value = $doc->createTextNode($result);
            $value = $child->appendChild($value);
            $child = $doc->createElement('Success');
            $child = $occ->appendChild($child);
            $value = $doc->createTextNode('A URL will be received by C4C which will be embedded in the SMS and sent to the customer');
            $value = $child->appendChild($value);
        }
    } else {
        $occ = $doc->createElement('error');
        $occ = $root->appendChild($occ);
        $child = $doc->createElement('errorMessage');
        $child = $occ->appendChild($child);
        $value = $doc->createTextNode($result);
        $value = $child->appendChild($value);
        $child = $doc->createElement('Failure');
        $child = $occ->appendChild($child);
        $value = $doc->createTextNode('A task will be created and assigned to the System Admin.');
        $value = $child->appendChild($value);
    }
    if
($client->fault) {
        $occ = $doc->createElement('error');
        $occ = $root->appendChild($occ);
        $child = $doc->createElement('errorMessage');
        $child = $occ->appendChild($child);
        $value = $doc->createTextNode($result);
        $value = $child->appendChild($value);
        $child = $doc->createElement('Failure');
        $child = $occ->appendChild($child);
        $value = $doc->createTextNode('A task will be created and assigned to the System Admin.');
        $value = $child->appendChild($value);
    }
    // get completed xml document
    $xml_string = $doc->saveXML();
    echo $xml_string;
}
?>

In functionalities.php (Server Page):

<?php
require_once "nusoap/lib/nusoap.php";
function getProd($longUrl) {
    function get_bitly_short_url($longUrl,$login,$appkey,$format='txt') {
        $connectURL = 'http://api.bit.ly/v3/shorten?login='.$login.'&apiKey='.$appkey.'&uri='.$longUrl.'&format='.$format;
        return curl_get_result($connectURL);
    }
    function curl_get_result($url) {
        $ch = curl_init();
        curl_setopt($ch,CURLOPT_URL,$url);
        curl_setopt($ch,CURLOPT_RETURNTRANSFER,1);
        $data = curl_exec($ch);
        curl_close($ch);
        return $data;
    }
    $short_url = get_bitly_short_url($longUrl,'x_xxxxxxxxxx','x_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx','txt');
    return $short_url;
}
$server = new soap_server();
$server->configureWSDL("ShortLink", "urn:ShortLink");
$server->register("getProd",
    array("shortLink" => "xsd:string"),
    array("return" => "xsd:string"),
    "urn:shortLink",
    "urn:shortLink#getProd",
    "rpc",
    "encoded",
    "Converting Long Url to Short Link");
$server->service(file_get_contents("php://input"));
?>

When I hit this API http://localhost/myproject/shorten_url.php?url=https://www.w3schools.com/php/showphp.asp?filename=demo_func_string_strpos it displays the proper result, but in string format, not in XML format. When I click Ctrl+U it shows view code like this. How can I get the response in XML format (not as a string) when I hit the link? Now it results in string format. Please help.

A: XML format is a string. Sort of like how a square is a rectangle, but not all rectangles are squares.
JSON is another string format. If what you're trying to do is get the browser to recognize it as XML (like telling it "hey, I know you're expecting a rectangle and I'm sending you one, but specifically it's also a square!"), you can add a header via PHP by putting this line before you print anything to the screen: header("Content-type: text/xml");
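The "XML is just a string" point can be illustrated outside PHP as well. Below is a hedged sketch (not part of the original answer, and in Python rather than PHP): the element names mirror the asker's client page, and the short URL is a made-up placeholder.

```python
import xml.etree.ElementTree as ET

# Build the same <root><message>...</message></root> shape the client page produces.
root = ET.Element("root")
message = ET.SubElement(root, "message")
ET.SubElement(message, "shortURL").text = "http://bit.ly/example"
ET.SubElement(message, "Success").text = "A URL will be received by C4C"

# Serializing gives back an ordinary string: "XML format" is a string.
payload = ET.tostring(root, encoding="unicode")
print(type(payload).__name__)  # str
print(payload)
```

What makes a browser display that string as an XML document rather than plain text is only the Content-Type header the server sends, which is exactly what the `header("Content-type: text/xml");` line above adds on the PHP side.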
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,132
{"url":"https:\/\/math.stackexchange.com\/questions\/3195781\/simplify-using-laws-and-axiom-of-logic","text":"# simplify using laws and axiom of logic [closed]\n\n$$(\u00aca\u2228b)\u2227(a\u2228b)\u2227\u00aca$$\n\nSo I have been looking at this question all day and and i have no idea how to start.\n\nCan someone please help with me proving this algebra logic and what laws I would need to use?\n\nI have also looked through already answered questions, and could not find anything similar.\n\nSimplify the following statement using the laws and axioms of logic\n\n## closed as off-topic by Graham Kemp, DMcMor, Yanior Weg, Leucippus, GNUSupporter 8964\u6c11\u4e3b\u5973\u795e \u5730\u4e0b\u6559\u6703Apr 22 at 23:06\n\nThis question appears to be off-topic. The users who voted to close gave this specific reason:\n\n\u2022 \"This question is missing context or other details: Please provide additional context, which ideally explains why the question is relevant to you and our community. Some forms of context include: background and motivation, relevant definitions, source, possible strategies, your current progress, why the question is interesting or important, etc.\" \u2013 Graham Kemp, DMcMor, Yanior Weg, Leucippus, GNUSupporter 8964\u6c11\u4e3b\u5973\u795e \u5730\u4e0b\u6559\u6703\nIf this question can be reworded to fit the rules in the help center, please edit the question.\n\n\u2022 Hello. That's not an equation, it's a formula of propositional logic, and it's not tautological, therefore it can't be proven. Also: I'm not sure what you mean by \"simplify\"; do you mean having a logically equivalent formula with only one logical operator? \u2013\u00a0Simone Apr 21 at 11:53\n\u2022 Use the rules of commutation, absorption, and redundancy. \u2013\u00a0Graham Kemp Apr 21 at 12:08\n\u2022 Thank you for helping me. 
but for me to understand this can you please show me the step thanks \u2013\u00a0Daisoh Apr 22 at 5:52\n\nThis expression is a \"formula\" because it has a well-defined mathematical value (for each allowable set of values for the variables in it). It is a \"statement\" because that mathematical value is either \"true\" or \"false\". But an \"equation\" is a statement claiming that two expressions are equal to each other (equal $$\\equiv$$ equation), and there is no \"=\" sign in this statement. Therefore it is not an \"equation\".\nExamine when your statement will be true. It has three parts joined together by ands: $$(\\lnot a\\lor b)\\\\(a\\lor b)\\\\\\lnot a$$\nFor the whole expression to be true, all three must be true. But the last one requires $$a$$ to be false. And if $$a$$ is false, the first will be true, regardless of $$b$$. Finally $$a\\lor b$$ requires either $$a$$ or $$b$$ to be true. But we already know $$a$$ has to be false, so $$b$$ has to be true.\nTherefore, this will be true only if $$a$$ is false and $$b$$ is true.","date":"2019-05-27 01:38:34","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 13, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5540022253990173, \"perplexity\": 476.0126200491533}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, 
\"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-22\/segments\/1558232260358.69\/warc\/CC-MAIN-20190527005538-20190527031538-00268.warc.gz\"}"}
GPS Speedometer Mod APK v2.2.4 (Premium Unlocked) Download
MUHAMMAD IQRAR, 1 week ago

GPS Speedometer Mod APK is an essential tool for all modes of transportation, making it easier to accurately measure the speed of your trips. With the speed meter app, you can see both analog and digital values of the speed limit at a moment's glance. What's more, the train speed live meter app can track your velocity on multiple scales so you know exactly how fast you're going. GPS-based Speedometer apps for motorcycles frequently feature an alarm to alert you when the speed limit has been exceeded – giving riders peace of mind that they're not in violation while they cruise. This app brings accuracy and convenience to transportation, so no matter what you use to get around – car, bike, or something else – GPS has the technology to track it. GPS Speedometer in a single app is an amazing combination that lets you track the speed of your vehicle anytime on your phone. With this app, you can get detailed insight into current speed, travel time, or even the entire distance traveled without the need to start and stop on your car's dashboard each time. What's more, you have options to choose from different units of measurement such as miles per hour (mph), kilometers (km), or knots for more accurate readings. The nifty GPS Speedometer – Odometer app will not only help drivers drive safer but also offer peace of mind when it comes to tracking essential information about their vehicle's performance. The app is quickly becoming one of the most useful and reliable tools when it comes to tracking your speed on the go.
A GPS Speedometer app can help you keep track of the road while driving or jogging, providing an invaluable resource for understanding your route and how far you have traveled. Not only will it track speed, but GPS navigation also allows users to create routes, plan trips, and map out destinations. GPS Speedometer apps offer an incredibly accurate source of real-time information at all times – a must-have for anyone who is serious about getting to their destination in the safest possible way. This useful tool allows you to measure speed and distance, as well as set a speed limit. You can also use them offline, making them ideal for travel, long drives, or any other transportation you need. GPS Speedometers have an easy-to-use and attractive user interface that shows your speed in a heads-up display (HUD). GPS Speedometer also records the average speed, maximum speed, and GPS-measured distance meter accurately. Its GPS tracker also displays your speed notifications so you can always stay informed of your journey's progress. With GPS Speedometer technology, you'll never be lost again. GPS Speedometer is an app that can be a great asset for many different activities. It offers a variety of features that allow users to switch between scales of measure, track their trip by GPS speed tracking, and much more. GPS Speedometer makes it easy to track the speed of your transport while cycling, driving, running, or sailing at any given time. You can also use this app to track the speed of your train in real time easily and accurately. GPS Speedometer is a useful tool for anyone who wants an accurate way to measure the speed or the velocity of the transport they are on. GPS Speedometers are invaluable tools for travelers looking to venture away from the beaten path. With its versatile tracking features, you can easily measure your speed in Kmph mode, mph mode, or knots – so no matter your destination, you're guaranteed a smooth and safe ride. 
You can even check the time of your trip, along with the location and distance of your route. This is also capable of tracking a train's live speed meter. With a GPS Speedometer mph, you can play music in the background while enjoying your run, walk or cycle. The GPS speedometer app works quickly and accurately, even if you don't have an internet connection. On top of that, this GPS speedometer also comes with an odometer feature which makes it perfect for motorcycle riders to measure their average and maximum speeds easily and accurately. Aside from that, this app is great for cars due to its fast operations and accuracy.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
3,651
Daily Archives: 5 Mar 2019

Modern Culture, 5 Mar 2019, 1 Comment

"Don't argue with apologists. They're trying not to learn." — Ian Mills

Michael Servetus Celebrates Ash Wednesday…

Modern Culture, 5 Mar 2019

Jerry Falwell Jr Wants Muslims to be Killed

We have now reached the point where Falwell has publicly renounced Christianity. If you admire him, support him, or defend him, you are a partner in his evil.

Lent With Minimal Effort; Or, Lent for People Who are Only Committed Christians For About a Month

Modern Culture, 5 Mar 2019, Comments: 2

Amateurs give up things for Lent that are near and dear to their hearts. Professionals give up meaningless things in order to look holy with the smallest possible effort. See the difference? Pick something off this list that you either hate or don't use anyway, and now you can brag that you got a little more sanctified this Lenten season, but you didn't even lift a finger! Nice!

1.) Google Plus – Want to give up social media, but hooked to your Facebook, Instagram, and Twitter accounts? Just give up Google Plus! You're not missing a thing!
2.) Decaf coffee – Give up decaf coffee for Lent. Now you can post pictures of yourself drinking actual coffee and remind people how holy you are for giving up the decaf version!
3.) Hillsong – We know it's going to be difficult, but try turning off Christian radio for the duration of Lent, so you can keep your vow not to hear any Hillsong tunes. Now you can listen to good music for a full 40 days!
4.) Pineapple pizza – This one's easy—just eat pizza that actually tastes good for 40 days, and you're holier in no time.
5.) Rewatching 1983's Krull on DVD – This one might be hard for some of you, but why not give up rewatching classic sci-fi/fantasy film Krull on DVD this year? (LOOPHOLE ALERT: find the movie on a streaming service and you can still watch it, technically!)
6.)
Winning a gold medal in the Olympics – When people ask why you didn't win gold in the men's snowboarding competition in this year's Winter Olympics, you can tell them that you gave it up for Lent. Now you can veg out on your couch and play 1080 Snowboarding on your Nintendo 64 instead of getting some exercise. LENT STATUS: PRO.
7.) Reading your Bible – Give up reading your Bible for Lent, and you don't even have to wake up 15 minutes early for daily devos anymore! Genius!
8.) Dating supermodels – This will make you more chaste and free up time for more sanctified activities. Make sure not to date any supermodels during Lent.
9.) Lent – Give up Lent for Lent, and you won't even have to celebrate Lent! This option gives you the additional bonus of getting to reply "Lent!" whenever someone asks you what you're giving up for Lent—a hilarious joke no one's ever made before!

Now go get some ashes smeared on your face and give up something completely meaningless. Godspeed on your Lenten journey!

UPDATE: This year, add plastic to the list! Giving up plastic makes you both super sanctified and offers you the chance to be super sanctimonious! All the holy kids are denouncing plastic, so join in and use paper, because it comes with no ecological strings…

Another 'Good Guy With a Gun'… As it Were…

Perhaps the good guys with guns should learn how to use them? Or perhaps this is just God's judgment.

A 46-year-old man accidentally shot himself in a genital Thursday after a gun slipped from his waistband, police said. Marion Police Department issued a news release via Facebook that said Mark Anthony Jones did not have a license for the Hi-Point 9mm gun he was carrying.

'A genital'? I don't even know what that is supposed to mean. But given the circumstances, I'm going to assume this was God's judgment.

Jones, in the emergency room of Marion General Hospital, told police he was walking about 6:44 a.m. Thursday on a riverside trail near the Marion Girl Scout Cabin.
That's west of the Washington Street bridge over the Mississinewa River. He said when the gun began to slip from his waistband, he reached to adjust the firearm and it discharged. "The bullet entered just above his penis and exited his scrotum," the release said. The gun was not in a holster. Grant County prosecutors will review the case to consider possible criminal charges.

A Stunning Stat

Via Candler School of Theology, Emory University-

Jim *The Septuagintizer* Aitken is Finally on the Twitter

Follow him-

The Invention of Lent

Church History, 5 Mar 2019

The number 40 [as the observed days of Lent] was not made up in the Latin Church until the 7th century, when the four days from Ash Wednesday to the First Sunday in Lent were added, a practice first attested by the Gelasian Sacramentary and spreading from Rome throughout the West.*

*F. L. Cross and Elizabeth A. Livingstone, eds., The Oxford Dictionary of the Christian Church (Oxford; New York: Oxford University Press, 2005), 971.

Conference Announcement: "Let me Hear Your Voice! The Song of Songs, Women and Public Discourse"

Conferences, 5 Mar 2019

'I am just writing to share a quick note about an upcoming conference at the University of Chicago Divinity School this June that may be of interest to SOTS members. The conference, "Let me Hear Your Voice! The Song of Songs, Women and Public Discourse", will take place from 3-5 of June and the website is now live at https://voices.uchicago.edu/songofsongs. The conference sessions are free and open to the public. They will be video-recorded and posted on the website as a resource after, and an edited volume will ensue.'

Via SOTS.

#ICYMI – John Calvin: On The Superstition of Lent

Church History, Theology, 5 Mar 2019

I'll happily stand with Calvin on the issue of Lent and leave those who wish to lie in the filth of the pigsty of 'tradition' simply for the sake of 'tradition' to do so.
Institutes 4.12.20 reads thusly (with particularly useful descriptions of lenten observance and observers bold-faced) Then the superstitious observance of Lent had everywhere prevailed: for both the vulgar imagined that they thereby perform some excellent service to God, and pastors commended it as a holy imitation of Christ; though it is plain that Christ did not fast to set an example to others, but, by thus commencing the preaching of the gospel, meant to prove that his doctrine was not of men, but had come from heaven. And it is strange how men of acute judgment could fall into this gross delusion, which so many clear reasons refute: for Christ did not fast repeatedly (which he must have done had he meant to lay down a law for an anniversary fast), but once only, when preparing for the promulgation of the gospel. Nor does he fast after the manner of men, as he would have done had he meant to invite men to imitation; he rather gives an example, by which he may raise all to admire rather than study to imitate him. In short, the nature of his fast is not different from that which Moses observed when he received the law at the hand of the Lord (Exod. 24:18; 34:28). For, seeing that that miracle was performed in Moses to establish the law, it behoved not to be omitted in Christ, lest the gospel should seem inferior to the law. But from that day, it never occurred to any one, under pretence of imitating Moses, to set up a similar form of fast among the Israelites. Nor did any of the holy prophets and fathers follow it, though they had inclination and zeal enough for all pious exercises; for though it is said of Elijah that he passed forty days without meat and drink (1 Kings 19:8), this was merely in order that the people might recognise that he was raised up to maintain the law, from which almost the whole of Israel had revolted. 
It was therefore merely false zeal, replete with superstition, which set up a fast under the title and pretext of imitating Christ; although there was then a strange diversity in the mode of the fast, as is related by Cassiodorus in the ninth book of the History of Socrates: "The Romans," says he, "had only three weeks, but their fast was continuous, except on the Lord's day and the Sabbath. The Greeks and Illyrians had, some six, others seven, but the fast was at intervals. Nor did they differ less in the kind of food: some used only bread and water, others added vegetables; others had no objection to fish and fowls; others made no difference in their food." Augustine also makes mention of this difference in his latter epistle to Januarius.

True words, Calvin. Truly said. Let's see how the rabid lent-ianists like those apples.

Free Access to the Journal for the Study of Judaism

*Free access to JSJ* To celebrate the 50th Volume of the Journal for the Study of #Judaism, selected articles from the past 50 Volumes will be available for free downloading during 2019. For the free articles visit https://www2.brill.com/JSJ50

Copernicus Banned

Nicolaus Copernicus' book De Revolutionibus Orbium Coelestium (On the Revolutions of the Heavenly Spheres) was temporarily suspended and added to the Index of Prohibited Books in 1616. First published in Nuremberg in 1543, the book proposed that Earth and other planets orbited the Sun. At the time of its publication, Copernicus' book had not created much controversy, however, following Galileo's support for Copernicus' theory, De Revolutionibus was officially banned. Copernicus, who died in 1543, would never know what controversy his work had caused. – Via CSSR

Bible, 5 Mar 2019

He is also able to save to the uttermost those who come to God through Him, since He always lives to make intercession for them. (Heb. 7:25)

Modern Culture, Theology, 5 Mar 2019

"When early Christian bishops were made of gold, their crosses were made of wood.
But bishops became like wood when their crosses appeared as gold. The more that there was simplicity in the administration of the Word of God and the sacraments, the more that pastors were small and humble in the eyes of the world, and the church had fewer troubles. And who can dare to despise poverty in a faithful servant of God in the presence of the prophets, apostles, confessors and martyrs, and Jesus Christ himself—who were all poor?" —Simon Goulart

I Think NT Wright's Stuff Comes From Fishgut Too…

That was the day Martin Luther's Latin works were published in a collection, with a foreword by Luther himself. Writes he:

For a long time I strenuously resisted those who wanted my books, or more correctly my confused lucubrations, published. I did not want the labors of the ancients to be buried by my new works and the reader kept from reading them. Then, too, by God's grace a great many systematic books now exist, among which the Loci communes of Philip excel, with which a theologian and a bishop can be beautifully and abundantly prepared to be mighty in preaching the doctrine of piety, especially since the Holy Bible itself can now be had in nearly every language. But my books, as it happened, yes, as the lack of order in which the events transpired made it necessary, are accordingly crude and disordered chaos, which is now not easy to arrange even for me. Persuaded by these reasons, I wished that all my books were buried in perpetual oblivion, so that there might be room for better ones. But the boldness and bothersome perseverance of others daily filled my ears with complaints that it would come to pass, that if I did not permit their publication in my lifetime, men wholly ignorant of the causes and the time of the events would nevertheless most certainly publish them, and so out of one confusion many would arise. Their boldness, I say, prevailed and so I permitted them to be published.
At the same time the wish and command of our most illustrious Prince, Elector, etc., John Frederick was added. He commanded, yes, compelled the printers not only to print, but to speed up the publication. He continues on towards the end of a lengthy description of how he came to understand the papacy as a tool of Satan, I relate these things, good reader, so that, if you are a reader of my puny works, you may keep in mind, that, as I said above, I was all alone and one of those who, as Augustine says of himself, have become proficient by writing and teaching. I was not one of those who from nothing suddenly become the topmost, though they are nothing, neither have labored, nor been tempted, nor become experienced, but have with one look at the Scriptures exhausted their entire spirit. It almost sounds like he's talking about certain journalists these days, doesn't it? In any event, he concludes Farewell in the Lord, reader, and pray for the growth of the Word against Satan. Strong and evil, now also very furious and savage, he knows his time is short and the kingdom of his pope is in danger. But may God confirm in us what he has accomplished and perfect his work which he began in us, to his glory, Amen. March 5, in the year 1545. Oh Martin… Every Emergent Church Fun Facts From Church History: Luther's Lectures on Psalm Two and a Post-Mortem Slam on Zwingli Church History, Zwingli 5 Mar 2019 In 1532 Luther lectured on Psalm two on the following dates: March 5, April 9, April 16, May 27, May 28, June 8, July 5. He took his time with the text (obviously) and in the course of those lectures snidely remarked That the kings and rulers rage against us at the present time, that Zwingli, Carlstadt, and others cause disturbances in the church, that burghers and peasants condemn the Gospel, is therefore nothing new or unusual. Münzer stirs up an uproar in Thuringia. 
Carlstadt and Zwingli stir up horrible disturbances in the church when they try to persuade others that in Communion the body and blood of Christ are not received orally, but only bread and wine. Others join them, and gradually this pernicious doctrine fills France, Italy, and other nations. "These things have happened through no fault of mine, therefore let the authors of these evils torture themselves. Not I. I shall do and I shall indeed try everything I can to alleviate these evils somewhat, but if I am unable to do so, I shall not on that account consume myself in sorrow. If one Münzer, Carlstadt, or Zwingli is not enough for Satan, he may stir up many more. I know that the nature of this kingdom is such that Satan cannot bear it. He labors with hands and feet with all his might that he may disturb the churches and oppose the Word." And several other times as well. That Luther lumps Zwingli with the Radicals is no surprise. What is surprising is his willingness to speak so ill of the dead. Indeed, of the dead not long dead! Luther: he was a real jerk. (He's been dead long enough one can say so without any twinge of guilt).
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
6,584
Stocks Maintain Solid Gains After Donald Trump Wins the White House

Stocks post solid gains Wednesday after plunging overnight following Donald Trump's surprising victory in the U.S. presidential race.

Valerie Young
Updated Nov 9, 2016 3:16 PM EST

U.S. stocks maintained solid gains Wednesday afternoon as Wall Street weighed a surprising victory by Republican Donald Trump in the presidential election. The S&P 500 rose 1.3%, the Dow Jones Industrial Average gained 1.6% and the Nasdaq rose 1.1%. European stocks traded higher following losses earlier in the trading session.

Trump, who will serve as the 45th president of the United States, surprisingly won New Hampshire, North Carolina, Florida, and Wisconsin, key battleground states both candidates needed to secure to win the election.

Prior to the election, markets were factoring in a win by Democratic nominee Hillary Clinton, who said in her concession speech Wednesday that "we must accept this result and then look to the future. Donald Trump is going to be our president. We owe him an open mind and the chance to lead." President Obama spoke today, saying he and his team will work with Trump on a smooth transition of power. The election saw Senate and House Republicans maintain their majorities in Congress.

"We think the stock market selloff will be short lived," Deutsche Bank analysts wrote in a note prior to the markets opening on Wednesday. "This apparent Republican sweep is positive for the broad market and especially health care stocks. The only sector that might suffer from this election outcome is energy, as we think a greater U.S. oil supply outlook weighs on the oil price recovery."

The energy sector was higher Wednesday after seeing losses earlier in the session. The Energy Select Sector SPDR ETF (XLE - Get Report) was up 2%. But SunPower (SPWR - Get Report) fell 14.2% as investors believe Trump's focus on coal and oil is seen as a negative for the global energy company.
Domestic crude oil supplies increased by 2.4 million barrels in the week ended Nov. 4, according to the U.S. Energy Information Administration. West Texas Intermediate crude oil settled 0.6% higher to $45.27 a barrel.

Health care stocks were rising Wednesday in reaction to Trump's victory. Biotech stocks, including Gilead Sciences (GILD - Get Report), Celgene (CELG - Get Report) and Biogen (BIIB - Get Report), were higher, as was the iShares Nasdaq Biotechnology exchange-traded fund (IBB - Get Report) that tracks these stocks. The ETF rose 8.7% in trading Wednesday. Drugmaker Pfizer (PFE - Get Report) was up 8.6%.

A Hillary Clinton victory and a Democratic takeover of Congress would have been the worst-case scenario for biotech and drug stocks given her relentless criticism of rising drug costs and promises to curb them. Trump hasn't been as vocal or openly hostile to drug companies, but he hasn't embraced or even defended their pricing practices, either, wrote TheStreet's Adam Feuerstein.

Financial stocks saw a jump as Bank of America (BAC - Get Report) shares rose 5.7%, Wells Fargo (WFC - Get Report) gained 6%, and JPMorgan Chase (JPM - Get Report) shares bounced higher by 5.7%. The SPDR Financial Sector exchange-traded fund (XLF - Get Report) that tracks the financial-services sector was up 4.4% Wednesday.

"The re-pricing of everything has already begun," Stephen Guilfoyle, Stuart Frankel's chief market economist, wrote in a note. "Donald Trump has some isolationist ideas, but is not as anti-trade as presented in the media. His administration will likely be pro-business. His economics are pro-growth in the short-term. Reduced taxes, and increased fiscal spending provided by increased borrowing should produce a higher GDP."

Health care insurers were mixed as they prepared for a possible repeal of the Affordable Care Act under a Trump presidency.
While Trump hasn't said what he will replace the health care law with, UnitedHealth (UNH) fell 0.6% Wednesday, while competitors Anthem (ANTM - Get Report), Cigna (CI - Get Report), Humana (HUM - Get Report) and Aetna (AET) saw gains. TheStreet's Alicia McElhaney has more analysis.

Investors will be assessing the Federal Reserve's next move to determine whether the U.S. central bank will raise interest rates at its December meeting as the U.S. economy has strengthened. An economist told the Associated Press that Trump's victory rules out a rate hike entirely.

"Given the adverse market reaction we have already seen, the Fed's planned December rate hike is now off the table," Paul Ashworth, chief U.S. economist at Capital Economics, told the AP. He said Fed Chair Janet Yellen and other top policymakers might even resign in the event of a Trump presidency, because his victory demonstrates that many Americans share his view that the Fed has been "overtly political."

Safe-haven assets such as gold and government bonds moved higher on Wednesday as appetite for riskier assets fell. Gold prices settled lower at $1,273.50 an ounce, down less than 0.1%. Gold had risen to above $1,300 an ounce after it became clear Trump would win the U.S. presidency. U.S. Treasury bond yields, which move in the opposite direction of their price, jumped to 2.068% for the first time since January. Gold mining stocks jumped, including Newmont (NEM - Get Report) up 1.4% and Goldcorp (GG) up 0.5%, paring some of their earlier gains.

The Mexican peso fell at one point early Wednesday by more than 12% against the U.S. dollar. Throughout this election season, the peso has fallen when Trump's chances have risen as investors worried over the state of trade relations between the two countries under a Trump administration. Trump has promised to build a wall between the U.S. and Mexico.
United States Steel (X - Get Report) gained 17.8% to $24.68 in afternoon trading as Jefferies analysts wrote that "the U.S. steel industry should stand out as a unique beneficiary of a Trump presidency." Analysts upgraded their rating of U.S. Steel to a buy, saying that "increased infrastructure spend may significantly boost demand for long steel product." They also highlighted that the U.S. "is already short steel, and falling imports will improve domestic pricing power."

Corrections Corporation of America (CXW - Get Report) jumped 46.2% as investors anticipated the usage of prisons will increase with Trump's promise to deport illegal immigrants.

Viacom (VIAB - Get Report) reported adjusted quarterly earnings of 69 cents a share, topping analysts' estimates by 3 cents. Revenue of $3.23 billion came in below forecasts. Viacom investors will be looking for any commentary on a possible merger to reunite with CBS (CBS - Get Report). Viacom shares increased 1.6% Wednesday.

Wendy's (WEN - Get Report) shares were up about 1.9% after the fast-food chain beat earnings estimates on revenue of $364 million. Mylan (MYL - Get Report), the maker of the allergic reaction emergency treatment EpiPen, is scheduled to report earnings after the market closes, as is Shake Shack (SHAK - Get Report).
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
3,706
{"url":"https:\/\/scicomp.stackexchange.com\/questions\/37358\/howo-to-implement-complex-step-derivative-for-complex-functions","text":"# Howo to implement complex step derivative for complex functions?\n\nI have a complex analytic function of which I want to take the numerical derivative.\n\n\\begin{align} f(z) &\\equiv f(x,y) = u(x,y) + i v(x,y) \\\\ \\frac{d f(z)}{d z} & = \\lim_{h \\to 0} \\frac{f(z + h) - f(h)}{h} = \\frac{\\partial u(x,y)}{\\partial x} + i\\frac{\\partial v(x,y)}{\\partial x} \\end{align}\n\nNow, I can do the partial derivatives via the complex step method, as $$\\frac{\\partial u(x,y)}{\\partial x} = \\lim_{h \\to 0} \\frac{\\Im{u(x + i h,y)}}{h} \\\\ \\frac{\\partial v(x,y)}{\\partial x} = \\lim_{h \\to 0} \\frac{\\Im{v(x + i h,y)}}{h} \\\\ \\frac{d f(z)}{d z} = \\lim_{h \\to 0} \\left[ \\frac{\\Im{u(x + i h,y)}}{h} + i \\frac{\\Im{v(x + i h,y)}}{h}\\right]$$\n\nHere, $$h$$ is a real infinitesimally small number.\n\nI want to implement this for a general complex analytic function. 
I am starting with a complex function as\n\n#include<complex>\n#include<iostream>\n#include<limits>\nusing namespace std;\n\ncomplex <double> f(complex< double > z) { return z*z; }\ncomplex <double> u(complex< double > z) { return real(f(z)); }\ncomplex <double> v(complex< double > z) { return imag(f(z)); }\n\ndouble h = std::numeric_limits<double>::epsilon();\ndouble h_inv = 1.\/std::numeric_limits<double>::epsilon();\nint main()\n{\n\/\/Calculating derivative at z = 1 + I;\ndouble x = 1; y = 1;\ncomplex<double> xh{x.,h};\ncomplex<double> z = xh+complex<double>(0,y);\ncomplex<double> dudx = imag(u(z))*h_inv;\ncomplex<double> dvdx = imag(v(z))*h_inv;\ncomplex<double> dfdz{dudx,dvdx};\nreturn 0;\n}\n\n\nThis doesn't work as I am taking the real part and imaginary part of $$f$$, both of which are real-valued functions, and I can't get any imaginary part in the complex step method, so dfdz = 0.\n\n\u2022 I want this whole process to be numerical so that I don't have to analytically calculate real and imaginary part and plug in the code explicitly\n\n\u2022 How to implement a complex step method in this case?\n\n\u2022 Well, the second equation doesn't seem right to me. In fact, we have: $$df = du + i dv = \\frac{\\partial u}{\\partial x} dx + \\frac{\\partial u}{\\partial y} dy + i (\\frac{\\partial v}{\\partial x} dx + \\frac{\\partial v}{\\partial y} dy) = (\\frac{\\partial u}{\\partial x} + i \\frac{\\partial v}{\\partial x}) dx + (\\frac{\\partial u}{\\partial y} + i \\frac{\\partial v}{\\partial y}) dy$$ So it seems that you are confusing $\\frac{\\partial f}{\\partial x}$ with $\\frac{d f}{dz}$. May 5 at 18:34\n\u2022 For analytic functions it doesn't matter how you choose your $dz$ it can be $dz = dx + I dy$ or just $dz = dx$ or so on, For example have a look at www1.spms.ntu.edu.sg\/~ydchong\/teaching\/\u2026 equation 23 May 5 at 18:45\n\u2022 Have you heard about directional derivatives? No, it really matters in which direction you want to do differentiation. 
taking $dz = dx$ is one choice from many possible choices and it is equal to taking gradient along the x direction. May 5 at 18:47\n\u2022 Sure, but in complex analysis, it really doesn't matter what is your $dz$, if it would have been mattered then the limit in the first principle of derivative (2nd equation in the post) would not exist and also the Cauchy's theorem wouldn't be valid. May 5 at 18:58\n\u2022 Well, sorry but no. We know $$\\frac{df}{dz} = \\frac{du + i dv} {dx + i dy} = \\frac{(du + i dv) \\wedge (dx - i dy)}{(dx + i dy) \\wedge (dx - i dy)} = \\frac{du \\wedge dx - dv \\wedge dy + i (du \\wedge dy + dv \\wedge dx)}{i 2 dx \\wedge dy} = \\frac{-(\\frac{\\partial u}{\\partial y}+\\frac{\\partial v}{\\partial x}) dx \\wedge dy + i (\\frac{\\partial u}{\\partial x} - \\frac{\\partial v}{\\partial y}) dx \\wedge dy}{i 2 dx \\wedge dy} = \\frac{1}{2} (\\frac{\\partial u}{\\partial x} - \\frac{\\partial v}{\\partial y}) + \\frac{i}{2} (\\frac{\\partial u}{\\partial y}+\\frac{\\partial v}{\\partial x})$$ May 5 at 19:20","date":"2021-09-20 00:19:34","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 4, \"wp-katex-eq\": 0, \"align\": 1, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8485170006752014, \"perplexity\": 862.5727275774641}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 
3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-39\/segments\/1631780056902.22\/warc\/CC-MAIN-20210919220343-20210920010343-00404.warc.gz\"}"}
null
null
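The scicomp Q&A record above concerns the complex-step derivative. As a quick, hypothetical illustration of the underlying trick for a real-valued, real-analytic function (the function, step size, and names below are mine, not from the post), here is a minimal Python sketch:

```python
import cmath
import math

def complex_step_derivative(f, x0, h=1e-100):
    """Approximate f'(x0) for a real-analytic f via Im(f(x0 + i*h)) / h.
    Unlike a finite difference, there is no subtraction of nearly equal
    numbers, so h can be made tiny without cancellation error."""
    return f(complex(x0, h)).imag / h

# Example: f(x) = x^2 * sin(x), with f'(x) = 2x*sin(x) + x^2*cos(x)
f = lambda z: z * z * cmath.sin(z)      # must be written with complex-capable ops
x0 = 1.3
approx = complex_step_derivative(f, x0)
exact = 2 * x0 * math.sin(x0) + x0 ** 2 * math.cos(x0)
print(approx, exact)
```

As the comments on the post note, for an analytic complex function such as f(z) = z² the limit defining df/dz is direction-independent, so an ordinary difference quotient with a small real step already converges; the complex-step trick above is specifically a way to avoid cancellation when differentiating real functions of a real variable.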
Elon Musk Buys Glock and Smith & Wesson. Plans To Have Both Companies Shut Down By Year's End

May 25, 2022 Bonine potpourri 0

The world's richest man is making headlines in the news again. This time Musk has thrown his hat in the firearms arena. But hold on a second, it's not what you think. That's right, Musk has purchased both Glock and Smith & Wesson firearms, but not to bring the two companies to new heights. Musk has plans set in motion to have both companies shut down by year's end.

Firearms production is expected to be halted by mid-August. All purchased orders will be fulfilled according to predated contracts. From August until year's end there will be a complete shutdown of both companies in all facets. Customer service is expected to end at year's end as well.

What does this mean for the consumer? Well, unless you have already ordered a weapon you may be out of luck, as remaining weapons will be sold as "first come, first served". Companies like Beretta and Mossberg are expecting an increase of orders by 30%. While this is not an end to firearms as a whole, it could be the start of a disturbing new trend.

Stay tuned to dailynewsreported.com for more on this and all the news that matters most!
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,533
{"url":"http:\/\/rutherford.org.nz\/hon_rf.htm","text":"Biography | Milestones | DVD\/Books | Birthplace | Other Places | Awards | Honouring Ern | Bibliography | Miscellaneous | Site Map\n\n Honouring Ern Banknote Stamps Rf - element 104 - A little background - Beyond Uranium - Element 104 - References - Appendix Rutherfordine Rutherford Cafe Buildings Awards\n\n Rutherfordium - Elementary my dear Ernest. John Campbell (A version of this was first written for newspapers throughout New Zealand. Another was published in the New Zealand Science Teacher 86 36-7 1997.) After years of international haggling and horse trading, Ernest Rutherford has finally and formally become the only New Zealander to have a chemical element named in his honour. First a little background in chemistry and physics. Our world and extensive galaxies comprise just 92 chemical elements. Of these only 81 are truely stable. Those heavier than bismuth are radioactively decaying away until eventually there will be none left. Those existing naturally, such as uranium, do so solely because they are decaying at such a slow rate that there are still significant numbers of these atoms remaining today, some 15 billion years since the elements first formed. For a hundred years we have known of the existence of the electron, the first object to be discovered which was smaller than an atom. For eighty years we have known that the chemistry of an atom is governed by the number of electrons in orbit about an atom. Just prior to that it was Ernest Rutherford who showed the atom was a nuclear entity, with almost all its mass in a nucleus less than a thousandth the diameter of an atom. (If the orbital electrons of the atoms making up our body were pushed into the nucleus, as happens in a neutron star, our body would fit into a small grain of sand.) The nucleus consists of protons (the nucleus of a hydrogen atom) and neutrons. 
Ernest Rutherford was the first person to propose that the neutron had to exist and one of his colleagues discovered it. Neutrons are electrically uncharged and, if isolated outside the nucleus, decay after eleven minutes into a proton and an electron. Atoms have as many protons in their nucleus as they have electrons in orbit. Hence the chemistry of an element is effectively determined by the number of protons in its nucleus, the atomic number, which, for those known to occur in nature, ranges from 1 for hydrogen to 92 for uranium. back to top Beyond Uranium The heavy elements just above uranium were made in nuclear reactors where neutrons entered a nucleus and the resultant heavier nucleus decayed by beta decay, thus moving one higher in the periodic table. To the discoverer went the honour of proposing a name for each new element. Neptunium (Np, A = 93) and plutonium (Pu, A = 94) were natural successors to uranium. Names such as americium (Am, A = 95), berkelium (Bk A = 97) and californium (Cf, A = 98) commemerated the place of discovery. Curium (Cm, A = 96), einsteinium (Es, A = 99), fermium (Fm, A = 100), mendelevium (Md, A = 101) and lawrencium (Lr, A = 103) honoured the discoverers' scientific heroes. Nobelium (Nb, A = 102), claimed first, but never substantiated, by an international group working at the Nobel Institute in Stockholm, was named in honour of the man who left his fortune for prizes to promote science. Fermium (A = 100) was the heaviest element obtainable from reactors. Those beyond fermium became special. The production of these heavy nuclei became more important when, between 1966 and 1972, new theories of the nucleus predicted that the arrangement of particles in nuclei of atomic number around 114 should again be stable. But where were these elements? Renewed extensive searches of nature still failed to find them. 
They could only be made by accelerating nuclei of around ten to twenty protons and neutrons to high speeds in large accelerators and smashing them into the heaviest nuclei abundently produced in the reactors. The heavier the element the harder its manufacture become and the shorter time it survived, until only three laboratories specialised in this work: Berkeley in California, Darmstadt in Germany and Dubna in Russia. And from element 102 on, the controversies started. These were fueled by the cold war of the time. back to top Element 104 In 1964 the Dubna group, led by G N Flerov, claimed to have manufactured one isotope of element 104 by smashing neon nuclei into plutonium. They proposed the name kurchatovium (Ku), in honour of the Soviet nuclear physicist Igor Kurchatov. Albert Ghiorso and co-workers at the Lawrence Berkeley Laboratory of the University of California spent a year attempting to repeat this work but finally had to conclude that element 104 could not have been manufactured by Dubna. In 1969 the Berkeley team produced element 104 in an entirely different way. They bombarded the world's supply of californium with high speed nuclei of carbon atoms. The isotope of element 104 thus produced survived typically less than four seconds so could be uniquely identified by its half-life and the energy of the alpha particles it emitted in the process of spontaneous radioactive decay. In November of 1969, at celebrations marking the centennial of Mendeleev, the father of the periodic table, Al Ghiorso proposed that element 104 be named rutherfordium (Rf) because Ernest Rutherford was one of his heroes. We are suggesting that element 104 be called rutherfordium, after Lord Rutherford, the great pioneer of nuclear science. 
If in the course of further experiments, contrary to our present expectations, we do confirm the earlier findings of the Dubna group of approximately three-tenths of a second spontaneous-fission activity, we will withdraw our suggested name and accept that proposed by the Soviet group, kurchatovium.'' back to top This was most fitting as it was Rutherford who had first explained the nature of radioactivity, that one element was decaying into another. That concept had been such an advance in science that Rutherford had been awarded the 1908 Nobel Prize in Chemistry. Also he named the alpha particles. For nearly two decades the world lived with three names for element 104. Each country used its own name, the Oxford Dictionary listed both but politically correct periodic tables used an interim name Unnilquadlium, the latin for one zero four, or Unq for short. To solve the impass, a Transfermium Working Group, a joint committee of the International Union of Pure and Applied Physics and the International Union of Pure and Applied Chemistry, was set up in 1985 to determine precedence of discovery for all elements beyond fermium. This would allow unique names to be assigned. In 1992 the committee concluded that the two groups should share credit for discovery of the elements 104 and 105. This conclusion was bitterly rejected by the Berkeley group and others. An August 1994 meeting adopted a new rule that no element could be named after a living person. Since both Albert Einstein and Enrico Fermi had been alive when they had had element names proposed in their honour, this move was a ploy to take the very much alive Glenn Seaborg's name off element 106 so that that element could then be renamed rutherfordium. This left element 104 open to be be renamed dubnium. Confusion reigned. Elements 104 to 109 were to be named but international arguments continued over most of these. 
As I had already set an exam question for 1994 based on element 104 being named rutherfordium I supported the Americans. back to top In 1997 a final compromise was reached and all phases of naming were passed. 104 rutherfordium (Rf), 105 dubnium (Db), 106 seaborgium (Sg), 107 bohrium (Bh), 108 hassium (Hs) and 109 meitnerium (Mt). There are still problems. Bohrium, rather than the proposed nielsbohrium, has the same name as the element boron (borii) in both Russian and German. And the American group probably wont, in practice, accept dubnium in place of hahnium which they had proposed in honour of Otto Hahn. It is significant that both Niels Bohr and Otto Hahn first became internationally famous while working with Ernest Rutherford. The suggestion has been made that future names be picked jointly by the Americans, the Germans and the Russians who work in the field. When the other two groups have repeated the work the discoverer will be asked to suggest a name which is satisfactory to all three. Only then will it go to the formal naming committee. The naming of elements 110 to 112, all recently discovered, is being held over. Even so, the holy grail of stable heavy-nuclei still eludes their creators. But our Ern has his element. From its position in the periodic table, rutherfordium should have similar chemistry to hafnium. Its longest living isotope has a half-life of about 70 seconds. Only a few thousand atoms of rutherfordium have ever been manufactured and probably no more than 100 of these atoms have ever been chemically isolated using a special cation exchange column. As you read this there will most likely be not one atom of rutherfordium in existence, unless one of the three groups are painstakingly manufacturing it for further experiments. Fame is but fleeting. However it is too much of a gamble to hold back a favourite name on the chance that stable heavy-nuclei will eventually be manufactured. 
back to top References I am most grateful for many discussions with Al Ghiorso concerning events mentioned in this article. Rutherford Scientist Supreme} John Campbell, AAS Publications, 1999 contact: aas@its.canterbury.ac.nz The Elements Beyond Uranium Glenn Seaborg and Walter Loveland, John Wiley and Sons, 1990. The Discovery of Elements 95-106 Al Ghiorso, in the Welch Conference Proceedings {\\it Fifty Years with the Transuranium Elements, Oct 22-23, 1990. A History and Analysis of the Discovery of Elements 104 and 105 Hyde, Hoffman and Keller, Radiochimica Acta 42 57-102 1987. back to top Appendix - Recent Work Towards Stable Heavy Elements. As the researchers pushed towards higher elements the yields got smaller and the half-life of the elements became shorter, ie they were less stable than their predecessors. For example, in 1996 the Darmstadt group produced element 112 with mass number 277 which had a half-life of only 280 microseconds. In December of 1998 the Russian group completed a joint experiment in which they had bombarded Livermore's plutonium-244 sample with calcium-48 ions accelerated in a heavy ion cyclotron. The bombardment was maintained for 40 days during which they produced just one atom of element 114. This work was reported in the Oct 18 1999 issue of Physical Review Letters. What was exciting is that that isotope had a half-life 100,000 times longer than that of element 112, the last new element found before element 114. Throughout 1999 the experiment was repeated twice with the discovery of another isotope whose half-life was one second. Two other isotopes were discovered independently by the Lawrence Berkeley team and another Dubna team. Are they approaching the island of stability? Looks like it. The long lived isotope re-invigorated heavy element research at Lawrence Berkeley National Laboratory where by the end of 1999 they had discovered elements 116 and 118. 
One problem is how does one know when a stable element has been produced in quantities of just one or two atoms? It was Rutherford who proclaimed radioactivity as by far and away the most sensitve method of detection of atoms. So probably we will have to await the discovery of even higher elements whose half-lives first increase then a gap to those whose half-lives once again decrease with element number. Watch this space. back to top\nBiography | Milestones | DVD\/Books | Birthplace | Other Places | Awards | Honouring Ern | Bibliography | Miscellaneous | Site Map\n\nWebsite maintained by John Campbell, author of Rutherford Scientist Supreme","date":"2017-12-11 11:07:37","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.3810405135154724, \"perplexity\": 2243.812645503405}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-51\/segments\/1512948513478.11\/warc\/CC-MAIN-20171211105804-20171211125804-00570.warc.gz\"}"}
null
null
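The Rutherford article above repeatedly quotes isotope half-lives (about 70 seconds for the longest-lived rutherfordium isotope then known, 280 microseconds for element 112). As a small illustrative sketch of what a half-life means quantitatively (the numbers and names are mine, drawn from the article's figures), exponential decay N(t)/N0 = 2^(−t/T½) in Python:

```python
def fraction_remaining(t_seconds: float, half_life_seconds: float) -> float:
    """Fraction of a radioactive sample surviving after time t,
    using N(t)/N0 = 2**(-t / T_half)."""
    return 2.0 ** (-t_seconds / half_life_seconds)

# With a 70-second half-life, half the atoms survive 70 s,
# and roughly one in a thousand survives ten half-lives.
print(fraction_remaining(70, 70))    # 0.5
print(fraction_remaining(700, 70))   # 2**-10, about 0.00098
```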
Q: Notepad save button messed up line returns

In Notepad when I have a large file open, when I press save EVERY line feed is deleted and all compressed! Any suggestions?

A: Choose File, Save As and note the encoding. Use a hex dump tool to check the line endings - Notepad expects CR LF, but files from non-Windows sources, or files downloaded with incorrect FTP settings, might have just CR or just LF. Open the file in WordPad instead.

A: I figured it out: I had used a registry hack to enable the status bar AND word wrap at the same time, which bugged it. Disabling the hack fixed it.
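The first answer suggests checking line endings with a hex dump tool. A small Python sketch (the sample bytes are invented for illustration) that counts the three terminator styles directly from raw bytes:

```python
def count_line_endings(data: bytes) -> dict:
    """Count CRLF, bare LF and bare CR terminators in raw bytes.
    Classic Notepad only renders CR LF as a line break, which is
    why LF-only files appear as one long compressed line."""
    crlf = data.count(b"\r\n")
    lf = data.count(b"\n") - crlf   # LFs not part of a CRLF pair
    cr = data.count(b"\r") - crlf   # CRs not part of a CRLF pair
    return {"CRLF": crlf, "LF": lf, "CR": cr}

sample = b"one\r\ntwo\nthree\rfour"
print(count_line_endings(sample))   # {'CRLF': 1, 'LF': 1, 'CR': 1}
```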
{ "redpajama_set_name": "RedPajamaStackExchange" }
4,516
Q: MySQL : Selecting the rows with the highest group by count

I have a table with records that are updated every minute with a decimal value (10,2). To ignore measure errors I want to have the number that has been inserted the most. Therefore I tried:

    SELECT date_time, max(sensor1), count(ID)
    FROM `weigh_data`
    GROUP BY day(date_time), sensor1

This way I get the number of records:

    Datetime             sensor1  count(ID)
    2020-03-19 11:49:12  33.22    3
    2020-03-19 11:37:47  33.36    10
    2020-03-20 07:32:02  32.54    489
    2020-03-20 00:00:43  32.56    891
    2020-03-20 14:20:51  32.67    5
    2020-03-21 07:54:16  32.50    1
    2020-03-21 00:00:58  32.54    1373
    2020-03-21 01:15:16  32.56    9
    2020-03-22 08:35:12  32.52    2
    2020-03-22 00:00:40  32.54    575
    2020-03-22 06:50:54  32.58    1

What I actually want is for each day one row which has the highest count(ID). Anyone can help me out on this?

A: With newer MySQL (8.0 and later) you can use the RANK window function to rank the rows according to the count. Note that this will return all "ties", which means if there are 100 readings of X and 100 readings of Y (and 100 is the max), both X and Y will be returned.

    WITH cte AS (
        SELECT DATE(date_time), sensor1,
               RANK() OVER (PARTITION BY DATE(date_time) ORDER BY COUNT(*) DESC) rnk
        FROM `weigh_data`
        GROUP BY DATE(date_time), sensor1
    )
    SELECT * FROM cte WHERE rnk=1

If you just want to pick one (non deterministic) of the ties, you can instead use ROW_NUMBER in place of RANK.

A DBfiddle to test with.

A: Here is a solution based on a correlated subquery, that works in all versions of MySQL:

    select w.*
    from weigh_data w
    where w.datetime = (
        select w1.datetime
        from weigh_data w1
        where w1.datetime >= date(w.datetime) and
              w1.datetime < date(w.datetime) + interval 1 day
        order by sensor1 desc
        limit 1
    )

Just like the window function solution using rank(), this allows top ties. For performance, you want an index on (datetime, sensor1).
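The logic both answers implement — for each day, keep the sensor value with the highest count — can also be sanity-checked outside SQL. A hypothetical Python sketch with made-up sample rows (not the asker's data):

```python
from collections import Counter

# (timestamp, sensor1) rows, analogous to the weigh_data table
readings = [
    ("2020-03-20 00:00:43", 32.56),
    ("2020-03-20 07:32:02", 32.54),
    ("2020-03-20 07:33:02", 32.56),
    ("2020-03-21 00:00:58", 32.54),
    ("2020-03-21 01:15:16", 32.56),
    ("2020-03-21 02:15:16", 32.54),
]

def mode_per_day(rows):
    """Return, per calendar day, the value inserted most often
    (ties broken arbitrarily, like ROW_NUMBER in the first answer)."""
    per_day = {}
    for ts, value in rows:
        day = ts.split(" ")[0]
        per_day.setdefault(day, Counter())[value] += 1
    return {day: counts.most_common(1)[0][0] for day, counts in per_day.items()}

print(mode_per_day(readings))   # {'2020-03-20': 32.56, '2020-03-21': 32.54}
```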
Pinpin Co, a Chinese artist raised in Japan, uses a simple gel ink pen to turn her subjects' faces into temporary works of art that are then washed away in a few seconds. You've probably seen impressive body painting before, but what Pinpin Co does is truly unique. Using a 0.38mm gel ink pen, the young artist spends around five hours drawing on people's faces, creating fascinating artworks that often capture physical or mental scars that each of them possesses. She is inspired by every person she uses as her canvas; their lives and experiences help Pinpin create new and exciting works of art every time. "It often becomes a therapeutic process," she told Japanese website Antenna7 in an interview, referring to the doctor-patient relationship that often develops between her and her subjects. But her unique art has been a healing process for Pinpin as well. She recently decided to use her father as a subject, whom she has lived apart from for most of her life. Over the course of a month, the artist traveled to Aomori, visiting her father and re-establishing a relationship, which ultimately resulted in one of her most impressive works yet. "Coming in contact with a stranger's skin and a family member's skin are two completely different experiences," Co said after drawing on her father's face.
Just When You Thought Big Food Couldn't Sink Any Lower…

In the 1960s, the sugar industry perpetrated a huge fraud on the public. People were starting to blame sugar for the sudden rise in heart disease in America. So sugar makers hatched a plot to keep their profits rolling in. They called it Project 226.1

Under the scheme, they bribed Harvard professors to publish fake research. Their "studies" showed heart disease was not caused by sugar, but by fat. One of the scientists taking sugar industry payoffs was Dr. Frederick Stare. He was chair of Harvard's Public Nutrition Department. Dr. Stare praised Coca-Cola as "a healthy between-meals snack." He recommended sugar as a good "quick energy food."2

As a result of the false "research" and statements like Dr. Stare's, Americans continued to eat foods packed with sugar. You know the rest of the story… Heart disease became our nation's number one killer. It still is today. It takes the lives of more than 600,000 Americans every year. It wasn't until more than 50 years later that Project 226 was exposed.

If you thought Big Food had learned its lesson…you'd be wrong. They recently tried again to manipulate sugar research to boost their profits. Big Food even managed to get a prestigious scientific journal, the Annals of Internal Medicine, to publish its "findings."3

The new research purported to refute earlier studies showing that sugar wrecks your health. It concluded that warnings to cut sugar intake couldn't be trusted because there is no evidence to back them up.
Unlike the fraud of the 1960s, this time scientists not on Big Food's payroll immediately exposed the scam. They revealed the study was funded by Coca-Cola, General Mills, Hershey's, Kellogg's, Kraft Foods, and Monsanto.4 One of the whistleblowers was Marion Nestle. She's a professor of nutrition, food studies, and public health at New York University. "It's shameful," she said. "This is a classic example of how industry funding biases opinion." Barry Popkin is a professor of nutrition at the University of North Carolina. He said he was stunned the pro-sugar paper was even published. Its authors "ignored the hundreds of randomized controlled trials" that have documented the harms of sugar, he noted. "They ignored the real data, created false scores, and somehow got through a peer review system that I cannot understand," he said. "It is quite astounding." The study appeared as the sugar industry is threatened by new initiatives aimed at fighting the global obesity crisis. Sugar taxes and new government dietary guidelines will cut into their huge profits. But it shows Big Food will stop at nothing to mislead Americans and keep people eating their toxic products.5 The White Powder Will Kill You Don't let the processed food industry hoodwink you. We've been telling you for years about the toxic effects sugar has on your body. Study after study has shown that sugar is killing Americans. It's linked to obesity, diabetes, high blood pressure, heart disease, stroke, cancer, depression, and more. Sugary drinks alone kill about 184,000 people a year, according to a study by Tufts University.6 But Big Food just keeps trying to keep Americans in the dark. In 2015, Coca-Cola provided millions of dollars to researchers who sought to downplay the link between sugary drinks and obesity. In 2016, the Associated Press reported that candy makers funded a study claiming that children who eat candy are less overweight than those who don't. 
Misleading headlines like "Does Candy Keep Kids from Getting Fat?" appeared in the media.7

One more thing… Now that sugar has finally been identified as one of the worst foods you can eat, food makers have found ways to hide it. Instead of calling it "sugar" on ingredient labels, they use weasel words to cover their tracks. That's why our team of researchers created this important guide. It lists 80 food label words that signal one thing: hidden sugar.

1. https://tobacco.ucsf.edu/sugar-papers-reveal-industry-role-shifting-national-heart-disease-focus-saturated-fat
2. http://www.economist.com/node/1086689
3. http://www.nytimes.com/2016/12/19/well/eat/a-food-industry-study-tries-to-discredit-advice-about-sugar.html?hpw&rref=health&action=click&pgtype=Homepage&module=well-region&region=bottom-well&WT.nav=bottom-well
4. http://www.naturalnews.com/2016-12-21-sugar-study-that-tried-to-discredit-new-guidelines-tied-to-coca-cola-kelloggs-and-monsanto.html
6. https://www.institutefornaturalhealing.com/2017/02/80-words-mean-one-thing-added-sugar/
7. http://bigstory.ap.org/article/f9483d554430445fa6566bb0aaa293d1/ap-exclusive-how-candy-makers-shape-nutrition-science
No, Robinhood Traders Aren't Impacting the Stock Market

Finley Back

The many new Robinhood stock traders aren't moving the markets as much as people think

Image: Jim Watson/Getty Images

There is a force that is moving the stock market. No, it's not the Fed. It's not the hedge funds. It's an army of day traders numbering in the hundreds of thousands. At least, that is what some of the headlines would have you believe. They are calling it the Robinhood effect, and for good reason. In early May, Robinhood reported three million new account openings in the first four months of the year, half of which were for first-time investors. And it's these new investors that are supposedly having an outsized impact on the stock market. But is that true?

Before we can answer that, we need to go back to where this all began. Though March 11, 2020 may not be a particularly memorable day for the stock market, it set in motion a chain of events that would ultimately create the flood of Robinhood investors. What happened on March 11? The cancellation of the 2019–2020 NBA season.

It may seem odd to equate the end of sports with the resurgence of the retail investor, but this link is widely accepted at this point. The first connection between the absence of sports and the rise in retail investing came on April 29, when Jessica Rabe at DataTrek put forward the theory that the lack of sports betting was pushing would-be gamblers to the stock market.
Soon afterwards, Bloomberg columnist Matt Levine discussed the same theory, which he eventually dubbed the "Boredom Markets Hypothesis." Long story short, the money that normally would have found its way into sportsbooks across the country had to find a new home, and that new home was the stock market.

I first noticed this shift in late March, when Dave Portnoy, of Barstool Sports fame, was vlogging about his trading activity. What I thought was a one-off video quickly became a rebranding for Portnoy that landed him on CNBC less than two months later. By mid-June, Fox Business was reporting that Portnoy was leading "an army of day traders."

But it wasn't just the surge of gamblers-turned-traders that fueled the Robinhood effect. Once news outlets began reporting on how these novice investors were beating the billionaires at their own game, Wall Street started to take notice. For example, an analysis from Goldman Sachs claimed that Robinhood traders' stock picks outperformed those of hedge funds, and Societe Generale concluded that Robinhood traders "nailed the market bottom."

From this perception that the amateurs were beating the pros came the impression that Robinhood traders were affecting the stock market in a meaningful way. But was it true? Was the Robinhood effect real after all?

The first signs of evidence against the Robinhood effect appeared in early June, when an analysis from Barclays found that Robinhood traders weren't behind the market rally. Their analysis compared which stocks Robinhood users were buying with movement in the S&P 500, but they found no relationship.
Though Citadel Securities recently reported that retail trading now makes up 25% of all market volume (up from 10% in 2019), it is likely that these trades are too small to make an impact.

But instead of taking their word for it, I decided to test this hypothesis by scraping all of the data from Robintrack a few days ago, a website showing the number of Robinhood users that hold a particular stock over time. For example, below is a chart of Ford's (F) stock price along with the number of Robinhood users holding its stock since May 2018:

[Chart: Number of Robinhood users holding Ford stock (green) compared to the price of the stock (pink)]

The great thing about Robintrack is that you can download the number of Robinhood users holding any given stock over time. (Unfortunately, this won't be true for much longer. Bloomberg recently reported that Robinhood is shutting down this data feed to prevent other people from "using it in ways that they can't monitor/control." Thankfully, the historical data will be allowed to remain on the site.)

With this in mind, I took the top 200 stocks on Robintrack and downloaded their holdings data. The number of holders in the top 200 ranged from 900,000 for the most popular stock (Ford) to 40,000 for the 200th most popular (Azul Airlines). After combining the holdings data with pricing data from Yahoo Finance, I was able to look at the one-day change in the number of Robinhood users holding a stock and see how well it correlated with the one-day price return of that stock.
I did this because I wanted to test whether an increase (or decrease) in Robinhood users holding a stock was met with a similar increase (or decrease) in that stock's price. I understand that the number of Robinhood users holding a stock is not the same as the total dollar impact that Robinhood users have on a stock (that is, not all Robinhood traders have the same bankroll), but let's assume that they are similar in size for now. Additionally, I created a subset of the data starting on February 19 (the day before the Covid-19-driven sell-off began) to only capture the correlation from when Robinhood users started becoming more active on the platform.

After doing this exercise for the top 200 most popular stocks on Robintrack, I found that for most of these stocks, there was little to no correlation between the one-day change in stock price and the one-day change in the number of Robinhood users holding them:

Each point in the plot above represents one stock's correlation between the change in Robinhood holders and the change in price. For example, the far leftmost point in the plot above is for the most popular stock (Ford), which has a correlation near zero. As you can see, most of the correlations are between 0.3 and -0.3, which suggests a weak relationship, if any. It is those correlations near 1 (or -1) that signal stronger relationships. Additionally, I ordered this plot by popularity rank on Robinhood to show that there is also no relationship between popularity on Robinhood and the correlation metric I calculated. This means that, on Robinhood, more popular stocks don't have higher correlations than less popular stocks.
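The check described above — correlating the one-day change in holders with the one-day price return — boils down to a Pearson correlation on differenced series. A minimal pure-Python sketch with synthetic numbers (not actual Robintrack or Yahoo Finance data):

```python
# Sketch of the article's check: Pearson correlation between the one-day
# change in Robinhood holders and the one-day price change.
# The two series below are synthetic, invented for illustration.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def one_day_changes(series):
    """Difference consecutive daily observations."""
    return [b - a for a, b in zip(series, series[1:])]

holders = [100, 120, 150, 140, 180]       # users holding the stock each day
prices = [10.0, 10.5, 11.2, 10.9, 11.8]   # closing price each day

corr = pearson(one_day_changes(holders), one_day_changes(prices))
print(round(corr, 3))  # 0.997
```

With real data, values near 0 (as the article found for most large stocks) indicate no relationship; values near 1 or -1 indicate a strong one.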
If we had found that the most popular stocks on Robinhood had a high correlation between the change in Robinhood holders and the change in price, then we might be able to argue that Robinhood traders had an impact on markets as a whole. However, that doesn't seem to be the case here.

Still, if you look at the plot above you will notice that some stocks do exhibit a high correlation between the change in the number of Robinhood users holding them and their one-day price change. To examine this in more depth, I ranked these top 200 stocks from the one with the lowest correlation to the one with the highest, and found that Robinhood activity is significant for some stocks:

The stocks on the far right side of the chart appear to be affected by Robinhood users, while those in the center and to the left appear to be unaffected. It may come as no surprise that smaller speculative stocks like Kodak, Nikola, Hertz, and Moderna show a high correlation between Robinhood users holding the stocks and price changes. On the other hand, much larger stocks like Apple, Amazon, and Tesla show essentially no correlation. Since these are higher market cap stocks, it's possible that even large amounts of volume from Robinhood traders can't move the market, compared to, say, a stock like Hertz, which is trending toward being worth $0. However, because the smaller, more speculative stocks get more headlines, this perpetuates the Robinhood effect myth even further.
You can see this clearly when you compare the number of Robinhood users holding a stock like Kodak with Kodak's price:

[Chart: Number of Robinhood users holding Kodak stock (green) compared to the price of the stock (pink)]

Yes, Robinhood traders are affecting Kodak, but that doesn't mean they are materially affecting larger stocks (like Apple or Amazon) or the stock market overall. This demonstrates that there appear to be some stocks for which the Robinhood effect is real, but for most stocks it isn't. Popularity on Robinhood is not predictive of price changes, but it is predictive of what will make the headlines.

While I would like to follow up on this analysis in the future to test whether the Robinhood effect ever gets stronger, based on Robinhood's recent actions with Robintrack, we know this won't be possible. This change in data transparency suggests that Robinhood wants to shed its day trader image in order to attract a different kind of clientele. Following the recent suicide of a young Robinhood trader, I am not surprised that the company is trying to distance itself from a movement it helped create. For a company whose mission has always been to democratize finance, maybe they succeeded a little too well.
Q: (CSS) Creating a box with a blurred background

So here is what I want to do: I have a text box in a page that I'm making, and I want the picture behind it to blur ONLY inside the text box. I'm terrible at explaining this, so I'll make a diagram here:

~~~~background~~~
~~~~~~~~~~~~~~~~~
~~~~XXXXXXXXX~~~~
~~~~XXXXXXXXX~~~~
~~~~XX-Box-XX~~~~
~~~~XXXXXXXXX~~~~
~~~~XXXXXXXXX~~~~
~~~~~~~~~~~~~~~~~

What I want is for the background behind the "X" to blur.

A: This can be achieved by using filter: blur(), z-index, and the :before or :after pseudo-elements, applied so that the text in the box appears in front of the image. Please have a look at the code below or check out this Codepen.

.box { padding: 10px 20px; border: 1px solid #fff; color: #fff; position: relative; z-index: 2; }
.box p { position: relative; z-index: 1; }
.box:before { content: ''; position: absolute; top: 0; left: 0; width: 100%; height: 100%; background: url('https://d2lm6fxwu08ot6.cloudfront.net/img-thumbs/960w/T1G833T4BI.jpg'); filter: blur(6px); z-index: 1; }

<div class="holder"> <div class="box"> <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Beatae tempora sapiente, tenetur in aliquid? Adipisci facilis animi, nam reiciendis tempora, incidunt voluptatibus quis dicta soluta culpa quam dolore laborum, obcaecati!</p> </div> </div>

Hope this helps!
CREATE TABLE "Tasks" (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    -- the DEFAULT must be a literal, so the bare CardTitle is quoted
    title TEXT NOT NULL DEFAULT 'CardTitle',
    containerId INTEGER NOT NULL DEFAULT 0,
    creationTime TEXT,
    text TEXT,
    boardId INTEGER NOT NULL DEFAULT 0,
    selectionTime TEXT,
    finishedTime TEXT,
    categoryId INTEGER NOT NULL DEFAULT 0,
    FOREIGN KEY(containerId) REFERENCES Containers(id),
    FOREIGN KEY(categoryId) REFERENCES Categories(id)
)
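As a quick sanity check, the schema above (SQLite syntax, given the AUTOINCREMENT keyword) can be exercised with Python's built-in sqlite3 module. The parent tables here are minimal stand-ins, and the bare DEFAULT CardTitle is quoted as a string literal, since a bare identifier is not a valid default:

```python
# Sketch: exercising the Tasks schema with Python's sqlite3.
# Containers/Categories are hypothetical minimal stubs; the referenced
# schema does not define them here.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # FK enforcement is off by default
conn.execute("CREATE TABLE Containers (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE Categories (id INTEGER PRIMARY KEY)")
conn.execute("""
CREATE TABLE "Tasks" (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    title TEXT NOT NULL DEFAULT 'CardTitle',
    containerId INTEGER NOT NULL DEFAULT 0,
    creationTime TEXT,
    text TEXT,
    boardId INTEGER NOT NULL DEFAULT 0,
    selectionTime TEXT,
    finishedTime TEXT,
    categoryId INTEGER NOT NULL DEFAULT 0,
    FOREIGN KEY(containerId) REFERENCES Containers(id),
    FOREIGN KEY(categoryId) REFERENCES Categories(id)
)""")
conn.execute("INSERT INTO Containers (id) VALUES (1)")
conn.execute("INSERT INTO Categories (id) VALUES (1)")
conn.execute("INSERT INTO Tasks (containerId, categoryId) VALUES (1, 1)")
row = conn.execute("SELECT title, boardId FROM Tasks").fetchone()
print(row)  # ('CardTitle', 0)
```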
Q: Is there a good emacs mode for displaying and editing huge delimiter separated files? I've been searching without finding for a while for a mode that makes editing huge tab/comma/colon-separated files easy. I've been wanting a mode that ensures that columns always line up, just like org-mode tables. I know I can easily turn the whole file into an org-mode table and then turn it back when I'm done, but that gets really slow with huge files, and is a hassle for quick edits (there's also the problem of what happens if a field contains a vertical bar). So does anyone know of either a mode or a built-in function/variable I can use so that I can get a file like col1\tcol2\tcol3 very long column1\tcol2\tcol3 displayed like col1 col2 col3 very long column1 col2 col3 ? (perhaps with some color lining the separator) A: As @choroba mentioned, use csv-mode. To answer your question specifically: * *Make sure your separator is in csv-separators, which for example you can set with (setq csv-separators '("," " ")) *Use csv-align-fields (default keybinding C-c C-a) to line up the field values into columns. @unhammer's comment about aligning only visible lines is great. Their code properly indented: (add-hook 'csv-mode-hook (lambda () (define-key csv-mode-map (kbd "C-c C-M-a") (defun csv-align-visible (&optional arg) "Align visible fields" (interactive "P") (csv-align-fields nil (window-start) (window-end)) ) ) ) ) A: There is pretty-column.el, which I found in Group gnu.emacs.sources years ago (it was added in 1999). That group is now blocked, by Google. I just used pretty-column.el on a ~5000 line tab-separated text file that Org mode choked on (Org mode has a 999 line limit on converting such a file--for that reason). Added in edit: This seems to now be called delim-col.el (see this Emacs Wiki entry); the author is the same person. A: Perhaps you could tell us what you've already found and rejected? 
If you've been searching, then you must surely have seen http://emacswiki.org/emacs/CsvMode ? You don't mention it, or say why it wasn't any good, though. SES (Simple Emacs Spreadsheet) might be a useful approach: C-hig (ses) RET You can create a ses-mode buffer and yank tab-delimited data into it (that's the import mechanism). It's probably more hassle than you were after, though, and I'm not sure how well it will perform with "huge" files. A: Try csv-mode, which works in at least Emacs 24. You can set the variable csv-separators to change the separator if you do not use the default one (comma). See EmacsWiki.
Q: Change colour of label.btn

I am using Bootstrap 3 & Font Awesome. I have the following HTML:

<label for="foo" class='radio btn'>
  <input type="radio" id='foo' name='foo[]'>
  <span></span> Foo
</label>

With the following CSS:

label.radio {background: #e8e8e8; font-size: 25px; text-align: center; width: 100%; padding-top: 30px; padding-bottom: 30px;}
label.radio.active {background: #f00;}
label.radio > input {display: none;}
label.radio > input+span:after {font-family: FontAwesome; content: "\f00d";}
label.radio > input:checked+span:after {font-family: FontAwesome; content: "\f00c";}

However, I want to change the background of the label when it's clicked. I can't see a way to do this with CSS, so I tried jQuery:

$("label.radio").click(function (evt) {
  $(this).toggleClass("active")
})

But the click() function fires twice. How can I achieve this?

A: You can use the radio button's change event to change the colour of the label.

<label for="foo" id="fooLabel" class='radio btn'>
  <input type="radio" id='foo' name='foo[]'>
  <span></span> Foo
</label>

$('#foo').change(function() {
  $('#fooLabel').toggleClass("active")
});
. ${BUILDPACK_TEST_RUNNER_HOME}/lib/test_utils.sh

createPom() {
  cat > ${BUILD_DIR}/pom.xml <<EOF
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <version>1.0-SNAPSHOT</version>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>
  <dependencies>
    $1
  </dependencies>
</project>
EOF
}

withDependency() {
  cat <<EOF
<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>4.0</version>
  <type>jar</type>
  <scope>test</scope>
</dependency>
EOF
}

createSettingsXml() {
  if [ ! -z "$1" ]; then
    SETTINGS_FILE=$1
  else
    SETTINGS_FILE="${BUILD_DIR}/settings.xml"
  fi
  cat > $SETTINGS_FILE <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <profiles>
    <profile>
      <id>jboss-public-repository</id>
      <repositories>
        <repository>
          <id>jboss-no-bees</id>
          <name>JBoss Public Maven Repository Group</name>
          <url>http://repository.jboss.org/nexus/content/groups/public/</url>
        </repository>
      </repositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>jboss-public-repository</activeProfile>
  </activeProfiles>
</settings>
EOF
}

# Helpers

_assertMaven305() {
  assertCaptured "Installing Maven 3.0.5"
  assertFileMD5 "7d2bdb60388da32ba499f953389207fe" ${CACHE_DIR}/.maven/bin/mvn
  assertTrue "mvn should be executable" "[ -x ${CACHE_DIR}/.maven/bin/mvn ]"
  assertCaptured "executing $CACHE_DIR/.maven/bin/mvn -B -Duser.home=$BUILD_DIR -Dmaven.repo.local=$CACHE_DIR/.m2/repository -DskipTests=true clean install"
  assertCaptured "BUILD SUCCESS"
}

_assertMaven311() {
  assertCaptured "Installing Maven 3.1.1"
  assertFileMD5 "08a6e3ab11f4add00d421dfa57ef4c85" ${CACHE_DIR}/.maven/bin/mvn
  assertTrue "mvn should be executable" "[ -x ${CACHE_DIR}/.maven/bin/mvn ]"
  assertCaptured "executing $CACHE_DIR/.maven/bin/mvn -B -Duser.home=$BUILD_DIR -Dmaven.repo.local=$CACHE_DIR/.m2/repository -DskipTests=true clean install"
  assertCaptured "BUILD SUCCESS"
}

_assertMavenLatest() {
  assertCaptured "Installing Maven 3.2.5"
  assertFileMD5 "9d4c6b79981a342940b9eff660070748" ${CACHE_DIR}/.maven/bin/mvn
  assertTrue "mvn should be executable" "[ -x ${CACHE_DIR}/.maven/bin/mvn ]"
  assertCaptured "executing $CACHE_DIR/.maven/bin/mvn -B -Duser.home=$BUILD_DIR -Dmaven.repo.local=$CACHE_DIR/.m2/repository -DskipTests=true clean install"
  assertCaptured "BUILD SUCCESS"
}

# Tests

testCompileWithoutSystemProperties() {
  createPom "$(withDependency)"
  assertTrue "Precondition" "[ ! -f ${BUILD_DIR}/system.properties ]"
  compile
  assertCapturedSuccess
  _assertMavenLatest
  assertCaptured "Installing OpenJDK 1.8"
  assertTrue "Java should be present in runtime." "[ -d ${BUILD_DIR}/.jdk ]"
  assertTrue "Java version file should be present." "[ -f ${BUILD_DIR}/.jdk/version ]"
}

testCompile() {
  createPom "$(withDependency)"
  compile
  assertCapturedSuccess
  _assertMavenLatest
}

testCompilationFailure() {
  # Don't create POM to fail build
  compile
  assertCapturedError "Failed to build app with Maven"
}

testDownloadCaching() {
  createPom
  # simulate a primed cache
  mkdir -p ${CACHE_DIR}/.m2
  mkdir -p ${CACHE_DIR}/.maven/bin
  cat > ${CACHE_DIR}/.maven/bin/mvn <<EOF
echo "Apache Maven 3.2.3"
EOF
  chmod +x ${CACHE_DIR}/.maven/bin/mvn
  compile
  assertNotCaptured "Maven should not be installed again when already cached" "Installing Maven"
}

testNewAppsRemoveM2Cache() {
  createPom
  rm -r ${CACHE_DIR} # simulate a brand new app without a cache dir
  assertFalse "Precondition: New apps should not have a CACHE_DIR prior to running" "[ -d ${CACHE_DIR} ]"
  assertFalse "Precondition: New apps should not have a removeM2Cache file prior to running" "[ -f ${CACHE_DIR}/removeM2Cache ]"
  compile
  assertCapturedSuccess
  assertTrue "removeM2Cache file should now exist in cache" "[ -f ${CACHE_DIR}/removeM2Cache ]"
  assertFalse ".m2 should not be copied to build dir" "[ -d ${BUILD_DIR}/.m2 ]"
  assertFalse ".maven should not be copied to build dir" "[ -d ${BUILD_DIR}/.maven ]"
}

testNonLegacyExistingAppsRemoveCache() {
  createPom
  touch ${CACHE_DIR}/removeM2Cache # simulate a previous run with no cache dir
  assertTrue "Precondition: Existing apps should have a CACHE_DIR from previous run" "[ -d ${CACHE_DIR} ]"
  assertTrue "Precondition: Existing apps should have a removeM2Cache file from previous run" "[ -f ${CACHE_DIR}/removeM2Cache ]"
  compile
  assertCapturedSuccess
  assertTrue "removeM2Cache file should still exist in cache" "[ -f ${CACHE_DIR}/removeM2Cache ]"
  assertFalse ".m2 should not be copied to build dir" "[ -d ${BUILD_DIR}/.m2 ]"
  assertFalse ".maven should not be copied to build dir" "[ -d ${BUILD_DIR}/.maven ]"
}

testCustomSettingsXml() {
  createPom "$(withDependency)"
  createSettingsXml
  compile
  assertCapturedSuccess
  assertCaptured "Downloading: http://repository.jboss.org/nexus/content/groups/public"
}

testCustomSettingsXmlWithPath() {
  createPom "$(withDependency)"
  mkdir -p $BUILD_DIR/support
  createSettingsXml "${BUILD_DIR}/support/settings.xml"
  export MAVEN_SETTINGS_PATH="${BUILD_DIR}/support/settings.xml"
  compile
  assertCapturedSuccess
  assertCaptured "Downloading: http://repository.jboss.org/nexus/content/groups/public"
  unset MAVEN_SETTINGS_PATH
}

testCustomSettingsXmlWithUrl() {
  createPom "$(withDependency)"
  mkdir -p /tmp/.m2
  createSettingsXml "/tmp/.m2/settings.xml"
  export MAVEN_SETTINGS_URL="file:///tmp/.m2/settings.xml"
  compile
  assertCapturedSuccess
  assertCaptured "Installing settings.xml"
  assertCaptured "Downloading: http://repository.jboss.org/nexus/content/groups/public"
  unset MAVEN_SETTINGS_URL
}

testIgnoreSettingsOptConfig() {
  createPom "$(withDependency)"
  export MAVEN_SETTINGS_OPT="-s nonexistant_file.xml"
  compile
  assertCapturedSuccess
  unset MAVEN_SETTINGS_OPT
}

testMaven311() {
  cat > ${BUILD_DIR}/system.properties <<EOF
maven.version=3.1.1
EOF
  createPom "$(withDependency)"
  compile
  assertCapturedSuccess
  _assertMaven311
}

testMaven305() {
  cat > ${BUILD_DIR}/system.properties <<EOF
maven.version=3.0.5
EOF
  createPom "$(withDependency)"
  compile
  assertCapturedSuccess
  _assertMaven305
}

testMavenUpgrade() {
  # travis doesn't have openjdk8 yet, and some setting it uses causes maven
  # to pick up -XX:MaxPermSize, which writes a warning to STD_OUT on jdk8,
  # which causes this to fail.
  if [ "$TRAVIS" != "true" ]; then
    cat > ${BUILD_DIR}/system.properties <<EOF
maven.version=3.0.5
EOF
    createPom "$(withDependency)"
    compile
    assertCapturedSuccess
    _assertMaven305
    cat > ${BUILD_DIR}/system.properties <<EOF
maven.version=3.2.3
EOF
    compile
    assertCapturedSuccess
  fi
}

testMavenSkipUpgrade() {
  cat > ${BUILD_DIR}/system.properties <<EOF
maven.version=3.0.5
EOF
  createPom "$(withDependency)"
  compile
  assertCapturedSuccess
  _assertMaven305
  rm ${BUILD_DIR}/system.properties
  compile
  assertCapturedSuccess
  assertNotCaptured "Installing Maven"
  assertFileMD5 "7d2bdb60388da32ba499f953389207fe" ${CACHE_DIR}/.maven/bin/mvn
  assertTrue "mvn should be executable" "[ -x ${CACHE_DIR}/.maven/bin/mvn ]"
  assertCaptured "executing $CACHE_DIR/.maven/bin/mvn -B -Duser.home=$BUILD_DIR -Dmaven.repo.local=$CACHE_DIR/.m2/repository -DskipTests=true clean install"
  assertCaptured "BUILD SUCCESS"
}

testMavenInvalid() {
  cat > ${BUILD_DIR}/system.properties <<EOF
maven.version=9.9.9
EOF
  createPom "$(withDependency)"
  compile
  assertCapturedError "you have defined an unsupported Maven version"
}
{ "redpajama_set_name": "RedPajamaGithub" }
3,411
Q: What can using the "opposite" combination for integration by parts be used for For example, $$I=\int xe^{x}dx$$ By taking the derivative of x, and then repeating integration by parts once, the integral can be evaluated trivially. However, when taking the derivative of $e^{x}$ and integrating $x^2$, the process goes on forever. Another example would be $$J=\int \sin{x}\cos{x}dx$$ By repeating integration by parts, $$J= sin^2x + cos^2x + sin^2x + cos^2x ....$$ Does this have any use/any interesting links? e.g. could be used to evaluate integrals where other "combination" of choosing which term is differentiated/integrated is not plausible. A: To expand on my comment, I will explicitly use your first example and another example I have. For $I(z) = \int_0^z xe^x\:dx$, going in the "opposite" direction gives us $$I(z) = \frac{1}{2}z^2e^z - \int_0^z \frac{1}{2}x^2e^x\:dx \implies I(z) = e^z\left(\frac{1}{2}z^2-\frac{1}{6}z^3+\cdots\right)$$ $$=e^z\left(e^{-z}-1+z\right)=1-e^z+ze^z$$ getting the series from repeated integration by parts. 
Another example would be $$\int_0^{\frac{1}{2}}\frac{1}{\sqrt{1-x^2}}\:dx = \frac{x}{\sqrt{1-x^2}}\Biggr|_0^{\frac{1}{2}}-\int_0^{\frac{1}{2}}\frac{x^2}{(1-x^2)^{\frac{3}{2}}}\:dx$$ which gives a series for $$\arcsin\left(\frac{1}{2}\right) = \frac{\pi}{6} = \frac{1}{\sqrt{3}}-\frac{1}{3}\left(\frac{1}{\sqrt{3}}\right)^3+\frac{1}{5}\left(\frac{1}{\sqrt{3}}\right)^5-\cdots$$ A: On example I could see is the following : Imagine you want to compute $$\int_{0}^{1}\frac{\text{d}x}{\left(1+x^2\right)^2}$$ You can use that $$ \int_{0}^{1}\frac{\text{d}x}{1+x^2}=\frac{\pi}{4} $$ and then you integrate by part "in the other way" you usually do $$ \int_{0}^{1}\frac{\text{d}x}{1+x^2}=\int_{0}^{1}\frac{1}{1+x^2}\times 1\text{ d}x=\left[\frac{x}{\left(1+x^2\right)^2}\right]^{1}_0+\int_{0}^{1}\frac{2x^2}{\left(1+x^2\right)^2}\text{d}x $$ Hence $$ \frac{\pi}{4}=\frac{1}{4}+2\int_{0}^{1}\left(\frac{1}{1+x^2}-\frac{1}{\left(1+x^2\right)^2}\right)\text{d}x$$and we find $$\int_{0}^{1}\frac{\text{d}x}{\left(1+x^2\right)^2}=\frac{2+\pi}{8}$$
{ "redpajama_set_name": "RedPajamaStackExchange" }
5,669
A driver has been taken to hospital after a bus and car collided near a roundabout in Leeds. Police were called to Domestic Street in Holbeck shortly after 5pm today to reports of a collision. A West Yorkshire Police spokeswoman said a single decker bus and a black Fiat 500 had been involved. She said initial reports suggested a number of parked cars might also have been damaged. The driver of the Fiat was taken to Leeds General Infirmary, the spokeswoman said.
{ "redpajama_set_name": "RedPajamaC4" }
2,165
Posts tagged Duxford-Hinxton rail crossing CANTAB45 February 2008 CANTAB45 February 2008 published on February 28, 2008 Read more posts by the author of CANTAB45 February 2008, Paul Somewhere to sit down? It's getting a bit late to wish you "Happy New Year", so I shall say, "Welcome Spring", albeit a little prematurely. This has been the season not only for brisk Winter walks, but also a good deal of armchair walking, and planning strategies for the longer days. I visualise maps, the guide books, diary, perhaps holiday catalogues, a beverage to hand, and most important, a comfortable chair, with good lumbar support. When out-of-doors, how often do you sit down? On most walks, one takes a mid-morning break, sometimes a brief afternoon teatime stop, and indeed, the lunchbreak may well be taken as a picnic. Sitting on the ground is not much fun in Winter. Where do you sit down? Cambridge City riverside and open spaces serve its residents and tourists quite well in this respect. Some Cambs villages, such as Thriplow, Coton, Whittlesford and Toft are well-blessed with seats on public open spaces and near road junctions. Generally the recreation ground in a village will have a bench or so, and perhaps a sheltered seat under a pavillion (but don't count on it!). In cold, wind or rain, try the sanctuary of a bus shelter, or a church porch. For the less able, or someone recovering from an injury, the need to sit down at regular intervals becomes a necessity. The best options are then country parks and the like. Top of this list is Magog Down, with a memorial seat every 100m! Wandlebury does fairly well, but the seats become sparser towards the Roman Road. Milton Country Park has a good supply of rather austere benches. Of our local National Trust properties, Anglesey Abbey is well supplied with resting places, but Wimpole has very few away from the vicinity of the house, other than a couple of benches near the lake. 
Why are there no seats up by the viewpoint in front of the folly? I once led an elderly relative up there, only to have to prop her, panting, against a tree! Perhaps walks organisers should consider the need to sit down, amongst the many factors pertinent to a well-planned walk. Fallen or felled trunks or stout branches in the countryside are an obvious solution, so long as the tree from which they derived is not waiting to drop a further limb on unwary travellers. Then there might be a section of wall – mind it doesn't collapse! For those who don't sit comfortably on flat ground, what about the edge of a ditch or a bank, "I know a bank whereon the wild thyme blows..." Finally, one can carry one's own seat, of which various commercial versions are available. The snag with all is they add weight and bulk to the rucksac. Think about it. So are you sitting comfortably? Then I'll begin… Janet Moreton A 'Missing Link' path in Willingham – can anyone help? In 1995, Willingham Parish Council was successful in having a new Public Footpath added to the Definitive Map, on the basis that people had used the path "as of right" over a 20-year period, so that it could now be deemed to be a public right of way. The path runs west from the corner of the Rampton Road at TL 408 695, to the southern end of Mill Road at TL 404 695. Unfortunately although Mill Road itself is recorded as a public road for most of its length, there is a short section of the unmade road over which no public rights seem to have been recorded. Historically, the road was used during the WWII for the transport of fruit; but since then there have been arguments as to whether it was intended to be public, and now there is a gate across its southern end which is often locked so that people have to squeeze round the gate-posts in order to reach the public footpath on the other side. 
Willingham Parish Council is seeking to redress the situation by claiming Public Footpath rights along the unregistered stretch of Mill Road, again on the basis that people have walked it freely for many years. If anyone can help with evidence of their own usage (during any period, but obviously the longer, the better!), you are invited to contact the Parish Clerk at The Parish Office, Ploughman Hall, West Fen Road, Willingham CB24 5LP, or by e-mail to email@willinghampc.org.uk . Duxford / Hinxton rail crossing opened at last! In 1981, when preparing a total survey of path problems in South Cambridgeshire, one of our first complaints to Cambs.C.C. was that of missing stiles to cross the Cambridge-Liverpool Street railway line at TL 491451. Duxford Footpath 8 should have been accessible across the line from Hinxton Footpath 4. Over the years, in our capacity as RA Footpath Secretaries for South Cambridgeshire, we complained again and again…and again. Promises were made and nothing happened. Then last year, Cambs.C.C. approached the Rail Regulator, who decreed that Network Rail must open the public right of way, and erect stiles over the railside fences. The immediate outcome was that, around Christmas, Network Rail put up notices at the site "Walkers using footpath 8 Duxford, 4 Hinxton are requested to use the route shown in green on the map below. Network Rail is in process of applying for a formal modification of the definitive map". Users were directed along a grass track adjacent to the railway, emerging beside a roadside level crossing at TL 494445. As an alternative route to Hinxton, this could hardly have been longer. A similar notice had been erected here, and alongside, was an indignant hand-printed notice "The instructions of Network Rail to use this land as a footpath have been placed without the consent of any of the landowners, who have not been consulted regarding these proposed changes.". 
We reported the on-site notices to Cambs C.C., and returned two weeks later, having been told the stiles were in position at the proper place. Yes, we found that you can cross the line here: the stiles in the railway fence are good, and there are boards over the railway lines for safe pedestrian use. A few hazards remain. On the Duxford side of the line, there is a double fence, and no stile over the farmer's rabbit-netting, supported by a single low wire, that is not too difficult to step over. On the Hinxton side, one soon encounters a crossing fence in the grass field. We found a single delicate plank stile here with a crack in it – very dangerous! Preferably circumnavigate this, to proceed through scrub, which has grown up in the years when the crossing was unusable. Beyond the junction with Hinxton Footpath 1 & 3, at TL 493448 a clear path goes ahead to the level crossing at TL 494445, or one can turn left across the field to Hinxton Mill. Naturally, we have reported the above remaining problems, and hope to have them resolved soon. Meanwhile, please do use this path, for which we have fought hard for very many years. If there are still problems, then do report to Cambs.C.C. Parish of the Month – Hinxton After the exciting developments with the long-lost path over the railway, I could not but choose Hinxton as "Parish of the Month", although there are reservations, as noted below. The little village is sited on a chalky rise by the floodplain of the River Cam or Granta and 2 out of 4 of its footpaths cross the low-lying fields, becoming occasionally impassable in a wet Winter or Spring. The other two lead to the A1301, one behind the church, the other near the Stump Cross junction, which is reached from Mill Lane at Ickleton. The remainder of the parish, extending the other side of the A1301, has not a single right of way. 
However, as well as the rights-of-way from village to floodplain, there are in addition, a permissive path along the river bank, and others elsewhere, including in the grounds of the old Hinxton Hall, the Genome Campus, a vast new development which has put Hinxton on the map. Some of the permissive paths are labelled for use of local residents only, to the disgust of other ramblers. In particular, that along the raised river bank (marked for villagers only) is sometimes the only feasible route, when the fields are underwater. But do visit Hinxton – the attractive C17th restored Watermill is in the hands of the Cambridge Preservation Society, and is open some Summer Sundays, when teas are provided. Generally, floods permitting, it is possible to walk across the fields to Ickleton, or in the other direction from the Mill towards Duxford. Cyclists have been considered recently, by construction of a tarmac cycleway / footway beside the A1301 towards the MacDonalds at the major Sawston roundabout. It would be possible to walk to Sawston this way, but really not very pleasant. The village is picturesque, with attractive thatched cottages, and some fine timbered structures. The church, dating from the C14th, is set back in Church Green behind the war memorial. There is a Norman doorway (now blocked) to the North, and within the South porch, a moulded C15th doorway with traceried oak door. The graceful tapered leaded spire can be seen to best effect across the meadows. The Red Lion Hostelry is housed in one of the fine timbered buildings. There is no shop. A bus service runs once an hour from Cambridge via Sawston. A three mile circuit from Hinxton Map – Explorer 209 Park considerately in the wide main street. Walk N, admiring the old properties on the right, noticing the former front doors are 2 feet from the ground, a flood precaution. Turn left down the lane to the Mill. Footpath 1 is signed passing behind the Mill Buildings, and crossing a bridge over the sluice. 
Shortly the path divides. The raised path along the river bank is signed for villagers only: the hoi-poli turn half-right, off the river bank. There is a permissive path which goes along in the field, fenced beside the river bank, which will take you to the side of the railway at TL 493448. The right of way goes diagonally across the pasture directly to this point. Continue SSE beside the railway, to emerge on the road at a level crossing. Cross the railway and turn left immediately through a kissing gate into a grass field, which cross diagonally, cutting off a corner of the road, rejoined at TL 494443. Cross the road, turn right, and soon enter a field by a sign. Go forward to use a footbridge on the left. Follow the worn path through the pasture, to emerge up a charming sunken lane and through a tall iron gate into Butchers Lane, Ickleton. Turn right, and find the signpost for a narrow walled lane leading towards the church, famed for its wall-paintings. Emerge by a pleasant green, where there is a circular seat. (Turn right for the shop and recreation ground.) Turn left for the church and to circle the old part of the village, returning to Butchers Lane, and the sunken path from which you had emerged. Retrace as far as the footbridge, but now continue forward in the fields, noting a cemetery chapel away to the right. Pass a junction in a sunken lane at TL 491444. Continue ahead, first between hedges and fences, then across open fields, passing into Duxford parish. Go through a hedge at a T-junction, and turn right on a good grass path. Shortly, another junction is reached at TL 488452. The left branch leads past a rubbish heap to a stile giving onto the road to Duxford. The right branch leads to the newly constructed stiles over the railway. Cross with care. Once back in Hinxton, continue in the field beside the railway and reach the ricketty stile (hopefully mended). Cross or avoid to the left, and pass through an overgrown area. 
Reach a kissing-gate at a junction of paths, and return to Hinxton Mill – perhaps for variety along the permissive path fenced at the foot of the flood bank? Extensions (making 8 to 12 miles in all depending how much Strethall is explored) Use the above circuit as a core route, continuing the walk through Great Chesterford, via the Icknield Way path towards Strethall and returning to Ickleton down the quiet road over Coploe Hill. Linton – Great Chesterford Ridge: Wind Turbine Threat Many local people are very concerned about the proposed development of an enormous wind farm on land between Linton, Hadstock, The Abingtons and Great Chesterford. The area contains many attractive paths, some of which will be very close to the proposed turbines. People who walk & ride in this area are worried that the rural tranquility will be lost and outstanding views ruined by these industrial structures. A planning application is expected soon. For more information, contact the Action Group at www.stoplwf.org.uk. (Information provided by an Essex member of The Action Group) Cantab Rambler 45 by E-Mail & Post Categories CANTAB ArchiveTags 2008, Duxford-Hinxton rail crossing, Hinxton, Willingham
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,767
Orioles play 'most complete game of season' in win over Yankees; López allows 1 hit in 6 innings; Stray cat steals 8th

Photo Credit: Vincent Carchietta-USA TODAY Sports

NEW YORK—It was the strangest pitching line of the season. In six innings, Jorge López allowed one run on one hit, yet he walked five and hit a batter. Most important, López got his first win since June 6th, breaking a streak of nine winless starts as the Orioles backed him with four solo home runs on their way to a 7-1 win over the New York Yankees before 28,879 at Yankee Stadium Monday night.

"Just really proud of him. The guy's been dealing with a ton," manager Brandon Hyde said.

The stadium was abuzz before the game because it was the first home game for new Yankees Joey Gallo and Anthony Rizzo, obtained at the trade deadline in the hopes they'd jumpstart the team. The Yankees also picked up a new starting pitcher, Andrew Heaney from the Los Angeles Angels, and he served up the four homers.

Cedric Mullins smacked his 18th and Austin Hays immediately followed with his 11th in the third. It was the fourth time the Orioles hit back-to-back home runs this year, the second time by Mullins and Hays. Ryan Mountcastle led off the fourth with his 18th home run. Two batters later, Ramón Urías hit his fifth to give the Orioles (38-67) a 4-0 lead.

"That was our most complete game of the year," Hyde said. He acknowledged that there were too many walks but cited the baserunning, the defense and the situational hitting. And the gritty pitching of Lopez, in particular. "It was nice to have a little bit of a lead for him to attack the strike zone … We just did a lot of good things tonight."

López (3-12) hit Rizzo with a pitch with one out in the first and balked him to second. With two outs, he walked Gallo, but Giancarlo Stanton grounded to third to end the inning. In the second, López walked Gary Sánchez with one out, and Brett Gardner with two outs. A groundout by DJ LeMahieu ended the inning.
López recorded two 1-2-3 innings in the third and fourth. But in the fifth, he walked Torres and Gardner. Both moved up a base on LeMahieu's fly to center, and Torres scored on Rizzo's foul fly to left that Hays caught up against the wall. Lopez lost his shutout before issuing his first hit. In the sixth, the Orioles took a 6-1 lead as Joely Rodriguez, who came along with Gallo from Texas, gave up a single to Anthony Santander, whom he balked to second. Urías walked, and both runners moved up on a wild pitch. Pedro Severino and Maikel Franco delivered sacrifice flies, and the Orioles had a five-run lead. López pitched at least six innings for the fourth time this year and for just the second time since June 6th. "Everyone is aware of the struggles that he's had," Mullins said of Lopez. "Going into some of the later innings, the fourth, fifth. He's always had command of all of his pitches. Today he was able to keep command to all his pitches and keep damage to a minimum." Gallo doubled to lead off the sixth for the Yankees' first hit. López retired the next three batters to end his night. "It was good progress," López said. "I was aggressive. I just never gave up today. The guys played great defense." Torres singled against César Valdez to start the seventh. Gardner walked, and LeMahieu hit into a double play. Paul Fry struck out Rizzo looking to end the seventh. Yankee Stadium has not been a friendly place for the Orioles in recent years. Their win on April 7th snapped a club record 12 straight losses. Now, they've won two straight. "it just feels different because we got a big-time start, and we scored some runs early," Hyde said. He appreciates that it's a difficult environment for his young team. "This is a major payroll team with superstars and guys that make a ton of dough for a reason," Hyde said. "It's a tough place to play. We played super scrappy tonight." Mountcastle and Santander singled to begin the eighth against Albert Abreu. 
It was Mountcastle's third hit. He scored from third when Urías grounded into a double play.

In the bottom of the eighth, play was interrupted for several minutes when a cat ran on the field and perched near the Orioles' bullpen. After security guards ran onto the field, the cat attempted to climb into the Orioles' bullpen, but couldn't scramble up the glass. Eventually, the cat found an open door and went into the stands. Fans chanted MVP as it eluded guards who were clearly overmatched. "I was wondering what everyone was waiting for," Hyde said. "A pitcher sitting on the mound with Aaron Judge at the plate…The cat showed some good quickness and agility and vertical."

Notes: Alexander Wells (1-1, 5.28 ERA) will pitch for the Orioles on Tuesday night. Gerrit Cole was supposed to pitch for the Yankees, but he tested positive for Covid-19, manager Aaron Boone announced after the game. … Mullins had two stolen bases. He has 20 steals to go with his 18 home runs.

Call for questions: I'll be answering your Orioles questions later this week. Please leave them in the comment box or email them to: [email protected].

I would really like to see Mountcastle be ROY, maybe has a shot if he keeps playing like yesterday. Mullins unbelievable year continues. Hays seems to be playing well. Os showing glimpses of what could be, and may be coming in the next year or two. It would be awesome to see Baumann make his debut by the end of the season too. Fun to see Cowser get in the action in his first game. That's promising! Good to see Lopez get over that mental hurdle of the fifth inning. Hays once again showed why he should play everyday. I'd love to know mountcastle numbers when Tony bats behind him. Seems like since Tony been back from COVID he's been hot again. Maybe there's something to it. Valdez sure knows how to make a game interesting. Rich, are there waiver wife deals allowed this year as in previous years? Waiver wire deals are a thing of the past, Phil.
Once again, it's Hyde who panics in the fifth inning, not Lopez. The pitcher has never turned to the dugout pleading to be removed; the manager has come out of the dugout to yank him. It seems Lopez has to be pitching a no-hitter to be left in. For Hyde this is an improvement from his first year, when he pulled a guy pitching a no-hitter late in the game. Mountcastle is probably the O's best young player. Last night, he was a triple short of the cycle; not that rare, but worth noting. I agree about Hyde pulling Lopez. But it's not just Lopez he does it with. He does with everyone which to me goes back to the fact he has no feel for in game situations. Waiver wife? Sounds promising. Hyde panicked? Lopez pitched 6 innings, threw 106 pitches walked 5 and hit a batter. I think Hyde pulled Lopez at the right time. Thank you, cedar, for your question. First, I have no problem with Lopez coming out after the sixth inning. He had the opportunity to complete a "quality start" and left the game with a good chance at a win and no chance for a loss on his record. I don't want to be too cute, but I did not write that "Hyde panicked." I wrote that he "panics." using the present tense in the sense of an habitual action, such as "He walks to work." If I were referring to Monday night, I would have used the past tense, as you did. My comment was in the context of an ongoing discussion in which many maintain that, as a rule, Lopez cannot get through the fifth inning while I contend that if he is always removed before the end of the fifth inning, we will never find out. In some recent outings, he has been taken out in the fifth inning either with a lead or a one-run deficit, depriving him of the opportunity for a win. I know panic is a strong word, but those who disagree with me also use strong language against Lopez, implying that he's a quitter or, more recently, that he has a problem in his head. 
I have nothing special against Hyde, but when he enters the discussion putting the onus on Lopez when, in fact, he, and only he, is making the decision to remove the pitcher, I see a need to say that the player is not getting a fair shake. The man pulling the strings can't blame the puppet. Go O's. Keep putting it to the overpaid team in the Bronx loaded with big oafs. I'd prefer a rule 5 wife……try her out and send her back if it doesn't work. Yes the most complete game and best game of the year. Lopez showed moments of shakiness but overcame them. When he let out that big sigh of relief and smiled to Severino after closing out the 6th you could see his confidence jump by leaps and bounds. Listened to Paul O'Neill(Yankee broadcast) keep harping on Lopez' 2-12 record. Lopez has shown the ability to succeed but obviously not to outsiders(O'Neill). When this team eventually increases it's collective OBP 4 dingers will add up to 6-7 runs instead of 4. In other words–a lotta solo HRs. That game was so perfect that even Valdez was able to pitch out of his jam. Great game. Keep it up boys. Great game to watch last night. Good alert hustle on the base paths, too. Amazing what can happen when your starter is keeping the opponent from scoring, and can go 6+ innings while our guys bang out some hits. High entertainment with the crazy kitty in the 8th inning. I was cracking up. Though rare, it is aways refreshing to watch the gazillionaire players on the Yankees lose. It's amazing how a few decently pitched games can drastically alter the perception of the team. Not being down by double digits midway through a game is a good thing. Now the continuation of this all depends on whether Harvey has really found something, Means can stay healthy and Lopez continues to improve. Wells and Watkins need to be serviceable until such time, and possibly beyond such time as a few of our younger pitchers are able to contribute be it experience or injury …. 
referring to Bauman, Tyler Wells and Lowther. I've given up on Akins, but still have hope that Kremer can find his way. I love our lineup even though catcher and the left hand side of the infield still needs to be addressed. We all know help is on the way in those spots. Frankly, I'm hot sure how many other teams outfields I'd trade for Hays, Mullins and Santander. I said it before the season, this team should have been a lot closer to .500 this year than they will be. I also put the stipulation that Management had to allow for that. I'm not sure they have. But then again, I'm not sure they haven't. Akins and Kremer have been huge let downs. WHATEVER ….It's good to see the ball club playing so well .. it's good to see a few well pitched games … and it's good to see the boys smiling and yucking it up in the dugout. Maybe the football season can wait a few more weeks …. "This is a major payroll team with superstars" so how does a underpayroll team with no superstars compete not just one game but a whole season? "Our most complete game of the season" more complete then Means no hitter. Answer is we don't compete with a team like that over a whole season. We high five over David and Goliath moments like these, and wait until we are more competitive. The obvious question is how long will that wait be? We held the man with the giant head hitless! I finally caught the replay of the cat on the field incident at Yankee stadium last night and all I can say is I'm a Kevin Brown fan FOR LIFE now! BirdsCaps What names could be called up for the expanded roster? Are there any quality prospects that will likely get a cup of coffee? Getting a glimpse into the future would be nice.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
5,357
Every month we try to curate an interesting mix of posters from around the world - popular favorites, cult classics, examples of great poster design, rare treasures and some that are just plain fun. Finding and sharing great movie paper is the best part of the job, and it's especially gratifying with a lineup like the group we have this month. Take, for example, the header image above - an Italian photobusta poster for the original release of Godard's BREATHLESS. When I think about the film, this is the scene that first comes to mind - yet I had never seen an original release poster featuring the image. That is, until last month when I came across this poster. Godard's budget was minuscule, but BREATHLESS has style to burn, and these Italian posters look like a million bucks: practically perfect combinations of image, layout, font, color. We also have another "unicorn" - a huge British poster of Roger Moore in his first Bond film, LIVE AND LET DIE - along with rarities such as Japanese posters for THE INSECT WOMAN & BELLADONNA (28"x40") and a unique set of Belgian posters for YELLOW SUBMARINE.
{ "redpajama_set_name": "RedPajamaC4" }
4,657
{"url":"https:\/\/stats.stackexchange.com\/questions\/575222\/estimation-of-standard-error-in-observables-generated-from-time-series-data","text":"Estimation of standard error in observables generated from time series data\n\nImagine that I have time series data which are time-correlated, non-scalar, and of unknown, but identical distribution\n\n$X = \\left\\{ X_{1},X_{2},..., X_{n},X_{n+1}...,X_{N}\\right\\}$\n\nFrom this time series I have a function that takes an subset of X as input to calculate a scalar quantity Q\n\n$Q_{Y} = F(Y)\\ \\ \\ where \\ \\ Y \\subseteq X$\n\nWhat I would like to know is how do you estimate the standard deviation of $Q_{X}$ ? That is to say the standard deviation of the quantity calculated by passing the full dataset to function F. I have seen methods of dealing with time-correlated data before. For example if each random variable $X_{n}$ were a scalar and we wanted to calculate the standard error of the mean of the underlying distribution, we could use the Block Averaging method. Does an analogous technique exist for this case where I am trying to estimate the standard deviation of $Q_{X}$ ?\n\nI should further note that for my case $Q_{X}$ converges for large sample sizes (i.e. when N is large) and because of this I am using $Q_{X}$ as an estimator for the quantity calculated if the entire population were passed to function F. 
Also, the range of the function is $0 \\leq F \\leq 1$ and has the property $F(X_{n}) = 1$ (the function applied to any single data point always gives a value equal to 1)","date":"2022-08-09 08:15:49","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 9, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8433863520622253, \"perplexity\": 153.04624752125332}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-33\/segments\/1659882570913.16\/warc\/CC-MAIN-20220809064307-20220809094307-00739.warc.gz\"}"}
null
null
The 1987–88 season was the 89th completed season of The Football League. Final league tables and results The tables and results below are reproduced here in the exact form in which they appear at The Rec.Sport.Soccer Statistics Foundation website, with home and away statistics separated. First Division Liverpool won the league title by nine points and with only two defeats all season. Second in the league were Manchester United. The automatically relegated sides were Watford, Oxford United and Portsmouth. Chelsea were subsequently relegated as well after losing to Middlesbrough in the playoff final. Final table First Division results Managerial changes First Division maps Second Division Millwall lifted the Second Division championship trophy and gained promotion to the First Division for the first time in their history. Runners-up were Aston Villa, and Middlesbrough won promotion via play-offs. Huddersfield Town and Sheffield United were relegated. Second Division play-offs The team fourth from bottom of the First Division played off for one place in that division with the teams finishing third, fourth and fifth in the Second Division. In the semi-final, Chelsea of the First Division beat fifth-placed Blackburn Rovers 6–1 on aggregate, and third-placed Middlesbrough beat Bradford City 3–2 on aggregate. The final was also played over two legs. Playing at their Ayresome Park ground in front of a crowd of 25,531, Middlesbrough duly won the first leg 2–0 with goals from Bernie Slaven and Trevor Senior. In the second leg at Stamford Bridge, which was marred by violence perpetrated by some of the 40,550 spectators, Chelsea's Gordon Durie scored the only goal. Thus Middlesbrough won 2–1 on aggregate and were promoted to the First Division for 1988–89, while Chelsea were relegated to the Second. Second Division results Third Division Sunderland won the Third Division and went back up to the Second Division. 
They were joined by runners-up Brighton & Hove Albion and playoff winners Walsall. The automatic relegation places were occupied by Grimsby Town, York City and Doncaster Rovers, with Rotherham United relegated after play-offs. Third Division play-offs Replay Third Division results Fourth Division Wolves ended their two-year tenure in the Fourth Division by finishing top of the table and winning promotion to the Third Division. They also won the Sherpa Van Trophy final by defeating Burnley at Wembley. Bolton Wanderers, Cardiff City and Swansea City were also promoted. Newport County were relegated for the second successive season. They were replaced in the Football League by Lincoln City. Fourth Division play-offs Fourth Division results Goalscorers The top goalscorers in each division were: Division 1 - John Aldridge (26) Division 2 - David Currie (28) Division 3 - David Crown (26) Division 4 - Steve Bull (34) See also 1987–88 in English football References English Football League seasons
{ "redpajama_set_name": "RedPajamaWikipedia" }
2,794
// Copyright (c) Microsoft Corporation. All rights reserved. // Licensed under the MIT License. // Code generated by Microsoft (R) AutoRest Code Generator. package com.azure.resourcemanager.synapse.models; import com.azure.core.util.ExpandableStringEnum; import com.fasterxml.jackson.annotation.JsonCreator; import java.util.Collection; /** Defines values for BlobStorageEventType. */ public final class BlobStorageEventType extends ExpandableStringEnum<BlobStorageEventType> { /** Static value Microsoft.Storage.BlobCreated for BlobStorageEventType. */ public static final BlobStorageEventType MICROSOFT_STORAGE_BLOB_CREATED = fromString("Microsoft.Storage.BlobCreated"); /** Static value Microsoft.Storage.BlobRenamed for BlobStorageEventType. */ public static final BlobStorageEventType MICROSOFT_STORAGE_BLOB_RENAMED = fromString("Microsoft.Storage.BlobRenamed"); /** * Creates or finds a BlobStorageEventType from its string representation. * * @param name a name to look for. * @return the corresponding BlobStorageEventType. */ @JsonCreator public static BlobStorageEventType fromString(String name) { return fromString(name, BlobStorageEventType.class); } /** @return known BlobStorageEventType values. */ public static Collection<BlobStorageEventType> values() { return values(BlobStorageEventType.class); } }
{ "redpajama_set_name": "RedPajamaGithub" }
3,540
Q: SSRS lock parameter input Is there any chance I can set a parameter input in SSRS so that the user can only modify, but not delete, the field? I'm playing around with a date/time field like this: 1/1/2013 12:10:00 PM. I would like to let the user change the date and time but not be able to delete the value or use the backspace button on this parameter. Thanks in advance for the help.
{ "redpajama_set_name": "RedPajamaStackExchange" }
6,104
Posted at 2:53 pm on October 16, 2018 by Sam J. Welp, we thought this joke made at the expense of Ana Navarro, Bill Kristol, and Elizabeth Warren was pretty damn funny. Is it kinda childish? Sure. I was a Republican when Trump was a Democrat. I was a Republican when Trump was an Independent. I am a Republican while you and your ilk enable an immoral, great pretender. I will not change principles I believe in, in order to accommodate an unfit impostor. Maybe if she smiled more. You are not a Republican when you fail to acknowledge all the Trump accomplishments…you have TDS. Republicans don't advocate for Democrats Ana. Ana would be a far happier and more pleasant person if she just admitted she's no longer a Republican. In other words completely ineffective and you speak for nobody but yourself and the comforting embrace of self-congratulations between TV pundit class. Hooray. Tweet more, earn that paycheck. Just say it…You ARE CNN, period. Those are your principles..and we know exactly what that means. We know exactly what that accomodates. What's most interesting about the tweet she's responding to is that she was not tagged in it … so how did she see it? Is she searching her own name on Twitter? Did someone send it to her? Either way, it made her super cranky which only opened her up for more ridicule. Never go full Jennifer Rubin, Ana.
{ "redpajama_set_name": "RedPajamaC4" }
5,856