Chapter 4 Bits and Bytes

In base 10, {$\frac{1}{3} $} has issues..... 0.33333333333 is only approximately equal to {$\frac{1}{3} $} -- to be exact, there shouldn't be an end to those repeating 3's. There is similar trouble in all bases, even base two. Try 4.35*100 for example.... shouldn't that be 435.0? Try it.... Use the BlueJ code pad to experiment -- can you see the trouble? Can you find another?

There are other simple data types in Java like the int, but smaller. The byte type is an integer that only takes 8 bits (1 byte = 8 bits). In the BlueJ code pad, try this:

byte x=127;
x+1

Isn't 127+1=128? What does your poor computer think it is? This is an overflow of the poor byte. {$2^7-1=127$}, so that's why there is a problem. The short is 2 bytes (16 bits), so it has trouble adding 1 to 32767 (since {$2^{15}-1=32767 $}). You can compute this using the Math.pow() method: (short)( Math.pow(2,15)-1 ) Use the code pad to try an overflow of a short. The int is 4 bytes (32 bits). How big a number can fit into an int? Try to overflow an int!

Once in a while, even big companies like Intel can make mistakes. The Pentium processor had a famous blunder. {$ \left(\frac{4195835}{3145727}\right) \left(\frac{3145727}{1} \right) - 4195835 = 0 $}, right? Shouldn't those 3145727's cancel, leaving 4195835-4195835=0? Well, according to a flawed Pentium FPU, it's 256, not 0! Try it in your BlueJ code pad.... is it fixed now?

Demo Code

Implement a class QuadraticEquation (you can start with the code below if you wish to save time) whose constructor receives the coefficients a, b, c of the quadratic equation {$ax^2 + bx + c = 0$}. Supply methods getSolution1 and getSolution2 that get the solutions, using the quadratic formula. Write a test class QuadraticEquationTester (or use the sample one below if you wish) that constructs a QuadraticEquation object and prints the two solutions. 
For example, if you make QuadraticEquation myProblem = new QuadraticEquation(2,5,-3), the solutions should be .5 and -3. Recall that if {$ax^2+bx+c=0$} then {$ x= \frac{-b \pm \sqrt{b^2-4ac}}{2a} $}

QuadraticEquationTester.java

public class QuadraticEquationTester
{
    public static void main(String[] args)
    {
        QuadraticEquation myProblem = new QuadraticEquation(2.0, 5.0, -3.0);
        double x1 = myProblem.getSolution1();
        double x2 = myProblem.getSolution2();
        System.out.println(myProblem);
        System.out.println("The solutions are " + x1 + " and " + x2);
        System.out.println("Expected .5 and -3");
    }
}

QuadraticEquation.java

public class QuadraticEquation
{
    private double a, b, c;

    public QuadraticEquation(double theA, double theB, double theC)
    {
        a = theA;
        b = theB;
        c = theC;
    }

    public double getSolution1()
    {
        return 0; // your work here
    }

    public double getSolution2()
    {
        return 0; // your work here
    }

    public String toString()
    {
        return a + "x^2 + " + b + "x + " + c + " = 0";
    }
}

Write a method that gets the last n characters from a string. For example, last("Hello, World!", 5) should return the string "orld!".

public class Strings
{
    /**
       Gets the last n characters from a string.
       @param s a string
       @param n a nonnegative integer <= s.length()
       @return the string that contains the last n characters of s
    */
    public String last(String s, int n)
    {
        // your work here
    }
}

Complete the following function that computes the length of a line segment with end points (x1, y1) and (x2, y2). According to the Pythagorean theorem, the length is {$\sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$}. 
Lines.java

public class Lines
{
    /**
       Computes the length of a line segment.
       @param x1 the x-coordinate of the starting point
       @param y1 the y-coordinate of the starting point
       @param x2 the x-coordinate of the ending point
       @param y2 the y-coordinate of the ending point
       @return the length of the line segment joining (x1, y1) and (x2, y2)
    */
    public static double segmentLength(double x1, double y1, double x2, double y2)
    {
        // your work here
    }
}

LinesTester.java

public class LinesTester
{
    public static void main(String[] args)
    {
        System.out.print("Expecting 5: " + Lines.segmentLength(0, 0, 3, 4));
    }
}

Demo: DigitPeeler

Write a method that returns the last digit of an int. Write a method that returns the first digit of an int that has n digits.

DigitPeeler.java

public class DigitPeeler
{
    /**
     * DigitPeeler is a class that exists to make a small
     * "library" of static functions, just as the Math and
     * Integer classes serve as collections of static methods.
     *
     * You don't need to create an instance of this class
     * to use these functions, because they are static.
     */
    public static int lastDigit(int n)
    {
        // your work here
    }

    /**
     * precondition: n has at least 1 digit
     * @param n the number
     * @param numberOfDigits the number of digits of n
     * @return the first digit of n
     */
    public static int firstDigit(int n, int numberOfDigits)
    {
        // Hint: Math.pow(10,3) returns 1000
    }
}

DigitPeelerTester.java

public class DigitPeelerTester
{
    public static void main(String[] args)
    {
        System.out.println("This tests the DigitPeeler methods\n");
        System.out.println("Last digit of 56834 is " + DigitPeeler.lastDigit(56834));
        System.out.println("Expected 4\n");
        System.out.println("First digit of 56834 is " + DigitPeeler.firstDigit(56834, 5));
        System.out.println("Expected 5\n");
    }
}

Review Exercises

These are important activities to do, but not really something I can grade. This is like the "training ground" in a video game where you learn the buttons before you go out on a quest. 
Skip these, and you may not last very long in the game. For each of these, make a guess, then type the code, and see whether the computer behaved the way you expected. True learning happens when you are surprised by a result, and you figure out why it did what it did!

R4.1 Write the following mathematical expressions in Java.

R4.2 Write the following Java expressions in mathematical notation.

dm = m * (Math.sqrt(1 + v / c) / (Math.sqrt(1 - v / c) - 1));
volume = Math.PI * r * r * h;
volume = 4 * Math.PI * Math.pow(r, 3) / 3;
p = Math.atan2(z, Math.sqrt(x * x + y * y));

R4.5 What does Java report for 4.35 * 100.0? Why?

R4.7 Let n be an integer and x a floating-point number. Explain the difference between n = (int) x; and n = (int) Math.round(x);

R4.10

R4.11 If x is an int and s is a String, then which of these are true?

Integer.parseInt("" + x) is the same as x
"" + Integer.parseInt(s) is the same as s
s.substring(0, s.length()) is the same as s

R4.13 How do you get the last digit of an integer? The first digit? That is, if n is 23456, how do you find out that the first digit is 2 and the last digit is 6? Do not convert the number to a string. Hint: try % or Math.log. If you use the Math.log method, note that Java's Math.log is the natural log, and you want base 10:

double logn = Math.log(n) / Math.log(10); // 10 to what power is 23456? about 4.37025395296
int ndigits = (int) logn;                 // truncate to 4
int pow10 = (int) Math.pow(10, ndigits);  // four zeros: 10000
int first = n / pow10;                    // 23456 / 10000 is 2

Don't worry, we'll see this task again when we have loops and conditional statements, and the process will seem easier.

R4.16

Programming Exercises

For 8 points choose 2 of these four; for 9 points, choose 3 of these four; and for 10 points, do all four! (There is no 11-point "Extra Credit" this chapter -- Chapter 5 is so much more interesting, I'd rather you move on to the good stuff!)

P4.11 Giving Change

Enhance the CashRegister class so that it directs a cashier how to give change. 
The cash register computes the amount to be returned to the customer, in pennies. Here is some starter code:

/**
   A cash register totals up sales and computes change due.
*/
public class CashRegister
{
    public static final double QUARTER_VALUE = 0.25;
    public static final double DIME_VALUE = 0.1;
    public static final double NICKEL_VALUE = 0.05;
    public static final double PENNY_VALUE = 0.01;

    private double purchase;
    private double payment;

    /**
       Constructs a cash register with no money in it.
    */
    public CashRegister()
    {
        purchase = 0;
        payment = 0;
    }

    /**
       Records the purchase price of an item.
       @param amount the price of the purchased item
    */
    public void recordPurchase(double amount)
    {
        // your code here
    }

    /**
       Enters the payment received from the customer.
       @param dollars the number of dollars in the payment
       @param quarters the number of quarters in the payment
       @param dimes the number of dimes in the payment
       @param nickels the number of nickels in the payment
       @param pennies the number of pennies in the payment
    */
    public void enterPayment(int dollars, int quarters, int dimes, int nickels, int pennies)
    {
        // your code here
    }

    /**
       Computes the change due and resets the machine for the next customer.
       @return the change due to the customer
    */
    public double giveChange()
    {
        // your code here
    }
}

Add the following methods to the CashRegister class:

/**
   Computes the number of dollars due to the customer.
   @return the number of dollars in the change due
*/
public int giveDollars()
{
    int change = (int) (payment - purchase);
    if (change < 0) return 0;
    payment = payment - change;
    return change;
}

/**
   Computes the number of quarters due to the customer.
   @return the number of quarters in the change due
*/
public int giveQuarters()
{
    int change = (int) ((payment - purchase) / QUARTER_VALUE);
    payment = payment - change * QUARTER_VALUE;
    return change;
}

/**
   Computes the number of dimes due to the customer. 
   @return the number of dimes in the change due
*/
public int giveDimes()
{
    int change = (int) ((payment - purchase) / DIME_VALUE);
    payment = payment - change * DIME_VALUE;
    return change;
}

/**
   Computes the number of nickels due to the customer.
   @return the number of nickels in the change due
*/
public int giveNickels()
{
    int change = (int) ((payment - purchase) / NICKEL_VALUE);
    payment = payment - change * NICKEL_VALUE;
    return change;
}

/**
   Computes the number of pennies due to the customer.
   @return the number of pennies in the change due
*/
public int givePennies()
{
    // your code here
    return (int) change;
}

Each method computes the number of dollar bills or coins to return to the customer, and reduces the change due by the returned amount. You may assume that the methods are called in this order. Here is a test class:

public class CashRegisterTester
{
    public static void main(String[] args)
    {
        CashRegister register = new CashRegister();
        register.recordPurchase(8.37);
        register.enterPayment(10, 0, 0, 0, 0);
        System.out.println("Dollars: " + register.giveDollars());
        System.out.println("Expected: 1");
        System.out.println("Quarters: " + register.giveQuarters());
        System.out.println("Expected: 2");
        System.out.println("Dimes: " + register.giveDimes());
        System.out.println("Expected: 1");
        System.out.println("Nickels: " + register.giveNickels());
        System.out.println("Expected: 0");
        System.out.println("Pennies: " + register.givePennies());
        System.out.println("Expected: 3");
    }
}

If you prefer, here is a more interactive way to test your code:

import java.util.Scanner;

/**
   This program simulates a transaction in which a user pays for an item and receives change. 
*/
public class CashRegisterSimulator
{
    public static void main(String[] args)
    {
        Scanner in = new Scanner(System.in);
        CashRegister register = new CashRegister();

        System.out.print("Enter price: ");
        double price = in.nextDouble();
        register.recordPurchase(price);

        System.out.print("Enter dollars: ");
        int dollars = in.nextInt();
        System.out.print("Enter quarters: ");
        int quarters = in.nextInt();
        System.out.print("Enter dimes: ");
        int dimes = in.nextInt();
        System.out.print("Enter nickels: ");
        int nickels = in.nextInt();
        System.out.print("Enter pennies: ");
        int pennies = in.nextInt();
        register.enterPayment(dollars, quarters, dimes, nickels, pennies);

        System.out.print("Your change: ");
        System.out.println(register.giveChange());
    }
}

P4.12 Split a Number by Digits

Write a program that reads in an integer and breaks it into a sequence of individual digits in reverse order. For example, the input 16384 is displayed as

4
8
3
6
1

You may assume that the input has no more than five digits and is not negative. Define a class DigitExtractor:

public class DigitExtractor
{
    private int number;

    /**
       Constructs a digit extractor that gets the digits
       of an integer in reverse order.
       @param anInteger the integer to break up into digits
    */
    public DigitExtractor(int anInteger) { . . . }

    /**
       Removes the rightmost digit from number and returns it.
       @return the next digit
    */
    public int nextDigit() { . . . }
}

In your main class DigitPrinter, call System.out.println(myExtractor.nextDigit()) five times.

DigitExtractorTester.java

public class DigitExtractorTester
{
    /*
     * Creates a DigitExtractor with a 5-digit number
     * and prints the results of 5 calls of the
     * nextDigit method. It should print the last
     * digit first and the first digit last.
     */
    public static void main(String[] args)
    {
        // your code here
    }
}

P4.15 Write Large Letters 
A large letter H can be produced like this:

* *
* *
*****
* *
* *

Use the class:

public class LetterH
{
    public String toString()
    {
        return "* *\n* *\n*****\n* *\n* *\n";
    }
}

Define similar classes for the letters E, L, and O. Then write the message H E L L O in large letters. Your main class should be called HelloPrinter.

P4.18 Gauss' Easter Algorithm

Write a class to compute the date of Easter Sunday. For the Western Church, Easter Sunday is the first Sunday after the first full moon of spring (the Eastern Church uses a different method, which you can read about here). Use this algorithm, invented by the mathematician Carl Friedrich Gauss in 1800 (for more on the history of the computation of Easter, read about Computus):

Let y be the year (such as 1800 or 2001).
Divide y by 19 and call the remainder a. Ignore the quotient.
Divide y by 100 to get a quotient b and a remainder c.
Divide b by 4 to get a quotient d and a remainder e.
Divide 8 * b + 13 by 25 to get a quotient g. Ignore the remainder.
Divide 19 * a + b - d - g + 15 by 30 to get a remainder h. Ignore the quotient.
Divide c by 4 to get a quotient j and a remainder k.
Divide a + 11 * h by 319 to get a quotient m. Ignore the remainder.
Divide 2 * e + 2 * j - k - h + m + 32 by 7 to get a remainder r. Ignore the quotient.
Divide h - m + r + 90 by 25 to get a quotient n. Ignore the remainder.
Divide h - m + r + n + 19 by 32 to get a remainder p. Ignore the quotient.

Then Easter falls on day p of month n. For example, if y is 2001:

a = 6, b = 20, c = 1
d = 5, e = 0
g = 6, h = 18
j = 0, k = 1
m = 0, r = 6
n = 4, p = 15

Therefore, in 2001, Easter Sunday fell on April 15, since n = 4 and p = 15. Write a class Easter with methods getEasterSundayMonth and getEasterSundayDay. Use the following class as your tester class:

/**
   This program tests the Easter class. 
*/
public class EasterTester
{
    public static void main(String[] args)
    {
        Easter myEaster = new Easter(2001);
        System.out.print("Month: " + myEaster.getEasterSundayMonth());
        System.out.println(" Expected: 4");
        System.out.print("Day: " + myEaster.getEasterSundayDay());
        System.out.println(" Expected: 15");

        Easter myEaster2 = new Easter(2012);
        System.out.print("Month: " + myEaster2.getEasterSundayMonth());
        System.out.println(" Expected: 4");
        System.out.print("Day: " + myEaster2.getEasterSundayDay());
        System.out.println(" Expected: 8");

        Easter myEaster3 = new Easter(2016);
        System.out.print("Month: " + myEaster3.getEasterSundayMonth());
        System.out.println(" Expected: 3");
        System.out.print("Day: " + myEaster3.getEasterSundayDay());
        System.out.println(" Expected: 27");
    }
}
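If you want something to check your own work against, the numbered steps above translate almost line-for-line into Java: integer division (/) gives the quotient and the remainder operator (%) gives the remainder. This is just one possible sketch, not the only solution:

```java
public class Easter {
    private int month; // n in the algorithm
    private int day;   // p in the algorithm

    /** Runs Gauss' algorithm for the given year and stores month and day. */
    public Easter(int y) {
        int a = y % 19;                               // remainder of y / 19
        int b = y / 100, c = y % 100;                 // quotient and remainder of y / 100
        int d = b / 4,   e = b % 4;
        int g = (8 * b + 13) / 25;                    // quotient only
        int h = (19 * a + b - d - g + 15) % 30;       // remainder only
        int j = c / 4,   k = c % 4;
        int m = (a + 11 * h) / 319;
        int r = (2 * e + 2 * j - k - h + m + 32) % 7;
        month = (h - m + r + 90) / 25;
        day = (h - m + r + month + 19) % 32;
    }

    public int getEasterSundayMonth() { return month; }

    public int getEasterSundayDay() { return day; }
}
```

Run against the EasterTester class, this sketch reproduces the expected dates: April 15 for 2001, April 8 for 2012, and March 27 for 2016.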
I have this problem: given a collection of sets $S = \{S_{1},\dots,S_{k}\}$, where each set $S_{j}$ is a subset of a universe of elements $U = \{e_{1},\dots,e_{n}\}$, I want to find a subset $C \subseteq S$ that maximizes the function $f(C)=\left|\bigcup_{C_{j} \in C} C_{j}\right|$ and minimizes the function $g(C)=\sum_{C_{i},C_{j} \in C}|C_{i} \cap C_{j}|$. I have found an Integer Linear Programming formulation for this problem, and now I'm wondering: is this problem NP-hard? Does a similar problem exist in the literature? My formulation:

$\max \left( \sum_{j} e_{j} - k \sum_{j} z_{j} \right)$

$\forall e_{j} \in U: \;\; e_{j} \leq \sum_{S_{i}\; s.t.\; e_{j} \in S_{i}} s_{i}$

$\forall e_{j} \; s.t.\; e_{j} \in S_{k} \cap S_{t}: \;\; z_{j} \geq s_{k}+s_{t}-1$

with all variables in $\{0,1\}$; $k$ is a constant specifying the penalty ("malus") for adding an overlap. The problem formulated in this way can encode Maximum Set Packing, so it is NP-hard. Now I'm wondering: what would be a good heuristic for finding solutions? I'm trying a greedy algorithm that, at each step, adds the set with the maximum ratio of newly covered elements to new intersections.
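For concreteness, here is a minimal Java sketch of that greedy heuristic. The class name GreedyCover, the stopping rule (stop when no remaining set has a positive penalized gain newCov - k * newInter) and the exact ratio used for ranking are my own choices for illustration; they are not part of the original question:

```java
import java.util.*;

public class GreedyCover {
    /**
     * Greedily selects set indices, at each step adding the set with the
     * highest ratio of newly covered elements to new pairwise intersections.
     * k is the overlap penalty from the ILP objective; a candidate is only
     * considered while its penalized gain (newCov - k * newInter) is positive.
     */
    public static List<Integer> choose(List<Set<Integer>> sets, double k) {
        List<Integer> chosen = new ArrayList<>();
        Set<Integer> covered = new HashSet<>();
        boolean improved = true;
        while (improved) {
            improved = false;
            int best = -1;
            double bestRatio = 0.0;
            for (int j = 0; j < sets.size(); j++) {
                if (chosen.contains(j)) continue;
                int newCov = 0;
                for (int e : sets.get(j)) {
                    if (!covered.contains(e)) newCov++;
                }
                int newInter = 0; // overlap added against every already-chosen set
                for (int i : chosen) {
                    Set<Integer> common = new HashSet<>(sets.get(j));
                    common.retainAll(sets.get(i));
                    newInter += common.size();
                }
                if (newCov - k * newInter <= 0) continue; // no penalized gain
                double ratio = newCov / (1.0 + newInter);
                if (ratio > bestRatio) { bestRatio = ratio; best = j; }
            }
            if (best >= 0) {
                chosen.add(best);
                covered.addAll(sets.get(best));
                improved = true;
            }
        }
        return chosen;
    }
}
```

With sets {1,2,3}, {3,4}, {5,6}, {1,2} and k = 1 this picks the first and third sets, then stops because {3,4} would add one element at the cost of one overlap (gain 0) and {1,2} adds nothing new. Like any greedy rule for a submodular-style objective, it gives no optimality guarantee here, only a cheap starting solution.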
This question already has an answer here: Mathematica Sec and Csc (3 answers)

I am writing code to compute some matrix quantities. The result involves Sec and Csc functions, and I want a form displaying only Sin and Cos. I have already seen this question, but the solution suggested does not solve my problem, since I get the following:

Ef = {{-Sin[ζ], 0, 0, 0, 0, 0, 0, 0},
      {0, Cos[ζ], 0, 0, 0, 0, 0, 0},
      {0, 0, Cos[ζ], 0, 0, 0, 0, γ Cos[ζ]},
      {0, 0, 0, Sin[ζ], 0, 0, -γ Sin[ζ], 0},
      {0, 0, 0, 0, -1/Sin[ζ], 0, 0, 0},
      {0, 0, 0, 0, 0, 1/Cos[ζ], 0, 0},
      {0, 0, 0, 0, 0, 0, 1/Cos[ζ], 0},
      {0, 0, 0, 0, 0, 0, 0, 1/Sin[ζ]}};
G = Transpose[Ef].Ef;
MatrixForm[G]
ginv = Table[G[[i]][[j]], {i, 5, 8}, {j, 5, 8}];
g = Inverse[ginv];
MatrixForm[g] // FullSimplify

$$g= \begin{pmatrix} \sin ^2(\zeta ) & 0 & 0 & 0 \\ 0 & \cos ^2(\zeta ) & 0 & 0 \\ 0 & 0 & \frac{\csc ^2(\zeta )}{\gamma ^2+4 \csc ^2(2 \zeta )} & 0 \\ 0 & 0 & 0 & \frac{\sec ^2(\zeta )}{\gamma ^2+4 \csc ^2(2 \zeta )}\end{pmatrix}$$

But I would like the equivalent (and easier to work with) form

$$g = \begin{pmatrix} \sin ^2(\zeta ) & 0 & 0 & 0 \\ 0 & \cos ^2(\zeta ) & 0 & 0 \\ 0 & 0 & \frac{\cos ^2(\zeta )}{1+\gamma ^2 \sin^2 (\zeta) \cos^2 (\zeta)} & 0 \\ 0 & 0 & 0 & \frac{\sin ^2(\zeta )}{1+\gamma ^2 \sin^2 (\zeta) \cos^2 (\zeta)}\end{pmatrix}$$

Using

$PrePrint = # /. {Csc[z_] :> 1/Defer@Sin[z], Sec[z_] :> 1/Defer@Cos[z]} &;

I only manage to obtain terms like

$$ \frac{1}{\cos^2 (\zeta) \left( \gamma^2 + \frac{4}{\sin^2(2\zeta)}\right)}$$

and furthermore TrigExpand does not have any effect. Does anyone have a suggestion?
Chapter 8 Designing Classes

Demos

Review Exercises

R8.1 Users place coins in a vending machine and select a product by pushing a button. If the inserted coins are sufficient to cover the purchase price of the product, the product is dispensed and change is given. Otherwise, the inserted coins are returned to the user. What classes should you use to implement this?

R8.6 Suppose a vending machine contains products, and users place coins into the vending machine to purchase products. Draw a UML diagram showing the dependencies between the classes VendingMachine, Coin, and Product.

R8.9 Classify the methods of the class Scanner that are used in this book as accessors or mutators.

boolean hasNext()
boolean hasNextDouble()
boolean hasNextInt()
boolean hasNextLine()
String next()
double nextDouble()
int nextInt()
String nextLine()

Programming Exercises

For 8 points do P8.5, P8.6, P8.10, and P8.11. For 9 points, add P8.7. For 10 points, add P8.8. For 11 out of 10, add a JUnit class.

P8.5 Write static methods

public static double sphereVolume(double r)
public static double sphereSurface(double r)
public static double cylinderVolume(double r, double h)
public static double cylinderSurface(double r, double h)
public static double coneVolume(double r, double h)
public static double coneSurface(double r, double h)

that compute the volume and surface area of a sphere with radius r, a cylinder with circular base with radius r and height h, and a cone with circular base with radius r and height h. Place them into a class Geometry. 
Then write a program that prompts the user for the values of r and h, calls the six methods, and prints the results. Here are some helpful formulas:

{$V_{sphere}=\frac{4}{3}\pi r^3 $}   {$A_{sphere}=4\pi r^2$}
{$V_{cone}=\frac{\pi}{3} h r^2$}   {$A_{cone}=\pi r^2+\pi r \sqrt{r^2+h^2}$}
{$V_{cylinder}=\pi h r^2$}   {$A_{cylinder}=2\pi r^2+2\pi r h$}

Here is a screen capture of me writing the first method:

Before you work on your main method of GeometryCalculator85.java, you may wish to try my JUnit test for your Geometry class.

GeometryTest.java

import static org.junit.Assert.*;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

/**
 * The test class GeometryTest.
 *
 * @author (your name)
 * @version (a version number or a date)
 */
public class GeometryTest
{
    /**
     * Default constructor for test class GeometryTest
     */
    public GeometryTest()
    {
    }

    /**
     * Sets up the test fixture.
     *
     * Called before every test case method.
     */
    @Before
    public void setUp()
    {
    }

    /**
     * Tears down the test fixture.
     *
     * Called after every test case method. 
     */
    @After
    public void tearDown()
    {
    }

    @Test
    public void SphereVolumeTest()
    {
        assertEquals(0.0, Geometry.sphereVolume(0.0), 0.0001);
        assertEquals(4.18879, Geometry.sphereVolume(1.0), 0.0001);
        assertEquals(1098.066219, Geometry.sphereVolume(6.4), 0.0001);
    }

    @Test
    public void SphereSurfaceTest()
    {
        assertEquals(0, Geometry.sphereSurface(0), 0.001);
        assertEquals(12.5663706, Geometry.sphereSurface(1.0), 0.001);
        assertEquals(314.1592653589793, Geometry.sphereSurface(5.0), 0.001);
    }

    @Test
    public void CylinderVolumeTest()
    {
        assertEquals(9.4247779, Geometry.cylinderVolume(1.0, 3.0), 0.001);
        assertEquals(785.3981633974483, Geometry.cylinderVolume(5.0, 10.0), 0.001);
    }

    @Test
    public void CylinderSurfaceTest()
    {
        assertEquals(471.23889803846896, Geometry.cylinderSurface(5.0, 10.0), 0.0001);
    }

    @Test
    public void ConeVolumeTest()
    {
        assertEquals(261.799387, Geometry.coneVolume(5.0, 10.0), 0.1);
    }

    @Test
    public void ConeSurfaceTest()
    {
        assertEquals(254.1601846, Geometry.coneSurface(5.0, 10.0), 0.0001);
    }
}

Once you have your static methods passing the JUnit test, complete the GeometryCalculator85 class to test the remaining four methods of the Geometry class (to save time, you can borrow the expected values from my JUnit test class above).

GeometryCalculator85.java

import java.util.Scanner;

/**
 * GeometryCalculator85 here.
 *
 * @author Your Name
 * @version today's date
 */
public class GeometryCalculator85
{
    public static void main(String[] args)
    {
        System.out.println("The radius: 5.0");
        System.out.println("The height: 10.0");
        double r = 5.0;
        double h = 10.0;
        System.out.print("The area of the cone is: " + Geometry.coneSurface(r, h));
        System.out.println(" -- Expected 254.160184615763");
        System.out.print("The volume of the cone is: " + Geometry.coneVolume(r, h));
        System.out.println(" -- Expected 261.79938779914943");
        // your code here:
    }
}

P8.6 Solve Exercise P8.5 by implementing classes Sphere, Cylinder, and Cone. Which approach is more object-oriented?

/**
 * GeometryCalculator86 here. 
 *
 * @author Your Name
 * @version today's date here
 */
public class GeometryCalculator86
{
    public static void main(String[] args)
    {
        System.out.println("The radius: 5.0");
        System.out.println("The height: 10.0");
        double r = 5.0;
        double h = 10.0;
        Sphere ball = new Sphere(r);
        Cylinder cyl = new Cylinder(r, h);
        Cone cone = new Cone(r, h);
        System.out.print("The area of this sphere is: " + ball.surface());
        System.out.println(" -- Expected ???"); // replace the ??? with the correct number
        System.out.print("The volume of this sphere is: " + ball.volume());
        System.out.println(" -- Expected ???"); // replace the ??? with the correct number
        System.out.print("The area of this cylinder is: " + cyl.surface());
        System.out.println(" -- Expected ???"); // replace the ??? with the correct number
        System.out.print("The volume of this cylinder is: " + cyl.volume());
        System.out.println(" -- Expected ???"); // replace the ??? with the correct number
        System.out.print("The area of this cone is: " + cone.surface());
        System.out.println(" -- Expected 254.160184615763");
        System.out.print("The volume of this cone is: " + cone.volume());
        System.out.println(" -- Expected 261.79938779914943");
    }
}

Hint for P8.6 -- here is the Cone class:

public class Cone
{
    private double r, h;

    /**
     * Constructor for objects of class Cone
     */
    public Cone(double radius, double height)
    {
        this.r = radius;
        this.h = height;
    }

    /**
     * @return surface area of this cone
     */
    public double surface()
    {
        return Math.PI * r * r + Math.PI * r * Math.sqrt(r * r + h * h);
    }

    /**
     * @return volume of this cone
     */
    public double volume()
    {
        return (Math.PI / 3.0) * h * r * r;
    }
}

P8.7 Write methods

public static double perimeter(Ellipse2D.Double e);
public static double area(Ellipse2D.Double e);

that compute the area and the perimeter of the ellipse e. Add these methods to the class Geometry. The challenging part of this assignment is to find and implement an accurate formula for the perimeter. 
To help, use the API to find the width and height. The formulas involve HALF the major axis {$a$} and HALF the minor axis {$b$}:

{$A_{ellipse}\approx \pi a b $}
{$P_{ellipse} \approx 2\pi \sqrt{\frac{1}{2}(a^2+b^2)} $}

Why does it make sense to use a static method in this case? Use the following class as your tester class:

import java.awt.geom.Ellipse2D;

/**
   This is a tester for the ellipse geometry methods.
*/
public class EllipseTester
{
    public static void main(String[] args)
    {
        Ellipse2D.Double e = new Ellipse2D.Double(100, 100, 200, 100);
        System.out.println("Area: " + Geometry.area(e));
        System.out.println("Expected: 15707.963267948966");
        System.out.println("Perimeter: " + Geometry.perimeter(e));
        System.out.println("Expected: 496.7294132898051");
    }
}

P8.8 Write methods

public static double angle(Point2D.Double p, Point2D.Double q)
public static double slope(Point2D.Double p, Point2D.Double q)

that compute the angle between the x-axis and the line joining two points, measured in degrees, and the slope of that line. Add the methods to the class Geometry. Supply suitable preconditions (think about what input would cause an error). Why does it make sense to use a static method in this case? Use the following class as your tester class:

import java.awt.geom.Point2D;

/**
   This program tests the methods to compute the slope and angle of a line.
*/
public class LineTester
{
    public static void main(String[] args)
    {
        Point2D.Double p = new Point2D.Double(-1, -1);
        Point2D.Double q = new Point2D.Double(3, 0);
        System.out.println("Slope: " + Geometry.slope(p, q));
        System.out.println("Expected: 0.25");
        Point2D.Double r = new Point2D.Double(0, 0);
        System.out.println("Angle: " + Geometry.angle(p, r));
        System.out.println("Expected: -135");
    }
}

Hint: use Math.atan2.

P8.10 Write a method

public static int readInt(Scanner in, String prompt, String error, int min, int max)

that displays the prompt string, reads an integer, and tests whether it is between the minimum and maximum. 
If not, it prints an error message and repeats reading the input. Add the method to a class Input. Use the following class as your main class:

import java.util.Scanner;

/**
   This program prints how old you'll be next year.
*/
public class AgePrinter
{
    public static void main(String[] args)
    {
        Scanner in = new Scanner(System.in);
        int age = Input.readInt(in, "Please enter your age", "Illegal Input--try again", 1, 150);
        System.out.println("Next year, you'll be " + (age + 1));
    }
}

P8.11 Consider the following algorithm for computing {$x^n$} for an integer {$n$}:

If {$n < 0$}, {$x^n$} is {$\frac{1}{x^{-n}}$}.
If {$n$} is positive and even, then {$x^n = (x^{n/2})^2$}.
If {$n$} is positive and odd, then {$x^n = x \cdot x^{n-1}$}.

Implement a static method double intPower(double x, int n) that uses this algorithm. Add it to a class called Numeric. Use the following class as your tester class:

/**
   This is a test driver for the intPower method.
*/
public class PowerTester
{
    public static void main(String[] args)
    {
        System.out.println(Numeric.intPower(0.1, 12));
        System.out.println("Expected: " + 1E-12);
        System.out.println(Numeric.intPower(2, 10));
        System.out.println("Expected: 1024");
        System.out.println(Numeric.intPower(-1, 1000));
        System.out.println("Expected: 1");
    }
}

Write a JUnit test class for an earlier homework programming exercise, such as P5.5.
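The three rules in P8.11 map directly onto a recursive method. One detail the rules leave implicit is a base case for n = 0 (without it, the odd case, which recurses on n - 1, would never terminate). Here is one possible sketch to compare your own version against:

```java
public class Numeric {
    /** Computes x^n by the recursive squaring algorithm described in P8.11. */
    public static double intPower(double x, int n) {
        if (n < 0) {
            return 1.0 / intPower(x, -n);   // x^n = 1 / x^(-n)
        }
        if (n == 0) {
            return 1.0;                     // base case: x^0 = 1
        }
        if (n % 2 == 0) {
            double half = intPower(x, n / 2);
            return half * half;             // x^n = (x^(n/2))^2
        }
        return x * intPower(x, n - 1);      // x^n = x * x^(n-1)
    }
}
```

The squaring step is what makes this fast: instead of n - 1 multiplications, the recursion only goes about log2(n) levels deep, which is the point of the exercise.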
Fit Determinantal Point Process Model

Fit a determinantal point process model to a point pattern.

Usage

dppm(formula, family, data=NULL, ...,
     startpar = NULL,
     method = c("mincon", "clik2", "palm"),
     weightfun=NULL, control=list(),
     algorithm="Nelder-Mead",
     statistic="K", statargs=list(),
     rmax = NULL, covfunargs=NULL,
     use.gam=FALSE, nd=NULL, eps=NULL)

Arguments

formula: A formula in the R language specifying the data (on the left side) and the form of the model to be fitted (on the right side). For a stationary model it suffices to provide a point pattern without a formula. See Details.

family: Information specifying the family of point processes to be used in the model. Typically one of the family functions dppGauss, dppMatern, dppCauchy, dppBessel or dppPowerExp. Alternatively a character string giving the name of a family function, or the result of calling one of the family functions. See Details.

data: The values of spatial covariates (other than the Cartesian coordinates) required by the model. A named list of pixel images, functions, windows, tessellations or numeric constants.

…: Additional arguments. See Details.

startpar: Named vector of starting parameter values for the optimization.

method: The fitting method. Either "mincon" for minimum contrast, "clik2" for second order composite likelihood, or "palm" for Palm likelihood. Partially matched.

weightfun: Optional weighting function \(w\) in the composite likelihood or Palm likelihood. A function in the R language. See Details.

control: List of control parameters passed to the optimization function optim.

algorithm: Name of the optimization algorithm used by optim.

statistic: Name of the summary statistic to be used for minimum contrast estimation: either "K" or "pcf".

statargs: Optional list of arguments to be used when calculating the statistic. See Details.

rmax: Maximum value of interpoint distance to use in the composite likelihood.

covfunargs, use.gam, nd, eps: Arguments passed to ppm when fitting the intensity. 
Details

This function fits a determinantal point process model to a point pattern dataset as described in Lavancier et al. (2015).

The model to be fitted is specified by the arguments formula and family.

The argument formula should normally be a formula in the R language. The left hand side of the formula specifies the point pattern dataset to which the model should be fitted. This should be a single argument which may be a point pattern (object of class "ppp") or a quadrature scheme (object of class "quad"). The right hand side of the formula is called the trend and specifies the form of the logarithm of the intensity of the process. Alternatively the argument formula may be a point pattern or quadrature scheme, and the trend formula is taken to be ~1.

The argument family specifies the family of point processes to be used in the model. It is typically one of the family functions dppGauss, dppMatern, dppCauchy, dppBessel or dppPowerExp. Alternatively it may be a character string giving the name of a family function, or the result of calling one of the family functions. A family function belongs to class "detpointprocfamilyfun". The result of calling a family function is a point process family, which belongs to class "detpointprocfamily".

The algorithm first estimates the intensity function of the point process using ppm. If the trend formula is ~1 (the default if a point pattern or quadrature scheme is given rather than a "formula") then the model is homogeneous. The algorithm begins by estimating the intensity as the number of points divided by the area of the window. Otherwise, the model is inhomogeneous. The algorithm begins by fitting a Poisson process with log intensity of the form specified by the formula trend. (See ppm for further explanation.)

The interaction parameters of the model are then fitted either by minimum contrast estimation, or by maximum composite likelihood. 
Minimum contrast: If method = "mincon" (the default) interaction parameters of the model will be fitted by minimum contrast estimation, that is, by matching the theoretical \(K\)-function of the model to the empirical \(K\)-function of the data, as explained in mincontrast.

For a homogeneous model (trend = ~1) the empirical \(K\)-function of the data is computed using Kest, and the interaction parameters of the model are estimated by the method of minimum contrast.

For an inhomogeneous model, the inhomogeneous \(K\)-function is estimated by Kinhom using the fitted intensity. Then the interaction parameters of the model are estimated by the method of minimum contrast using the inhomogeneous \(K\)-function. This two-step estimation procedure is heavily inspired by Waagepetersen (2007).

If statistic="pcf" then instead of using the \(K\)-function, the algorithm will use the pair correlation function pcf for homogeneous models and the inhomogeneous pair correlation function pcfinhom for inhomogeneous models. In this case, the smoothing parameters of the pair correlation can be controlled using the argument statargs, as shown in the Examples.

Additional arguments … will be passed to mincontrast to control the minimum contrast fitting algorithm.

Composite likelihood: If method = "clik2" the interaction parameters of the model will be fitted by maximising the second-order composite likelihood (Guan, 2006). The log composite likelihood is $$ \sum_{i,j} w(d_{ij}) \log\rho(d_{ij}; \theta) - \left( \sum_{i,j} w(d_{ij}) \right) \log \int_D \int_D w(\|u-v\|) \rho(\|u-v\|; \theta)\, du\, dv $$ where the sums are taken over all pairs of data points \(x_i, x_j\) separated by a distance \(d_{ij} = \| x_i - x_j\|\) less than rmax, and the double integral is taken over all pairs of locations \(u,v\) in the spatial window of the data. Here \(\rho(d;\theta)\) is the pair correlation function of the model with cluster parameters \(\theta\). 
The function \(w\) in the composite likelihood is a weighting function and may be chosen arbitrarily. It is specified by the argument weightfun. If this is missing or NULL then the default is a threshold weight function, \(w(d) = 1(d \le R)\), where \(R\) is rmax/2. Palm likelihood: If method = "palm" the interaction parameters of the model will be fitted by maximising the Palm loglikelihood (Tanaka et al, 2008) $$ \sum_{i,j} w(x_i, x_j) \log \lambda_P(x_j \mid x_i; \theta) - \int_D w(x_i, u) \lambda_P(u \mid x_i; \theta) \, {\rm d} u $$ with the same notation as above. Here \(\lambda_P(u \mid v; \theta)\) is the Palm intensity of the model at location \(u\) given there is a point at \(v\). In all three methods, the optimisation is performed by the generic optimisation algorithm optim. The behaviour of this algorithm can be modified using the argument control. Useful control arguments include trace, maxit and abstol (documented in the help for optim). Finally, it is also possible to fix any parameters desired before the optimisation by specifying them as name=value in the call to the family function. See Examples. Value An object of class "dppm" representing the fitted model. There are methods for printing, plotting, predicting and simulating objects of this class. References Lavancier, F., Moller, J. and Rubak, E. (2015) Determinantal point process models and statistical inference. Journal of the Royal Statistical Society, Series B 77, 853--877. Guan, Y. (2006) A composite likelihood approach in fitting spatial point process models. Journal of the American Statistical Association 101, 1502--1512. Tanaka, U., Ogata, Y. and Stoyan, D. (2008) Parameter estimation and model selection for Neyman-Scott point processes. Biometrical Journal 50, 43--57. Waagepetersen, R. (2007) An estimating function approach to inference for inhomogeneous Neyman-Scott processes. Biometrics 63, 252--258. See Also Minimum contrast fitting algorithm: mincontrast.
See also ppm

Aliases

dppm

Examples

jpines <- residualspaper$Fig1
dppm(jpines ~ 1, dppGauss)
dppm(jpines ~ 1, dppGauss, method="c")
dppm(jpines ~ 1, dppGauss, method="p")
# Fixing the intensity to lambda=2 rather than the Poisson MLE 2.04:
dppm(jpines ~ 1, dppGauss(lambda=2))
if(interactive()) {
  # The following is quite slow (using K-function)
  dppm(jpines ~ x, dppMatern)
}
# Much faster using the pair correlation function:
dppm(jpines ~ x, dppMatern, statistic="pcf", statargs=list(stoyan=0.2))
# Fixing the Matern shape parameter to nu=2 rather than estimating it:
dppm(jpines ~ x, dppMatern(nu=2))

Documentation reproduced from package spatstat, version 1.55-1, License: GPL (>= 2)
__NOTOC__
= Spring 2019 =

<b>Thursdays in 901 Van Vleck Hall at 2:25 PM</b>, unless otherwise noted.

[mailto:join-probsem@lists.wisc.edu join-probsem@lists.wisc.edu]

== January 31, [https://www.math.princeton.edu/people/oanh-nguyen Oanh Nguyen], [https://www.math.princeton.edu/ Princeton] ==

Title: '''Survival and extinction of epidemics on random graphs with general degrees'''

Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it survives for exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.

== <span style="color:red"> Wednesday, February 6 at 4:00pm in Van Vleck 911</span>, [https://lc-tsai.github.io/ Li-Cheng Tsai], [https://www.columbia.edu/ Columbia University] ==

Title: '''When particle systems meet PDEs'''

Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena.
In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.

== February 7, [http://www.math.cmu.edu/~yug2/ Yu Gu], [https://www.cmu.edu/math/index.html CMU] ==

Title: '''Fluctuations of the KPZ equation in $d\geq 2$ in a weak disorder regime'''

Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d\geq 2$.

== February 14, [https://www.math.wisc.edu/~seppalai/ Timo Seppäläinen], UW-Madison ==

Title: '''Geometry of the corner growth model'''

Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).

== February 21, [https://people.kth.se/~holcomb/ Diane Holcomb], KTH ==

Title: '''On the centered maximum of the Sine beta process'''

Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process.
The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.

== Probability related talk in PDE Geometric Analysis seminar: Monday, 3:30pm to 4:30pm, Van Vleck 901 ==

Xiaoqin Guo, UW-Madison

Title: Quantitative homogenization in a balanced random environment

Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d. random environment and the corresponding process—a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).

== <span style="color:red"> Wednesday, February 27 at 1:10pm</span>, [http://www.math.purdue.edu/~peterson/ Jon Peterson], [http://www.math.purdue.edu/ Purdue] ==

<div style="width: 520px;height:50px;border:5px solid black">
<b><span style="color:red">  Please note the unusual day and time.</span></b>
</div>

Title: '''Functional Limit Laws for Recurrent Excited Random Walks'''

Abstract: Excited random walks (also called cookie random walks) are a model for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.

== March 7, TBA ==

== March 14, TBA ==

== March 21, Spring Break, No seminar ==

== March 28, [https://www.math.wisc.edu/~shamgar/ Shamgar Gurevitch], [https://www.math.wisc.edu/ UW-Madison] ==

Title: '''Harmonic Analysis on GLn over finite fields, and Random Walks'''

Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters.
For evaluating or estimating these sums, one of the most salient quantities to understand is the ''character ratio'':

$$
\text{trace}( \rho(g) )/\text{dim}(\rho),
$$

for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant ''rank''. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).

== April 4, TBA ==

== April 11, [https://sites.google.com/site/ebprocaccia/ Eviatar Procaccia], [http://www.math.tamu.edu/index.html Texas A&M] ==

== April 18, [https://services.math.duke.edu/~agazzi/index.html Andrea Agazzi], [https://math.duke.edu/ Duke] ==

== April 25, [https://www.brown.edu/academics/applied-mathematics/kavita-ramanan Kavita Ramanan], [https://www.brown.edu/academics/applied-mathematics/ Brown] ==

== April 26, Colloquium, [https://www.brown.edu/academics/applied-mathematics/kavita-ramanan Kavita Ramanan], [https://www.brown.edu/academics/applied-mathematics/ Brown] ==

== May 2, TBA ==

<!--
== Friday, August 10, 10am, B239 Van Vleck, András Mészáros, Central European University, Budapest ==

Title: '''The distribution of sandpile groups of random regular graphs'''

Abstract: We study the distribution of the sandpile group of random <math>d</math>-regular graphs. For the directed model we prove that it follows the Cohen-Lenstra heuristics, that is, the probability that the <math>p</math>-Sylow subgroup of the sandpile group is a given <math>p</math>-group <math>P</math> is proportional to <math>|\operatorname{Aut}(P)|^{-1}</math>. For finitely many primes, these events become independent in the limit. Similar results hold for undirected random regular graphs, where for odd primes the limiting distributions are the ones given by Clancy, Leake and Payne. Our results extend a recent theorem of Huang, saying that the adjacency matrices of random <math>d</math>-regular directed graphs are invertible with high probability, to the undirected case.

== September 20, [http://math.columbia.edu/~hshen/ Hao Shen], [https://www.math.wisc.edu/ UW-Madison] ==

Title: '''Stochastic quantization of Yang-Mills'''

Abstract: "Stochastic quantization" refers to a formulation of quantum field theory as stochastic PDEs. Interesting progress has been made these years in understanding these SPDEs, examples including Phi4 and sine-Gordon. Yang-Mills is a type of quantum field theory which has gauge symmetry, and its stochastic quantization is a Yang-Mills flow perturbed by white noise. In this talk we start with an Abelian example where we take a symmetry-preserving lattice regularization and study the continuum limit. We will then discuss non-Abelian Yang-Mills theories and introduce a symmetry-breaking smooth regularization and restore the symmetry using a notion of gauge-equivariance. With these results we can construct dynamical Wilson loop and string observables. Based on [S., arXiv:1801.04596] and [Chandra, Hairer, S., work in progress].
-->
The largest circle that can be drawn on a sphere's surface is the great circle, and the shortest distance between any two points on the surface is the great-circle distance. Historically, the great circle has also been called an orthodrome or Riemannian circle. The diameter of any sphere coincides with the diameter of its great circles. The great circle is used in the navigation of ships and aircraft. Since the Earth is roughly spherical, great circles help in navigation by giving the shortest surface distance between two points. The great circle distance formula is given by,

$d = r \cos^{-1}\left(\cos\delta_{1}\cos\delta_{2}\cos(\lambda_{1}-\lambda_{2}) + \sin\delta_{1}\sin\delta_{2}\right)$

Where, r is the radius of the earth, $\delta_{1},\delta_{2}$ are the latitudes and $\lambda_{1},\lambda_{2}$ are the longitudes of the two points.

Solved Examples

Question 1: Find the great circle distance if the radius is 4.7 km, the latitudes are (45°, 32°) and the longitudes are (24°, 17°).

Solution: Given, $\delta_{1},\delta_{2}=45^{\circ},32^{\circ}$, $\lambda_{1},\lambda_{2}=24^{\circ},17^{\circ}$ and r = 4.7 km.

Using the above formula,

$d=4.7\;\cos^{-1}\left((0.7071\times 0.8480\times 0.9925)+(0.7071\times 0.5299)\right)$

$d=4.7\;\cos^{-1}(0.9699)=4.7\times 0.2459$

d ≈ 1.156 km
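The same computation can be scripted. This is a minimal sketch of the spherical law of cosines used above (the function name is mine; it uses exact trigonometric values, so the result may differ from any hand-rounded arithmetic, and for the real Earth you would use r ≈ 6371 km rather than the exercise's 4.7 km):

```python
import math

def great_circle_distance(r, lat1, lat2, lon1, lon2):
    """Great-circle distance on a sphere of radius r.

    Latitudes and longitudes are in degrees. Uses the spherical law of
    cosines: d = r * arccos(cos d1 cos d2 cos(l1 - l2) + sin d1 sin d2).
    """
    d1, d2, l1, l2 = map(math.radians, (lat1, lat2, lon1, lon2))
    central_angle = math.acos(
        math.cos(d1) * math.cos(d2) * math.cos(l1 - l2)
        + math.sin(d1) * math.sin(d2)
    )
    return r * central_angle

# Exercise data: r = 4.7 km, latitudes (45°, 32°), longitudes (24°, 17°)
d = great_circle_distance(4.7, 45, 32, 24, 17)
print(round(d, 3))  # distance in km, roughly 1.156
```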
Constraint-based structure learning is one of the few classes of algorithms for Bayesian network structure learning. Structure learning is the process of reconstructing the Bayesian network from a data set. Constraint-based structure learning is an intuitive process but can be very sensitive to individual failures. There are two main reasons for structure learning: knowledge discovery and density estimation. Knowledge discovery is useful for learning dependencies and relationships between variables in a problem domain. Density estimation, which is the more common reason, focuses on creating a model that can be used to perform inference on new data. Brute-force techniques are not feasible for structure learning, and even some of the common methods turn out to be NP-hard in many cases. The problem stems from the super-exponential growth in the number of possible directed acyclic graphs (DAGs) as nodes are added. For example, with just five nodes, there are 29,281 possible DAGs. Increasing to ten nodes yields about $ 4.2 \times 10^{18} $ possible DAGs. Constraint-based methods use the data to find all the conditional independence statements, which are then used to construct a Bayesian network representative of these statements. To understand the general process used by constraint-based methods, it is easiest to start with a simplifying assumption: we have an oracle that answers conditional independence questions, such as "is $A \perp B|C$?" Another simplifying assumption is that we have the skeleton (undirected graph) for the Bayesian network represented by the data set. Given a skeleton graph, there are four rules that can be used in conjunction with queries to the oracle to create a DAG. Earlier rules always take precedence over later rules. (Example taken from Jensen and Nielsen.) Suppose we have the sequence of graphs below where the only v-structure that could be created is C $\rightarrow$ F, D $\rightarrow$ F (see graph (a)).
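The super-exponential counts quoted above can be verified with Robinson's recurrence for the number of labeled DAGs on $n$ nodes, $a(n) = \sum_{k=1}^{n} (-1)^{k+1} \binom{n}{k} 2^{k(n-k)} a(n-k)$ with $a(0)=1$. A short sketch (the helper name is mine, not from any structure-learning library):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def num_dags(n):
    """Number of labeled DAGs on n nodes, via Robinson's recurrence."""
    if n == 0:
        return 1
    # Inclusion-exclusion over the k nodes of in-degree zero; each such
    # node may send an edge to any of the remaining n - k nodes.
    return sum((-1) ** (k + 1) * comb(n, k) * 2 ** (k * (n - k)) * num_dags(n - k)
               for k in range(1, n + 1))

print(num_dags(5))    # 29281 possible DAGs on five nodes
print(num_dags(10))   # about 4.2e18
```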
An example application of Rules 2-4 is as follows: The skeleton for the graph can be obtained by asking the oracle various independence queries. The PC algorithm takes the complete graph as input and produces the graph skeleton. The PC algorithm is as follows:

 Begin with complete graph G
 i = 0
 while any node has at least i + 1 neighbors
   foreach node A with at least i + 1 neighbors
     foreach B $\in$ ADJ$_A$
       for all sets X such that |X| = i and X $\subseteq$ (ADJ$_A$ - {B})
         if $A \perp B|X$ then remove edge A -- B and store the independence
   i = i + 1

(Example taken from Jensen and Nielsen.) Given the complete graph shown as (a), the first iteration (i=0) yields the following oracle queries and answers:

$A \perp B$ = yes<br> $A \perp C$ = no<br> $A \perp D$ = no<br> $A \perp E$ = no<br> $B \perp C$ = yes<br> $B \perp D$ = no<br> $B \perp E$ = no<br> $C \perp D$ = no<br> $C \perp E$ = no<br> $D \perp E$ = no<br>

Removing the edges A – B and B – C results in the graph shown in (b). Moving on to iteration (i=1):

$A \perp C|E$ = no, $B \perp C|D$ = no<br> $B \perp C|E$ = no, $B \perp D|C$ = no<br> $B \perp D|E$ = no, $B \perp E|C$ = no<br> $B \perp E|D$ = no, $C \perp B|A$ = no<br> $C \perp D|B$ = no, $C \perp D|A$ = yes<br> $C \perp E|A$ = no, $C \perp E|B$ = no<br> $D \perp B|E$ = no, $D \perp E|B$ = no<br> $E \perp A|B$ = no, $E \perp A|D$ = no<br> $E \perp B|A$ = no, $E \perp C|B$ = no<br> $E \perp C|D$ = no, $E \perp D|A$ = no<br> $E \perp D|C$ = no<br>

The only edge removed during this iteration is C – D, resulting in (c). Now some of the queries for iteration (i=2):

$A \perp C|D,E$ = no<br> $A \perp D|C,E$ = no<br> $A \perp E|C,D$ = yes<br> $B \perp E|C,D$ = yes<br>

So with edges A – E and B – E removed, we are left with (d). At the start of iteration (i=3) no node has four adjacent nodes, so the algorithm stops and the final graph skeleton is (d). Unfortunately there is no magical oracle that can answer these independence queries.
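The trace above can be replayed in code. The sketch below is a minimal PC skeleton phase with the oracle hard-coded to exactly the independence answers listed in the example (names such as `pc_skeleton` are my own, not from any package); it recovers the skeleton (d):

```python
from itertools import combinations

def pc_skeleton(nodes, indep):
    """PC skeleton phase: start from the complete graph and delete an edge
    A -- B as soon as some conditioning set X of size i, drawn from A's
    other neighbors, makes A and B conditionally independent."""
    adj = {n: set(nodes) - {n} for n in nodes}
    sepset = {}
    i = 0
    while any(len(adj[n]) >= i + 1 for n in nodes):
        for a in nodes:
            for b in sorted(adj[a]):
                if b not in adj[a]:
                    continue  # edge already removed earlier in this sweep
                for x in combinations(sorted(adj[a] - {b}), i):
                    if indep(a, b, frozenset(x)):
                        adj[a].discard(b)
                        adj[b].discard(a)
                        sepset[frozenset((a, b))] = frozenset(x)
                        break
        i += 1
    return {frozenset((a, b)) for a in nodes for b in adj[a]}, sepset

# Oracle answers exactly as in the worked example above.
INDEPENDENCIES = {
    (frozenset("AB"), frozenset()),
    (frozenset("BC"), frozenset()),
    (frozenset("CD"), frozenset("A")),
    (frozenset("AE"), frozenset("CD")),
    (frozenset("BE"), frozenset("CD")),
}
oracle = lambda a, b, x: (frozenset((a, b)), x) in INDEPENDENCIES

edges, seps = pc_skeleton("ABCDE", oracle)
print(sorted("".join(sorted(e)) for e in edges))  # ['AC', 'AD', 'BD', 'CE', 'DE']
```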
However, if the data set is truly representative of the Bayesian network, and we have enough samples, the data set itself can be used to answer these queries. There are many different methods for performing these queries. One is known as conditional mutual information and is defined by the following equation: $ CMI(A,B|X)=\sum_{X} P(X) \sum_{A,B} P(A,B|X)\log \frac{P(A,B|X)}{P(A|X)P(B|X)} $ CMI gives a measurement of the degree of dependence between A and B given X. In particular, CMI = 0 when A is conditionally independent of B given X. A common approach is to perform a $\chi^2$-test on the hypothesis $CMI(A,B|X) = 0$ with a user-defined significance level. The higher the significance level, the more links will be removed during the PC algorithm. If the data set is truly representative of the unknown Bayesian network N, then the skeleton graph created using these methods will be the skeleton of N. Given the same conditions, all the correct v-structures can be assigned given the independence queries. Overall, this is a very intuitive approach that, in some cases, can yield good results. Unfortunately these methods have some significant weaknesses, which often make score-based methods a better choice. In the real world, the data set is rarely generated exactly from the Bayesian network that you are trying to construct. Real world data sets also contain missing or incorrect values. Since the process relies on receiving accurate answers to independence queries, even small inaccuracies can lead to the incorrect removal or non-removal of edges, as well as incorrectly assigned or missed v-structures. As is common with many methods, there is also the problem of overfitting. Even if the constructed Bayesian network perfectly represents the data set, it may not generalize well, especially if the data set is small or has missing or incorrect values.
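As a sanity check on the CMI definition, here is a direct translation of the formula into Python (my own helper, not from any library). It returns exactly zero for a distribution in which A is conditionally independent of B given X, and a positive value otherwise:

```python
from itertools import product
from math import log

def conditional_mutual_information(p):
    """CMI(A, B | X) for a joint table p[a][b][x] = P(A=a, B=b, X=x).

    Implements sum_x P(x) sum_{a,b} P(a,b|x) log[P(a,b|x) / (P(a|x)P(b|x))],
    rewritten in terms of joints so no explicit conditionals are needed.
    """
    A, B, X = len(p), len(p[0]), len(p[0][0])
    px  = [sum(p[a][b][x] for a in range(A) for b in range(B)) for x in range(X)]
    pax = [[sum(p[a][b][x] for b in range(B)) for x in range(X)] for a in range(A)]
    pbx = [[sum(p[a][b][x] for a in range(A)) for x in range(X)] for b in range(B)]
    total = 0.0
    for a, b, x in product(range(A), range(B), range(X)):
        if p[a][b][x] > 0:
            total += p[a][b][x] * log(p[a][b][x] * px[x] / (pax[a][x] * pbx[b][x]))
    return total

# A and B conditionally independent given X  ->  CMI is 0
px = [0.5, 0.5]
pa = [[0.3, 0.6], [0.7, 0.4]]            # P(A=a | X=x)
pb = [[0.2, 0.5], [0.8, 0.5]]            # P(B=b | X=x)
p_indep = [[[pa[a][x] * pb[b][x] * px[x] for x in range(2)]
            for b in range(2)] for a in range(2)]
print(conditional_mutual_information(p_indep))   # ~0.0

# A tracks B within each value of X  ->  CMI is positive
p_dep = [[[0.2, 0.2], [0.05, 0.05]], [[0.05, 0.05], [0.2, 0.2]]]
print(conditional_mutual_information(p_dep))     # ~0.19 nats
```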
Starting with a complete graph with six nodes (like the one shown below) use the PC algorithm and your own oracle to create a skeleton graph (the skeleton must meet the requirements needed for the next step). Using your skeleton graph and oracle, go through the steps of creating a possible DAG that uses each of the four rules at least once. List the steps and rules used to add each of your directed edges.
In this section we will see a useful method for approaching a problem that cannot be solved analytically and in the process we will learn why a product wavefunction is a logical choice for approximating a multi-electron wavefunction. The helium atom Hamiltonian is re-written below with the kinetic and potential energy terms for each electron followed by the potential energy term for the electron-electron interaction. The last term, the electron-electron interaction, is the one that makes the Schrödinger equation impossible to solve. \[\hat {H} = -\dfrac {\hbar ^2}{2m} \nabla^2_1 - \dfrac {2e^2}{4 \pi \epsilon _0 r_1} - \dfrac {\hbar ^2}{2m} \nabla ^2_2 - \dfrac {2e^2}{4 \pi \epsilon _0 r_2} + \dfrac {e^2}{4 \pi \epsilon _0 r_{12}} \label {9-9}\] To solve the Schrödinger Equation using this Hamiltonian, we need to make an assumption that allows us to find an approximate solution. The approximation that we consider in this section is the complete neglect of the electron-electron interaction term. Odd though it seems, this assumption corresponds mathematically to treating the helium atom as two non-interacting helium ions (with one electron each) that happen to share the same nucleus. This approximation is called the independent-electron assumption. While this assumption might seem very drastic, it is worth trying since it also presents a straightforward path to a solution. A general strategy when solving difficult problems is to make an assumption and see how the results turn out. In this case we can compare the results we obtain using the assumption to what is known experimentally about the quantum states of helium, like the ionization energies. Are we a factor of 10 off? 10000? The latter result would probably indicate that we have hit a dead end with this method, while the former might indicate a method worth refining. Neglecting the electron repulsion term simplifies the helium atom Hamiltonian to a sum of two hydrogen-like Hamiltonians that can be solved exactly.
\[\hat {H}(r_1,r_2) = \hat {H} (r_1) + \hat {H} (r_2) \label {9-10}\] The variables (the positions of the electrons, \(r_1\) and \(r_2\)) in the Schrödinger equation separate, and we end up with two independent Schrödinger equations that are exactly the same as that for the hydrogen atom, except that the nuclear charge is +2e rather than +1e. \[ \hat {H} (r_1) \varphi (r_1) = E_1 \varphi (r_1) \label {9-11}\] \[ \hat {H} (r_2) \varphi (r_2) = E_2 \varphi (r_2) \label {9-12}\] Exercise \(\PageIndex{1}\) What is the specific mathematical form for \(\hat {H} (r_1)\) in Equation \(\ref{9-10}\)? Using our previous experiences with separation of variables, we realize that the wavefunction can be approximated as a product of two single-electron hydrogen-atom wavefunctions with a nuclear charge \(Z = +2e\), \[\psi (r_1 , r_2) \approx \varphi (r_1) \varphi (r_2) \label {9-13}\] Exercise \(\PageIndex{2}\) Write the explicit mathematical expression for the ground state wavefunction for the helium atom shown in Equation \(\ref{9-13}\). Binding Energy As we will show below, the energy eigenvalue associated with the product wavefunction is the sum of the one-electron energies associated with the component single-electron hydrogen-atom wavefunctions. \[E_{He} = E_1 + E_2 \label {9-14}\] The energy calculated using the Schrödinger equation is also called the total energy or the binding energy. Binding energy is the energy required to separate the particles of a system (in this case the two electrons and the nucleus) to an infinite distance apart. The binding energy should not be confused with the ionization energy, \(IP\), which is the energy required to remove only one electron from the helium atom. Binding energies can be measured experimentally by sequentially ionizing the atom and summing all the ionization energies. 
Hence for the lithium atom with three electrons, the binding energy is \[E_{Li} = IP_1 + IP_2 + IP_3\] The binding energy (or total energy) should not be confused with the ionization energy, \(IP\), which is the energy required to remove a single electron from the atom. Exercise \(\PageIndex{3}\) Why was it unnecessary to differentiate the terms binding energy and ionization energy for the hydrogen atom and other one-electron systems? To calculate binding energies using the approximate Hamiltonian with the missing electron-electron repulsion term, we use the expectation value integral, Equation \(\ref{9-15}\). This is a general approach and we’ve used it in earlier chapters. The notation \(\int d\tau \) is used to represent integration over the three-dimensional space in spherical coordinates for electrons 1 and 2. \[ \left \langle E \right \rangle = \int \varphi ^*_{1s} (r_1) \varphi ^*_{1s} (r_2) [ H(r_1) + H(r_2) ] \varphi _{1s} (r_1)\varphi _{1s} (r_2) d\tau \label {9-15}\] The wavefunctions in Equation \(\ref{9-15}\) are the hydrogen atom functions with a nuclear charge of +2e. The resulting energy for the helium ground state is \[ E_{approx} = 2Z^2 E_H = - 108.8\, eV \label {9-16}\] where \(Z = +2\) and \(E_H\) is the binding energy of the hydrogen atom (-13.6 eV). The calculated result for the binding energy can be compared to the experimental value of -79.0 eV. The difference is due to the neglected electron-electron interaction. The experimental and calculated binding and ionization energies are listed in Table \(\PageIndex{1}\).

Table \(\PageIndex{1}\)
  Quantity                                                       Experimental   Crude Approximation
  \(E\) (energy to remove all electrons from the nucleus)        -79.0 eV       -108.8 eV
  \(IP\) (energy to remove the weakest-bound electron)           24.6 eV        54.4 eV

Exercise \(\PageIndex{4}\) Start with Equation \(\ref{9-15}\) and show that \(E\) in fact equals -108.8 eV. Rather than evaluating integrals, derive that \[E = 2 Z^2 E_H \nonumber\] and substitute the value for \(E_H\).
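The arithmetic of the crude independent-electron estimate is easy to script. A short sketch (plain arithmetic, not a quantum chemistry package) of the estimate and its error relative to experiment:

```python
E_H = -13.6   # hydrogen-atom ground-state (binding) energy, eV
Z = 2         # nuclear charge of helium

# Two electrons, each in a hydrogen-like 1s orbital with Z = 2:
E_approx = 2 * Z**2 * E_H
E_expt = -79.0                      # experimental helium binding energy, eV

error = abs(E_approx - E_expt) / abs(E_expt) * 100
print(E_approx)                     # -108.8 eV
print(round(error, 1))              # ~37.7 percent error
```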
The deviation of the calculated binding energy from the experimental value can be seen as good or bad depending on your point of view. It is bad because a 38% error is nothing to “brag about”; on the other hand, it is good because the calculated value is close to the experimental value: both the experiment and the calculation give an answer of about -100 eV for the binding energy of helium. This comparison tells you that although the electron repulsion term is important, the idea that the electrons are independent is reasonable: you can completely neglect the electron-electron interaction and still get a reasonable, though not particularly accurate, value for the binding energy. This observation is important because we can now feel justified in using the idea of independent electrons as a starting point for improved approximate solutions to the Schrödinger equation for multi-electron atoms and molecules. To find better approximate solutions for multi-electron systems, we start with wavefunctions that depend only on the coordinates of a single electron, and then take into account the electron-electron repulsion to improve the accuracy. Getting highly accurate energies and computed properties for many-electron systems is not an impossible task. In subsequent sections of this chapter we approximate the helium atom using several additional widely applicable approaches: perturbation theory, the variational method, self-consistent field theory and the Hartree-Fock approach (SCF-HF), and configuration interaction (CI). These basic computational chemistry tools are used to treat other multi-electron systems, both atomic and molecular, for applications ranging from the interpretation of spectroscopy to predictions of chemical reactivity. Contributors Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski
I have a velocity field and I want to get a pressure field. In my experiment we're controlling the pressure at the inlet and the outlet. I have Dirichlet boundary conditions at the inlet and outlet and I'm applying Neumann boundary conditions at the vertical walls. To derive the hydraulic pressure map, I'm taking the divergence of the steady-state Navier-Stokes equation (NSE) for an incompressible fluid. The pressure Poisson equation reads $$-\frac{1}{\rho}\nabla^2 p=-\frac{1}{\rho}\left(\frac{\partial^2 p}{\partial x^2}+\frac{\partial^2 p}{\partial y^2}\right)=\left(\frac{\partial u}{\partial x}\right)^2+ 2\,\frac{\partial u}{\partial y}\frac{\partial v}{\partial x}+\left(\frac{\partial v}{\partial y}\right)^2$$ In my code I tried different velocity fields (constant, linear, random, etc.) but I'm always getting a linear pressure field, which I find illogical. Do you have an idea what I'm doing wrong? I think it's not normal to always get a linear pressure. Do you have another proposition to solve my problem? In my case, I'm working on concrete, so the velocity is in the range of $10^{-6}\,\frac{m}{s}$ and the pressures at the boundary conditions are in the range of $10^6$ Pa. When I compute $\nabla^2 p$, I'm getting values close to zero, which is expected (since the velocity is very small compared to the pressure field), so in the end I'm getting a linear pressure field. For info, I'm solving my problem using finite differences. What do you think?
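For what it's worth, the behaviour described is exactly what the numbers predict: with velocities of order $10^{-6}$ m/s the source term is negligible next to the $10^6$ Pa boundary values, so the problem is effectively Laplace's equation, whose solution with Dirichlet ends and no-flux walls is linear in x. A minimal finite-difference (Jacobi) sketch illustrating this (grid size, iteration count and names are my own choices, not from the question):

```python
import numpy as np

def solve_pressure_poisson(rhs, p_in, p_out, n_iter=5000):
    """Jacobi iteration for d2p/dx2 + d2p/dy2 = rhs on a unit-spacing grid.
    Dirichlet p_in / p_out on the left/right edges (inlet/outlet),
    zero-gradient Neumann on the top/bottom walls."""
    ny, nx = rhs.shape
    p = np.zeros((ny, nx))
    p[:, 0], p[:, -1] = p_in, p_out
    for _ in range(n_iter):
        p[0, :], p[-1, :] = p[1, :], p[-2, :]          # dp/dy = 0 at walls
        p[1:-1, 1:-1] = 0.25 * (p[1:-1, 2:] + p[1:-1, :-2] +
                                p[2:, 1:-1] + p[:-2, 1:-1] - rhs[1:-1, 1:-1])
        p[:, 0], p[:, -1] = p_in, p_out                # re-impose Dirichlet
    return p

# Tiny velocity gradients -> rhs ~ 0 -> pressure varies linearly between ends
p = solve_pressure_poisson(np.zeros((20, 20)), 1e6, 0.0)
linear = np.linspace(1e6, 0.0, 20)
print(np.abs(p - linear[None, :]).max() < 1.0)  # True: linear in x, flat in y
```

To see a genuinely non-linear field you would need a right-hand side comparable to (pressure scale)/(length scale)², which the quadratic velocity-gradient terms here cannot supply.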
In the formula, the stock return is modelled as a Brownian motion with a drift plus a stochastic term; OK, I get that. But the drift term is then modelled as $r - \sigma^2/2$. I am not sure how they derive this "$\sigma^2/2$". Is this derived from Itô's lemma? This drift comes from making the discounted stock a martingale in the risk-neutral measure $\mathbb Q$. You start with a stock in $\mathbb P$ having this form: $$ dS_t = \mu S_t dt + \sigma S_t dW_t $$ You also have a discount factor $e^{rt}$. The idea is to remove the drift of the discounted process in $\mathbb Q$ so you get (after applying Girsanov's theorem) a martingale: $$ d\hat S_t = \sigma \hat S_t d \tilde W_t $$ where $\hat S_t$ is the discounted stock and $\tilde W_t$ is a $\mathbb Q$-Brownian motion. If you solve this last SDE you get $$ \hat S_t = \hat S_0\exp(\sigma \tilde W_t - \frac{1}{2}\sigma^2t) $$ Multiplying by $e^{rt}$ on both sides you get the un-discounted process and the drift you were asking about. But the gist of why you get the correction term $\frac{1}{2}\sigma^2t$ is that when solving the SDE $$ dX_t = \sigma X_t dW_t $$ (apply Itô's lemma to $\log X_t$) you get $X_t = X_0 \exp(\sigma W_t - \frac{1}{2}\sigma^2t)$
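You can also see the $-\sigma^2/2$ correction numerically: simulate terminal prices with and without it and check which discounted mean equals $S_0$ (the martingale property). A quick Monte Carlo sketch (the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
S0, r, sigma, T = 100.0, 0.05, 0.2, 1.0
Z = rng.standard_normal(1_000_000)

# Risk-neutral dynamics with the Ito correction: drift r - sigma^2/2
S_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
# "Naive" dynamics without the correction
S_naive = S0 * np.exp(r * T + sigma * np.sqrt(T) * Z)

disc = np.exp(-r * T)
print(disc * S_T.mean())      # ~100: discounted stock is a martingale
print(disc * S_naive.mean())  # ~102 = S0 * exp(sigma^2 T / 2): biased upward
```

The naive version overshoots by the factor $e^{\sigma^2 T/2}$, which is exactly the convexity of the exponential that Itô's lemma accounts for.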
To Explain the World: The Discovery of Modern Science. Steven Weinberg, Bvtn 509 Wei, 2015, Harper Collins. Steven Weinberg is a theoretician who does physics with mathematics, not measurement. This book is about the development of mathematical explanations for observations, not how those observations are made or validated, nor how experimental apparatus is constructed and calibrated. I only skimmed the book. It whetted my interest in how Tycho and Kepler performed their measurements and especially how they estimated the errors in their imprecise pre-telescope, pre-photography, pre-standards astronomical measurements. A companion book should be written by an experimentalist instead of a pure theoretician. A footnote on page 168 may result from this narrow theoretical focus: The main effect of the ellipticity of planetary orbits is not so much the ellipticity itself as the fact that the Sun is at a focus rather than the center of the ellipse. To be precise, the distance between either focus and the center of an ellipse is proportional to the eccentricity, while the variation in the distance of points on the ellipse from either focus is proportional to the square of the eccentricity, which for a small eccentricity makes it much smaller. For instance, for an eccentricity of 0.1 (similar to that of the orbit of Mars) the smallest distance of the planet from the Sun is only ½ percent smaller than the largest distance. On the other hand, the distance of the Sun from the center of this orbit is 10 percent of the average radius of the orbit. The underlined text above is incorrect. The radius r of a Keplerian orbit (with a semimajor axis of a and an eccentricity of e ) varies with angle \theta by the formula: \Large r ~ = ~ \LARGE { { a ( 1 - e^2 ) } \over { 1 + e \cos( \theta ) } } This equation can be derived from the conservation of energy and angular momentum in an inverse-square gravity field, and is well known to astronomers and space engineers.
For Mars, the semimajor axis $a$ is 2.279392e11 meters or 1.523679 AU, and the eccentricity $e$ is 0.0934. At perihelion (closest to the Sun), $\theta = 0$, so the perihelion radius is: $$ r_{p} = \frac{a(1-e^2)}{1+e\cos(0)} = \frac{a(1-e^2)}{1+e} = a(1-e) \approx 2.067\mathrm{e}11\ \text{m, or } 1.3814\ \text{AU} $$ At aphelion (farthest from the Sun), $\theta = \pi$, so the aphelion radius is: $$ r_{a} = \frac{a(1-e^2)}{1+e\cos(\pi)} = \frac{a(1-e^2)}{1-e} = a(1+e) \approx 2.492\mathrm{e}11\ \text{m, or } 1.6660\ \text{AU} $$ The ratio of these distances is $(1+e)/(1-e) \approx 1.206$. About 20%, not ½%. Page 170 has an amusing quote from page 24 of Donahue's translation of Kepler's Astronomia Nova: "Advice for Idiots. But whoever is too stupid to understand astronomical science, or too weak to believe Copernicus without affecting his faith, I would advise him that, having dismissed astronomical studies and having damned whatever philosophical opinions he pleases, he mind his own business and betake himself home to scratch in his own dirt patch."
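The correction is easy to check numerically; a minimal sketch using the Mars values above:

```python
import math

# Orbit equation r(theta) = a(1 - e^2) / (1 + e*cos(theta)), Mars values.
a, e = 2.279392e11, 0.0934   # semimajor axis (m), eccentricity

def r(theta):
    return a * (1 - e**2) / (1 + e * math.cos(theta))

r_p = r(0.0)        # perihelion: reduces to a*(1 - e)
r_a = r(math.pi)    # aphelion:   reduces to a*(1 + e)
print(r_a / r_p)    # (1+e)/(1-e) ~ 1.206: about 20%, not 1/2 percent
```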
Advanced topics in information theory From CYPHYNETS Revision as of 08:04, 30 July 2009 Reading Group: Advanced Topics in Information Theory Calendar: Summer 2009 Venue: LUMS School of Science & Engineering Organizer: Abubakr Muhammad This group meets every week at LUMS to discuss some advanced topics in information theory. This is a continuation of our formal course at LUMS, CS-683: Information Theory (offered most recently in Spring 2008). We hope to cover some advanced topics in information theory as well as its connections to other fundamental disciplines such as statistics, mathematics, physics and technology. Participants Mubasher Beg Shahida Jabeem Qasim Maqbool Muhammad Bilal Muzammad Baig Hassan Mohy-ud-Din Zartash Uzmi Shahab Baqai Abubakr Muhammad Topics Include the following, but not limited to: Rate distortion theory Network information theory Kolmogorov complexity Quantum information theory Sessions July 7: Organization. Recap of CS-683 Basic organization, presentation assignments. Review of information theory ideas: entropy, AEP, compression and capacity. The entropy of a random variable is given by $H(X) = -\sum_x p(x)\log p(x)$. The capacity of a channel is defined by $C = \max_{p(x)} I(X;Y)$. Compression and capacity determine the two fundamental information-theoretic limits of data transmission. A review of Gaussian channels and their capacities. Let us take this analysis one step further. How much do you lose when you cross these barriers? We saw one situation when you try to transmit over the capacity.
By Fano's inequality, $H(X|\hat X) \le H(P_e) + P_e \log(|\mathcal{X}|-1)$. Rate distortion: a theory for lossy data compression. References/Literature: Elements of Information Theory by Cover and Thomas. July 14: Rate distortion theory - I Rate-distortion theory provides the theoretical foundations for lossy data compression. We try to answer the following question: given an acceptable level of distortion, what is the minimal information that should be sent over a channel, so that the source can be reconstructed (up to that level of distortion) at the receiver? Quantization for a single random variable: given a distribution for a random variable, what are the optimal choices for quantization? The answer is Lloyd's algorithm (closely related to k-means clustering). What about multiple random variables, treated at the same time? Even if the RVs are IID, quantizing them in sequences can result in better performance (stated without proof). Define distortion D. When is a rate-distortion pair (R,D) achievable? The rate distortion function R(D) is the minimum rate R such that (R,D) is achievable for a given D. There is also an information-theoretic definition, $$R(D) = \min_{p(\hat x|x):\ \sum p(x,\hat x)\, d(x,\hat x) \le D} I(X;\hat X).$$ We can show that both are equivalent. The proof follows closely the treatment of channel capacity. July 21: Rate distortion theory - II July 28: Network Information theory - I Aug 04: Network Information theory - II Aug 11: Wireless networks, cognitive radios Aug 18: Multiple access channels, network coding techniques
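Lloyd's algorithm mentioned in the July 14 session is easy to sketch for a scalar source. The Gaussian source, level count, and iteration count below are illustrative assumptions, not from the session notes:

```python
import numpy as np

# Lloyd's algorithm for scalar quantization: alternate between assigning
# samples to the nearest reproduction level (nearest-neighbor cells) and
# moving each level to the centroid (mean) of its cell.
rng = np.random.default_rng(0)
x = rng.normal(size=10_000)        # source samples (assumed Gaussian)
levels = np.linspace(-2, 2, 4)     # 4 levels = 2 bits per sample

for _ in range(50):
    cells = np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)
    levels = np.array([x[cells == k].mean() for k in range(4)])

cells = np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)
distortion = np.mean((x - levels[cells]) ** 2)   # mean-squared-error D
print(distortion)   # ~0.12, near the 2-bit Lloyd-Max optimum for N(0,1)
```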
I'm trying to understand the BRST complex in its Lagrangian incarnation, i.e. in the form closest to the original Faddeev-Popov formulation. It looks like the most important part of that construction (the proof of vanishing of higher cohomology groups) is very hard to find in the literature; at least I was not able to do so. Let me formulate a couple of questions on BRST, but in the form of exercises on Lie algebra cohomology. Let $X$ be a smooth affine variety, and let $g$ be a (reductive?) Lie algebra acting on $X$. I think we assume $g$ to be at least unimodular, otherwise the BRST construction won't work, and we also assume that the map $g \to T_X$ is injective. In physics language this is a closed and irreducible action of a Lie algebra of a gauge group on the space of fields $X$. The structure sheaf $\mathcal{O}_X$ is a module over $g$, and I can form the Chevalley-Eilenberg complex with coefficients in this module$$C=\wedge g^* \otimes \mathcal{O}_X.$$ The ultimate goal of the BRST construction is to provide a "free model" of the algebra of invariants $\mathcal{O}_X^g$; it is not clear what a "free model" is, but I think the BRST construction is just Tate's procedure of killing cycles for the Chevalley-Eilenberg complex above (Tate's construction works for any dg algebra, and $C$ is a dg algebra). My first question is: what exactly are the cohomology of the complex $C$? In other words, before killing cohomology I'd like to understand what exactly has to be killed. To me this looks like a classical question on Lie algebra cohomology and, perhaps, it was discussed in the literature 60 years ago. It is not necessary to calculate these cohomology groups and then follow Tate's approach to construct the complete BRST complex (complete means I have added anti-ghosts and Lagrange multipliers to $C$ and modified the differential), but even if I start with the BRST complex$$C_{BRST}=(\mathcal{O}_X \otimes \wedge (g \oplus g^*) \otimes S(g), d_{BRST}=d_{CE}+d_1),$$where could I find a proof that all higher cohomology vanish?
This post imported from StackExchange MathOverflow at 2014-08-15 09:41 (UCT), posted by SE-user Sasha Pavlov
A polygon is a two-dimensional (2-D) closed figure made up of straight line segments. In geometry, a hexagon is a polygon with 6 sides. If the lengths of all the sides and the measures of all the angles are equal, such a hexagon is called a regular hexagon. In other words, the sides of a regular hexagon are congruent. There is a predefined set of formulas for the calculation of the perimeter and area of a regular hexagon, collectively called the hexagon formula. The hexagon formula for a hexagon with side length a is given as: Perimeter of a Hexagon = 6a Area of a Hexagon = \(\frac{3\sqrt{3}}{2} \times a^{2}\) The hexagon formula helps us to compute the area and perimeter of hexagonal objects. A honeycomb, a quartz crystal, a bolt head, a lug/wheel nut, an Allen wrench, and floor tiles are a few things in which you would find a hexagon. Properties of a Regular Hexagon: It has six sides and six angles. The lengths of all the sides and the measures of all the angles are equal. The total number of diagonals in a regular hexagon is 9. The sum of all interior angles is equal to 720 degrees, where each interior angle measures 120 degrees. The sum of all exterior angles is equal to 360 degrees, where each exterior angle measures 60 degrees. Derivation: Consider a regular hexagon with each side a units. Formula for area of a hexagon: The area of a hexagon is defined as the region occupied inside the boundary of the hexagon. To calculate the area of a hexagon, we divide it into six small isosceles triangles, calculate the area of one of the triangles, and then multiply by 6 to find the total area of the polygon. Take one of the triangles and draw a line from the apex to the midpoint of the base to form a right angle. The base of the triangle is a, the side length of the polygon; let the length of this line (the apothem) be h. The angles at the center of the hexagon sum to 360 degrees.
Here, ∠AOB = 360°/6 = 60° ∴ θ = 30° We know that the tan of an angle is the opposite side over the adjacent side. Therefore, \( tan\theta = \frac{\left ( a/2 \right )}{h}\) \(tan30 = \frac{\left ( a/2 \right )}{h}\) \(\frac{\sqrt{3}}{3}= \frac{\left ( a/2 \right )}{h}\) \(h= \frac{a}{2}\times \frac{3}{\sqrt{3}}\) The area of a triangle = \(\frac{1}{2}bh\) The area of a triangle = \(\frac{1}{2}\times a\times \frac{a}{2}\times \frac{3}{\sqrt{3}}\) = \(\frac{3}{\sqrt{3}}\frac{a^{2}}{4}\) Area of the hexagon = 6 x Area of Triangle Area of the hexagon = \(6\times \frac{3}{\sqrt{3}} \times \frac{a^{2}}{4}\) Area of a Hexagon = \(\frac{3\sqrt{3}}{2} \times a^{2}\) Formula for perimeter of a hexagon: The perimeter of a hexagon is defined as the length of the boundary of the hexagon, so the perimeter will be the sum of the lengths of all sides. The formula for the perimeter of a hexagon is given by: Perimeter = length of 6 sides Perimeter of a Hexagon = 6a Solved examples: Question 1: Calculate the area and perimeter of a regular hexagon whose side is 4.1 cm. Solution: Given, side of the hexagon = 4.1 cm Area of a Hexagon = \(\frac{3\sqrt{3}}{2} \times a^{2}\) Area of a Hexagon = \(\frac{3\sqrt{3}}{2} \times 4.1^{2}\) = 43.67 cm² Perimeter of the hexagon = 6a = 6 × 4.1 = 24.6 cm Question 2: The perimeter of a hexagonal board is 24 cm. Find the area of the board. Solution: Given, perimeter of the board = 24 cm Perimeter of a Hexagon = 6a 24 cm = 6a a = 24/6 = 4 cm Area of a Hexagon = \(\frac{3\sqrt{3}}{2} \times 4^{2}\) = 41.57 cm² To solve more problems on the topic, download Byju's - The Learning App.
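The two worked examples can be checked directly with the formulas above; a minimal sketch:

```python
import math

# Regular hexagon formulas: perimeter 6a, area (3*sqrt(3)/2)*a^2.
def hexagon_perimeter(a):
    return 6 * a

def hexagon_area(a):
    return 3 * math.sqrt(3) / 2 * a ** 2

print(round(hexagon_area(4.1), 2))       # 43.67  (Question 1)
print(round(hexagon_perimeter(4.1), 1))  # 24.6
print(round(hexagon_area(24 / 6), 2))    # 41.57  (Question 2)
```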
I am trying to calculate the radial wavefunction for a spherical stepped potential of arbitrary lengths and heights, but am running into a numerical issue concerning exponentially growing and shrinking functions. For a potential with arbitrary regions and steps, the solution comes from solving a matrix representing the boundary conditions at each step: $R_q(r_q)=R_{q+1}(r_q)$ and $dR_q(r)/dr\big|_{r=r_q} = dR_{q+1}(r)/dr\big|_{r=r_q}$, which forms a matrix $A$ that I solve for the null space of. In regions where $E<V$, the solutions are the Hankel functions with imaginary arguments, $h_l(i\kappa r)$, where $\kappa$ is: $\kappa=\sqrt{2m(V-E)/\hbar^2}$ When the radius of a certain region where $E<V$ is large, or the potential step is high, the resulting dimensionless argument $i\kappa r$ becomes so large that the exponentially decaying/rising solutions are numerically pathological. For example, for a potential that has steps of $0, 5, 10$ eV, the resulting matrix looks like this: $\left( \begin{array}{cccc} -0.063 & -0.023 & -0.32 & 0 \\ 0 & 0.038 & 0.054 & 0.0... \\ 0.038 & 0.055 & -0.014 & 0 \\ 0 & -0.0075 & 0.0046 & -0.0... \end{array} \right)$ where the first two rows describe the matching conditions of the wavefunction, and the second two rows the derivative match. The wavefunctions for each region are $j_l$, $h^{(1)}_l$ and $h^{(2)}_l$, and $h^{(1)}_l$. The exponentially decaying solution results in a value in the last column of $10^{-30}$. Does anyone have advice on how to recast this problem to be numerically tractable? Changing the system of units will not work, as the argument of the exponential is dimensionless. When I use my code to follow the inputs given in https://link.aps.org/doi/10.1103/PhysRevB.49.17072, I get a bad answer - but it must be possible, as they have figures of the correct eigenfunctions!
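Not a full answer to the matrix problem, but the core numerical failure can be shown in miniature, and it points at the usual fix: factor the exponential out of each basis function analytically (or use exponentially scaled Bessel routines such as SciPy's `ive`/`kve`) and carry only log-magnitudes, exponentiating differences. The numbers below are illustrative:

```python
import numpy as np

# The decaying solution in a forbidden region behaves like exp(-kappa*r).
# Forming matrix entries from ratios of boundary values underflows if each
# factor is evaluated naively; keeping the exponents and exponentiating
# only their difference keeps the ratio representable.
x1, x2 = 200.0, 750.0                 # kappa*r at inner / outer boundary

naive = np.exp(-x2) / np.exp(-x1)     # exp(-750) underflows to exactly 0.0
stable = np.exp(-(x2 - x1))           # exp(-550): tiny but representable

print(naive, stable)
```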
Newform invariants Coefficients of the \(q\)-expansion are expressed in terms of \(\beta = \frac{1}{2}(1 + \sqrt{17})\). We also show the integral \(q\)-expansion of the trace form. For each embedding \(\iota_m\) of the coefficient field, the values \(\iota_m(a_n)\) are shown below. For more information on an embedded modular form you can click on its label. This newform does not admit any (nontrivial) inner twists.

\(p\) | Sign
2 | \(+1\)
3 | \(-1\)
5 | \(-1\)
67 | \(-1\)

This newform can be constructed as the intersection of the kernels of the following linear operators acting on \(S_{2}^{\mathrm{new}}(\Gamma_0(8040))\): \(T_{7}^{2} - T_{7} - 4\), \(T_{11}^{2} - 7 T_{11} + 8\), \(T_{13}^{2} - 2 T_{13} - 16\), \(T_{17}^{2} + 6 T_{17} - 8\)
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV with ALICE at the LHC (Elsevier, 2017-11) ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The increased ... ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV (Elsevier, 2017-11) ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{T}$). At central rapidity ($|y|<0.8$), ALICE can reconstruct $J/\psi$ via their decay into two electrons down to zero ...
What is the proof that any given unitary matrix can be converted as above? Let $U$ be an arbitrary $2\times 2$ unitary matrix. This is equivalent to the rows/columns of $U$ forming an orthonormal system. Let us write a generic $U$ as$$U=\begin{pmatrix}a&b\\c&d\end{pmatrix}.$$The constraints imposed on the coefficients $a,b,c,d$ by the requirement of $U$ being unitary are$$|a|^2+|c|^2=1,\qquad |b|^2+|d|^2=1,\qquad a^* b+c^* d=0.$$A pair of complex numbers $a,b\in\mathbb C$ satisfying $|a|^2+|b|^2=1$ can always be parametrized as$$a=e^{i\alpha_{11}}\cos\theta,\qquad b=e^{i\alpha_{12}}\sin\theta,$$for some real coefficients $\alpha_{ij},\theta\in\mathbb R$. It follows that, using only the normalization constraints (but without taking into account the orthogonality), we can parametrize $U$ as $$U=\begin{pmatrix}e^{i\alpha_{11}}\cos\theta& e^{i\alpha_{12}}\sin\theta\\e^{i\alpha_{21}}\sin\theta & e^{i\alpha_{22}}\cos\theta\end{pmatrix}.$$ Requiring the columns to be orthogonal then gives the additional relation $$e^{i(\alpha_{11}-\alpha_{12})}+e^{i(\alpha_{21}-\alpha_{22})}=0,$$that is, $\alpha_{11}=\alpha_{12}+\alpha_{21}-\alpha_{22}+\pi$. We conclude that $U$ is parametrized by four real parameters, here denoted $\theta,\alpha_{12},\alpha_{21},\alpha_{22}$. To get the form you show you simply need to change variables as follows: \begin{align}\theta&=\gamma/2, \\\alpha_{12} &= \alpha-\beta/2+\delta/2+\pi,\\\alpha_{21} &= \alpha+\beta/2-\delta/2, \\\alpha_{22} &= \alpha+\beta/2+\delta/2.\end{align} Like why there should be only 4 variables and not more or less? While the above already proved this, it can be useful to know that this is a special case of a more general result: a generic unitary $n\times n$ matrix is specified by $n^2$ real parameters (see the wiki page on the unitary group for more details).
An easy way to see this is again to remember that unitary matrices are characterised by their columns/rows forming an orthonormal system. This amounts to $n$ real constraints (imposing each of the $n$ columns to be normalized), plus $\binom{n}{2}=n(n-1)/2$ additional complex constraints (imposing each pair of columns to be orthogonal). Each complex constraint amounts to two real constraints, so this sums up to a total of$$n+2\binom{n}{2}=n+n(n-1)=n^2$$real, independent constraints. A generic $n\times n$ complex matrix is characterised by $n^2$ complex numbers, that is, $2n^2$ real numbers. We conclude that the number of free parameters of a generic $n\times n$ unitary matrix is:$$2n^2-n^2=n^2.$$ Going back to the simple $2\times 2$ case above, you can see how we recover the previous result because $2^2=4$ (you might also notice that to count parameters in the special case $2\times 2$ I used a different ad-hoc strategy, rather than the one shown here for the general case). Yet another way to count parameters Another method I like is to think entirely in terms of orthonormal systems. The question is: how many parameters need to be given to specify an orthonormal basis in an $n$-dimensional complex vector space? Let us start with the first vector. The only constraint here is that we want the vector to be normalized. The number of real parameters needed to specify a normalized vector in $n$ dimensions is $d_1=2n-1$. Let us now add another vector. Now we have to impose both the normalization of this additional vector (one real constraint), and the orthogonality of this additional vector to the initial one (two real constraints). The additional parameters are therefore $d_2= 2n-3$. A third vector will need to be normalized (one real constraint), and orthogonal to the first two vectors ($2\times 2$ real constraints), thus $d_3=2n-5$. Iterate this reasoning until you get to the last vector, which will be specified by a single real parameter.
The total number of parameters is therefore:$$\sum_k d_k=(2n-1)+(2n-3)+\cdots+3+1=\sum_{k=0}^{n-1} (2k+1).$$In other words, the number of parameters is given by the sum of the first $n$ odd integers, which is again readily shown to equal $n^2$.
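The 4-parameter form can also be checked numerically. Below is a sketch assuming the standard decomposition $U = e^{i\alpha} R_z(\beta) R_y(\gamma) R_z(\delta)$ (the form the question appears to refer to); any choice of the four angles yields a unitary:

```python
import numpy as np

# Build a 2x2 unitary from four real parameters via e^{ia} Rz(b) Ry(g) Rz(d).
def u2(alpha, beta, gamma, delta):
    rz = lambda t: np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])
    ry = lambda t: np.array([[np.cos(t / 2), -np.sin(t / 2)],
                             [np.sin(t / 2),  np.cos(t / 2)]])
    return np.exp(1j * alpha) * rz(beta) @ ry(gamma) @ rz(delta)

rng = np.random.default_rng(1)
U = u2(*rng.uniform(0, 2 * np.pi, size=4))
print(np.allclose(U.conj().T @ U, np.eye(2)))   # True for any parameters
```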
The short answer is that such numbers $N$ (that are generated pseudo-randomly) have to be infeasibly large, even if you accept numbers that might be only partially hard to factor. Define a b-smooth integer as an integer with only primes less than $2^b$ in its factorization. Observe that all integers less than $2^b$ are b-smooth. Define an integer that is b-hard to factor as an integer $N$ that meets the following criteria: $N$ is not a prime. $N$ is not the product of a single prime times a b-smooth integer. $N$ is not a b-smooth integer. This leaves us with integers $N$ that are the product of two or more primes greater than $2^b$, possibly times a b-smooth integer. Most candidates will fall into this category, but unfortunately the number of candidates that don't is hard to make small enough to get cryptographically acceptable guarantees that the candidate cannot be factored. Testing $N$ for primality is relatively cheap, so it might be presumed that the entity generating $N$ is discarding any candidates that are prime. Still, if $N$ is composite, an estimate of the number of candidates that are a product of a large prime times a b-smooth integer is that this number is roughly equal to the number $\pi(N)$ of primes less than $N$. Most of those primes are greater than $2^{n-b}$, and for candidates $N$ with such a prime in the factorization, the other factor will be a b-smooth integer. Since $\pi(N) \approx N/\log(N)$, and $N = 2^{\log(N)/\log(2)}$, this means $N$ has to be infeasibly large if you want guarantees that $N$ is equally infeasible to factor, say $N \gt 2^{2^{128}}$. Edit: It should be noted that the main reason numbers $N$ that are generated this way have to be unrealistically large is that this is the only way to ensure a low probability of a prime factor that is almost as large as $N$ itself.
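For concreteness, here is a sketch of the b-smoothness test by trial division (the function name and approach are mine; trial division is only viable for small examples, not cryptographic sizes):

```python
def is_b_smooth(n, b):
    """True if every prime factor of n is < 2**b (naive trial division)."""
    limit = 2 ** b
    p = 2
    while p * p <= n:
        if n % p == 0:
            if p >= limit:
                return False      # found a prime factor that is too large
            while n % p == 0:
                n //= p           # strip out this small prime entirely
        p += 1
    # whatever remains is 1 or a prime; smooth iff it is below the bound
    return n < limit

print(is_b_smooth(2**10 * 3**5, 3))  # True: factors 2 and 3 are < 8
print(is_b_smooth(7 * 101, 4))       # False: 101 >= 16
```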
A more feasible way to ensure that $N$ does not have any large prime factor is to construct $N$ as a product of smaller random integers, as suggested in the paper cited in this answer. However, $N$ would still have to be huge. A completely different approach, considering that the reason for the question is to find suitable parameters for a hash function, would be to generate a sequence of random integers $N_i$, but not multiply them together, instead using them for successive RSA operations. Select the exponent $e$ as a mid-sized prime, of a size relative to each $N_i$ such that both the probability that $e \ge \phi(N_i)$ is low, and the probability that $\gcd(e,\phi(N_i)) \ne 1$ is low. You should probably also ensure $e$ is not a Sophie Germain prime. Now define the function $H(m,e)$, for $2 \le m \lt N_0$, as $H_0(m,e) = m^e \mod N_0$ $H_i(m,e) = {H_{i-1}(m,e)}^e \mod N_i$, for $0 \lt i \le n$ $H(m,e) = H_n(m,e)$ If $e$ is selected as described above, the probability can be made arbitrarily high that each $H_i()$ function is a permutation, save for the few bits that are lost due to the different values of the moduli. The probability that at least one of the functions $H_i()$ is one-way (and hence $H()$ is one-way) depends on the choice of parameter $n$ and parameter $k$ such that each $2^{k-1} \lt N_i \lt 2^k$. For instance, I guess that selecting parameters such that $k = 4096$, $2^{191} < e < 2^{192}$ and $n = 256$ would be adequate. The downside is of course that the above function $H()$ can't easily be used for comparing hashes with different salts. It is however possible to modify it slightly, to give it this feature: $H_0(m,e) = m^e \mod N_0$ $H_i(m,e) = {H_{i-1}(m,e)}^e \mod N_i$, for $0 \lt i \lt n$ $N'_n = f(N_n + H_{n-1}(m,e))$ $H'(m,salt) = m^{2salt} \mod N'_n$.
The function $f()$ might be selected in such a way that statistical analysis of a realistic number of differently salted hashes of the same message $m$ is unlikely to leak bits about the exact value of $N'_n$. Such a function $H'(m,salt)$ might arguably be potentially useful as a building block of challenge-response protocols, but would present few benefits over plain $H()$ in terms of rainbow-table attack prevention.
Find a sequence $f_n$, bounded in $L^1([0,1])$, and $f\in L^1$, such that $$\lim_{n\to \infty}\int_0^x f_n=\int_0^x f$$ for all $x\in [0,1]$, and such that $\{f_n\}$ does not converge weakly to $f$ in $L^1$. Below is my attempt: let $$f_n(x)=2n\chi_{[0,1/n]}(x)$$ Then $f_n \in L^1([0,1])$ for all $n$, with $\int_{[0,1]} f_n =2$. Define $f$ to be identically $0$, i.e. $f\equiv 0$; then $f_n \to f$ pointwise a.e. So since $f_n \geq 0$ for all $n$ and $f_n$ is increasing, by the monotone convergence theorem $$\lim_{n\to \infty}\int_0^x f_n=\int_0^x f=\int_0^x 0 = 0$$ Below is Royden's definition of weak convergence: let $E$ be a measurable set, $1 \leq p< \infty$, and $q$ the conjugate of $p$; then $\{f_n\}$ is said to converge weakly to $f$ in $L^p(E)$ if and only if $$\lim_{n\to \infty}\int_E g\cdot f_n=\int_E g\cdot f , \quad \forall g \in L^q(E)$$ As per this, I define $g=\chi_{[0,1]}(x)$; then clearly $g\in L^{\infty}([0,1])$, but $$\lim_{n\to \infty}\int_0^x g\cdot f_n=\int_0^x f_n =2$$ and $$\lim_{n\to \infty}\int_0^x g\cdot f=\int_0^x 0=0$$ hence $f_n$ does not converge weakly to $f$ in $L^1([0,1])$. Is this correct? If not, can someone help me come up with an example that works? Thank you.
I have the following piece of code:

\documentclass{article}
\oddsidemargin 43pt
\textheight 20.4cm
\textwidth 14.0cm
\parskip 6.8pt
\parindent 12pt
\usepackage{mathtools}
\begin{document}
\begin{table}[h]
\hrule
\begin{align}
\label{E}\tag{E}
&\text{Axioms:} &&s=t &&&\text{for all equations $s=t$ in $E$}\\[0mm]
\label{Ref}\tag{Reflexivity}
&\text{ }&&s=s &&&\text{for every term $s$}\\[0mm]
\label{Sym}\tag{Symmetry}
&\text{Rules:}&&\frac{s=t}{t=s}&&&\text{ }\\[0mm]
\label{Trans}\tag{Transitivity}
&\text{ }&&\frac{s=t,t=v}{s=v}&&&\text{ }\\[0mm]
\label{Cong}\tag{Congruence}
&\text{ }&&\frac{s_1=t_1,\ldots,s_n=t_n}{f(s_1,\ldots,s_n)=f(t_1,\ldots,t_n)}&&&\text{for every $n$-ary $f$}\\[0mm]
\label{Subs}\tag{Substitution}
&\text{ }&&\frac{s=t}{\sigma(s)=\sigma(t)}&&&\text{for $\sigma$ a substitution}
\end{align}
\hrule
\caption{Axioms and rules for an equational logic with axiom set $E$}
\label{tab: equational logic}
\end{table}
\end{document}

As can be seen, the first two columns are aligned to the left, and the second two to the right. Now I want the third and fourth columns also to be aligned to the left (the fourth not necessarily). I tried a tabular, but then I can't refer to the lines (using \eqref{E} for instance). I also like the vertical spacing between the lines (which fails in a tabular environment). I hope someone knows how to do this.
Energy for Future Presidents Physics for Future Presidents Notes inspired by the books. Work in progress; I will inform Dr. Muller when it is more complete. Better explanation of CO2 warming based on adiabatic cooling with altitude The usual explanation of increased CO2 "thickening" the atmosphere is easy but wrong. It attracts a lot of skepticism from the 80% informed, as I was before I learned some atmospheric science. A better explanation is slightly more complicated, but it can convert informed skeptics into informed believers. The climate debate needs fewer smart people poking holes in careless parodies of the scientific arguments. If you show more respect for your intelligent audience, they will show more respect for you. The "temperature of the earth" is an equilibrium between the black body emission of the earth and solar input. The "earth" as viewed from space is a "colored" infrared ball: at some wavelengths we see the top of the opaque CO2 layer, at other wavelengths the top of the opaque water vapor layer. Temperature decreases with increasing altitude. One can think of this as the gas molecules slowing down as they move upwards, or as the energy lost due to the work performed as an adiabatically isolated volume of gas rises, expands, and cools. The cooled gas has a much lower vapor pressure of water - the excess water condenses and falls. Above the tropopause there is very little water, so the infrared bands blocked by water vapor can radiate freely into space. In those bands, the earth appears opaque at the tropopause, and the radiation in those bands is the black body radiation of gas at the temperature at the top of the tropopause. If the bottom of the atmosphere gets warmer, the tropopause moves up, so that the temperature lapse rate (temperature change per unit of vertical distance) stays approximately the same. Since CO2 does not condense at atmospheric temperatures, its pressure versus altitude decreases exponentially, but never drops abruptly to zero.
As it gets thinner, the absorption length goes up, inversely proportional to density. When it gets thin enough, it approaches transparency in its absorption bands. The temperature of the atmosphere at that altitude determines the black body radiation in those infrared bands. So, if the CO2 density scale height is 5 km (WAG), and the CO2 doubles, then the "transparency altitude" moves up by 5 ln(2) ≈ 3.5 km, colder by the temperature lapse rate of 6.5 K/km times that, or about 22 K. To radiate the same amount of heat in the IR band, the atmosphere would need to increase the temperature at those altitudes by 22 K - which means increasing the temperature of the whole column underneath by 22 K, all the way to the ground. Fortunately, the CO2 opacity wavelengths are only a small percentage of the whole IR spectrum, so the net result is a small percentage of that 22 K, with a little more IR emitted by water vapor and a lot more by the surface. Much depends on the IR albedo of the surface, which is a lot lower for ocean water than for sea ice, so increased temperatures at the poles and on land will send significantly more heat into space. The exponential decrease in pressure with altitude is ultimately where the observed $\ln(p_{\mathrm{CO_2}})$ temperature dependence comes from, not a "widening of the CO2 absorption band with temperature". Do the math. Of course, the whole picture is complicated, and while the atmospheric scientists are making great strides with models and measurements, they are still struggling to produce simple but accurate explanations that fit into sound bites. The "heat blanket" sound bite is simple but misleading, and is creating a lot of opposition. Better explanations, probably relying on computer graphics and analogies between IR and visual color, would go a long way towards adding accuracy to "both sides" of the debate. CO2 increase - don't ignore agriculture Note: I need to find a good graph of CO2 versus time back to 1750.
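The arithmetic above can be reproduced from its two assumed inputs (the 5 km scale height is the text's own guess); an exponential density profile rises by one scale height times ln(2) when the concentration doubles:

```python
import math

H = 5.0        # km, assumed CO2 density scale height (the text's WAG)
lapse = 6.5    # K/km, tropospheric temperature lapse rate

dh = H * math.log(2)   # rise of the emission ("transparency") altitude
dT = lapse * dh        # temperature deficit at the new emission level
print(dh, dT)          # ~3.5 km, ~22.5 K
```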
Most of these graphs start at 1900 CE, when there is already a distinct upslope in CO2. This is not commensurate with global industrialization rates, but is strongly related to the clearing of forests and their replacement by tillage agriculture, which releases enormous amounts of CO2 into the atmosphere from wood and soil. This is not to say that agriculture is to blame for CO2 increases and mechanical CO2 creation is not. However, if our agricultural practices generate more CO2 than wild nature does, and sequester less, then we should minimize the amount of agriculture we do - no more than that necessary to feed the world's population. If "green energy" increases the amount of agriculture, or replaces wild lands with sunlight-blocking solar cells, then it increases CO2 rather than reducing it. While the ocean contains far more CO2 than the air, there is far more carbon in soil and rock than in both. The carbon "long cycle" is the biological injection of carbon into soil and rock. Weathered rock contains potassium and phosphorus - plant roots replace those cations and anions with carbonates, absorbing the nutrients and binding the carbon to the rocks. This is a slow process, but steady. Between the building of deep carbon-laden soils (mostly humus), the carbonization of rocks, the volcanic burial of rock, and the accumulation of plant and animal remains on the sea floor, in the long term life buries carbon, adding it to land and ocean bedrock. So the big question is whether agriculture, and recent changes for higher productivity, increase or decrease the long-term geological sequestration of carbon. It is plausible that many of our high-yield crops, fed with artificial fertilizers, do not send down deep roots to wrest these nutrients from the soil and rock. Less root means more biomass for food. That is a good thing in that it reduces the land needed to feed the world, but it comes at the price of reduced carbon sequestration in the soil.
If we plow up wild lands to plant shallow-rooted biofuel crops, and feed those crops with energy-intensive fertilizers, we are doing far more harm than good. Most "deserts" are lands unfit for pasture or crop agriculture - but very few are lifeless. Desert biological productivity is surprisingly high (higher than Iowa farmland, for example), but occurs in the crust of the soil, conserving scarce water by not creating tall evaporative leaves and stalks. Most of the world's mineral deserts are the result of prehistoric herding, which stripped the land of the fragile grasses and ground cover that held down the sand. Now they are sand dunes. While those dunes reflect a lot of sunlight back into space, and convert less to heat, they are not compatible with life. We should think long and hard before we turn living deserts into "solar deserts" of PV cells, creating an expensive dribble of electricity while destroying a vast amount of CO2-sequestering life. 220 ppm CO2 is 60 ppm elemental carbon (by mass). The air column has a mass of about 10,000 kg/m², so assuming uniform vertical mixing, that is about 600 grams of carbon per square meter. If 15% of the earth's surface is fertile enough to grow some kinds of plants (including most non-mineral deserts), that is around 4 kilograms of carbon per square meter in fertile areas. For an average soil depth of a meter, and an average density of 2000 kg/m³, an additional 0.2% carbon-by-weight captured in the soil will sequester 220 ppm of CO2. Carbon is good for soil - the deep, high-carbon terra preta soils in the Amazon basin are very good for agriculture compared to the low-nutrient clay soils nearby. Some of the permaculture community is enamored of biochar; so am I (unless I learn something different). This is the heating of organic waste to drive off the volatiles, leaving a charcoal residue that is buried in the soil.
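The chain of estimates in the paragraph above checks out; a minimal sketch:

```python
# 220 ppm CO2 (by mass) -> elemental carbon, spread over fertile soil.
co2_ppm = 220e-6
carbon_ppm = co2_ppm * 12 / 44       # ~60 ppm: C is 12/44 of CO2 by mass
air_column = 10_000                  # kg/m^2, mass of the air column

c_per_m2 = carbon_ppm * air_column   # ~0.6 kg of carbon per square meter
c_fertile = c_per_m2 / 0.15          # ~4 kg/m^2 over the fertile 15%

soil_mass = 1.0 * 2000               # kg/m^2: 1 m depth at 2000 kg/m^3
frac = c_fertile / soil_mass         # ~0.002, i.e. 0.2% carbon by weight
print(c_per_m2, c_fertile, frac)
```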
Besides sequestering carbon, the charcoal buffers the soil pH, and makes large surface areas of microcavity pores for bacterial substrates, while impeding their consumption by nematodes and other larger-but-still-microscopic predators. The whole story is probably very complicated, and deserves deep study by agricultural and soil scientists. But if we can store kilotons per hectare of carbon in deep soils, we've gone a long way towards cleaning up the atmosphere.

Liquid methane for aviation

First things first - before greatly expanding shale gas production, we need to manage water better:

1) Full public disclosure on what goes in the wells, and comes out of them
2) Mark individual wells chemically, so we can trace the source of pollution
3) Use seawater, pumped overland, rather than drain aquifers and compete with agriculture for water
4) Regulation to enforce long term responsibility for fractured shale commensurate with long term, boundary crossing effects.

Jet aircraft may be the best transportation use for methane, possibly from fractured shale. Tupolev is designing planes to run on natural gas. Unlike passenger vehicles, aircraft travel between major airports, with fuel systems operated by professionals. Aircraft fuel is stored in huge, easy-to-insulate tanks. Liquid methane must be kept cryogenically cold, but the heat of evaporation can maintain temperature for the duration of a flight. If the tank and liquid are pressurized, that may require wing redesign, but the pressure can help stiffen the wing, much like the thin tanks on the Atlas rocket. LCH4 is only 60% as dense as normal jet fuel, but the energy per weight is higher, and fuel cost and pollution are lower. This increases fuel economy, lowers takeoff weight, and makes aircraft operation cheaper. The cold fuel also reduces the need for a recooler after the intake compressor in the engine. Under pressure, the boiloff displaces air from the ullage above the remaining liquid. 
Managed correctly, this should prevent air-fuel explosions like TWA Flight 800 in 1996. Drawbacks: The cryogenic liquid can lead to icing, stuck valves, and mechanical problems in the wings. There may be safety problems while maintenance crews learn new techniques and procedures. Ultracold metal can be more brittle.

Magnetic/Ballistic Power Storage Loop

Flywheel speeds are limited because the structural mass rotates - even very expensive carbon fiber has a limited strength-to-mass ratio, so {$v^2$} is proportional to {$a \times r$} and to {$S/\rho$}. Magnetic levitation is weaker than structural materials, but a magnetic field is massless. The pressure exerted by a magnetic field is {$\mu H^2/2$}; for an average 1 Tesla field, that is 800 kPa. Imagine a horizontal rotating ring of iron in vacuum (axis pointed vertically), supported by magnetic levitation, with very strong stationary electromagnets inside the ring providing centripetal acceleration. By Earnshaw's theorem, the magnetic suspension is unstable, but the ring-to-magnet spacing can be controlled by distance measurement (optical, capacitance, ...) and rapid (10s of microseconds) electronic adjustment of the electromagnet current. If the ring has radius R, height H, velocity V, and mass M, then the pressure {$P_R$} necessary to counteract centrifugal acceleration is {$P_R = M V^2 / ( 2 \pi H R^2 )$}. The energy E stored in the ring is {$E = \pi P_R H R^2$}. Unlike a flywheel, a small ring does not store much energy, but a very large ring with large R can move incredibly fast and store a huge amount of energy in ordinary steel, costing perhaps $300/tonne. If the ring is a meter high, a centimeter thick, and the diameter of an agricultural pivot irrigator (1 km), it weighs about m = 80 kg per meter, or 250 tonnes for the ring (the stationary magnets weigh more). The velocity is {$\sqrt{ P_R H R / m }$} or 2200 m/s, and the energy stored is 6.3e11 joules or 170 MW-h, as much as 6800 Beacon Power flywheels. This is proportional to {$R^2$}. 
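The ring numbers above can be reproduced from the two formulas. A quick Python check (the 800 kPa magnetic pressure and the ring dimensions are the text's figures; the 7800 kg/m³ steel density is my assumption):

```python
import math

# Reproduce the 1 km ring estimate from the text's formulas.
P_R = 800e3        # Pa, magnetic pressure for ~1 T (figure used in the text)
H = 1.0            # m, ring height
R = 500.0          # m, ring radius (1 km diameter)
thickness = 0.01   # m, ring thickness
rho = 7800.0       # kg/m^3, steel (assumed)

lam = thickness * H * rho         # linear density, ~78 kg per meter of ring
M = 2 * math.pi * R * lam         # total ring mass, ~245 tonnes
v = math.sqrt(P_R * H * R / lam)  # velocity needed to load the magnets at P_R
E = 0.5 * M * v**2                # kinetic energy; algebraically pi * P_R * H * R**2
E_MWh = E / 3.6e9                 # joules -> megawatt-hours
```

The result is about 2260 m/s and 175 MW-h, matching the rounded 2200 m/s and 170 MW-h in the text, and E agrees with the closed form {$E = \pi P_R H R^2$}.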
A ring 12.5 km in diameter would use 10 times as much material, 12.5 times as much power for the magnet windings, and 12.5 times as much electronics. But it would move {$\sqrt{12.5}$} times as fast (7900 m/s), store 12.5 times as much energy per meter, and 156 times as much total energy as a 1 km diameter ring - in this case 26 GW-h, more than a million flywheels. 7900 m/s is interesting - it is orbital velocity. If we cut our ring in half, and insert long straight sections between the halves, we add total mass and total energy storage without increasing velocity. The "straight" sections follow the curvature of the earth. Nominally they do not need magnet support - but since we are speeding and slowing the loop to add and subtract energy, and we must move the rotor during earthquake accelerations, we will need about 800 Pa/meter of control magnets - about 0.1% of the magnets needed for the D-shaped turnarounds. If we added 180 km of straight section, the energy stored goes up 10X. This loop would store 260 GW-h, perhaps 160 GW-h recoverable from speeds between 4900 and 7900 m/s (slower than that, and we need to put more average power into our straight section magnets to provide lift). The two straight sections could be converged into one track for most of the distance, perhaps under a railroad right-of-way. 7900 meters per second is fast - if the system breaks and throws chunks, they go a long way. This must be deeply buried - but we can use horizontal well drills to make the tunnels, or put it underwater. If a sand particle gets loose between the moving iron rotor and the stationary magnet track (stator), it may cause a "hypervelocity spalling cascade", with material from the rotor knocking loose material from the stator and vice versa. But we can make the rotor thicker and slower, increasing the mass of the rotor and the straight section magnets while reducing the velocity and the spalling energies. 
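The scaling claims follow from {$v^2 \propto R$} and {$E \propto R^2$}; a quick check (starting values carried over from the 1 km estimate above, so the totals land within rounding of the quoted 26 GW-h and 160 GW-h):

```python
import math

# Scale the 1 km ring estimate up to a 12.5 km diameter ring.
scale = 12.5                       # diameter ratio vs. the 1 km ring
v1 = 2265.0                        # m/s, velocity of the 1 km ring (computed above)
E1_MWh = 174.0                     # MW-h, energy of the 1 km ring (computed above)

v2 = v1 * math.sqrt(scale)         # v^2 grows linearly with R -> ~8000 m/s
E2_MWh = E1_MWh * scale**2         # energy grows as R^2 -> ~27 GW-h

# Usable fraction when cycling between 7900 m/s and 4900 m/s:
recoverable = 1 - (4900 / 7900)**2  # ~0.62, i.e. ~160 of 260 GW-h
```

v2 comes out near 8000 m/s (the text rounds to 7900, essentially orbital velocity), E2 near 27 GW-h against the quoted 26, and the recoverable fraction applied to 260 GW-h gives roughly the quoted 160 GW-h.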
We can also coat the rotor and stator surfaces with diamond (now a common industrial process), which is lighter and stronger - less energy per particle, and more tightly bound surfaces. Imagine a loop running underwater, following the continental shelf around the Pacific, from China past Alaska to Mexico, and back, a total length of around 30,000 km. This loop could provide 12 terawatt-hours of usable energy storage, while efficiently sharing power between the US and Asia. The loop would be expensive - perhaps 30 billion dollars at 1 thousand dollars per mass-produced meter - but if the cost difference between peak and trough demand prices was 1 cent per kilowatt hour, it would produce 120 million dollars per cycle. With two deep cycles per week, it would pay for itself in 2.5 years. In 2012, peak-to-trough prices vary a lot more, but a very large system will eliminate most of that variance. This is all wildly speculative, and will need a lot of development, but the physics is well understood. See http://launchloop.com/PowerLoop for more.

380 Trillion Terawatts
Let's see. There are two observations one needs to make in order to "arrive" at F-theory. Let's go back to type IIB string theory and take the low-energy sugra 7-brane solutions. These 7-branes have a harmonic function that depends logarithmically on the transverse distance from the brane, something particular to these 7-branes and not shared by the lower-$p$ $Dp$-branes. If you examine this system you will realize that there exists an $SL(2,\mathbb{Z})$ symmetry and that many of these 7-branes put together backreact to give a $\mathbb{P}^1$ background. The other observation is that this $SL(2,\mathbb{Z})$ symmetry has a nice geometric interpretation as the modular group of $T^2 \approx S^1 \times S^1$, in whose zero-size limit we compactify M-theory to get the 9-dimensional type IIA string theory (if I remember well). These are the two observations that lead to considering F-theory in the first place. F-theory is nothing more than a "new" way to compactify type IIB string theory in which the complex scalar field $\tau$ is not constant anymore. The novelty is also that we can consider this scalar field $\tau$ as the complex structure modulus of an auxiliary torus with modular group the usual $SL(2,\mathbb{Z})$ (and this "interpretation", if I am not mistaken, is the same in, say, Seiberg-Witten theory). With the above in mind we indeed get a 12-dimensional theory, but the torus on which we compactify it is actually a non-physical torus; it does not have a pure geometric interpretation. Note that the dimensional reduction is not a usual KK reduction as we do in, say, type IIB when compactifying it on $\mathbb{M}^4 \times T^6$. Additionally note that the low energy limit is not given by a 12-dimensional sugra theory, since sugra can be realized in at most 11 dimensions. The above is morally meant to communicate the fact that the 12-dimensional interpretation is a useful means to geometrize the $SL(2,\mathbb{Z})$ duality symmetry. 
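For concreteness (this is standard type IIB material, not something specific to these notes): the field that transforms under this $SL(2,\mathbb{Z})$ is the axio-dilaton $\tau$, and its transformation law is precisely the modular action on the complex structure of a torus, which is what the auxiliary $T^2$ geometrizes:

```latex
\tau \;=\; C_0 + i\, e^{-\phi}, \qquad
\tau \;\longmapsto\; \frac{a\tau + b}{c\tau + d}, \qquad
\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in SL(2,\mathbb{Z}).
```

Here $C_0$ is the RR axion and $\phi$ the dilaton, so a varying $\tau$ means a varying string coupling over the base.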
Now, upon compactifying the resulting IIB theory to lower dimensions (so, after the $T^2$ F-theory compactification) we already get some remarkable results. The compactification of type IIB on the previously mentioned $\mathbb{P}^1$, because of the backreaction of the 7-branes, preserves half the susy. What is remarkable is the fact that M-theory compactified on a K3 surface preserves the same amount of supersymmetries. Now things can get quite technical, but we already see some connection. If one goes further she will see that M-theory and F-theory are related to each other after one has dualized M-theory and type IIB strings on the $\mathbb{P}^1$, and by the (conjectured) fact that F-theory on an elliptically fibered K3 is also dual to type IIB strings on $\mathbb{P}^1$. To end up, the most useful road map I have found is the picture where F-theory on $T^2$ is dual to type IIB in 10 dimensions, which is T-dual to type IIA in 9 dimensions, which is the M-theory compactification on $T^2$. I took the above notes from a graduate course I attended with lecturer Iñaki García-Etxebarria, who works on F-theory. Additionally, a nice resource is of course the nLab article, and also Herman Verlinde's lectures at PiTP. Maybe Weigand's notes are also useful.
Shchedrik V. P. Bezout rings of stable rank 1.5 and the decomposition of a complete linear group into its multiple subgroups Ukr. Mat. Zh. - 2017. - 69, № 1. - pp. 113-120 A ring $R$ is called a ring of stable rank 1.5 if, for any triple $a, b, c \in R$, $c \neq 0$, such that $aR + bR + cR = R$, there exists $r \in R$ such that $(a + br)R + cR = R$. It is proved that a commutative Bezout domain has stable rank 1.5 if and only if every invertible matrix $A$ can be represented in the form $A = HLU$, where $L, U$ are elements of the groups of lower and upper unitriangular matrices (triangular matrices with 1 on the diagonal) and the matrix $H$ belongs to the group $$\mathbf{G}_{\Phi} = \{ H \in \mathrm{GL}_n(R) \mid \exists H_1 \in \mathrm{GL}_n(R) : H\Phi = \Phi H_1\},$$ where $\Phi = \mathrm{diag}(\varphi_1, \varphi_2, \dots, \varphi_n)$, $\varphi_1 \mid \varphi_2 \mid \dots \mid \varphi_n$, $\varphi_n \neq 0$. Ukr. Mat. Zh. - 2015. - 67, № 6. - pp. 849–860 A ring $R$ has stable range 1.5 if, for every triple of left relatively prime nonzero elements $a, b$ and $c$ in $R$, there exists $r$ such that the elements $a+br$ and $c$ are left relatively prime. Let $R$ be a commutative Bezout domain. We prove that the matrix ring $M_2 (R)$ has stable range 1.5 if and only if the ring $R$ has the same stable range. Ukr. Mat. Zh. - 2014. - 66, № 3. - pp. 425–430 We study the structure of the greatest common divisor of matrices one of which is a disappear matrix. In this connection, we indicate the Smith normal form and the transforming matrices of the left greatest common divisor. Ukr. Mat. Zh. - 2012. - 64, № 1. - pp. 126-139 We study commutative domains of elementary divisors from the viewpoint of investigation of the structure of invertible matrices that reduce a given matrix to the diagonal form. Some properties of elements of these domains are indicated. 
We establish conditions, close to the stable-rank conditions, under which a commutative Bezout domain is a domain of elementary divisors. Ukr. Mat. Zh. - 1987. - 39, № 3. - pp. 370-373
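The stable-rank-1.5 condition in the first abstract above can be illustrated concretely in the simplest commutative Bezout domain, $\mathbb{Z}$. This is my own toy demonstration, not code from the papers: for integers with $\gcd(a, b, c) = 1$ and $c \neq 0$, a suitable $r$ with $\gcd(a + br, c) = 1$ always exists and can be found by brute force:

```python
from math import gcd
from random import Random

def witness(a, b, c):
    """Find r with gcd(a + b*r, c) = 1; a solution exists in 0..|c| by CRT."""
    for r in range(abs(c) + 1):
        if gcd(a + b * r, c) == 1:
            return r
    return None  # never reached when gcd(a, b, c) = 1 and c != 0

# Check many random triples with gcd(a, b, c) = 1.
rng = Random(0)
for _ in range(200):
    a, b, c = rng.randint(-50, 50), rng.randint(-50, 50), rng.randint(1, 50)
    if gcd(gcd(a, b), c) == 1:
        assert witness(a, b, c) is not None
```

The reason the search always succeeds: for each prime $p \mid c$, at most one residue of $r$ modulo $p$ is forbidden (and none is forbidden when $p \mid b$, since then $p \nmid a$), so the Chinese Remainder Theorem supplies an admissible $r$ modulo $c$.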
Region Less One Point is Region Theorem Let $R \subseteq M$ be a region of $M$. Let $x \in R$. Then $R \setminus \left\{ {x}\right\}$ is also a region of $M$. Proof First, note that as $R$ is open it cannot be a singleton, from Finite Subspace of Dense-in-itself Metric Space is not Open. Therefore $R \setminus \left\{{x}\right\}$ is not empty. Now, let $\alpha, \beta \in R$. Since $R$ is a region, it is path-connected, so there exists a path $\Gamma$ in $R$ from $\alpha$ to $\beta$. If $x \notin \Gamma$, then $\Gamma$ is also a path in $R \setminus \left\{{x}\right\}$, and we are done. If $x \in \Gamma$, then we consider the open $\epsilon$-ball $B_\epsilon \left({x}\right)$ of $x$ for some $\epsilon$ such that $B_\epsilon \left({x}\right) \subseteq R$.
Definition:Strict Upper Closure/Element Definition Let $\left({S, \preccurlyeq}\right)$ be an ordered set. Let $a \in S$. The strict upper closure of $a$ (in $S$) is defined as: $a^\succ := \left\{{b \in S: a \preccurlyeq b \land a \ne b}\right\}$ or: $a^\succ := \left\{{b \in S: a \prec b}\right\}$ $a^\preccurlyeq := \left\{{b \in S: b \preccurlyeq a}\right\}$: the lower closure of $a \in S$: everything in $S$ that precedes $a$ $a^\succcurlyeq := \left\{{b \in S: a \preccurlyeq b}\right\}$: the upper closure of $a \in S$: everything in $S$ that succeeds $a$ $a^\prec := \left\{{b \in S: b \preccurlyeq a \land a \ne b}\right\}$: the strict lower closure of $a \in S$: everything in $S$ that strictly precedes $a$ $a^\succ := \left\{{b \in S: a \preccurlyeq b \land a \ne b}\right\}$: the strict upper closure of $a \in S$: everything in $S$ that strictly succeeds $a$. $\displaystyle T^\preccurlyeq := \bigcup \left\{{t^\preccurlyeq: t \in T}\right\}$: the lower closure of $T \subseteq S$: everything in $S$ that precedes some element of $T$ $\displaystyle T^\succcurlyeq := \bigcup \left\{{t^\succcurlyeq: t \in T}\right\}$: the upper closure of $T \subseteq S$: everything in $S$ that succeeds some element of $T$ $\displaystyle T^\prec := \bigcup \left\{{t^\prec: t \in T}\right\}$: the strict lower closure of $T \subseteq S$: everything in $S$ that strictly precedes some element of $T$ $\displaystyle T^\succ := \bigcup \left\{{t^\succ: t \in T}\right\}$: the strict upper closure of $T \subseteq S$: everything in $S$ that strictly succeeds some element of $T$. The astute reader may point out that, for example, $a^\preccurlyeq$ is ambiguous as to whether it means: The lower closure of $a$ with respect to $\preccurlyeq$ The upper closure of $a$ with respect to the dual ordering $\succcurlyeq$ By Lower Closure is Dual to Upper Closure and Strict Lower Closure is Dual to Strict Upper Closure, the two are seen to be equal. 
Also denoted as Other notations for closure operators include: ${\downarrow} a, {\bar \downarrow} a$ for lower closure of $a \in S$ ${\uparrow} a, {\bar \uparrow} a$ for upper closure of $a \in S$ ${\downarrow} a, {\dot \downarrow} a$ for strict lower closure of $a \in S$ ${\uparrow} a, {\dot \uparrow} a$ for strict upper closure of $a \in S$ However, as there is considerable inconsistency in the literature as to exactly which of these arrow notations is being used at any one time, its use is not endorsed on $\mathsf{Pr} \infty \mathsf{fWiki}$.
Yeah, this software cannot be too easy to install, my installer is very professional looking, currently not tied into that code, but directs the user how to search for their MikTeX and or install it and does a test LaTeX rendering Somebody like Zeta (on codereview) might be able to help a lot... I'm not sure if he does a lot of category theory, but he does a lot of Haskell (not that I'm trying to conflate the two)... so he would probably be one of the better bets for asking for revision of code. he is usually on the 2nd monitor chat room. There are a lot of people on those chat rooms that help each other with projects. i'm not sure how many of them are adept at category theory though... still, this chat tends to emphasize a lot of small problems and occasionally goes off tangent. your project is probably too large for an actual question on codereview, but there is a lot of github activity in the chat rooms. gl. In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966). It was also written down as the Fourier transform of $\hat f(z) = \prod_{m=1}^{\infty} (\cos \dots$ Defined as the probability that $\sum_{n=1}^\infty2^{-n}\zeta_n$ will be less than $x$, where the $\zeta_n$ are chosen randomly and independently from the unit interval @AkivaWeinberger are you familiar with the theory behind Fourier series? anyway here's a food for thought for $f : S^1 \to \Bbb C$ square-integrable, let $c_n := \displaystyle \int_{S^1} f(\theta) \exp(-i n \theta) (\mathrm d\theta/2\pi)$, and $f^\ast := \displaystyle \sum_{n \in \Bbb Z} c_n \exp(in\theta)$. It is known that $f^\ast = f$ almost everywhere. (a) is $-^\ast$ idempotent? i.e. is it true that $f^{\ast \ast} = f^\ast$? @AkivaWeinberger You need to use the definition of $F$ as the cumulative function of the random variables. $C^\infty$ was a simple step, but I don't have access to the paper right now so I don't recall it. 
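The probabilistic definition just quoted is easy to play with numerically. A Monte Carlo sketch (my own, truncating the sum at 40 terms, which is well below double-precision resolution):

```python
import random

def fabius_cdf(x, samples=100_000, terms=40, seed=0):
    """Estimate F(x) = P(sum_{n>=1} 2^-n * zeta_n < x), zeta_n ~ Uniform[0, 1)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        s = sum(rng.random() / 2**n for n in range(1, terms + 1))
        hits += s < x
    return hits / samples

# The random sum has mean 1/2 and a distribution symmetric about 1/2,
# so the estimate at x = 1/2 should land very close to 0.5.
f_half = fabius_cdf(0.5, samples=20_000)
```

Since the sum always lies strictly between 0 and 1, the estimator returns 0 at $x = 0$ and 1 at $x = 1$, as the cumulative function should.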
> In mathematics, a square-integrable function, also called a quadratically integrable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite. I am having some difficulties understanding the difference between simplicial and singular homology. I am aware of the fact that they are isomorphic, i.e. the homology groups are in fact the same (and maybe this doesn't help my intuition), but I am having trouble seeing where in the setup they d... Usually it is a great advantage to consult the notes, as they tell you exactly what has been done. A book will teach you the field, but not necessarily help you understand the style in which the prof. (who creates the exam) creates questions. @AkivaWeinberger having thought about it a little, I think the best way to approach the geometry problem is to argue that the relevant condition (centroid is on the incircle) is preserved by similarity transformations hence you're free to rescale the sides, and therefore the (semi)perimeter as well so one may (for instance) choose $s=(a+b+c)/2=1$ without loss of generality that makes a lot of the formulas simpler, e.g. the inradius is identical to the area It is asking how many terms of the Euler Maclaurin formula do we need in order to compute the Riemann zeta function in the complex plane? $q$ is the upper summation index in the sum with the Bernoulli numbers. This appears to answer it in the positive: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right-half plane σ > 1 − k, for all k = 1, 2, 3, . . .."
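That normalization remark (with $s = 1$ the inradius equals the area) is just the identity $r = A/s$; a quick numerical check on a 3-4-5 triangle:

```python
import math

def inradius_and_area(a, b, c):
    """Inradius r = Area / s, with the area from Heron's formula."""
    s = (a + b + c) / 2
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))
    return area / s, area

# 3-4-5 triangle: s = 6, area = 6, so r = 1.
r, area = inradius_and_area(3, 4, 5)

# Rescale the sides by 1/6 so that s = 1: now the inradius equals the area.
k = 1 / 6
r2, area2 = inradius_and_area(3 * k, 4 * k, 5 * k)
```

After rescaling, `r2` and `area2` agree (both 1/6), which is the simplification used in the chat.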
2018-09-11 04:29 Proprieties of FBK UFSDs after neutron and proton irradiation up to $6*10^{15}$ neq/cm$^2$ / Mazza, S.M. (UC, Santa Cruz, Inst. Part. Phys.) ; Estrada, E. (UC, Santa Cruz, Inst. Part. Phys.) ; Galloway, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; Gee, C. (UC, Santa Cruz, Inst. Part. Phys.) ; Goto, A. (UC, Santa Cruz, Inst. Part. Phys.) ; Luce, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; McKinney-Martinez, F. (UC, Santa Cruz, Inst. Part. Phys.) ; Rodriguez, R. (UC, Santa Cruz, Inst. Part. Phys.) ; Sadrozinski, H.F.-W. (UC, Santa Cruz, Inst. Part. Phys.) ; Seiden, A. (UC, Santa Cruz, Inst. Part. Phys.) et al. The properties of 60 $\mu$m thick Ultra-Fast Silicon Detectors (UFSD) manufactured by Fondazione Bruno Kessler (FBK), Trento (Italy) were tested before and after irradiation with minimum ionizing particles (MIPs) from a $^{90}$Sr $\beta$-source. [...] arXiv:1804.05449. - 13 p. Preprint - Full text 2018-08-25 06:58 Charge-collection efficiency of heavily irradiated silicon diodes operated with an increased free-carrier concentration and under forward bias / Mandić, I (Ljubljana U. ; Stefan Inst., Ljubljana) ; Cindro, V (Ljubljana U. ; Stefan Inst., Ljubljana) ; Kramberger, G (Ljubljana U. ; Stefan Inst., Ljubljana) ; Mikuž, M (Ljubljana U. ; Stefan Inst., Ljubljana) ; Zavrtanik, M (Ljubljana U. ; Stefan Inst., Ljubljana) The charge-collection efficiency of Si pad diodes irradiated with neutrons up to $8 \times 10^{15} \ \rm{n} \ cm^{-2}$ was measured using a $^{90}$Sr source at temperatures from -180 to -30°C. The measurements were made with diodes under forward and reverse bias. [...] 2004 - 12 p. - Published in : Nucl. Instrum. Methods Phys. 
Res., A 533 (2004) 442-453 2018-08-23 11:31 Effect of electron injection on defect reactions in irradiated silicon containing boron, carbon, and oxygen / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Yakushevich, H S (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) Comparative studies employing Deep Level Transient Spectroscopy and C-V measurements have been performed on recombination-enhanced reactions between defects of interstitial type in boron doped silicon diodes irradiated with alpha-particles. It has been shown that self-interstitial related defects which are immobile even at room temperature can be activated by very low forward currents at liquid nitrogen temperatures. [...] 2018 - 7 p. - Published in : J. Appl. Phys. 123 (2018) 161576 2018-08-23 11:31 Characterization of magnetic Czochralski silicon radiation detectors / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) Silicon wafers grown by the Magnetic Czochralski (MCZ) method have been processed in form of pad diodes at Instituto de Microelectrònica de Barcelona (IMB-CNM) facilities. The n-type MCZ wafers were manufactured by Okmetic OYJ and they have a nominal resistivity of $1 \rm{k} \Omega cm$. [...] 2005 - 9 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 548 (2005) 355-363 2018-08-23 11:31 Silicon detectors: From radiation hard devices operating beyond LHC conditions to characterization of primary fourfold coordinated vacancy defects / Lazanu, I (Bucharest U.) 
; Lazanu, S (Bucharest, Nat. Inst. Mat. Sci.) The physics potential at future hadron colliders such as the LHC and its upgrades in energy and luminosity, Super-LHC and Very-LHC respectively, as well as the requirements for detectors in the conditions of possible scenarios for radiation environments are discussed in this contribution. Silicon detectors will be used extensively in experiments at these new facilities where they will be exposed to high fluences of fast hadrons. The principal obstacle to long-time operation arises from bulk displacement damage in silicon, which acts as an irreversible process in the material and leads to an increase of the leakage current of the detector, a decrease of the signal/noise ratio below satisfactory levels, and an increase of the effective carrier concentration. [...] 2005 - 9 p. - Published in : Rom. Rep. Phys. 57 (2005), no. 3, pp. 342-348 External link: RORPE 2018-08-22 06:27 Numerical simulation of radiation damage effects in p-type and n-type FZ silicon detectors / Petasecca, M (Perugia U. ; INFN, Perugia) ; Moscatelli, F (Perugia U. ; INFN, Perugia ; IMM, Bologna) ; Passeri, D (Perugia U. ; INFN, Perugia) ; Pignatel, G U (Perugia U. ; INFN, Perugia) In the framework of the CERN-RD50 Collaboration, the adoption of p-type substrates has been proposed as a suitable means to improve the radiation hardness of silicon detectors up to fluences of $1 \times 10^{16} \rm{n}/cm^2$. In this work two numerical simulation models will be presented for p-type and n-type silicon detectors, respectively. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 2971-2976 2018-08-22 06:27 Technology development of p-type microstrip detectors with radiation hard p-spray isolation / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) ; Díez, S (Barcelona, Inst. Microelectron.) 
; Lozano, M (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) A technology for the fabrication of p-type microstrip silicon radiation detectors using p-spray implant isolation has been developed at CNM-IMB. The p-spray isolation has been optimized in order to withstand a gamma irradiation dose up to 50 Mrad (Si), which represents the ionization radiation dose expected in the middle region of the SCT-Atlas detector of the future Super-LHC during 10 years of operation. [...] 2006 - 6 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 566 (2006) 360-365 2018-08-22 06:27 Defect characterization in silicon particle detectors irradiated with Li ions / Scaringella, M (INFN, Florence ; U. Florence (main)) ; Menichelli, D (INFN, Florence ; U. Florence (main)) ; Candelori, A (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Bruzzi, M (INFN, Florence ; U. Florence (main)) High Energy Physics experiments at future very high luminosity colliders will require ultra radiation-hard silicon detectors that can withstand fast hadron fluences up to $10^{16}$ cm$^{-2}$. In order to test the detectors radiation hardness in this fluence range, long irradiation times are required at the currently available proton irradiation facilities. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 589-594
I'm studying qft from Srednicki's book. If one writes down the full $i\epsilon$ terms, passing from Eq. (6.21) (non-perturbative definition) to Eq. (6.22) (perturbative definition) yields $$\left<0|0\right>_{f,h} = \int\mathcal{D}p\mathcal{D}q \exp\left[i\int_{-\infty}^{+\infty}dt\left(p\dot{q} - (1 - i\epsilon )H(p,q) + fq + hp\right)\right] \\ =\exp\left[-i(1 - i\epsilon )\int_{-\infty}^{+\infty}dt H_{1}\left(\frac{1}{i}\frac{\delta}{\delta h(t)},\frac{1}{i}\frac{\delta}{\delta f(t)}\right)\right]\\ \times\int\mathcal{D}p\mathcal{D}q \exp\left[i\int_{-\infty}^{+\infty}dt\left(p\dot{q} - (1 - i\epsilon )H_{0} + fq + hp\right)\right] \tag{6.22}$$ where $f(t)$, $h(t)$ are external sources with $\lim_{t\rightarrow \pm \infty }f(t) =\lim_{t\rightarrow \pm\infty }h(t) =0$, $\left\langle 0|0\right\rangle _{f,h}$ is the vacuum-vacuum probability amplitude in the presence of sources, $H=H_{0}+H_{1}$, and $\left\vert 0\right\rangle $ is the $\textit{ground state ket}$ (assumed non-degenerate) of the $\textit{total}$ Hamiltonian operator $\hat{H}=H(\hat{p},\hat{q})$. However, in the physics literature the $i\epsilon$ factor multiplying $H_{1}\left(\frac{1}{i}\frac{\delta}{\delta h(t)},\frac{1}{i}\frac{\delta}{\delta f(t)}\right)$ is absent (see, e.g., Eq. (9.6) from Srednicki), even though the $i\epsilon$ factor multiplying $H_{0}$ is still present during the calculations giving the famous $i\epsilon$ prescription for the free propagator, and $\epsilon$ is taken to the limit $0^{+}$ only in the very end for $H_{0}$. Question: How to justify getting rid of the $i\epsilon$ that multiplies $H_{1}$, while keeping to the very end the $i\epsilon$ that multiplies $H_{0}$? ------------------------------------------------------------ I've tried to reason about it as follows. 
Let $$ Z_{0}\left[ f(t), g(t); \varepsilon \right] \equiv \int \mathcal{D}p\,\mathcal{D}q\, \exp\left[ i\int_{-\infty}^{+\infty} dt\, \left( p\dot{q} - (1 - i\varepsilon) H_{0} + fq + hp \right) \right] $$ and $$ x_{n}\left[ f(t), g(t); \varepsilon \right] \equiv \frac{(-1)^{n}}{n!} \left[ \int_{-\infty}^{+\infty} dt\, H_{1}\left( \frac{1}{i}\frac{\delta}{\delta h(t)}, \frac{1}{i}\frac{\delta}{\delta f(t)} \right) \right]^{n} Z_{0}\left[ f(t), g(t); \varepsilon \right] $$ Since $$ \exp\left[ -i(1 - i\varepsilon) \int_{-\infty}^{+\infty} dt\, H_{1}\left( \frac{1}{i}\frac{\delta}{\delta h(t)}, \frac{1}{i}\frac{\delta}{\delta f(t)} \right) \right] = \sum_{n=0}^{\infty} \frac{(-1)^{n}}{n!} (i + \varepsilon)^{n} \left[ \int_{-\infty}^{+\infty} dt\, H_{1}\left( \frac{1}{i}\frac{\delta}{\delta h(t)}, \frac{1}{i}\frac{\delta}{\delta f(t)} \right) \right]^{n} $$ it follows that \begin{eqnarray*} \left\langle 0|0\right\rangle_{f,h} &=& \lim_{\varepsilon \rightarrow 0^{+}} \left\{ \sum_{n=0}^{\infty} (i + \varepsilon)^{n} x_{n}\left[ f(t), g(t); \varepsilon \right] \right\} \\ &=& \lim_{\varepsilon \rightarrow 0^{+}} \left\{ \sum_{n=0}^{\infty} i^{n} x_{n}\left[ f(t), g(t); \varepsilon \right] \right\} \\ && -\, i \lim_{\varepsilon \rightarrow 0^{+}} \varepsilon \left\{ \sum_{n=0}^{\infty} n \cdot i^{n} x_{n}\left[ f(t), g(t); \varepsilon \right] \right\} + \lim_{\varepsilon \rightarrow 0^{+}} O\left( \varepsilon^{2} \right) \end{eqnarray*} I assume that the series $S_{1}(\varepsilon) \equiv \sum_{n=0}^{\infty} i^{n} x_{n}\left[ f(t), g(t); \varepsilon \right]$ is convergent, i.e., that $S_{1}(\varepsilon)$ is finite. 
However, the series $$ S_{2}(\varepsilon) \equiv \sum_{n=0}^{\infty} n \cdot i^{n} x_{n}\left[ f(t), g(t); \varepsilon \right] $$ is not necessarily convergent due to the multiplication of $i^{n} x_{n}\left[ f(t), g(t); \varepsilon \right]$ by $n$. Therefore, the limit $$ \lim_{\varepsilon \rightarrow 0^{+}} \varepsilon \left\{ \sum_{n=0}^{\infty} n \cdot i^{n} x_{n}\left[ f(t), g(t); \varepsilon \right] \right\} = \lim_{\varepsilon \rightarrow 0^{+}} \varepsilon \cdot S_{2}(\varepsilon) $$ may not even exist. Only in the case in which $\lim_{\varepsilon \rightarrow 0^{+}} \varepsilon \cdot S_{2}(\varepsilon) = 0$ can one write \begin{eqnarray*} \left\langle 0|0\right\rangle_{f,h} &=& \lim_{\varepsilon \rightarrow 0^{+}} \left\{ \sum_{n=0}^{\infty} i^{n} x_{n}\left[ f(t), g(t); \varepsilon \right] \right\} \\ &=& \lim_{\varepsilon \rightarrow 0^{+}} \exp\left[ -i \int_{-\infty}^{+\infty} dt\, H_{1}\left( \frac{1}{i}\frac{\delta}{\delta h(t)}, \frac{1}{i}\frac{\delta}{\delta f(t)} \right) \right] \\ && \times \int \mathcal{D}p\,\mathcal{D}q\, \exp\left[ i\int_{-\infty}^{+\infty} dt\, \left( p\dot{q} - (1 - i\varepsilon) H_{0} + fq + hp \right) \right] \end{eqnarray*} and the $i\varepsilon$ that multiplies $H_{1}$ can be taken as $0$, while the $i\varepsilon$ that multiplies $H_{0}$ is being kept until the very end of the calculations. Question: Is there any other way to justify this replacement? Is the perturbative definition of path integrals ill-defined due to the possible divergence of the series $S_{2}(\varepsilon)$? _________________________________________ I would be most thankful if you could help me with a question concerning the perturbative definition of Green functions via path integrals in ordinary qm and in non-relativistic and relativistic qft. 
Following the epoch-making papers by Osterwalder and Schrader, it has become common practice to work in Euclidean time with Euclidean Green functions and then to analytically continue them back to real time. Concerning path integrals, one usually defines them perturbatively. My elementary analysis shows what happens when one makes an infinitesimal Wick rotation $t\rightarrow (1 - i\epsilon)t$ (much used in the literature). When the limit $\epsilon\rightarrow 0^{+}$ is taken in the end, in the perturbative definition of the path integral, I show that one encounters a nonsensical result of the form $\epsilon \times (\text{divergent series})$, which does not have a well-defined limit as $\epsilon\rightarrow 0^{+}$. It seems that for an infinitesimal Wick rotation, when one analytically continues back to real time (i.e., takes the limit $\epsilon\rightarrow 0^{+}$), one obtains a nonsensical result. An analytic continuation is not a mere replacement $t\rightarrow it$, but can be thought of as a series of infinitesimal Wick rotations. __________________________________________________ The problem of analytic continuation from real time to Euclidean time is very nicely presented in Ch. 13 of Giorgio Parisi's famous book on statistical field theory. This is the essence of the $\epsilon$ trick in Srednicki's book. In Parisi's analysis everything works very well since it is a non-perturbative analysis, based on the total Hamiltonian. The problem shows up in the perturbative approach, as I have shown above. The perturbative approach to the analysis of the analytic continuation $t\rightarrow it$ is not presented in any book that I know of. In order for the theory to be consistent, the perturbation series must be convergent for any $t\exp(i\theta)$ with $0 < \theta < \pi/2$. For the theory to exist, it is NOT sufficient that the case $\theta = \pi/2$ is convergent. 
I think that the problem is very serious, since the perturbative definition of path integrals is the most common tool in physics. The question I'm raising is not merely an academic one. I have no proof that the series $S_2$ is divergent, since this is model dependent. However, there are no general theorems in real or complex analysis that guarantee the convergence of a series $S_2 = \sum_{n=0}^{\infty} nx_n$ given only that the series $S_1 = \sum_{n=0}^{\infty} x_n$ is convergent. If one continues the series expansion in $\epsilon$ to higher-order terms, the divergences are even worse. Questions: Is the perturbative definition of path integrals wrong? Is there a version of the O-S axioms for non-relativistic QM and condensed matter? Is the perturbative definition of path integrals in contradiction with the O-S axioms? I would be most thankful if you could send me your opinion on these questions.
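The analytic point that convergence of $S_1 = \sum x_n$ says nothing about $S_2 = \sum n\,x_n$ can be illustrated with a toy example (this is only an illustration of the general fact, not of the physical coefficients $x_n[f,g;\varepsilon]$): take $x_n = (-1)^n/n$, so that $S_1$ converges to $-\ln 2$ by the alternating series test, while the partial sums of $\sum n\,x_n = \sum (-1)^n$ oscillate forever and never settle.

```python
import math

def x(n):
    # terms of a convergent alternating series: sum_{n>=1} x_n = -ln 2
    return (-1) ** n / n

S1 = sum(x(n) for n in range(1, 100001))

# partial sums of S2 = sum n*x_n = sum (-1)^n are not Cauchy:
S2_even = sum(n * x(n) for n in range(1, 100001))   # ends on an even n, near 0
S2_odd  = sum(n * x(n) for n in range(1, 100002))   # one more term, near -1
print(S1, S2_even, S2_odd)
```

Consecutive partial sums of $S_2$ stay a fixed distance apart no matter how far one goes, which is exactly the kind of behavior the question is worried about.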
Definition:Integral Sign Definition Let $\left({X, \Sigma, \mu}\right)$ be a measure space, and let $f \in \mathcal M_{\overline \R}^+$. Define: $\displaystyle \int f \rd \mu := \sup \, \left\{{I_\mu \left({g}\right): g \le f, g \in \mathcal E^+}\right\}$ where: $\mathcal M_{\overline \R}^+$ denotes the space of positive $\Sigma$-measurable functions $f: X \to \overline \R_{\ge 0}$; $\overline \R_{\ge 0}$ denotes the positive extended real numbers; $\sup$ is a supremum in the extended real ordering; $I_\mu \left({g}\right)$ denotes the $\mu$-integral of the positive simple function $g$; $g \le f$ denotes pointwise inequality; $\mathcal E^+$ denotes the space of positive simple functions. The symbol: $\displaystyle \int \ldots \rd \mu$ is called the integral sign. Note that there are two parts to this symbol, which embrace the function $f$ being integrated. In a manuscript dated $29$th October $1675$, Leibniz introduced a long letter $S$ to suggest the first letter of the word summa (Latin for sum). At the time he was using the notation $\operatorname {omn} l$ (that is: omnes lineae, meaning all lines). Then he noted: It will be useful to write $\int$ for $\operatorname {omn}$, thus $\int \, l$ for $\operatorname {omn} l$, that is, the sum of those $l$'s. At the same time he introduced the differential symbol $\rd$. Thus he was soon writing $\rd x$, $\rd y$, and $\int \ldots \rd x$ soon followed. In his $1684$ article Nova Methodus pro Maximis et Minimis, published in Acta Eruditorum, he casually drops the notation into place with very little explanation.
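The sup-over-simple-functions definition can be made concrete numerically (a sketch only, with Lebesgue measure on $[0, 1]$ and the measures of level sets approximated on a fine grid): the standard approximating simple functions $g_n = \min\left({\lfloor 2^n f \rfloor / 2^n, n}\right)$ satisfy $g_n \le f$ pointwise, and their integrals $I_\mu \left({g_n}\right)$ increase toward $\int f \rd \mu$.

```python
import math

def simple_lower_integral(f, n, grid=200000):
    # I_mu(g_n) for the simple function g_n = min(floor(2^n f)/2^n, n) <= f,
    # with the measure of each level set approximated by a uniform grid on [0, 1]
    h = 1.0 / grid
    return sum(min(math.floor(2**n * f((i + 0.5) * h)) / 2**n, n) * h
               for i in range(grid))

# for f(t) = t^2 the integral is 1/3; the approximations increase toward it
approximations = [simple_lower_integral(lambda t: t * t, n) for n in (2, 4, 8)]
```

Each refinement of the dyadic staircase raises $g_n$ pointwise, so the sequence of lower integrals is increasing, exactly as the supremum in the definition demands.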
thermodynamic_temperature astropy.units.thermodynamic_temperature(frequency, T_cmb=None) Defines the conversion between Jy/sr and “thermodynamic temperature”, \(T_{CMB}\), in Kelvins. The thermodynamic temperature is a unit very commonly used in cosmology. See Eq. 8 in [1]: \(K_{CMB} \equiv I_\nu / \left(2 k \nu^2 / c^2 f(\nu) \right)\) with \(f(\nu) = \frac{ x^2 e^x}{(e^x - 1 )^2}\) where \(x = h \nu / k T\) Parameters Notes For broad-band receivers this conversion does not hold, as it depends strongly on frequency. References 1 Planck 2013 results. IX. HFI spectral response https://arxiv.org/abs/1303.5070 Examples Planck HFI 143 GHz: >>> from astropy import units as u >>> from astropy.cosmology import Planck15 >>> freq = 143 * u.GHz >>> equiv = u.thermodynamic_temperature(freq, Planck15.Tcmb0) >>> (1. * u.mK).to(u.MJy / u.sr, equivalencies=equiv) <Quantity 0.37993172 MJy / sr>
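As a cross-check of the documented example, the conversion factor can be computed by hand from the formula above. This is a sketch with hard-coded CODATA constants; \(T_{CMB} = 2.7255\) K is an assumed value matching Planck15.Tcmb0.

```python
import math

h, k, c = 6.62607015e-34, 1.380649e-23, 2.99792458e8   # SI (CODATA)
T_cmb = 2.7255        # K, assumed CMB temperature (Planck15.Tcmb0)
nu = 143e9            # Hz, Planck HFI 143 GHz band

x = h * nu / (k * T_cmb)
f_nu = x**2 * math.exp(x) / (math.exp(x) - 1)**2       # f(nu) from the formula above
factor = 2 * k * nu**2 / c**2 * f_nu                   # W m^-2 Hz^-1 sr^-1 per K
mK_to_MJy_sr = factor * 1e-3 / 1e-26 / 1e6             # 1 mK in MJy/sr (1 Jy = 1e-26 W m^-2 Hz^-1)
# close to the 0.37993 MJy/sr of the astropy example above
```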
[latex]\begin{array}{l}\text{Budget}={P}_{1}\times{Q}_{1}+{P}_{2}\times{Q}_{2}\\\text{Budget}=\$10\\\,\,\,\,\,\,\,\,\,\,\,\,{P}_{1}=\$2\left(\text{the price of a burger}\right)\\\,\,\,\,\,\,\,\,\,\,\,\,{Q}_{1}=\text{quantity of burgers}\left(\text{variable}\right)\\\,\,\,\,\,\,\,\,\,\,\,\,{P}_{2}=\$0.50\left(\text{the price of a bus ticket}\right)\\\,\,\,\,\,\,\,\,\,\,\,\,{Q}_{2}=\text{quantity of tickets}\left(\text{variable}\right)\end{array}[/latex] For Charlie, this is [latex]{\$10}={\$2}\times{Q}_{1}+{\$0.50}\times{Q}_{2}[/latex] Step 3. Simplify the equation. At this point we need to decide whether to solve for [latex]{Q}_{1} [/latex] or [latex]{Q}_{2} [/latex]. Remember, [latex]{Q}_{1} = \text{quantity of burgers} [/latex]. So, in this equation [latex]{Q}_{1} [/latex] represents the number of burgers Charlie can buy depending on how many bus tickets he wants to purchase in a given week. [latex]{Q}_{2}=\text{quantity of tickets} [/latex]. So, [latex]{Q}_{2} [/latex] represents the number of bus tickets Charlie can buy depending on how many burgers he wants to purchase in a given week. We are going to solve for [latex]{Q}_{1} [/latex]. [latex]\begin{array}{l}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,10=2Q_{1}+0.50Q_{2}\\\,\,\,10-2Q_{1}=0.50Q_{2}\\\,\,\,\,\,\,\,\,\,\,\,\,-2Q_{1}=-10+0.50Q_{2}\\\left(2\right)\left(-2Q_{1}\right)=\left(2\right)\left(-10\right)+\left(2\right)\left(0.50Q_{2}\right)\,\,\,\,\,\,\,\,\,\text{Clear the decimal by multiplying everything by 2}\\\,\,\,\,\,\,\,\,\,\,\,\,-4Q_{1}=-20+Q_{2}\\\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,Q_{1}=5-\frac{1}{4}Q_{2}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\text{Divide both sides by}-4\end{array}[/latex] Step 4. Use the equation. Now we have an equation that helps us calculate the number of burgers Charlie can buy depending on how many bus tickets he wants to purchase in a given week. For example, say he wants 8 bus tickets in a given week.
[latex]{Q}_{2}[/latex] represents the number of bus tickets Charlie buys, so we plug in 8 for [latex]{Q}_{2}[/latex], which gives us [latex]\begin{array}{l}{Q}_{1}={5}-\left(\frac{1}{4}\right)8\\{Q}_{1}={5}-2\\{Q}_{1}=3\end{array}[/latex] This means Charlie can buy 3 burgers that week (point C on the graph, above). Let’s try one more. Say Charlie has a week when he walks everywhere he goes so that he can splurge on burgers. He buys 0 bus tickets that week. [latex]{Q}_{2}[/latex] represents the number of bus tickets Charlie buys, so we plug in 0 for [latex]{Q}_{2}[/latex], giving us [latex]\begin{array}{l}{Q}_{1}={5}-(\frac{1}{4})0\\{Q}_{1}={5}\end{array}[/latex] So, if Charlie doesn’t ride the bus, he can buy 5 burgers that week (point A on the graph). If you plug other numbers of bus tickets into the equation, you get the results shown in Table 1, below, which are the points on Charlie’s budget constraint.

Table 1.
Point | Quantity of Burgers (at $2) | Quantity of Bus Tickets (at 50 cents)
A | 5 | 0
B | 4 | 4
C | 3 | 8
D | 2 | 12
E | 1 | 16
F | 0 | 20

Step 5. Graph the results. If we plot each point on a graph, we can see a line that shows us the number of burgers Charlie can buy depending on how many bus tickets he wants to purchase in a given week. Figure 2. Charlie’s Budget Constraint. We can make two important observations about this graph. First, the slope of the line is negative (the line slopes downward from left to right). Remember in the last module when we discussed graphing, we noted that when X and Y have a negative, or inverse, relationship, X and Y move in opposite directions—that is, as one rises, the other falls. This means that the only way to get more of one good is to give up some of the other. Second, the slope is defined as the change in the number of burgers (shown on the vertical axis) Charlie can buy for every incremental change in the number of tickets (shown on the horizontal axis) he buys. If he buys one less burger, he can buy four more bus tickets.
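The computation behind Table 1 is just repeated evaluation of the budget equation; a quick sketch, using the prices and budget given above:

```python
P1, P2, BUDGET = 2.00, 0.50, 10.00   # burger price, bus-ticket price, weekly budget

def burgers(q2):
    # solve Budget = P1*Q1 + P2*Q2 for Q1, given a number of bus tickets Q2
    return (BUDGET - P2 * q2) / P1

table = {q2: burgers(q2) for q2 in (0, 4, 8, 12, 16, 20)}
# reproduces points A-F: 5, 4, 3, 2, 1, 0 burgers
```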
The slope of a budget constraint always shows the opportunity cost of the good that is on the horizontal axis. If Charlie has to give up lots of burgers to buy just one bus ticket, then the slope will be steeper, because the opportunity cost is greater. This is easy to see by looking at the graph, but opportunity cost can also be calculated simply by dividing the cost of what is given up by what is gained. For example, the opportunity cost of a burger is the price of the burger divided by the price of a bus ticket, or $2 ÷ $0.50 = 4 bus tickets per burger.
Yeah, this software cannot be too easy to install; my installer is very professional looking, currently not tied into that code, but directs the user how to search for their MiKTeX and/or install it, and does a test LaTeX rendering. Somebody like Zeta (on codereview) might be able to help a lot... I'm not sure if he does a lot of category theory, but he does a lot of Haskell (not that I'm trying to conflate the two)... so he would probably be one of the better bets for asking for a revision of code. He is usually in the 2nd monitor chat room. There are a lot of people in those chat rooms who help each other with projects. I'm not sure how many of them are adept at category theory though... still, this chat tends to emphasize a lot of small problems and occasionally goes off on tangents. Your project is probably too large for an actual question on codereview, but there is a lot of github activity in the chat rooms. gl. In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966). It was also written down as the Fourier transform of $\hat f(z)=\prod_{m=1}^{\infty}(\cos\ldots$ Defined as the probability that $\sum_{n=1}^\infty2^{-n}\zeta_n$ will be less than $x$, where the $\zeta_n$ are chosen randomly and independently from the unit interval @AkivaWeinberger are you familiar with the theory behind Fourier series? anyway here's food for thought: for $f : S^1 \to \Bbb C$ square-integrable, let $c_n := \displaystyle \int_{S^1} f(\theta) \exp(-i n \theta) (\mathrm d\theta/2\pi)$, and $f^\ast := \displaystyle \sum_{n \in \Bbb Z} c_n \exp(in\theta)$. It is known that $f^\ast = f$ almost everywhere. (a) is $({-})^\ast$ idempotent? i.e. is it true that $f^{\ast \ast} = f^\ast$? @AkivaWeinberger You need to use the definition of $F$ as the cumulative distribution function of the random variables. $C^\infty$ was a simple step, but I don't have access to the paper right now so I don't recall it.
> In mathematics, a square-integrable function, also called a quadratically integrable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite. I am having some difficulties understanding the difference between simplicial and singular homology. I am aware of the fact that they are isomorphic, i.e. the homology groups are in fact the same (and maybe this doesn't help my intuition), but I am having trouble seeing where in the setup they d... Usually it is a great advantage to consult the notes, as they tell you exactly what has been done. A book will teach you the field, but not necessarily help you understand the style of the prof. (who creates the exam) when creating questions. @AkivaWeinberger having thought about it a little, I think the best way to approach the geometry problem is to argue that the relevant condition (the centroid is on the incircle) is preserved by similarity transformations hence you're free to rescale the sides, and therefore the (semi)perimeter as well so one may (for instance) choose $s=(a+b+c)/2=1$ without loss of generality that makes a lot of the formulas simpler, e.g. the inradius is identical to the area It is asking how many terms of the Euler–Maclaurin formula we need in order to compute the Riemann zeta function in the complex plane. $q$ is the upper summation index in the sum with the Bernoulli numbers. This appears to answer it in the positive: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right half-plane σ > 1 − k, for all k = 1, 2, 3, . . .."
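The Euler–Maclaurin continuation discussed in the chat can be sketched directly. In this rough sketch (no attempt at optimal truncation), $q$ is the upper index of the Bernoulli-number sum, as in the chat, and the resulting expression analytically continues $\zeta$ to $\mathrm{Re}(s) > 1 - 2q$, $s \neq 1$:

```python
import math
from fractions import Fraction

def bernoulli_numbers(m):
    # B_0..B_m from the recurrence sum_{j=0}^{n} C(n+1, j) B_j = 0 for n >= 1
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for n in range(1, m + 1):
        B[n] = -sum(Fraction(math.comb(n + 1, j)) * B[j] for j in range(n)) / (n + 1)
    return B

def zeta(s, N=20, q=10):
    """Riemann zeta via Euler-Maclaurin: partial sum up to N, tail terms, and
    q Bernoulli corrections; works for complex s with Re(s) > 1 - 2q, s != 1."""
    B = bernoulli_numbers(2 * q)
    total = sum(n ** (-s) for n in range(1, N))
    total += N ** (1 - s) / (s - 1) + 0.5 * N ** (-s)
    poch = s                                   # s(s+1)...(s+2k-2), built incrementally
    for k in range(1, q + 1):
        total += float(B[2 * k]) / math.factorial(2 * k) * poch * N ** (-s - 2 * k + 1)
        poch *= (s + 2 * k - 1) * (s + 2 * k)
    return total
```

With q = 10 this reproduces ζ(2) = π²/6 to machine precision, gives ζ(−1) = −1/12 and ζ(0) = −1/2 from the continuation, and evaluates to a numerical zero at the first nontrivial zero s = 1/2 + 14.1347…i.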
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV (Springer, 2012-10) The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ... Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV (Springer, 2012-09) Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ... Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12) In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ... Measurement of prompt J/psi and beauty hadron production cross sections at mid-rapidity in pp collisions at root s=7 TeV (Springer-verlag, 2012-11) The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at s√=7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity Lint = 5.6 nb−1. The fraction ... Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV (Springer, 2012-09) The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at √sNN = 2.76 TeV down to pt = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ... Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ... Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-03) The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ... Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at √sNN = 2.76 TeV (American Physical Society, 2012-12) The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ... $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ... Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ... Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ... Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at s√ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ... Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2015-07-10) The transverse momentum (p T) dependence of the nuclear modification factor R AA and the centrality dependence of the average transverse momentum 〈p T〉 for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC (Elsevier, 2013-12) The average transverse momentum <$p_T$> versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ... K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06) The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...
Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (American Physical Society, 2013-12) The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both, the rapidity odd ($v_1^{odd}$) and ... Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2014-06) The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ... Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01) In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ... Long-range angular correlations of π, K and p in p–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2013-10) Angular correlations between unidentified charged trigger particles and various species of charged associated particles (unidentified particles, pions, kaons, protons and antiprotons) are measured by the ALICE detector in ...
My student Dragan Rakas and I worked on an early version of this technology in 2013, and we found it to be a very interesting but difficult problem. You can read about our work here. Microsoft has open-sourced a homomorphic encryption library developed by its Cryptography Research group, saying it “strongly believes” the technology is ripe for use in real-world applications, as it makes the source code available on GitHub. (Here is the link to the GitHub repository.) My student Geetanjali (Geet) Agarwal defended her master's thesis, Aneka – Wavelet Image Hashing Algorithm (see announcement), where the contribution is a framework of hashing algorithms for image recognition. This important work is done in collaboration with the SoCal High Technology Task Force (HTTF). Geet deployed AWS to accomplish her results, including EC2 instances and MySQL databases used to run experiments on thousands of images. Geet's thesis will be available after the final draft is ready. During my PhD years (1997-2001) in the Computer Science department at the University of Toronto, my advisor Stephen Cook and I worked on laying the computational complexity foundations of Linear Algebra. To that end we deployed Berkowitz's algorithm for computing the characteristic polynomial, as it allowed us to state major theorems of linear algebra in the theory NC2 (fast parallel computations). We published the final version of our result in the Annals of Pure and Applied Logic. Recently, Iddo Tzameret and Stephen Cook strengthened those results considerably in this paper. Anyone working in the field of Digital Forensics is aware that a substantial portion of time is dedicated to reverse engineering passwords. That is, in most cases a digital forensics investigator receives a password-protected handheld device, or a laptop with an encrypted hard disk, or a Microsoft Word document which has been password protected.
It is then the task of the investigator to try to retrieve the evidence, and that in turn requires reverse engineering the password; in some cases this can be achieved by recovering the hash of the password, which is stored somewhere (the locations are often known) in the device's memory. In order to obtain the password from the hash, we have to run a brute-force search algorithm that guesses passwords (the guesses can be more or less educated, depending on what is known about the case). Sometimes we get lucky. There are two programs that are used extensively for this purpose: John the Ripper and hashcat. As we have been studying methods for recovering passwords from hashes, we have been using AWS EC2 instances in order to run experiments and help HTTF with their efforts. Together with senior capstone students as well as graduate students in Cybersecurity, we have been creating a set of guidelines and best practices to help in the recovery of passwords from hashes. AWS EC2 instances are ideal as they can be crafted to the needs and resources of a particular case. For example, we are currently running a t2.2xlarge instance on a case where we have to recover the password of a Microsoft Word document; we have also used a p2.16xlarge with GPU-based parallel compute capabilities, but it costs $14/hour of usage, and so we deploy it in a very surgical manner. Happy to announce that Ryan McIntyre's master's thesis results will be published as our joint paper in the Journal of Discrete Algorithms. An improved upper bound and algorithm for clique covers (prelim) An interesting feature of indeterminate strings is the natural correspondence with undirected graphs. One aspect of this correspondence is the fact that the minimal alphabet size of indeterminates representing any given undirected graph corresponds to the size of the minimal clique cover of this graph.
This paper first considers a related problem proposed in Helling 2017: characterize $\Theta_n(m)$, which is the size of the largest possible minimal clique cover (i.e., an exact upper bound), and hence the alphabet size of the corresponding indeterminate, of any graph on $n$ vertices and $m$ edges. We provide improvements to the known upper bound for $\Theta_n(m)$. Helling 2017 also presents an algorithm which finds clique covers in polynomial time. We build on this result with an original heuristic for vertex sorting which significantly improves their algorithm's results, particularly in dense graphs. This work was the result of building on Helling 2017 (see this post) and of a year of research undertaken by Ryan McIntyre under my (Michael Soltys) supervision at California State University Channel Islands. iSprinkle is a Raspberry Pi powered irrigation controller which will allow a user to set an initial irrigation schedule for a sprinkler system using a web interface, after which it will use the local weather forecast to adjust the base watering schedule as needed. iSprinkle is the result of a senior Capstone project (COMP499) at California State University Channel Islands, undertaken by student Carlos Gomez in 2016, advised by Michael Soltys. A detailed write-up of the project, where we partnered with Prof. Adam Sędziwy (who visited CI in June 2016), can be found here: iSprinkle: Design and implementation of an internet-enabled sprinkler timer, by Carlos Adrian Gomez, Adam Sedziwy and Michael Soltys. A short version of the above paper will be presented at INDOTEC 2017: iSprinkle: when education, innovation and application meet, by Carlos Adrian Gomez, Adam Sedziwy and Michael Soltys, to be presented at the 5th International Conference on Educational Innovation in Technical Careers, INDOTEC 2017. iSprinkle was also presented at SCCUR 2016, the Southern California Council for Undergraduate Research Conference, at UC Riverside on November 12, 2016.
The design of a system such as iSprinkle requires a holistic approach that is very different from most class assignments. The latter usually span a few files that are to be turned in within a week or two, making it difficult to implement a system with many “moving parts.” However, iSprinkle's functionality is divided between the front-end and back-end, both of which need to communicate so that the user's requests are fulfilled. Designing such a system requires taking into consideration many aspects: from major decisions such as deciding on a back-end language to use, to minutiae such as the date and time formats to use across the back-end and front-end to maintain consistency. By doing so, iSprinkle will be able to irrigate more efficiently compared to a fixed schedule; by programmatically modifying the user's watering schedule, iSprinkle will increase/decrease the amount of watering that the schedule dictates depending on data that it receives from a weather API. iSprinkle hopes to make it easier for homeowners to conserve water by automating adjustments to their irrigation schedule. I can't say how happy I am to have AWS. I just got the account set up, started my first instance, and ran a simulation for a very interesting project that I am working on with Ryan McIntyre (a student in CS). What took about 15 min on my Mac Pro quad core took 1m40s on the AWS instance. This is a brave new world! 🙂 Here is the summary of the experiment: ~/EdgeGraph/EdgeGraph$ time python3 cover_vs_edges.py How many vertices? 7 Generating graphs... Filtering isomorphisms... Sorting graphs... Checking up to 21 edges... 0 / 21 edges complete. 1 / 21 edges complete. 2 / 21 edges complete. 3 / 21 edges complete. 4 / 21 edges complete. 5 / 21 edges complete. 6 / 21 edges complete. 7 / 21 edges complete. 8 / 21 edges complete. 9 / 21 edges complete. 10 / 21 edges complete. 11 / 21 edges complete. 12 / 21 edges complete. 13 / 21 edges complete. 14 / 21 edges complete. 15 / 21 edges complete.
16 / 21 edges complete. 17 / 21 edges complete. 18 / 21 edges complete. 19 / 21 edges complete. 20 / 21 edges complete. 21 / 21 edges complete. elapsed time: --- 96.4 seconds --- real 1m40.000s user 1m36.812s sys 0m0.113s The Rabin-Miller algorithm for primality testing relies on a random selection of a's for which it performs some basic tests in order to determine whether the input n is prime or composite. If n is prime, all a's work, but if n is composite it is possible to show that at least half of the a's work, i.e., at least half of the a's are witnesses of compositeness. But in fact, it seems like a lot more a's than just half of those in the set {1,2,…,n-1} work; see the following diagram, where I test all composites under about 60,000. It is clear that there are more good witnesses than just half the numbers. In a new paper, Square-free strings over alphabet lists, my PhD student Neerja Mhaskar and I solve an open problem that was posed in A new approach to non repetitive sequences, by Jaroslaw Grytczuk, Jakub Kozik, and Piotr Micek, in arXiv:1103.3809, December 2010. The problem can be stated as follows: Given an alphabet list $L=L_1,\ldots,L_n$, where $|L_i|=3$ and $1 \leq i \leq n$, can we always find a square-free string, $W=W_1W_2 \ldots W_n$, where $W_i\in L_i$? We show that this is indeed the case. We do so using an “offending suffix” characterization of forced repetitions, and a counting, non-constructive technique. We discuss future directions related to finding a constructive solution, namely a polytime algorithm for generating square-free words over such lists. Our paper will be presented and published at the 26th International Workshop on Combinatorial Algorithms (IWOCA), Verona, Italy, October 2015.
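The existence claim for square-free strings over alphabet lists can be checked by brute force on small instances. This is a sketch of an exhaustive backtracking search, not the paper's counting argument (the open direction mentioned above is precisely that no polytime constructive algorithm is known):

```python
def ends_with_square(w):
    # does w end in a square xx, for some nonempty block x?
    n = len(w)
    return any(w[n - 2 * l: n - l] == w[n - l:] for l in range(1, n // 2 + 1))

def square_free_string(L):
    """Exhaustive backtracking for W = W_1...W_n with W_i in L_i and no factor xx.
    Pruning on ends_with_square is sound: a string contains a square iff some
    prefix of it ends in one. Returns None exactly when no square-free string
    over the list exists."""
    def extend(w, i):
        if i == len(L):
            return w
        for c in L[i]:
            if not ends_with_square(w + c):
                found = extend(w + c, i + 1)
                if found is not None:
                    return found
        return None
    return extend("", 0)
```

By the theorem, lists with $|L_i|=3$ always admit a square-free string, so the search always succeeds on such lists; by contrast, over a fixed 2-letter alphabet no square-free string of length 4 exists, and the search correctly returns None.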
Molecular orbital theory has been very successfully applied to large conjugated systems, especially those containing chains of carbon atoms with alternating single and double bonds. An approximation introduced by Hückel in 1931 considers only the delocalized p electrons moving in a framework of \(\sigma\)-bonds. This is, in fact, a more sophisticated version of a free-electron model. The simplest hydrocarbon to consider that exhibits \(\pi\) bonding is ethylene (ethene), which is made up of four hydrogen atoms and two carbon atoms. Experimentally, we know that the H–C–H and H–C–C angles in ethylene are approximately 120°. This angle suggests that the carbon atoms are \(sp^2\) hybridized, which means that a singly occupied \(sp^2\) orbital on one carbon overlaps with a singly occupied 1s orbital on each H and a singly occupied \(sp^2\) lobe on the other C. Thus each carbon forms a set of three \(\sigma\) bonds: two C–H (\(sp^2\) + s) and one C–C (\(sp^2\) + \(sp^2\)) (part (a) of Figure \(\PageIndex{1}\)). The Hückel approximation is used to determine the energies and shapes of the \(\pi\) molecular orbitals in conjugated systems. Within the Hückel approximation, the covalent bonding in these hydrocarbons can be separated into two independent "frameworks": the \(\sigma\)-bonding framework and the \(\pi\)-bonding framework. The wavefunctions used to describe the bonding orbitals in each framework result from different combinations of atomic orbitals. The method limits itself to conjugated hydrocarbons, and specifically only \(\pi\) electron molecular orbitals are included, because these determine the general properties of these molecules; the sigma electrons are ignored. This is referred to as sigma-pi separability and is justified by the orthogonality of \(\sigma\) and \(\pi\) orbitals in planar molecules. For this reason, the Hückel method is limited to planar systems.
Hückel approximation assumes that the electrons in the \(\pi\) bonds “feel” an electrostatic potential due to the entire \(\sigma\)-bonding framework in the molecule (i.e. it focuses only on the formation of \(\pi\) bonds, given that the \(\sigma\)-bonding framework has already been formed).

Conjugated Systems

A conjugated system has a region of overlapping p-orbitals, bridging the interjacent single bonds, that allows a delocalization of \(\pi\) electrons across all the adjacent aligned p-orbitals. These \(\pi\) electrons do not belong to a single bond or atom, but rather to a group of atoms.

Ethylene

Before considering the Hückel treatment for ethylene, it is beneficial to review the general bonding picture of the molecule. Bonding in ethylene involves the \(sp^2\) hybridization of the \(2s\), \(2p_x\), and \(2p_y\) atomic orbitals on each carbon atom, leaving the \(2p_z\) orbitals untouched (Figure \(\PageIndex{2}\)). The use of hybrid orbitals in the molecular orbital approach described here is merely a convenience and does not invoke valence bond theory (directly). An identical description can be extracted using exclusively atomic orbitals on carbon, but the interpretation of the resulting wavefunctions is less intuitive. For example, the i-th molecular orbital can be described via hybrid orbitals

\[ | \psi_1\rangle = c_1 | sp^2_1 \rangle + c_2 | 1s_a \rangle \nonumber\]

or via atomic orbitals

\[ | \psi_1\rangle = a_1 | 2s \rangle + a_2 | 2p_x \rangle + a_3 | 2p_y \rangle + a_4 | 1s_a \rangle \nonumber\]

where \(\{a_i\}\) and \(\{c_i\}\) are coefficients of the expansion. Either description will work, and both approaches are identical, since

\[| sp^2_1 \rangle = b_1 | 2s \rangle + b_2 | 2p_x \rangle + b_3 | 2p_y \rangle \nonumber\]

where \(\{b_i\}\) are coefficients describing the hybridized orbital.
The bonding occurs via the mixing of the electrons in the \(sp^2\) hybrid orbitals on carbon and the electrons in the \(1s\) atomic orbitals of the four hydrogen atoms (Figure \(\PageIndex{1}\); left), resulting in the \(\sigma\)-bonding framework. The \(\pi\)-bonding framework results from the unhybridized \(2p_z\) orbitals (Figure \(\PageIndex{2}\); right). The independence of these two frameworks is demonstrated in the resulting molecular orbital diagram in Figure \(\PageIndex{3}\); Hückel theory is concerned only with describing the molecular orbitals and energies of the \(\pi\)-bonding framework.

Since Hückel theory is a special consideration of molecular orbital theory, the molecular orbitals \(| \psi_i \rangle\) can be described as a linear combination of the \(2p_z\) atomic orbitals \(\phi\) at carbon with their corresponding \(\{c_i\}\) coefficients:

\[ | \psi_i \rangle =c_1 | \phi_{1} \rangle +c_2 | \phi_2 \rangle \label{LCAO} \]

This equation is substituted into the Schrödinger equation

\[ \hat{H} | \psi_i \rangle =E_i | \psi_i \rangle \]

with \(\hat{H}\) the Hamiltonian and \(E_i\) the energy corresponding to the molecular orbital, to give

\[ \hat{H} c_{1} | \phi _{1} \rangle +\hat{H} c_{2} | \phi _{2} \rangle =E c_{1} | \phi _{1} \rangle +E c_{2} | \phi _{2} \rangle \label{SEq}\]

If Equation \(\ref{SEq}\) is multiplied by \(\langle \phi _{1}| \) (and integrated), then

\[c_1(H_{11} - ES_{11}) + c_2(H_{12} - ES_{12}) = 0 \label{Eq1}\]

where \( H_{ij}\) are the Hamiltonian matrix elements (see note below)

\[ H_{ij} = \langle \phi_i | \hat{H} | \phi_j \rangle = \int \phi _{i}H\phi _{j}\mathrm {d} v\]

and \( S_{ij} \) are the overlap integrals.
\[ S_{ij}= \langle \phi_i | \phi_j \rangle = \int \phi _{i}\phi _{j}\mathrm {d} v\]

If Equation \(\ref{SEq}\) is multiplied by \( \langle \phi _{2} | \) (and integrated), then

\[c_1(H_{21} - ES_{21}) + c_2(H_{22} - ES_{22}) = 0 \label{Eq2}\]

Both Equations \(\ref{Eq1}\) and \(\ref{Eq2}\) can be better represented in matrix notation,

\[ {\begin{bmatrix}c_{1}(H_{11}-ES_{11})+c_{2}(H_{12}-ES_{12})\\c_{1}(H_{21}-ES_{21})+c_{2}(H_{22}-ES_{22})\\\end{bmatrix}}=0\]

or more simply as a product of matrices.

\[\begin{bmatrix} H_{11} - ES_{11} & H_{12} - ES_{12} \\ H_{21} - ES_{21} & H_{22} - ES_{22} \\ \end{bmatrix} \times \begin{bmatrix} c_1 \\ c_2 \\ \end{bmatrix}= 0 \label{master}\]

The diagonal Hamiltonian integrals \( H_{ii}\) are called Coulomb integrals and the off-diagonal integrals \(H_{ij}\) (\(i \neq j\)) are called resonance integrals. Both types of integrals are negative, and the resonance integrals determine the strength of the bonding interactions. The equations described by Equation \(\ref{master}\) are called the secular equations and always have the trivial solution

\[ c_1 = c_2 = 0 \]

By linear algebra, the secular equations in Equation \(\ref{master}\) also have a non-trivial solution if and only if the secular determinant is zero:

\[ \left| \begin{array} {cc} H_{11} - ES_{11} & H_{12} - ES_{12} \\ H_{21} - ES_{21} & H_{22} - ES_{22} \\ \end{array}\right| = 0 \label{SecDet}\]

or in shorthand notation

\[ \text{det}(H -ES) =0\]

Everything in Equation \(\ref{SecDet}\) is a known number except \(E\). Since the secular determinant for ethylene is a \(2 \times 2\) matrix, finding \(E\) requires solving a quadratic equation (after expanding the determinant)

\[ ( H_{11} - ES_{11} ) ( H_{22} - ES_{22} ) - ( H_{21} - ES_{21} )( H_{12} - ES_{12} ) = 0\]

There will be two values of \(E\) which satisfy this equation, and they are the molecular orbital energies.
For ethylene, one will be the bonding energy and the other the antibonding energy for the \(\pi\)-orbitals formed by the combination of the two carbon \(2p_z\) orbitals (Equation \(\ref{LCAO}\)). However, if more than two \(| \phi \rangle\) atomic orbitals were used, e.g., in a bigger molecule, then more energies would be estimated by solving the secular determinant. Solving the secular determinant is simplified within the Hückel method via the following four assumptions:

All overlap integrals \(S_{ij}\) with \(i \neq j\) are set equal to zero. This is quite reasonable since the \(\pi\)-orbitals are directed perpendicular to the direction of their bonds (Figure \(\PageIndex{1}\)). This assumption is often called neglect of differential overlap (NDO).
All resonance integrals \(H_{ij}\) between non-neighboring atoms are set equal to zero.
All resonance integrals \(H_{ij}\) between neighboring atoms are equal and set to \(\beta\).
All Coulomb integrals \(H_{ii}\) are set equal to \(\alpha\).

These assumptions are mathematically expressed as

\[ H_{11}=H_{22}=\alpha\]
\[ H_{12}=H_{21}=\beta\]

Assumption 1 means that the overlap integral between the two different atomic orbitals is zero, while each orbital is normalized:

\[ S_{11}=S_{22}=1\]
\[ S_{12}=S_{21}=0\]

Matrix Representation of the Hamiltonian

The Coulomb integrals

\[H_{ii}= \langle \phi _i|H| \phi _i \rangle \nonumber\]

and resonance integrals
\[H_{ij}= \langle \phi _i|H| \phi _j \rangle \,\,\, (i \neq j) \nonumber\]

are often described within the matrix representation of the Hamiltonian (specifically within the \( | \phi \rangle\) basis):

\[ \hat{H} = \begin{bmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end {bmatrix} \nonumber\]

or, within the Hückel assumptions,

\[ \hat{H} = \begin{bmatrix} \alpha & \beta \\ \beta & \alpha \end {bmatrix} \nonumber\]

The Hückel assumptions reduce Equation \(\ref{master}\) to two homogeneous equations:

\[\begin{bmatrix} \alpha - E & \beta \\ \beta & \alpha - E \\ \end{bmatrix} \times \begin{bmatrix} c_1 \\ c_2 \\ \end{bmatrix}= 0 \label{Eq12}\]

If Equation \(\ref{Eq12}\) is divided by \(\beta\)

\[\begin{bmatrix} \dfrac{\alpha - E}{\beta} & 1 \\ 1 & \dfrac{\alpha - E}{\beta} \\ \end{bmatrix} \times \begin{bmatrix} c_1 \\ c_2 \\ \end{bmatrix}= 0\]

and a new variable \(x\) is defined

\[ x = \dfrac {\alpha -E}{\beta} \label{new}\]

then Equation \(\ref{Eq12}\) simplifies to

\[\begin{bmatrix} x & 1 \\ 1 & x \\ \end{bmatrix} \times \begin{bmatrix} c_1 \\ c_2 \\ \end{bmatrix}= 0 \label{seceq}\]

The trivial solution gives both wavefunction coefficients equal to zero, and the other (non-trivial) solution is determined by solving the secular determinant

\[ \begin{vmatrix}x&1\\1&x\\\end{vmatrix}=0\]

which when expanded is

\[ x^{2}-1=0\]

so

\[ x=\pm 1\]

Knowing that \(E=\alpha -x\beta \) from Equation \(\ref{new}\), the energy levels can be found to be

\[ E=\alpha - (\pm 1)\beta \]

or

\[ E=\alpha \mp \beta \]

Since \(\beta\) is negative, the two energies are ordered (Figure \(\PageIndex{4}\)):

For \(\pi_1\): \(E_1 =\alpha + \beta\)
For \(\pi_2\): \(E_2 =\alpha - \beta\)

To extract the coefficients attributed to these energies, the corresponding \(x\) values can be substituted back into the secular equations (Equation \(\ref{seceq}\)).
For the lower energy state (\(x=-1\)),

\[\begin{bmatrix} -1 & 1 \\ 1 & -1 \\ \end{bmatrix} \times \begin{bmatrix} c_1 \\ c_2 \\ \end{bmatrix}= 0 \]

This gives \(c_1=c_2\), and the molecular orbital attributed to this energy is then (based on Equation \(\ref{LCAO}\)):

\[ | \psi_1 \rangle = N_1 ( | \phi_1 \rangle + | \phi_2 \rangle ) \label{HOMO}\]

where \(N_1\) is the normalization constant for this molecular orbital; this is the bonding molecular orbital. For the higher energy molecular orbital (\(x=+1\)),

\[\begin{bmatrix} 1 & 1 \\ 1 & 1 \\ \end{bmatrix} \times \begin{bmatrix} c_1 \\ c_2 \\ \end{bmatrix}= 0 \]

This gives \(c_1=-c_2\), and the molecular orbital attributed to this energy is then (based on Equation \(\ref{LCAO}\)):

\[ | \psi_2 \rangle = N_2 ( | \phi_1 \rangle - | \phi_2 \rangle ) \label{LUMO}\]

where \(N_2\) is the normalization constant for this molecular orbital; this is the anti-bonding molecular orbital. The normalization constants for both molecular orbitals can be obtained via the standard normalization approach (i.e., \(\langle \psi_i | \psi_i \rangle =1\)) to obtain

\[N_1 = N_2 = \dfrac{1}{\sqrt{2}}\]

These molecular orbitals form the \(\pi\)-bonding framework, and since each carbon contributes one electron to this framework, only the lowest molecular orbital (\( | \psi_1 \rangle\)) is occupied (Figure \(\PageIndex{5}\)) in the ground state. The corresponding electron configuration is then \( \pi_1^2\). HOMO and LUMO are acronyms for highest occupied molecular orbital and lowest unoccupied molecular orbital, respectively, and are often referred to as frontier orbitals. The energy difference between the HOMO and LUMO is termed the HOMO–LUMO gap. The 3-D calculated \(\pi\) molecular orbitals are shown in Figure \(\PageIndex{6}\). Figure \(\PageIndex{6}\): Calculated \(\pi\) molecular orbitals for ethylene. (left) The bonding orbital \( | \psi_1 \rangle \) and (right) the antibonding orbital \( | \psi_2 \rangle \).
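The algebra above is easy to verify numerically: diagonalizing the \(2 \times 2\) Hückel matrix reproduces the energies \(\alpha \pm \beta\) and the coefficients \(\pm 1/\sqrt{2}\). A minimal sketch follows; the numerical values of \(\alpha\) and \(\beta\) are illustrative placeholders, not fitted parameters.

```python
import numpy as np

alpha, beta = -11.4, -3.0   # illustrative values (eV); both negative

H = np.array([[alpha, beta],
              [beta,  alpha]])

energies, coeffs = np.linalg.eigh(H)   # eigenvalues in ascending order

# Lower root: E1 = alpha + beta (bonding); upper root: E2 = alpha - beta
print(energies)
# Each eigenvector is (1, +/-1)/sqrt(2), matching c1 = +/-c2 with N = 1/sqrt(2)
print(coeffs)
```

The same call handles bigger conjugated systems: only the tridiagonal pattern of \(\beta\)'s in the matrix changes.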
Limitations of Hückel Theory

Hückel theory was developed in the 1930s, when computers were unavailable and simple mathematical approaches were very important for understanding experiment. Although the assumptions in Hückel theory are drastic, they enabled the early calculations of molecular orbitals to be performed with mechanical calculators or by hand. Hückel theory can be extended to address other types of atoms in conjugated molecules (e.g., nitrogen and oxygen). Moreover, it can be extended to also treat \(\sigma\) orbitals, and this "Extended Hückel Theory" is still used today. Despite its utility, Hückel theory is highly qualitative, and we should remember its limitations:

Hückel theory is very approximate.
Hückel theory cannot calculate energies accurately (electron-electron repulsion is not calculated).
Hückel theory typically overestimates predicted dipole moments.

Hückel theory is best used to provide simplified models for understanding chemistry; for a detailed understanding, the modern ab initio molecular methods discussed in Chapter 11 are needed.
Pair Correlation Function of a Three-Dimensional Point Pattern Estimates the pair correlation function from a three-dimensional point pattern. Usage pcf3est(X, ..., rmax = NULL, nrval = 128, correction = c("translation","isotropic"), delta=NULL, adjust=1, biascorrect=TRUE) Arguments X Three-dimensional point pattern (object of class "pp3"). … Ignored. rmax Optional. Maximum value of argument \(r\) for which \(g_3(r)\) will be estimated. nrval Optional. Number of values of \(r\) for which \(g_3(r)\) will be estimated. correction Optional. Character vector specifying the edge correction(s) to be applied. See Details. delta Optional. Half-width of the Epanechnikov smoothing kernel. adjust Optional. Adjustment factor for the default value of delta. biascorrect Logical value. Whether to correct for underestimation due to truncation of the kernel near \(r=0\). Details For a stationary point process \(\Phi\) in three-dimensional space, the pair correlation function is $$ g_3(r) = \frac{K_3'(r)}{4\pi r^2} $$ where \(K_3'\) is the derivative of the three-dimensional \(K\)-function (see K3est). The three-dimensional point pattern X is assumed to be a partial realisation of a stationary point process \(\Phi\). The distance between each pair of distinct points is computed. Kernel smoothing is applied to these distance values (weighted by an edge correction factor) and the result is renormalised to give the estimate of \(g_3(r)\). The available edge corrections are: "translation": the Ohser translation correction estimator (Ohser, 1983; Baddeley et al, 1993) "isotropic": the three-dimensional counterpart of Ripley's isotropic edge correction (Ripley, 1977; Baddeley et al, 1993). Kernel smoothing is performed using the Epanechnikov kernel with half-width delta. 
If delta is missing, the default is to use the rule-of-thumb \(\delta = 0.26/\lambda^{1/3}\) where \(\lambda = n/v\) is the estimated intensity, computed from the number \(n\) of data points and the volume \(v\) of the enclosing box. This default value of delta is multiplied by the factor adjust. The smoothing estimate of the pair correlation \(g_3(r)\) is typically an underestimate when \(r\) is small, due to truncation of the kernel at \(r=0\). If biascorrect=TRUE, the smoothed estimate is approximately adjusted for this bias. This is advisable whenever the dataset contains a sufficiently large number of points. Value A function value table (object of class "fv") that can be plotted, printed or coerced to a data frame containing the function values. Additionally the value of delta is returned as an attribute of this object. References Baddeley, A.J, Moyeed, R.A., Howard, C.V. and Boyde, A. (1993) Analysis of a three-dimensional point pattern with replication. Applied Statistics 42, 641--668. Ohser, J. (1983) On estimators for the reduced second moment measure of point processes. Mathematische Operationsforschung und Statistik, series Statistics, 14, 63 -- 71. Ripley, B.D. (1977) Modelling spatial patterns (with discussion). Journal of the Royal Statistical Society, Series B, 39, 172 -- 212. See Also Aliases pcf3est Examples # NOT RUN { X <- rpoispp3(250) Z <- pcf3est(X) Zbias <- pcf3est(X, biascorrect=FALSE) if(interactive()) { opa <- par(mfrow=c(1,2)) plot(Z, ylim.covers=c(0, 1.2)) plot(Zbias, ylim.covers=c(0, 1.2)) par(opa) } attr(Z, "delta")# } Documentation reproduced from package spatstat, version 1.55-1, License: GPL (>= 2)
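The estimator's core idea — Epanechnikov kernel smoothing of pairwise distances, renormalised by \(4\pi r^2 \lambda^2 v\) — can be sketched outside R. The toy Python version below is not the spatstat implementation: it omits edge correction entirely (so it underestimates \(g_3\) near the box boundary, unlike the translation/isotropic corrections of pcf3est), and it uses a fixed number of points rather than a Poisson count.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 250                              # points in the unit box, like rpoispp3(250)
pts = rng.random((n, 3))

lam = n / 1.0                        # intensity estimate (unit volume)
delta = 0.26 / lam ** (1 / 3)        # same rule of thumb as pcf3est
r = np.linspace(0.01, 0.25, 128)

# all distinct ordered pairs (each unordered pair counted twice,
# which matches the lambda^2 factor in the denominator)
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
d = d[~np.eye(n, dtype=bool)]

g = np.empty(r.size)
for i, ri in enumerate(r):
    u = (ri - d) / delta
    k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2) / delta, 0.0)
    g[i] = k.sum() / (4.0 * np.pi * ri ** 2 * lam ** 2)

print(g[:5])   # roughly 1 for a Poisson-like pattern, biased low near edges
```

Comparing this curve with pcf3est's corrected output makes the size of the edge effect visible.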
If you Googled this number a week ago, all you’d get were links to the paper by Melanie Wood, Belyi-extending maps and the Galois action on dessins d’enfants. In this paper she says she can separate two dessins d’enfants (which couldn’t be separated by other Galois invariants) via the orders of the monodromy groups of the dessins inflated by a certain degree-six Belyi-extender. She gets for the inflated $\Delta$ the order 19752284160000 and for the inflated $\Omega$ the order 214066877211724763979841536000000000000 (see also this post). After that post I redid the computations a number of times (as well as for other Belyi-extenders) and always find that these orders are the same for both dessins. And, surprisingly, each time the same numbers keep popping up. For example, if you take the Belyi-extender $t^6$ (power-map) then it is pretty easy to work out the generators of the monodromy group of the extended dessin. For example, there is a cycle $(1,2)$ in $x_{\Omega}$ and you have to replace it by \[ (11,12,13,14,15,16,21,22,23,24,25,26) \] and similarly for other cycles, always replacing number $k$ by $k1,k2,k3,k4,k5,k6$ (these are the labels of the edges in the extended dessin corresponding to edge $k$ in the original dessin, starting to count from the ‘spoke’ of the $6$-star of $t^6$ corresponding to the interval $(0,e^{\frac{4 \pi i}{3}})$, going counterclockwise). So the edge $(0,1)$ corresponds to $k3$, and for $y$ you take the same cycles as in $y_{\Omega}$, replacing number $k$ by $k3$. Here again, you get for both extended dessins the same order of the monodromy group, and surprise, surprise: it is 214066877211724763979841536000000000000. Based on these limited calculations, it seems that the order of the monodromy group of the extended dessin only depends on the degree of the extender, and not on its precise form.
I’d hazard a (probably far too optimistic) conjecture that the orders of the monodromy groups of a dessin $\Gamma$ and the extended dessin $\gamma(\Gamma)$, for a Belyi-extender $\gamma$ of degree $d$, are related via \[ \# M(\gamma(\Gamma)) = d \times (\# M(\Gamma))^d \] (or twice that number), except for trivial settings such as power-maps extending stars. Edit (august 19): In the comments Dominic shows that in “most” cases the monodromy group of $\gamma(\Gamma)$ should be the wreath product of the monodromy groups of $\gamma$ and $\Gamma$, which has order \[ \# M(\Gamma)^d \times \# M(\gamma) \] which fits in with the few calculations i did. We knew already that the order of the monodromy groups of $\Delta$ and $\Omega$ is $1814400$, and sure enough \[ 6 \times 1814400^6 = 214066877211724763979841536000000000000. \] If you extend $\Delta$ and $\Omega$ by the power map $t^3$, you get the orders \[ 17919272189952000000 = 3 \times 1814400^3 \] and if you extend them with the degree 3 extender mentioned in the dessinflateurs-post you get 35838544379904000000, which is twice that number. (Edit: the order of the monodromy group of the extender is $6$, see also above.) As much as i like the Belyi-extender idea to construct new Galois invariants, i fear it’s a dead end. (Always glad to be proven wrong!)
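The arithmetic in the post is easy to check with exact integers; a few lines confirm every identity quoted above:

```python
# Orders quoted in the post
m = 1814400                                   # order of M(Delta) = M(Omega)
big = 214066877211724763979841536000000000000

# degree-6 power-map extender: wreath-product count d * m**d
assert 6 * m ** 6 == big

# degree-3 power map t^3
assert 3 * m ** 3 == 17919272189952000000

# the other degree-3 extender gives twice that number
assert 2 * (3 * m ** 3) == 35838544379904000000

print("all identities from the post check out")
```

Python's arbitrary-precision integers make such 39-digit sanity checks trivial, with no rounding to worry about.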
Definition:Coset

Definition

Let $G$ be a group, and let $H \le G$.

The left coset of $x$ modulo $H$, or left coset of $H$ by $x$, is:

$x H = \set {y \in G: \exists h \in H: y = x h}$

That is, it is the subset product with singleton:

$x H = \set x H$

The right coset of $y$ modulo $H$, or right coset of $H$ by $y$, is:

$H y = \set {x \in G: \exists h \in H: x = h y}$

That is, it is the subset product with singleton:

$H y = H \set y$

Consider the symmetry group of the equilateral triangle $D_3$. Let $H \subseteq D_3$ be defined as:

$H = \set {e, r}$

where:

The left cosets of $H$ are:

$H = \set {e, r} = e H = r H$
$s H = \set {s e, s r} = \set {s, q} = q H$
$t H = \set {t e, t r} = \set {t, p} = p H$

The right cosets of $H$ are:

$H = \set {e, r} = H e = H r$
$H s = \set {e s, r s} = \set {s, p} = H p$
$H t = \set {e t, r t} = \set {t, q} = H q$

Consider the dihedral group $D_3$:

$D_3 = \gen {a, b: a^3 = b^2 = e, a b = b a^{-1} }$

Let $H \subseteq D_3$ be defined as:

$H = \gen b$

where $\gen b$ denotes the subgroup generated by $b$.
As $b$ has order $2$, it follows that:

$\gen b = \set {e, b}$

The left cosets of $H$ are:

$e H = \set {e, b} = b H = H$
$a H = \set {a, a b} = a b H$
$a^2 H = \set {a^2, a^2 b} = a^2 b H$

The right cosets of $H$ are:

$H e = \set {e, b} = H b = H$
$H a = \set {a, a^2 b} = H a^2 b$
$H a^2 = \set {a^2, a b} = H a b$

Let $G = \gen a$ be an infinite cyclic group. Let $s \in \Z_{>0}$ be a (strictly) positive integer. Let $H$ be the subgroup of $G$ defined as:

$H := \gen {a^s}$

Then a complete repetition-free list of the cosets of $H$ in $G$ is:

$S = \set {H, aH, a^2 H, \ldots, a^{s - 1} H}$

Also see

Results about cosets can be found here.
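The coset computations above can be replayed mechanically. The sketch below realises $D_3$ as the six permutations of three points (one possible concrete choice; the labels $e, b$ follow the presentation above) and recovers both the count of cosets and the fact that $H = \gen b$ is not normal, since its left and right coset partitions differ:

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p(q(i)) for permutations stored as tuples of images."""
    return tuple(p[q[i]] for i in range(len(p)))

# D3, realised as all 6 permutations of {0, 1, 2}
G = list(permutations(range(3)))
e = (0, 1, 2)
b = (1, 0, 2)                  # an element of order 2, playing the role of b
H = [e, b]                     # H = <b> = {e, b}

left  = {frozenset(compose(g, h) for h in H) for g in G}
right = {frozenset(compose(h, g) for h in H) for g in G}

print(len(left), len(right))   # 3 cosets on each side, as in the tables above
print(left == right)           # False: the left and right partitions differ
```

Swapping in a normal subgroup (e.g. the rotation subgroup of order 3) makes the last comparison come out True.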
[LON-CAPA-users] Hints for first LON-CAPA question Raymond Batchelor batchelo at sfu.ca Thu Jun 13 15:52:06 EDT 2013 See my revised xml which I've appended below: ----- Original Message ----- From: "Joseph Mingrone" <jrm at mathstat.dal.ca> To: lon-capa-users at mail.lon-capa.org Sent: Thursday, 13 June, 2013 12:04:44 Subject: [LON-CAPA-users] Hints for first LON-CAPA question Joseph Mingrone <jrm at mathstat.dal.ca> writes: Hello all; I'm creating a problem that I've included below. My questions are: 1. How can a student submit each part of the question separately? **** Use <part></part> tags. 2. In the table, I'm building the header with $row1 .= "<td>$i+1</td>", but the $i+1 isn't evaluated and 0+1, 1+1, etc. is shown. How can I get the intended output? **** Define a new variable within the "for loop". 3. How can \bar{x} be displayed? **** set display argument in tags <m display="mimetex"> 4. In the last radiobuttonresponse question, only one foil is showing up even though I've specified max=3. How can I make all three show up? *** a radiobuttonresponse can have ONLY one "true" foil. two of your foils were designated "unused" -- I changed that to "false" 5. This is my first attempt at a question and I have no sample code other than what's in the manual. If you have any other suggestions about style or best practices, please pass them along. Use the "colourfull" editor "EDIT" and use the small blue "?" links to find all sorts of useful info about everything. 
: -) Thanks, Joseph <problem> <parameter name="maxtries" id="11" type="int_pos" default="1" description="Maximum Number of Tries" /> <script type="loncapa/perl"> $n = &random(8,16,1); $mu_a = &random(4,16,.01); $sd_a = &random(1,5,.01); @a=&random_normal($n,55,$mu_a,$sd_a); $mu_b = &random(4,16,.01); $sd_b = &random(1,5,.01); @b=&random_normal($n,55,$mu_b,$sd_b); $table = '<table border=1>'; $row1 = '<tr><td></td>'; $row2 = '<tr><td>A</td>'; $row3 = '<tr><td>B</td>'; for ($i=0; $i<$n; $i++) { $I=$i+1; $row1 .= "<td>$I</td>"; $row2 .= "<td>&roundto($a[$i],2)</td>"; $row3 .= "<td>&roundto($b[$i],2)</td>"; } $row1 .= '</tr>'; $row2 .= '</tr>'; $row3 .= '</tr>'; $table .= $row1; $table .= $row2; $table .= "$row3</table>"; </script> <startouttext /> <p> Super Sneaker Company is evaluating two different materials, A and B, to be used to construct the soles of their new active shoe targeted to city high school students in Canada. While material B costs less than material A, the company suspects that mean wear for material B is greater than mean wear for material A. Two study designs were initially developed to test this suspicion. In both designs, Halifax was chosen as a representative city of the targeted market. In Study Design 1, 10 high school students were drawn at random from the Halifax School District database. After obtaining their shoe sizes, the company manufactured 10 pairs of shoes, each pair with one shoe having a sole constructed from material A and the other shoe, a sole constructed from material B. </p><endouttext /> <part> <startouttext /> The researcher in charge of this design asked the company to randomly assign material A to the right shoe or to the left shoe, for each of the ten pairs. Why? 
<endouttext /> <radiobuttonresponse direction="vertical" max="3" id="12" randomize="yes"> <foilgroup> <foil location="random" value='true' name="foil1"> <startouttext />Right shoe wear is generally different than left shoe wear.<endouttext /> </foil> <foil location="random" value='false' name="foil2"> <startouttext />Right shoe wear is always less than left shoe wear.<endouttext /> </foil> <foil location="random" value='false' name="foil3"> <startouttext />Right shoe wear is always greater than left shoe wear.<endouttext /> </foil> <foil location="random" value='false' name="foil4"> <startouttext />Left shoe wear is always less than right shoe wear.<endouttext /> </foil> <foil location="random" value='false' name="foil5"> <startouttext />Left shoe wear is always greater than right shoe wear.<endouttext /> </foil> </foilgroup> </radiobuttonresponse> </part> <part> <parameter name="maxtries" type="int_pos" description="Maximum Number of Tries" default="1" /> <startouttext /> <p> After 3 months, the amount of wear in each shoe was recorded in standardized units as follows: <parse>$table</parse> </p> The null hypothesis for the test is: <endouttext /> <radiobuttonresponse max="4" randomize="yes" direction="vertical"> <foilgroup> <foil location="random" value='true' name="foil1"> <startouttext /><m display="mimetex">$\mu_a - \mu_b = 0$</m><endouttext /> </foil> <foil location="random" value='true' name="foil2"> <startouttext /><m display="mimetex">$\mu_b - \mu_a = 0$</m><endouttext /> </foil> <foil location="random" value='false' name="foil3"> <startouttext /><m display="mimetex">$ \bar{x}_a < \bar{X}_b $</m><endouttext /> </foil> <foil location="random" value='false' name="foil4"> <startouttext /><m display="mimetex">$ \bar{x}_a - \bar{x}_b = 0 $</m><endouttext /> </foil> <foil location="random" value='false' name="foil5"> <startouttext /><m display="mimetex">$ \bar{x}_b - \bar{x}_a = 0 $</m><endouttext /> </foil> </foilgroup> </radiobuttonresponse> </part><part> 
<parameter name="maxtries" type="int_pos" description="Maximum Number of Tries" default="1" /> <startouttext /> The alternative hypothesis is: <endouttext /> <radiobuttonresponse max="4" randomize="yes" direction="vertical"> <foilgroup> <foil location="random" value='true' name="foil1"> <startouttext /><m display="mimetex">$\mu_a - \mu_b \ne 0$</m><endouttext /> </foil> <foil location="random" value='true' name="foil2"> <startouttext /><m display="mimetex">$\mu_b - \mu_a > 0$</m><endouttext /> </foil> <foil location="random" value='false' name="foil3"> <startouttext /><m display="mimetex">$\mu_b - \mu_a < 0$</m><endouttext /> </foil> <foil location="random" value='false' name="foil4"> <startouttext /><m display="mimetex">$\mu_a - \mu_b > 0$</m><endouttext /> </foil> <foil location="random" value='false' name="foil5"> <startouttext /><m display="mimetex">$ \bar{x} < $</m><endouttext /> </foil> <foil location="random" value='false' name="foil6"> <startouttext /><m display="mimetex">$ \bar{x}_a - \bar{x}_b < 0 $</m><endouttext /> </foil> <foil location="random" value='false' name="foil7"> <startouttext /><m display="mimetex">$ \bar{x}_b - \bar{x}_a < 0 $</m><endouttext /> </foil> </foilgroup> </radiobuttonresponse> </part><part> <parameter name="maxtries" type="int_pos" description="Maximum Number of Tries" default="3" /> <startouttext />To test the hypothesis that mean wear for material B is greater than mean wear for material A, calculate the test statistic.<endouttext /> <numericalresponse answer=""> <responseparam type="tolerance" default="5%" name="tol" description="Numerical Tolerance" /> <responseparam name="sig" type="int_range,0-16" default="0,15" description="Significant Figures" /> <textline readonly="no" /> </numericalresponse> </part><part> <startouttext />Which of the statistical tables should you use? 
You only have one try!<endouttext /> <radiobuttonresponse max="3" randomize="yes" direction="vertical"> <foilgroup> <foil name="t" value="true" location="random"> <startouttext />t-distribution<endouttext /> </foil> <foil name="z" value="false" location="random"> <startouttext />z-distribution<endouttext /> </foil> <foil name="chi" value="false" location="random"> <startouttext /><m display="mimetex">$\chi^2$</m><endouttext /> </foil> </foilgroup> </radiobuttonresponse> </part> </problem> More information about the LON-CAPA-users mailing list
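The test statistic requested in the numerical part of this problem is the paired t statistic on the differences B − A (one difference per matched pair of shoes). A sketch with hypothetical wear values follows; the actual problem generates its data with &random_normal, so these numbers are placeholders:

```python
import math

# Hypothetical wear measurements for n matched pairs (material A vs B)
a = [10.2, 11.1, 9.8, 10.5, 11.3, 10.0, 9.7, 10.9, 10.4, 11.0]
b = [10.9, 11.5, 10.1, 11.2, 11.9, 10.3, 10.2, 11.4, 10.8, 11.6]

d = [bi - ai for ai, bi in zip(a, b)]     # paired differences B - A
n = len(d)
mean_d = sum(d) / n
var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)   # sample variance
t = mean_d / math.sqrt(var_d / n)         # compare against t with n-1 df

print(round(t, 3))
```

This is the quantity a student would compare against the t-distribution table chosen in the final part.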
In economics, market concentration is a function of the number of firms and their respective shares of the total production (alternatively, total capacity or total reserves) in a market. Alternative terms are industry concentration and seller concentration. [1] Market concentration is related to industrial concentration, which concerns the distribution of production within an industry, and to the degree of competition; it is theorized to be positively related to the rate of profit in the industry, for example in the work of Joe S. Bain.

Metrics

Commonly used market concentration measures are the Herfindahl index (HHI, or simply H) and the concentration ratio (CR). [2] The Hannah-Kay (1971) index has the general form

\[\text{HK}_\alpha(x) = \begin{cases} \left(\sum s_i^\alpha\right)^{\frac {1}{\alpha-1}} & \text{if } \alpha > 0,\ \alpha \ne 1 \\ \prod s_i^{s_i} & \text{if } \alpha = 1 \end{cases}\]

Note that \(\prod s_i^{s_i} = \exp\left(\sum s_i \log s_i \right)\), which is the exponential index.

Uses

When antitrust agencies are evaluating a potential violation of competition laws, they will typically make a determination of the relevant market and attempt to measure market concentration within the relevant market.

Motivation

As an economic tool market concentration is useful because it reflects the degree of competition in the market. Tirole (1988, p. 247) notes that Bain's (1956) original concern with market concentration was based on an intuitive relationship between high concentration and collusion. There are game-theoretic models of market interaction (e.g. among oligopolists) that predict that an increase in market concentration will result in higher prices and lower consumer welfare even when collusion in the sense of cartelization (i.e. explicit collusion) is absent. Examples are Cournot oligopoly, and Bertrand oligopoly for differentiated products.
Empirical tests

Empirical studies that are designed to test the relationship between market concentration and prices are collectively known as price-concentration studies; see Weiss (1989). Typically, any study that claims to test the relationship between price and the level of market concentration is also (jointly, that is, simultaneously) testing whether the market definition (according to which market concentration is being calculated) is relevant; that is, whether the boundaries of each market are being drawn neither too narrowly nor too broadly, so that the defined "market" is meaningful from the point of view of the competitive interactions of the firms that it includes (or is made of).

Alternative definition

In economics, market concentration is a criterion that can be used to rank order various distributions of firms' shares of the total production (alternatively, total capacity or total reserves) in a market.

Further Examples

Section 1 of the Department of Justice and the Federal Trade Commission's Horizontal Merger Guidelines is entitled "Market Definition, Measurement and Concentration." The Herfindahl index is the measure of concentration that these Guidelines state will be used. A simple measure of market concentration is $1/N$, where $N$ is the number of firms in the market. This measure ignores the dispersion among the firms' shares: it is decreasing in the number of firms and independent of the degree of symmetry between them. It is practically useful only if a sample of firms' market shares is believed to be random, rather than determined by the firms' inherent characteristics. Any criterion that can be used to compare or rank distributions (e.g. probability distribution, frequency distribution or size distribution) can be used as a market concentration criterion. Examples are stochastic dominance and the Gini coefficient.
Curry and George (1981) enlist the following "alternative" measures of concentration:

(a) The mean of the first moment distribution (Niehans, 1958); Hannah and Kay (1977) call this an "absolute concentration" index: $\bar{X}_1 = \sum x_i^2 \big/ \sum x_i = \left(\sum x_i\right) H$

(b) The Rosenbluth (1961) index (also Hall and Tideman, 1967): $R = \frac{1}{2\sum i s_i - 1}$ where the symbol $i$ indicates the firm's rank position.

(c) The comprehensive concentration index (Horvath 1970): $CCI = s_1 + \sum_{i=2}^N s_i^2(2 - s_i)$ where $s_1$ is the share of the largest firm. The index is similar to $2H - \sum s_i^3$ except that greater weight is assigned to the share of the largest firm.

(d) The Pareto slope (Ijiri and Simon, 1971). If the Pareto distribution is plotted on double logarithmic scales, then the distribution function is linear, and its slope can be calculated if it is fitted to an observed size distribution.

(e) The Linda index (1976): $L=\frac 1 {N(N-1)}\sum_{i=1}^{N-1}Q_i$ where $Q_i$ is the ratio between the average share of the first $i$ firms and the average share of the remaining $N - i$ firms. This index is designed to measure the degree of inequality between values of the size variable accounted for by various sub-samples of firms. It is also intended to define the boundary between the oligopolists within an industry and other firms. It has been used by the European Union.

(f) The U index (Davies, 1980): $U = I^{*a}N^{-1}$ where $I^{*}$ is an accepted measure of inequality (in practice the coefficient of variation is suggested), $a$ is a constant or a parameter (to be estimated empirically) and $N$ is the number of firms. Davies (1979) suggests that a concentration index should in general depend on both $N$ and the inequality of firms' shares.

The "number of effective competitors" is the inverse of the Herfindahl index.
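Two of these alternative indices, the Rosenbluth index (b) and Horvath's comprehensive concentration index (c), are straightforward to compute; the following Python sketch uses illustrative function names and a hypothetical share vector, ranking firms in descending order of share so that rank 1 is the largest firm.

```python
def rosenbluth(shares):
    """Rosenbluth index R = 1 / (2 * sum(i * s_i) - 1), with firms ranked
    in descending order of share (rank i = 1 for the largest firm)."""
    ranked = sorted(shares, reverse=True)
    return 1.0 / (2.0 * sum(i * s for i, s in enumerate(ranked, start=1)) - 1.0)

def cci(shares):
    """Horvath's comprehensive concentration index
    CCI = s_1 + sum_{i>=2} s_i^2 * (2 - s_i)."""
    ranked = sorted(shares, reverse=True)
    return ranked[0] + sum(s * s * (2.0 - s) for s in ranked[1:])

shares = [0.5, 0.3, 0.2]   # hypothetical shares
print(rosenbluth(shares))  # 1 / (2 * 1.7 - 1)
print(cci(shares))         # 0.5 + 0.153 + 0.072
```

For $N$ equal firms the Rosenbluth index equals $1/N$, matching the simple concentration measure mentioned above.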
See also

References

^ Concentration. Glossary of Statistical Terms. Organisation for Economic Co-operation and Development.
^ J. Gregory Sidak, Evaluating Market Power Using Competitive Benchmark Prices Instead of the Hirschman-Herfindahl Index, 74 ANTITRUST L.J. 387, 387-388 (2007).
Bain, J. (1956). Barriers to New Competition. Cambridge, Mass.: Harvard Univ. Press.
Curry, B. and K. D. George (1983). "Industrial concentration: A survey". Journal of Industrial Economics 31(3): 203–55.
Shughart II, William F. (2008). "Industrial Concentration".
Tirole, J. (1988). The Theory of Industrial Organization. Cambridge, Mass.: MIT Press.
Weiss, L. W. (1989). Concentration and Price. Cambridge, Mass.: MIT Press.

External links

Department of Justice and Federal Trade Commission Horizontal Merger Guidelines
Kopotun K. A.

Ukr. Mat. Zh. - 2010. - 62, № 3. - pp. 369–386

In Part I of the paper, we have proved that, for every $α > 0$ and a continuous function $f$, which is either convex $(s = 0)$ or changes convexity at a finite collection $Y_s = \{y_i\}_{i=1}^s$ of points $y_i ∈ (-1, 1)$, $$\sup \left\{n^{\alpha}E^{(2)}_n(f,Y_s):\;n \geq N^{*}\right\} \leq c(\alpha,s) \sup \left\{n^{\alpha}E_n(f):\; n \geq 1 \right\},$$ where $E_n (f)$ and $E^{(2)}_n (f, Y_s)$ denote, respectively, the degrees of the best unconstrained and (co)convex approximations, and $c(α, s)$ is a constant depending only on $α$ and $s$. Moreover, it has been shown that $N^{∗}$ may be chosen to be 1 for $s = 0$ or $s = 1, α ≠ 4$, and that it must depend on $Y_s$ and $α$ for $s = 1, α = 4$ or $s ≥ 2$. In Part II of the paper, we show that the more general inequality $$\sup \left\{n^{\alpha}E^{(2)}_n(f,Y_s):\;n \geq N^{*}\right\} \leq c(\alpha, N, s) \sup \left\{n^{\alpha}E_n(f):\; n \geq N \right\}$$ is valid, where, depending on the triple $(α,N,s)$, the number $N^{∗}$ may depend on $α, N, Y_s$, and $f$, or be independent of these parameters.
Since $k[x,y]$ is a Unique Factorization Domain, the ideal $(x^m-y^n)$ is prime there if and only if $P=x^m-y^n$ is irreducible in $k[x,y]$. I offer two proofs of irreducibility of $x^m-y^n$ when $\gcd(m,n)=1$, both showing that it’s irreducible as an element of $k(x)[y]$. The first is very advanced, the second is elementary, but my argument there is not as simple as it ought to be. The first argument uses the theory of the Newton Polygon. Consider the discrete valuation $v_x$ on $k(x)$, which makes $v_x(x)=1$ and $v_x(f)=0$ if $x$ does not divide the polynomial $f\in k[x]$. The polygon of $P=x^m-y^n$ has one vertex for each monomial here, one at $(0,m)$ for $x^m$, the other at $(n,0)$ for $y^n$. There is only the one segment, between these two vertices, and since $\gcd(m,n)=1$ it passes through no other integral points. But a factorization $P=P_1P_2$ that’s nontrivial, i.e. with both $y$-degrees positive, would give two segments, of lesser width than $n$ and with the same slope $-m/n$, and with both coordinates of the endpoints integral. Not possible, since the original segment passed through no integral points. The other proof is elementary but much longer than it should be, and proceeds through several steps. First, note that if $f(t)\in F[t]$ has degree $d$, where $F$ is a field, and if $\lambda\ne0$ in $F$, then $f(t)$ is irreducible if and only if $f(\lambda t)$ (and similarly $f(\lambda t)/\lambda^d\,$) is irreducible in $F[t]$. Second, consider the polynomial $x^r-y^s\in k[x,y]$, where $r<s$. Say that $s=qr+d$ with $d<r$, by Euclidean division.
Now consider the equivalences:
\begin{align}
x^r-y^s\text{ irred in }k[x,y] &\Longleftrightarrow x^r-y^s\text{ irred in }k(y)[x]\\
&\Longleftrightarrow \left[(xy^q)^r-y^{rq+d}\right]\big/y^{qr}\text{ irred in }k(y)[x]\\
&\Longleftrightarrow x^r-y^d\text{ irred in }k(y)[x]\\
&\Longleftrightarrow x^r-y^d\text{ irred in }k[x,y]
\end{align}
You see the strategy now: perform Euclidean division repeatedly on the exponents, just as in the Euclidean algorithm for the greatest common divisor, and finally get to a point where one of the exponents is $1$. But you know that $x^\ell-y$ and $y^\ell-x$ are irreducible in $k[x,y]$, which gives the result.
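Since the exponent reductions are exactly the steps of the Euclidean algorithm, one can trace them mechanically. The following Python sketch (the helper name is mine) prints the chain of exponent pairs, terminating at a base case $x^\ell - y$ or $y^\ell - x$ whenever $\gcd(m,n)=1$:

```python
from math import gcd

def reduction_chain(m, n):
    """Trace the exponent reduction x^r - y^s <--> x^r - y^(s mod r)
    used in the equivalences above; assumes gcd(m, n) = 1, which
    guarantees no remainder hits 0 before an exponent reaches 1."""
    assert gcd(m, n) == 1
    r, s = m, n
    chain = [(r, s)]
    while r != 1 and s != 1:
        # Euclidean division on the larger exponent, as in the argument above
        if r < s:
            s %= r
        else:
            r %= s
        chain.append((r, s))
    return chain

# x^8 - y^5 reduces down to the base case x - y^2 (both exponents coprime):
print(reduction_chain(8, 5))   # [(8, 5), (3, 5), (3, 2), (1, 2)]
```

Each pair in the chain corresponds to a polynomial whose irreducibility is equivalent to that of the original $x^m - y^n$.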
Search Now showing items 1-1 of 1 Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
I can think of several additions to your list which don't seem to be represented yet. 1. Semismall resolutions This first example is rather general, but afterward I will discuss how it is used in Springer theory. First, suppose that $f:X \to Y$ is a proper map of stratified irreducible complex algebraic varieties with $X$ rationally smooth such that, if $Y = \cup Y_n$ is the stratification of $Y,$ the restriction of $f$ to $f^{-1}(Y_n) \to Y_n$ is topologically locally trivial (there's a theorem (not sure who it's by) that says we can always find a stratification such that this condition holds). Furthermore, we say that $f$ is semi-small if for each stratum $Y_n,$ the dimension of the fiber of $f^{-1}(Y_n) \to Y_n$ is less than or equal to half the codimension of $Y_n$ inside $Y.$ This condition is important largely because of the following theorem: Fact. The pushforward of the constant perverse sheaf under a semismall map is still perverse. Furthermore, we say that a stratum $Y_n$ is relevant whenever equality holds above, i.e., twice the fiber dimension is equal to the codimension. These will be important soon, as they will be the subvarieties appearing in the decomposition theorem. By the assumptions we made on $f:X \to Y,$ we have a monodromy action of $\pi_1(Y_n)$ on the top dimensional cohomology group of the fiber of $f^{-1}(Y_n) \to Y_n.$ This corresponds to a local system $L_{Y_n},$ which we can decompose into irreducible components: $L_{Y_n} = \oplus L_{\rho}^{d_{\rho}}$ where $\rho$ runs over the set of irreducible representations of $\pi_1(Y_n)$ and the $d_{\rho}$ are non-negative integers. We then say that a pair $(Y_n, \rho)$ is relevant iff $Y_n$ is a relevant stratum and $d_{\rho} \neq 0$ (i.e., $\rho$ appears in the decomposition of the representation of $\pi_1(Y_n)$). Now we can finally state a theorem, which I believe is due to Borho and Macpherson, but perhaps others deserve credit as well.
Keep the initial assumptions on $f:X \to Y,$ but now assume in addition that $X$ is smooth. Then a little work plus the decomposition theorem establish the following. Theorem. $f_{\ast}IC_X = \oplus IC_{Z_n}(L_{\rho})^{d_{\rho}}$ where $Z_n$ is the closure of $Y_n$ and the sum ranges over all relevant pairs $(Y_n, \rho).$ This theorem is used in Springer theory (and perhaps other places as well). In this case, we want $f:X \to Y$ to be the Springer resolution. That is, $Y = \mathcal{N},$ the nilpotent cone of a Lie algebra $g$ associated to a reductive group $G$, and $X = \widetilde{\mathcal{N}},$ the variety of pairs $(x,b)$ where $x \in \mathcal{N},$ $b$ is a Borel subalgebra, and $x \in b.$ If we stratify $\mathcal{N}$ using the $Ad(G)$-orbits (of which there are finitely many), then it turns out that the Springer resolution is semismall and every stratum is relevant. It can furthermore be shown that the $L_{\rho}$ appearing in the theorem above correspond to the irreducible components of the regular representation of the Weyl group of $G.$ This can be seen as follows. There's an analog of the Springer resolution $\pi:\widetilde{g} \to g$ defined as above but with $g$ in place of $\mathcal{N}.$ By proper base change, the pushforward of the constant sheaf on $\widetilde{\mathcal{N}}$ coincides with the pull-back (under the inclusion $\mathcal{N} \to g$) of the pushforward of the constant sheaf on $\widetilde{g}.$ Finally, since $\pi$ is what's known as a small map, the pushforward of the constant sheaf on $\widetilde{g}$ is equal to $IC_g(L)$ where $L$ is the local system on the dense open subset $g^{rs}$ of regular semisimple elements obtained from the $W$-torsor $\widetilde{g^{rs}} \to g^{rs}.$ From all this we obtain that the top-dimensional cohomology groups of Springer fibers produce all irreducible representations of $W.$ 2.
Geometric Satake In a different direction, let me mention how the decomposition theorem is used in the geometric Satake correspondence (see the Mirkovic-Vilonen paper or the Ginzburg paper on this topic). Geometric Satake is concerned with proving a tensor equivalence between the category of spherical perverse sheaves on the affine Grassmannian (i.e., perverse sheaves which are direct sums of IC sheaves) associated to a reductive group $G$ and the category of representations of the Langlands dual of $G.$ This is done through the Tannakian formalism, which in particular requires a tensor structure on spherical perverse sheaves. This tensor structure comes from a convolution product on perverse sheaves, meaning that it comes from a pull-back followed by a tensor product followed by a pushforward. In order to ensure that this operation takes spherical perverse sheaves to spherical perverse sheaves, we need the decomposition theorem. Edit: According to the comments below, the decomposition theorem isn't actually needed to define the convolution product. Comment on Kazhdan-Lusztig I'm going to assume that Gil Kalai is referring to the work of Lusztig on Kazhdan-Lusztig polynomials and the Kazhdan-Lusztig conjecture (mentioned in his answer). In particular, they have a paper, [KL] Schubert varieties and Poincaré duality, D. Kazhdan, G. Lusztig, Proc. Symp. Pure Math, 1980 in which the coefficients of the Kazhdan-Lusztig polynomials are related to the dimensions of the intersection cohomology of Schubert varieties (which are not generally smooth, hence the appearance of intersection cohomology). At this point, the Decomposition Theorem had not been proved and was not used in [KL]. However, the proof of the Decomposition Theorem heavily uses Deligne's Purity Theorem, which also had not been proved at the time of [KL]. Kazhdan and Lusztig ended up giving a proof of the Purity Theorem in the special case they were considering (i.e., a proof for Schubert varieties). 
Given this, it's not too surprising that a few years later Macpherson and Gelfand gave a proof of the aforementioned result of [KL] using the decomposition theorem and the result explained at the beginning of this answer. It's my understanding that Lusztig has another paper from the mid-eighties on finite Chevalley groups which uses the Kazhdan-Lusztig conjecture (proved in 1981) and the full machinery of perverse sheaves and the Decomposition Theorem (I've never looked at it though). Additionally, Lusztig's work in the late seventies and early eighties on Springer theory certainly hints at the Decomposition Theorem methods eventually used by Borho and Macpherson (some of his conjectures are proved by Borho and Macpherson, for example). A wonderful history and reference guide to much of this can be found in this article by Steve Kleiman.
Keywords Sylvester equation, Jordan normal form, Schur decomposition Abstract The method for solving the Sylvester equation $AX-XB=C$ in complex matrix case, when $\sigma(A)\cap\sigma(B)\neq \emptyset$, by using Jordan normal form is given. Also, the approach via Schur decomposition is presented. Recommended Citation Dinčić, Nebojša Č..(2019),"Solving the Sylvester Equation AX-XB=C when $\sigma(A)\cap\sigma(B)\neq\emptyset$", Electronic Journal of Linear Algebra,Volume 35, pp. 1-23. DOI: https://doi.org/10.13001/1081-3810.3698
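For the classical well-posed case $\sigma(A)\cap\sigma(B)=\emptyset$ (the complement of the case studied in the paper), the equation can be solved by vectorization: $AX - XB = C$ becomes $(I \otimes A - B^{\top} \otimes I)\,\mathrm{vec}(X) = \mathrm{vec}(C)$ with column-stacking $\mathrm{vec}$. The following pure-Python sketch is illustrative only (it is not the paper's Jordan- or Schur-based method) and uses exact rational arithmetic:

```python
from fractions import Fraction

def kron(A, B):
    """Kronecker product of matrices given as nested lists."""
    p, q = len(B), len(B[0])
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(q)]
            for i in range(len(A)) for k in range(p)]

def identity(n):
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

def solve_linear(M, b):
    """Gauss-Jordan elimination with exact rational arithmetic."""
    n = len(M)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def solve_sylvester(A, B, C):
    """Solve AX - XB = C via vec(AX - XB) = (I (x) A - B^T (x) I) vec(X);
    the system is nonsingular exactly when sigma(A) and sigma(B) are disjoint."""
    m, n = len(A), len(B)
    A = [[Fraction(x) for x in row] for row in A]
    Bt = [[Fraction(B[j][i]) for j in range(n)] for i in range(n)]
    C = [[Fraction(x) for x in row] for row in C]
    M = [[a - b for a, b in zip(ra, rb)]
         for ra, rb in zip(kron(identity(n), A), kron(Bt, identity(m)))]
    vecC = [C[i][j] for j in range(n) for i in range(m)]   # column stacking
    v = solve_linear(M, vecC)
    return [[v[j * m + i] for j in range(n)] for i in range(m)]

# Disjoint spectra: sigma(A) = {2, 3}, sigma(B) = {0, 0}
print(solve_sylvester([[2, 0], [0, 3]], [[0, 1], [0, 0]], [[2, 3], [9, 9]]))
```

When the spectra do intersect, the Kronecker matrix above is singular, which is exactly why the paper's more delicate Jordan/Schur analysis is needed.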
Category:Intervals

Let $\struct {S, \preccurlyeq}$ be an ordered set. Let $a, b \in S$. The intervals between $a$ and $b$ are defined as follows:

The open interval between $a$ and $b$ is the set:
$\openint a b := a^\succ \cap b^\prec = \set {s \in S: \paren {a \prec s} \land \paren {s \prec b} }$
where:
$a^\succ$ denotes the strict upper closure of $a$
$b^\prec$ denotes the strict lower closure of $b$.

The left half-open interval between $a$ and $b$ is the set:
$\hointl a b := a^\succ \cap b^\preccurlyeq = \set {s \in S: \paren {a \prec s} \land \paren {s \preccurlyeq b} }$
where:
$a^\succ$ denotes the strict upper closure of $a$
$b^\preccurlyeq$ denotes the lower closure of $b$.

The right half-open interval between $a$ and $b$ is the set:
$\hointr a b := a^\succcurlyeq \cap b^\prec = \set {s \in S: \paren {a \preccurlyeq s} \land \paren {s \prec b} }$
where:
$a^\succcurlyeq$ denotes the upper closure of $a$
$b^\prec$ denotes the strict lower closure of $b$.

The closed interval between $a$ and $b$ is the set:
$\closedint a b := a^\succcurlyeq \cap b^\preccurlyeq = \set {s \in S: \paren {a \preccurlyeq s} \land \paren {s \preccurlyeq b} }$
where:
$a^\succcurlyeq$ denotes the upper closure of $a$
$b^\preccurlyeq$ denotes the lower closure of $b$.

Subcategories

This category has the following 2 subcategories, out of 2 total.

Pages in category "Intervals"

The following 2 pages are in this category, out of 2 total.
Search Now showing items 1-1 of 1 Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11) Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2210-2258, 2019. Abstract In an optimal design problem, we are given a set of linear experiments $v_1,…,v_n\in \mathbb{R}^d$ and $k \geq d$, and our goal is to select a set or a multiset $S \subseteq [n]$ of size $k$ such that $\Phi((\sum_{i \in S} v_i v_i^\top )^{-1})$ is minimized. When $\Phi(M) = Determinant(M)^{1/d}$, the problem is known as the D-optimal design problem, and when $\Phi(M) = Trace(M)$, it is known as the A-optimal design problem. One of the most common heuristics used in practice to solve these problems is the local search heuristic, also known as the Fedorov’s exchange method (Fedorov, 1972). This is due to its simplicity and its empirical performance (Cook and Nachtrheim, 1980; Miller and Nguyen, 1994; Atkinson et al., 2007). However, despite its wide usage no theoretical bound has been proven for this algorithm. In this paper, we bridge this gap and prove approximation guarantees for the local search algorithms for D-optimal design and A-optimal design problems. We show that the local search algorithms are asymptotically optimal when $\frac{k}{d}$ is large. In addition to this, we also prove similar approximation guarantees for the greedy algorithms for D-optimal design and A-optimal design problems when $\frac{k}{d}$ is large. 
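A minimal sketch of the local search (Fedorov exchange) heuristic for the D-optimal objective, here in dimension $d = 2$ with a toy set of experiment vectors; all names and data are illustrative, ties are not handled cleverly, and no approximation guarantee of the paper is implemented:

```python
def info_matrix(vectors):
    """Sum of outer products v v^T for 2-dimensional design points."""
    m = [[0.0, 0.0], [0.0, 0.0]]
    for x, y in vectors:
        m[0][0] += x * x
        m[0][1] += x * y
        m[1][0] += y * x
        m[1][1] += y * y
    return m

def d_objective(idx, V):
    """Determinant of the information matrix of the selected experiments."""
    m = info_matrix([V[i] for i in idx])
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def fedorov_local_search(V, start):
    """Swap one chosen experiment for one unchosen experiment while the
    determinant strictly improves (local search / Fedorov exchange, sketch)."""
    S = set(start)
    improved = True
    while improved:
        improved = False
        for i in list(S):
            for j in range(len(V)):
                if j in S:
                    continue
                cand = (S - {i}) | {j}
                if d_objective(cand, V) > d_objective(S, V) + 1e-12:
                    S, improved = cand, True
                    break
            if improved:
                break
    return S

V = [(1, 0), (0, 1), (1, 1), (2, 0)]   # toy experiment vectors (illustrative)
S = fedorov_local_search(V, start=[0, 1])
print(sorted(S), d_objective(S, V))
```

Starting from the design $\{(1,0),(0,1)\}$ with determinant 1, a single exchange reaches $\{(2,0),(0,1)\}$ with determinant 4, after which no single swap improves the objective.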
Madan, V., Singh, M., Tantipongpipat, U. & Xie, W. (2019). Combinatorial Algorithms for Optimal Design. Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:2210-2258.
The needle deflection or the number shown on the digital display of a spectrophotometer is proportional to the transmittance of the solution. How do errors in transmittance readings affect the accuracy of solution concentration values? The concentration as a function of the transmittance is given by the equation \[c(T) = - \dfrac{\log T}{ \epsilon \,b}\] Let \(c_o\) be the true concentration and \(T_o\) the corresponding transmittance, i.e. \(c_o = c(T_o)\). Suppose that the actual transmittance measured is \(T_o + \Delta T\), corresponding to the concentration \[c_o + \Delta c = c(T_o + \Delta T).\] The error in the transmittance is \(\Delta T\) and that of the concentration is \(\Delta c\). By using a Taylor series expansion, and discarding all terms in \(\Delta T\) higher than the first power, it is possible to show that: \[\Delta c = - \dfrac{\Delta T}{ 2.303\, \epsilon \,b\,T}\] Dividing the second equation by the first gives us: \[ \dfrac{\Delta c}{c} = \dfrac{\Delta T}{ 2.303\, T \log T} = \dfrac{\Delta T}{ T \ln T}\] Values of \(-(T \ln T)^{-1}\) as a function of \(T\) or \(A\) \((A = -\log T)\) are tabulated below. Below the tabulation one finds a plot of \(-(T \ln T)^{-1}\) versus \(T\). The relative error in the concentration, for a given \(\Delta T\), has its smallest value when \(T = 1/e = 0.368\), or when \(A = 0.434\). The minimum is not sharp, and good results can be expected in a transmittance range from 0.2 to 0.6, or an absorbance range from 0.7 to 0.2. An inspection of the graph below indicates that transmittance values of 0.1 and 0.8 are the outside limits between which one can expect to obtain reasonably accurate results. These transmittance values correspond to an absorbance range of 0.1 to 1.0 absorbance units. This is the rationale for limiting your calibration curve to that absorbance range.
T       -(T ln T)^{-1}   A
0.010      21.71         2.00
0.050       6.68         1.30
0.100       4.34         1.00
0.150       3.51         0.824
0.200       3.11         0.699
0.250       2.89         0.602
0.300       2.77         0.523
0.350       2.72         0.456
0.368       2.718        0.434
0.400       2.73         0.398
0.450       2.78         0.347
0.500       2.89         0.301
0.550       3.04         0.260
0.600       3.26         0.222
0.650       3.57         0.187
0.700       4.01         0.155
0.750       4.63         0.125
0.800       5.60         0.097
0.850       7.24         0.071
0.900      10.55         0.046
0.950      20.52         0.022
0.990     100.50         0.004

Graph of -(T ln T)^{-1} vs. T

Contributors

Ulrich de la Camp and Oliver Seely (California State University, Dominguez Hills).
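The tabulated factor \(-(T \ln T)^{-1}\) and the location of its minimum at \(T = 1/e\) are easy to check numerically; a short Python sketch:

```python
import math

def relative_error_factor(T):
    """-(T ln T)^-1: the multiplier converting a transmittance error dT
    into a relative concentration error dc/c, as tabulated above."""
    return -1.0 / (T * math.log(T))

# Scan the same transmittance range as the table (0.010 to 0.990)
Ts = [i / 1000 for i in range(10, 991)]
best = min(Ts, key=relative_error_factor)
print(best)                          # close to 1/e = 0.3679
print(relative_error_factor(best))   # close to e = 2.718, as in the table
```

At \(T = 1/e\) the factor equals \(e \approx 2.718\), matching the table entry at \(T = 0.368\); at \(T = 0.2\) it is about 3.11, also in agreement.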
Research output: Contribution to journal › Article

Original language: English. Article number: 19. Number of pages: 46. Journal: JHEP, Volume 2014, Issue 03. Early online date: 4 Mar 2014. Publication status: E-pub ahead of print - 4 Mar 2014.

Abstract

We reformulate eleven-dimensional supergravity, including fermions, in terms of generalised geometry, for spacetimes that are warped products of Minkowski space with a $d$-dimensional manifold $M$ with $d\leq7$. The reformulation has an $E_{d(d)} \times \mathbb{R}^+$ structure group and a local $\tilde{H}_d$ symmetry, where $\tilde{H}_d$ is the double cover of the maximally compact subgroup of $E_{d(d)}$. The bosonic degrees of freedom unify into a generalised metric, and, defining the generalised analogue $D$ of the Levi-Civita connection, one finds that the corresponding equations of motion are the vanishing of the generalised Ricci tensor. To leading order, we show that the fermionic equations of motion, action and supersymmetry variations can all be written in terms of $D$. Although we will not give the detailed decompositions, this reformulation is equally applicable to type IIA or IIB supergravity restricted to a $(d-1)$-dimensional manifold. For completeness we give explicit expressions in terms of $\tilde{H}_4=\mathit{Spin}(5)$ and $\tilde{H}_7=\mathit{SU}(8)$ representations for $d=4$ and $d=7$.
Denote $[n] \triangleq \{1,2,\ldots,n\}$. Assume we would like to have a data structure $S$ which kinda works as a dictionary from $[k]$ to $[v]$, and supports add/remove/update/query functionality, without any dynamic memory allocation (everything has to be pre-allocated). In general, one cannot do better than using $\Omega(k\log\frac{v}{k})$ bits in a succinct implementation or $k\log v$ bits in a simple array/hash table. But if we assume that no more than $m$ items may be added to $S$ (assume you may ignore any additional add operation until some item is removed), we can improve the memory requirements, especially if $m \ll k$. Without getting into a succinct implementation (is such known (one which takes $m$ into consideration)?), one may simply allocate an array of $c\cdot m$ (where $c$ is a space/time performance parameter) linked lists, and whenever key $x$ is added to the structure, place it in list $h(x)$ for some function $h:[k]\to [cm]$. If $h$ spreads the actual keys (about) evenly, then the structure may still perform operations in $O(1)$ time for constant $c$ (as $\frac{1}{c}$ is the expected list length), while reducing the space requirement to $O(m(c+\log (vk)))$ (this can be done without any need for dynamic allocation). I'm currently working on a succinct implementation of such a structure for some application, and was wondering if there is a name for it.
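A minimal Python sketch of the structure described above: an array of $c\cdot m$ bucket heads plus a pre-allocated pool of $m$ nodes threaded through a free list, so no allocation happens after construction. All names are mine, and Python's built-in `hash` stands in for the spreading function $h$:

```python
class FixedCapacityMap:
    """Hash map with all storage pre-allocated: c*m bucket heads plus a
    pool of m nodes managed through a free list (indices, not pointers)."""

    def __init__(self, m, c=2):
        self.buckets = [-1] * (c * m)           # head node index per bucket, -1 = empty
        self.key = [None] * m
        self.val = [None] * m
        self.nxt = list(range(1, m)) + [-1]     # free list threaded through nxt
        self.free = 0                           # index of first free node

    def _bucket(self, k):
        return hash(k) % len(self.buckets)

    def add(self, k, v):
        b = self._bucket(k)
        i = self.buckets[b]
        while i != -1:                          # update in place if present
            if self.key[i] == k:
                self.val[i] = v
                return True
            i = self.nxt[i]
        if self.free == -1:
            return False                        # capacity m reached: ignore the add
        i, self.free = self.free, self.nxt[self.free]
        self.key[i], self.val[i] = k, v
        self.nxt[i] = self.buckets[b]
        self.buckets[b] = i
        return True

    def query(self, k):
        i = self.buckets[self._bucket(k)]
        while i != -1:
            if self.key[i] == k:
                return self.val[i]
            i = self.nxt[i]
        return None

    def remove(self, k):
        b = self._bucket(k)
        prev, i = -1, self.buckets[b]
        while i != -1:
            if self.key[i] == k:
                if prev == -1:
                    self.buckets[b] = self.nxt[i]
                else:
                    self.nxt[prev] = self.nxt[i]
                self.nxt[i], self.free = self.free, i   # node back to free list
                return True
            prev, i = i, self.nxt[i]
        return False
```

With integer keys and a well-spreading $h$, expected chain length is $1/c$, so add/query/remove run in expected $O(1)$ for constant $c$, matching the analysis above.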
In their proof of the celebrated Kadison-Singer conjecture, Marcus, Spielman and Srivastava exploited so-called interlacing families, which were originally defined for their work on Ramanujan graphs. And I have a question on a variant of their Lemma 4.2 about common interlacing stated in http://arxiv.org/pdf/1304.4132v2.pdf. I may need to explain some terminology before presenting the lemma. Suppose we have a set of polynomials $f_1,\cdots,f_k$ where each polynomial has degree $n$, each has a positive leading coefficient, and each has $n$ real roots. Let $\beta_{i,j}$ be the $j^\mathrm{th}$ smallest root of $f_i$. Then we say these polynomials $f_1,\cdots,f_k$ have a common interlacing when there are numbers $\alpha_0 \leq \alpha_1 \leq \cdots\leq\alpha_n$ so that $\beta_{i,j} \in [\alpha_{j−1}, \alpha_j]$ for all $i$ and $j$. In other words, degree-$n$ polynomials $f_1,\cdots,f_k$ have a common interlacing when there are $n$ non-overlapping regions in $x$ so that the $i^\mathrm{th}$ root of each polynomial is located in the $i^\mathrm{th}$ region. For example, $f_1 = (x+10)(x-1)(x-10)$ and $f_2=(x+11)(x-2)(x-11)$ have a common interlacing. The smallest roots are $\{-10,-11\}$, the second smallest roots are $\{1,2\}$ and the largest roots are $\{10,11\}$. And these three sets of numbers can be placed in three non-overlapping regions. $f_1 = (x + 5)(x − 9)(x − 10)$ and $f_2=(x + 6)(x − 1)(x − 8)$ don't have a common interlacing. The smallest roots are $\{-5,-6\}$, the second smallest roots are $\{1,9\}$ and the largest roots are $\{8,10\}$. The last two sets of numbers cannot be placed in two non-overlapping regions. Lemma 4.2 in http://arxiv.org/pdf/1304.4132v2.pdf: Let $f_1,\cdots,f_k$ be polynomials of the same degree $n$ that are real-rooted and have positive leading coefficients.
Define \begin{equation*} f_\emptyset = \sum_{i=1}^k f_i. \end{equation*} If $f_1,\cdots,f_k$ have a common interlacing, then there exists an $i$ so that the largest root of $f_i$ is at most the largest root of $f_\emptyset$. The proof of Lemma 4.2 is simple. (You may try another simple proof by Dustin G. Mixon.) The paper says "The conclusion of the lemma also holds for the $k^\mathrm{th}$ largest root by a similar argument." My question is: I'm guessing that when they say the $k^\mathrm{th}$ largest root in the above statement, $k$ means any number between 1 and $n$, not the number of polynomials $k$ in the definition of common interlacing, right? The original proof about the largest root of $f_\emptyset$ proceeds with the fact that each $f_i$ has a positive leading coefficient and each of them is positive for sufficiently large $x$ which is larger than each of the largest roots of $f_1,\cdots,f_k$. I'm not sure how we can prove it for the $\ell^\mathrm{th}$ largest root of $f_\emptyset$ because we cannot assume this kind of fact.
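Lemma 4.2 can be sanity-checked numerically on the example polynomials from the question; the sketch below (plain bisection, illustrative only) locates the largest root of $f_\emptyset = f_1 + f_2$ and confirms it is at least the largest root of $f_1$:

```python
# f1 = (x+10)(x-1)(x-10) and f2 = (x+11)(x-2)(x-11) have a common interlacing.
def f1(x): return (x + 10) * (x - 1) * (x - 10)
def f2(x): return (x + 11) * (x - 2) * (x - 11)
def f_sum(x): return f1(x) + f2(x)

def bisect(f, lo, hi, iters=60):
    """Find a root of f in [lo, hi], assuming f(lo) < 0 < f(hi)."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# The largest roots of f1 and f2 are 10 and 11, and f_sum(10) < 0 < f_sum(11),
# so the largest root of f_sum lies in (10, 11).
r = bisect(f_sum, 10.0, 11.0)
# Lemma 4.2: some f_i (here f1, with largest root 10) has largest root <= r.
print(r)
```

Beyond $x = 11$ both $f_1$ and $f_2$ are positive, so the sign change in $(10, 11)$ is indeed the largest root of $f_\emptyset$, consistent with the lemma.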
For many operators, their adjoint can be expressed as a function of other known operators, for example $$\hat{T}_a^\dagger = \hat{T}_{-a} \\ \hat{p}_x^\dagger = \hat{p}_x$$ where $\hat{T}_a \psi (x) = \psi (x+a)$ and $\hat{p}_x \psi(x)= -i\hbar \frac{\partial \psi(x)}{\partial x}$. But if we consider the operator $\hat{K}\psi (x) = \psi^* (x)$ (here $^*$ denotes the complex conjugate) then, by definition, $$ \langle \phi |\hat{K}^\dagger |\psi \rangle = \langle \psi |\hat{K} |\phi \rangle^* = \langle \psi |\phi^* \rangle^* = \langle \psi^* |\phi \rangle $$ Can it be expressed as $\langle \phi |\hat{A}|\psi \rangle$ for some operator $\hat{A}$? If not, why? Also, what does this result imply? For example, is $\hat{K}$ Hermitian? $$ \langle \phi |\hat{K}^\dagger |\psi \rangle = \langle \phi |\hat{K} |\psi \rangle \\ \therefore \langle \psi^* |\phi \rangle = \langle \phi |\psi^* \rangle$$ So it is Hermitian only for certain wavefunctions?
Algebra and Algebraic Geometry Seminar Spring 2018 The seminar meets on Fridays at 2:25 pm in room B235. Algebra and Algebraic Geometry Mailing List Please join the AGS Mailing List to hear about upcoming seminars, lunches, and other algebraic geometry events in the department (it is possible you must be on a math department computer to use this link). Abstracts Tasos Moulinos Derived Azumaya Algebras and Twisted K-theory Topological K-theory of dg-categories is a localizing invariant of dg-categories over [math] \mathbb{C} [/math] taking values in the [math] \infty [/math]-category of [math] KU [/math]-modules. In this talk I describe a relative version of this construction; namely, for [math]X[/math] a quasi-compact, quasi-separated [math] \mathbb{C} [/math]-scheme I construct a functor valued in the [math] \infty [/math]-category of sheaves of spectra on [math] X(\mathbb{C}) [/math], the complex points of [math]X[/math]. For inputs of the form [math]\operatorname{Perf}(X, A)[/math] where [math]A[/math] is an Azumaya algebra over [math]X[/math], I characterize the values of this functor in terms of the twisted topological K-theory of [math] X(\mathbb{C}) [/math]. From this I deduce a certain decomposition, for [math] X [/math] a finite CW-complex equipped with a bundle [math] P [/math] of projective spaces over [math] X [/math], of [math] KU(P) [/math] in terms of the twisted topological K-theory of [math] X [/math]; this is a topological analogue of a result of Quillen’s on the algebraic K-theory of Severi-Brauer schemes. Roman Fedorov A conjecture of Grothendieck and Serre on principal bundles in mixed characteristic Let G be a reductive group scheme over a regular local ring R. An old conjecture of Grothendieck and Serre predicts that a principal G-bundle over R is trivial if it is trivial over the fraction field of R.
The conjecture has recently been proved in the "geometric" case, that is, when R contains a field. In the remaining case, the difficulty comes from the fact that the situation is more rigid, so that a certain general position argument does not go through. I will discuss this difficulty and a way to circumvent it to obtain some partial results. Juliette Bruce Asymptotic Syzygies in the Semi-Ample Setting In recent years numerous conjectures have been made describing the asymptotic Betti numbers of a projective variety as the embedding line bundle becomes more ample. I will discuss recent work attempting to generalize these conjectures to the case when the embedding line bundle becomes more semi-ample. (Recall a line bundle is semi-ample if a sufficiently large multiple is base point free.) In particular, I will discuss how the monomial methods of Ein, Erman, and Lazarsfeld used to prove non-vanishing results on projective space can be extended to prove non-vanishing results for products of projective space. Andrei Caldararu Computing a categorical Gromov-Witten invariant In his 2005 paper "The Gromov-Witten potential associated to a TCFT" Kevin Costello described a procedure for recovering an analogue of the Gromov-Witten potential directly out of a cyclic A-infinity algebra or category. Applying his construction to the derived category of sheaves of a complex projective variety provides a definition of higher genus B-model Gromov-Witten invariants, independent of the BCOV formalism. This has several advantages. Due to the categorical invariance of these invariants, categorical mirror symmetry automatically implies classical mirror symmetry to all genera. Also, the construction can be applied to other categories, like categories of matrix factorizations, giving a direct definition of FJRW invariants, for example. In my talk I shall describe the details of the computation (joint with Junwu Tu) of the invariant, at g=1, n=1, for elliptic curves.
The result agrees with the predictions of mirror symmetry, matching classical calculations of Dijkgraaf. It is the first non-trivial computation of a categorical Gromov-Witten invariant. Aron Heleodoro Normally ordered tensor product of Tate objects and decomposition of higher adeles In this talk I will introduce the different tensor products that exist on Tate objects over vector spaces (or, more generally, coherent sheaves on a given scheme). As an application, I will explain how these can be used to describe higher adeles on an n-dimensional smooth scheme. Both Tate objects and higher adeles will be introduced in the talk. (This is based on joint work with Braunling, Groechenig and Wolfson.) Moisés Herradón Cueto Local type of difference equations The theory of algebraic differential equations on the affine line is very well-understood. In particular, there is a well-defined notion of restricting a D-module to a formal neighborhood of a point, and these restrictions are completely described by two vector spaces, called vanishing cycles and nearby cycles, and some maps between them. We give an analogous notion of "restriction to a formal disk" for difference equations that satisfies several desirable properties: first of all, a difference module can be recovered uniquely from its restriction to the complement of a point and its restriction to a formal disk around this point. Secondly, it gives rise to a local Mellin transform, which relates vanishing cycles of a difference module to nearby cycles of its Mellin transform. Since the Mellin transform of a difference module is a D-module, the Mellin transform brings us back to the familiar world of D-modules. Eva Elduque On the signed Euler characteristic property for subvarieties of Abelian varieties Franecki and Kapranov proved that the Euler characteristic of a perverse sheaf on a semi-abelian variety is non-negative.
This result has several purely topological consequences regarding the sign of the (topological and intersection homology) Euler characteristic of a subvariety of an abelian variety, and it is natural to attempt to justify them by more elementary methods. In this talk, we'll explore the geometric tools used recently in the proof of the signed Euler characteristic property. Joint work with Christian Geske and Laurentiu Maxim. Harrison Chen Equivariant localization for periodic cyclic homology and derived loop spaces There is a close relationship between derived loop spaces, a geometric object, and (periodic) cyclic homology, a categorical invariant. In this talk we will discuss this relationship and how it leads to an equivariant localization result, which has an intuitive interpretation using the language of derived loop spaces. We discuss ongoing generalizations and potential applications in computing the periodic cyclic homology of categories of equivariant (coherent) sheaves on algebraic varieties. Phil Tosteson Stability in the homology of Deligne-Mumford compactifications The space [math]\bar M_{g,n}[/math] is a compactification of the moduli space of algebraic curves with marked points, obtained by allowing smooth curves to degenerate to nodal ones. We will talk about how the asymptotic behavior of its homology, [math]H_i(\bar M_{g,n})[/math], for [math]n \gg 0[/math] can be studied using the representation theory of the category of finite sets and surjections. Wei Ho Noncommutative Galois closures and moduli problems In this talk, we will discuss the notion of a Galois closure for a possibly noncommutative algebra. We will explain how this problem is related to certain moduli problems involving genus one curves and torsors for Jacobians of higher genus curves. This is joint work with Matt Satriano.
Daniel Corey Initial degenerations of Grassmannians Let Gr_0(d,n) denote the open subvariety of the Grassmannian Gr(d,n) consisting of (d-1)-dimensional subspaces of P^{n-1} meeting the toric boundary transversely. We prove that Gr_0(3,7) is schoen in the sense that all of its initial degenerations are smooth. The main technique we will use is to express the initial degenerations of Gr_0(3,7) as an inverse limit of thin Schubert cells. We use this to show that the Chow quotient of Gr(3,7) by the maximal torus H in GL(7) is the log canonical compactification of the moduli space of 7 lines in P^2 in linear general position. Alena Pirutka Irrationality problems Let X be a projective algebraic variety, the set of solutions of a system of homogeneous polynomial equations. Several classical notions describe how ``unconstrained'' the solutions are, i.e., how close X is to projective space: there are notions of rational, unirational and stably rational varieties. Over the field of complex numbers, these notions coincide in dimensions one and two, but diverge in higher dimensions. In recent years, many new classes of non stably rational varieties were found, using a specialization technique introduced by C. Voisin. This method also made it possible to prove that rationality is not a deformation invariant in smooth and projective families of complex varieties: this is joint work with B. Hassett and Y. Tschinkel. In my talk I will describe classical examples, as well as the recent progress around these rationality questions. Nero Budur Homotopy of singular algebraic varieties By work of Simpson, Kollár, and Kapovich, every finitely generated group can be the fundamental group of an irreducible complex algebraic variety with only normal crossings and Whitney umbrellas as singularities.
In contrast, we show that if a complex algebraic variety has no weight zero 1-cohomology classes, then the fundamental group is strongly restricted: the irreducible components of the cohomology jump loci of rank one local systems containing the constant sheaf are complex affine tori. The same holds for links and Milnor fibers. This is joint work with Marcel Rubió. Alexander Yom Din Drinfeld-Gaitsgory functor and contragredient duality for (g,K)-modules Drinfeld suggested the definition of a certain endo-functor, called the pseudo-identity functor (or the Drinfeld-Gaitsgory functor), on the category of D-modules on an algebraic stack. We extend this definition to an arbitrary DG category, and show that if certain finiteness conditions are satisfied, this functor is the inverse of the Serre functor. We show that the pseudo-identity functor for (g,K)-modules is isomorphic to the composition of cohomological and contragredient dualities, which is parallel to an analogous assertion for p-adic groups. In this talk I will try to discuss some of these results and the ideas around them. This is joint work with Dennis Gaitsgory. John Lesieutre Some higher-dimensional cases of the Kawaguchi-Silverman conjecture Given a dominant rational self-map f : X --> X of a variety defined over a number field, the first dynamical degree $\lambda_1(f)$ and the arithmetic degree $\alpha_f(P)$ are two measures of the complexity of the dynamics of f: the first measures the rate of growth of the degrees of the iterates f^n, while the second measures the rate of growth of the heights of the iterates f^n(P) for a point P. A conjecture of Kawaguchi and Silverman predicts that if P has Zariski-dense orbit, then these two quantities coincide. I will prove this conjecture in several higher-dimensional settings, including for all automorphisms of hyper-K\"ahler varieties. This is joint work with Matthew Satriano.
This set of Network Theory Multiple Choice Questions & Answers (MCQs) focuses on “Advanced Problems on Network Theory – 1”. 1. The branch current and loop current relation is expressed in matrix form as shown below, where the \(I_j\) on the left represent branch currents and the \(I_k\) on the right represent loop currents. \[\begin{bmatrix} I_1 \\ I_2 \\ I_3 \\ I_4 \\ I_5 \\ I_6 \\ I_7 \\ I_8 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 1 & 0 \\ -1 & -1 & -1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & -1 & -1 \\ 1 & 1 & 0 & -1 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} I_1 \\ I_2 \\ I_3 \\ I_4 \end{bmatrix}\] The rank of the incidence matrix is? a) 4 b) 5 c) 6 d) 8 View Answer Explanation: Number of branches b = 8. Number of links l = 4. Number of twigs t = b – l = 4. Rank of the incidence matrix = n – 1 = t = 4. 2. A capacitor used for power factor correction in a single phase circuit decreases which of the following? a) Power factor b) Line current c) Both Line current and Power factor d) Neither Line current nor Power factor View Answer Explanation: A capacitor is used to increase the power factor. For the same real power, a higher power factor means a lower line current; hence the line current decreases. 3. d is the distance between the plates of a parallel plate capacitor, and the two dielectrics, each filling half the gap, have dielectric constants \(∈_1\) and \(∈_2\) respectively. The total capacitance is proportional to ____________ a) \(\frac{∈_1 ∈_2}{∈_1+∈_2}\) b) \(∈_1 - ∈_2\) c) \(\frac{∈_1}{∈_2}\) d) \(∈_1 ∈_2\) View Answer Explanation: The combination is equivalent to two capacitors in series. So, C = \(\frac{\Big[∈_0 ∈_1 \left(\frac{A}{0.5d}\right)\Big]\Big[∈_0 ∈_2 \left(\frac{A}{0.5d}\right)\Big]}{∈_0 ∈_1 \frac{A}{0.5d} + ∈_0 ∈_2 \frac{A}{0.5d}}\) Hence, C is proportional to \(\frac{∈_1 ∈_2}{∈_1+∈_2}\). 4. A two-branch circuit has a coil of resistance \(R_1\) and inductance \(L_1\) in one branch and a capacitance \(C_2\) in the second branch.
If \(R_1\) is increased, the dynamic resistance is going to ___________ a) Increase b) Decrease c) Remain constant d) May increase or decrease View Answer Explanation: We know that Dynamic resistance = \(\frac{L_1}{R_1 C_2}\). So, if \(R_1\) is increased, keeping the inductance and capacitance the same, the dynamic resistance will decrease, as the denominator increases. 5. A 1 μF capacitor is connected to a 12 V battery. The energy stored in the capacitor is _____________ a) \(12 × 10^{-6}\) J b) \(24 × 10^{-6}\) J c) \(60 × 10^{-6}\) J d) \(72 × 10^{-6}\) J View Answer Explanation: We know that Energy, E = \(0.5\,CV^2 = 0.5 × 1 × 10^{-6} × 144 = 72 × 10^{-6}\) J. Explanation: In the circuit of figure (\(I_B\)), transforming the 3 A source into an 18 V source, all sources are 1.5 times those in circuit (\(I_A\)). Hence, \(I_B = 1.5I_A\). Explanation: \(X_{EQ} = sL + \frac{R×1/sC}{R+1/sC} = sL + \frac{R}{1+sRC}\) and \(I = \frac{V}{X_{EQ}}\). By current division, \(I_O = \frac{X_C}{X_C+R}\,I = \frac{1/sC}{\frac{1}{sC}+R} × \frac{V(1+sRC)}{sL(1+sRC)+R} = \frac{1}{1+sRC} × \frac{V(1+sRC)}{sL(1+sRC)+R} = \frac{V}{sL(1+sRC)+R}\). Substituting the circuit values, \(I_O = \frac{20}{j×10^3×20×10^{-3}(1+j×10^3×50×10^{-6})+1} = \frac{20}{20j(1+j0.05)+1} = \frac{20}{20j-1+1} = \frac{20}{20j}\) = -j1 A. 8. An AC source of RMS voltage 20 V with internal impedance \(Z_S\) = (1+2j) Ω feeds a load of impedance \(Z_L\) = (7+4j) Ω in the circuit given below. The reactive power is _________ a) 8 VAR b) 16 VAR c) 28 VAR d) 32 VAR View Answer Explanation: Current I = \(\frac{V}{Z_L+Z_S} = \frac{20∠0°}{8+6j} = \frac{20∠0°}{10∠\arctan(3/4)}\) = 2∠-arctan(\(\frac{3}{4}\)) A. The complex power delivered to the load is \(|I|^2 Z_L\) = 4(7+4j) = 28 + j16. ∴ The reactive power = 16 VAR. 9. In the circuit given below, \(R_I\) = 1 MΩ, \(R_O\) = 10 Ω, \(A = 10^6\) and \(V_I\) = 1 μV.
Then the output voltage, input impedance and output impedance respectively are _________ a) 1 V, ∞ and 10 Ω b) 1 V, 0 and 10 Ω c) 1 V, 0 and ∞ d) 10 V, ∞ and 10 Ω View Answer Explanation: \(V_O\) (output voltage) = \(AV_I = 10^6 × 10^{-6}\) = 1 V. For the two-port, \(V_1 = Z_{11}I_1 + Z_{12}I_2\) and \(V_2 = Z_{21}I_1 + Z_{22}I_2\). Here \(I_1\) = 0, so the input impedance \(Z_{11} = \frac{V_1}{I_1}\) = ∞, and the output impedance \(Z_{22} = \frac{V_2}{I_2} = \frac{AV_I}{I_2} = R_O\) = 10 Ω. 10. If operator ‘a’ = 1 ∠120°, then (1 – a) is equal to ____________ a) \(\sqrt{3}\) b) \(\sqrt{3}\)∠-30° c) \(\sqrt{3}\)∠30° d) \(\sqrt{3}\)∠60° View Answer Explanation: Given that ‘a’ = 1∠120° = -0.5 + j0.866. So, 1 – a = 1 + 0.5 – j0.866 = 1.5 – j0.866 = \(\sqrt{3}\)∠-30°. 11. For making a capacitor, the dielectric should have __________ a) High relative permittivity b) Low relative permittivity c) Relative permittivity = 1 d) Relative permittivity neither too high nor too low View Answer Explanation: For the same geometry, the capacitance is proportional to the relative permittivity: a low relative permittivity (close to that of air) leads to a low value of capacitance, while a high relative permittivity leads to a higher value of capacitance. Explanation: By applying KVL, I = 1 A, and \(V_{AB}\) – 2 × 1 + 5 = 0, or \(V_{AB}\) = -3 V. 13. If A = 3 + j1, then \(A^4\) is equal to __________ a) 3.16 ∠18.4° b) 100 ∠73.72° c) 100 ∠18.4° d) 3.16 ∠73.22° View Answer Explanation: Given A = 3 + j1 = \(\sqrt{10}\)∠18.43°. So, \(A^4 = (\sqrt{10})^4\)∠(4 × 18.43°) = 100∠73.72°. 14. In the figures given below, the values of \(R_A\), \(R_B\) and \(R_C\) are 20 Ω, 10 Ω and 10 Ω respectively. The resistances \(R_1\), \(R_2\) and \(R_3\) in Ω are ________ a) 2.5, 5 and 5 b) 5, 2.5 and 5 c) 5, 5 and 2.5 d) 2.5, 5 and 2.5 View Answer Explanation: \(R_1 = \frac{R_B R_C}{R_A+R_B+R_C} = \frac{100}{40}\) = 2.5 Ω, \(R_2 = \frac{R_A R_C}{R_A+R_B+R_C} = \frac{200}{40}\) = 5 Ω, \(R_3 = \frac{R_B R_A}{R_A+R_B+R_C} = \frac{200}{40}\) = 5 Ω. 15.
The resistance of a thermistor decreases with increase in __________ a) temperature b) circuit c) light control d) sensors View Answer Explanation: The resistance of a thermistor decreases with increase in temperature. Hence, it is used to monitor the hot spot temperature of electric machines.
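Several of the numerical answers above can be verified directly; here is a small Python sketch (not part of the original question set) checking the Δ-to-Y conversion of Q14 and the phasor computation of Q13:

```python
import cmath

# Q14: delta network (RA, RB, RC) converted to wye (R1, R2, R3).
def delta_to_wye(ra, rb, rc):
    s = ra + rb + rc
    return rb * rc / s, ra * rc / s, ra * rb / s

r1, r2, r3 = delta_to_wye(20.0, 10.0, 10.0)
assert (r1, r2, r3) == (2.5, 5.0, 5.0)

# Q13: A = 3 + j1, so A^4 should have magnitude 100 at about 73.72 degrees.
A = 3 + 1j
mag = abs(A**4)
ang = cmath.phase(A**4) * 180 / cmath.pi
assert abs(mag - 100) < 1e-9
assert abs(ang - 73.72) < 0.1
```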
A Belyi-extender (or dessinflateur) is a rational function $q(t) = \frac{f(t)}{g(t)} \in \mathbb{Q}(t)$ that defines a map \[ q : \mathbb{P}^1_{\mathbb{C}} \rightarrow \mathbb{P}^1_{\mathbb{C}} \] unramified outside $\{ 0,1,\infty \}$, and has the property that $q(\{ 0,1,\infty \}) \subseteq \{ 0,1,\infty \}$. An example of such a Belyi-extender is the power map $q(t)=t^n$, which is totally ramified in $0$ and $\infty$, and we clearly have that $q(0)=0,~q(1)=1$ and $q(\infty)=\infty$. The composition of two Belyi-extenders is again an extender, and we get a rather mysterious monoid $\mathcal{E}$ of all Belyi-extenders. Very little seems to be known about this monoid. Its units form the symmetric group $S_3$, which is the automorphism group of $\mathbb{P}^1_{\mathbb{C}} \setminus \{ 0,1,\infty \}$, and mapping an extender $q$ to its degree gives a monoid map $\mathcal{E} \rightarrow \mathbb{N}_+^{\times}$ to the multiplicative monoid of positive natural numbers. If one relaxes the condition $q(t) \in \mathbb{Q}(t)$ to being defined over the algebraic closure $\overline{\mathbb{Q}}$, then such maps/functions have been known for some time under the name of dynamical Belyi-functions, for example in Zvonkin’s Belyi Functions: Examples, Properties, and Applications (section 6). Here, one is interested in the complex dynamical system of iterations of $q$, that is, the limit behaviour of the orbits \[ \{ z,q(z),q^2(z),q^3(z),… \} \] for all complex numbers $z \in \mathbb{C}$. In general, the 2-sphere $\mathbb{P}^1_{\mathbb{C}} = S^2$ has a finite number of open sets (the Fatou domains) where the limit behaviour of the orbit is similar, and the union of these open sets is dense in $S^2$. The complement of the Fatou domains is the Julia set of the function, of which we might expect a nice fractal picture. Let’s take again the power map $q(t)=t^n$.
For a complex number $z$ lying outside the unit disc, the sequence $\{ z,z^n,z^{n^2},… \}$ has limit point $\infty$, and for those lying inside the unit circle this limit is $0$. So, here we have two Fatou domains (the interior and the exterior of the unit circle) and the Julia set of the power map is the (boring?) unit circle. Fortunately, there are indeed dynamical Belyi-maps having a more pleasant looking Julia set, such as this one. But then, many dynamical Belyi-maps (and Belyi-extenders) are systems of an entirely different nature: they are completely chaotic, meaning that their Julia set is the whole $2$-sphere! Nowhere do we find an open region where points share the same limit behaviour… (the butterfly effect). There’s a nice sufficient condition for chaotic behaviour, due to Dennis Sullivan, which is pretty easy to check for dynamical Belyi-maps. A periodic point for $q(t)$ is a point $p \in S^2 = \mathbb{P}^1_{\mathbb{C}}$ such that $p = q^m(p)$ for some $m \geq 1$. A critical point is one such that either $q(p) = \infty$ or $q'(p)=0$. Sullivan’s result is that $q(t)$ is completely chaotic when all its critical points $p$ become eventually periodic, that is, some $q^k(p)$ is periodic, but $p$ itself is not periodic. For a Belyi-map $q(t)$ the critical points are either complex numbers mapping to $\infty$ or the inverse images of $0$ or $1$ (that is, the black or white dots in the dessin of $q(t)$) which are not leaf-vertices of the dessin. Let’s do an example, already used by Sullivan himself: \[ q(t) = (\frac{t-2}{t})^2 \] This is a Belyi-function, and in fact a Belyi-extender, as it is defined over $\mathbb{Q}$ and we have that $q(0)=\infty$, $q(1)=1$ and $q(\infty)=1$. The corresponding dessin is (inverse images of $\infty$ are marked with an $\ast$): The critical points $0$ and $2$ are not periodic, but they become eventually periodic: \[ 2 \rightarrow^q 0 \rightarrow^q \infty \rightarrow^q 1 \rightarrow^q 1 \] and $1$ is periodic.
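The eventually-periodic orbit of the critical point $2$ can be traced mechanically; here is a tiny sketch (exact rational arithmetic, with the point at infinity handled as a symbol; not from the original post):

```python
from fractions import Fraction

INF = "inf"  # the point at infinity on the Riemann sphere

def q(z):
    """Sullivan's example q(t) = ((t-2)/t)^2, extended to the sphere."""
    if z == INF:
        return Fraction(1)   # q(inf) = 1
    if z == 0:
        return INF           # pole at t = 0, so q(0) = inf
    return ((z - 2) / z) ** 2

# Orbit of the critical point 2:  2 -> 0 -> inf -> 1 -> 1 -> ...
orbit = [Fraction(2)]
for _ in range(4):
    orbit.append(q(orbit[-1]))
assert orbit == [2, 0, INF, 1, 1]
```

The orbit reproduces the chain displayed above, with $1$ a fixed (hence periodic) point.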
For a general Belyi-extender $q$, we have that the image under $q$ of any critical point is among $\{ 0,1,\infty \}$, and because we demand that $q(\{ 0,1,\infty \}) \subseteq \{ 0,1,\infty \}$, every critical point of $q$ eventually becomes periodic. If we want to avoid the corresponding dynamical system being completely chaotic, we have to ensure that one of the periodic points among $\{ 0,1,\infty \}$ (and there is at least one of those) is critical. Let’s consider the very special Belyi-extenders $q$ having the additional property that $q(0)=0$, $q(1)=1$ and $q(\infty)=\infty$; then all three of them are periodic. So, the system is always completely chaotic unless the black dot at $0$ is not a leaf-vertex of the dessin, or the white dot at $1$ is not a leaf-vertex, or the degree of the region determined by the starred $\infty$ is at least two. Going back to the mystery Manin-Marcolli sub-monoid of $\mathcal{E}$, it might explain why it is a good idea to restrict to very special Belyi-extenders having as associated dessin a $2$-coloured tree, for then the periodic point $\infty$ is critical (the degree of the outside region is at least two), and therefore the conditions of Sullivan’s theorem are not satisfied. So, these Belyi-extenders do not necessarily have to be completely chaotic. (tbc)
Consider a phase space volume element \(dx_0\) at t=0, containing a small collection of initial conditions on a set of trajectories. The trajectories evolve in time according to Hamilton's equations of motion, and at a time t later will be located in a new volume element \(dx_t\) as shown in the figure below: Figure 1: How is \(dx_0\) related to \(dx_t\)? To answer this, consider a trajectory starting from a phase space vector \(x_0\) in \(dx_0\) and having a phase space vector \(x_t\) at time t in \(dx_t\). Since the solution of Hamilton's equations depends on the choice of initial conditions, \(x_t\) depends on \(x_0\): \[ x_0 = \left ( p_1 (0), \cdots , p_N(0), r_1(0), \cdots , r_N (0) \right ) \] \[ x_t = \left ( p_1 (t), \cdots , p_N(t), r_1(t), \cdots , r_N (t) \right ) \] \[ x^i_t = x^i_t \left ( x^1_0 , \cdots , x^{6N}_0 \right ) \] Thus, the phase space vector components can be viewed as a coordinate transformation on the phase space from t=0 to time t. The phase space volume element then transforms according to \[ dx_t = J (x_t ; x_0 ) dx_0 \] where \(J (x_t ; x_0 )\) is the Jacobian of the transformation: \[ J (x_t ; x_0 ) = \frac {\partial (x^1_t \cdots x^n_t )}{\partial (x^1_0 \cdots x^n_0 )} \] where n=6N. The precise form of the Jacobian can be determined as will be demonstrated below.
The Jacobian is the determinant of a matrix M, \[ J (x_t ; x_0 ) = \text {det} (M) = e^{{\rm Tr}\,{\rm ln} M} \] whose matrix elements are \[ M_{ij} = \frac {\partial x^i_t}{\partial x^j_0}. \] Taking the time derivative of the Jacobian, we therefore have \[ \frac {dJ}{dt} = {\rm Tr} \left ( M^{-1} \frac {dM}{dt} \right ) e^{{\rm Tr}\,{\rm ln} M} = J \sum _{i=1}^n \sum _{j=1}^n M^{-1}_{ij} \frac {dM_{ji}}{dt}. \] The matrices \(M^{-1}\) and \( \frac {dM}{dt} \) can be seen to be given by \[ M^{-1}_{ij} = \frac {\partial x^i_0}{\partial x^j_t} \qquad \frac {dM_{ji}}{dt} = \frac {\partial \dot {x}^j_t}{\partial x^i_0}. \] Substituting into the expression for dJ/dt gives \[\frac {dJ}{dt} = J \sum _{i,j=1}^n \frac {\partial x^i_0}{\partial x^j_t} \frac {\partial \dot {x}^j_t}{\partial x^i_0} = J \sum _{i,j,k=1}^n \frac {\partial x^i_0}{\partial x^j_t} \frac {\partial \dot {x}^j_t}{\partial x^k_t} \frac {\partial x^k_t}{\partial x^i_0} \] where the chain rule has been introduced for the derivative \(\frac {\partial \dot{x}^j_t}{\partial x^i_0}\). The sum over i can now be performed: \[\sum _{i=1}^n \frac {\partial x^k_t}{\partial x^i_0} \frac {\partial x^i_0}{\partial x^j_t} = \sum ^n_{i=1} M_{ki} M^{-1}_{ij} = \delta _{kj}. \] Thus, \[\frac {dJ}{dt} = J \sum ^n_{j,k=1} \delta _{kj} \frac {\partial \dot {x}^j_t}{\partial x^k_t} = J \sum ^n_{j=1} \frac {\partial \dot {x}^j_t}{\partial x^j_t} = J \nabla _x \cdot \dot {x}, \] or \[ \frac {dJ}{dt} = J \nabla _x \cdot \dot {x}. \] The initial condition on this differential equation is \(J (0) \equiv J (x_0; x_0) = 1 \). Moreover, for a Hamiltonian system \(\nabla _x \cdot \dot {x} = 0 \), so that dJ/dt=0 and, with J(0)=1, we conclude \(J (x_t ; x_0 ) = 1 \) for all time. The phase space volume element therefore transforms according to \[ dx_0 = dx_t, \] which is another conservation law.
This conservation law states that the phase space volume occupied by a collection of systems evolving according to Hamilton's equations of motion is preserved in time. This is one statement of Liouville's theorem. Combining this with the fact that df/dt=0, we have a conservation law for the phase space probability: \[ f(x_0, 0)\, dx_0 = f(x_t,t)\, dx_t, \] which is an equivalent statement of Liouville's theorem.
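As a numerical illustration of \(J(x_t;x_0)=1\) (a sketch, not part of the derivation): for the harmonic oscillator \(H=(p^2+q^2)/2\) the Hamiltonian flow is a rotation of the phase plane, and a finite-difference Jacobian of the map \((p_0,q_0) \mapsto (p_t,q_t)\) has unit determinant:

```python
import numpy as np

# Exact flow of H = (p^2 + q^2)/2: pdot = -q, qdot = p, i.e. a rotation.
def flow(p0, q0, t):
    return (p0 * np.cos(t) - q0 * np.sin(t),
            p0 * np.sin(t) + q0 * np.cos(t))

# Central-difference Jacobian of the map (p0, q0) -> (p_t, q_t).
def jacobian(p0, q0, t, h=1e-6):
    J = np.zeros((2, 2))
    for j, (dp, dq) in enumerate([(h, 0.0), (0.0, h)]):
        plus = flow(p0 + dp, q0 + dq, t)
        minus = flow(p0 - dp, q0 - dq, t)
        J[:, j] = [(plus[0] - minus[0]) / (2 * h),
                   (plus[1] - minus[1]) / (2 * h)]
    return J

J = jacobian(0.3, 1.2, t=2.5)
assert abs(np.linalg.det(J) - 1.0) < 1e-8   # phase space volume is conserved
```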
In "A Really Temporal Logic", by R. Alur and T. Henzinger, they introduce an extension of Linear Temporal Logic with a freeze quantifier $x.\phi$, which allows one to "give a name" to the current time point in order to compare it with others later. An example formula from the paper is $$ \Box x.(p \to \Diamond y.(q \land y \le x + 10)) $$ which means that whenever $p$ holds, $q$ must eventually hold within 10 time units. This logic was proved to be EXPSPACE-complete in the above paper. Since the paper is quite old, I'm wondering what developments there have been on this logic. Specifically, I'm asking whether it has been proved that this logic augmented with LTL past operators is still EXPSPACE-complete, as is the case for classic LTL with past, which remains PSPACE-complete like the future-only version. I've found some results by searching for "freeze quantifier", but recent works such as this paper discuss further extensions with registers that I'm not interested in (or maybe I am, but I'm misunderstanding them). So, is this logic augmented with past operators still EXPSPACE-complete?
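For intuition only, the example formula can be checked naively on a finite timed word (a hypothetical sketch, not from the paper; the logic's actual semantics is over infinite timed words, so this merely illustrates what the freeze quantifiers express):

```python
# Check box x.(p -> diamond y.(q and y <= x + 10)) on a finite timed word,
# given as a list of (timestamp, set_of_propositions) pairs.
def bounded_response(trace, bound=10):
    for i, (ti, props_i) in enumerate(trace):
        if "p" in props_i:
            # the frozen x is ti; look for a later q within the bound
            if not any("q" in props_j and tj <= ti + bound
                       for tj, props_j in trace[i:]):
                return False
    return True

trace_ok  = [(0, {"p"}), (4, set()), (9, {"q"})]    # q arrives at time 9 <= 10
trace_bad = [(0, {"p"}), (4, set()), (11, {"q"})]   # q arrives too late
assert bounded_response(trace_ok) and not bounded_response(trace_bad)
```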
Admissible Monomials and Generating Sets for the Polynomial Algebra as a Module Over the Steenrod Algebra Abstract For $n\geq 1,$ let $ {\mathbb P}(n) = {\mathbb F}_2[x_1,\ldots,x_n]$ be the polynomial algebra in $n$ variables $x_i,$ each of degree one, over the field ${\mathbb F}_2$ of two elements. The mod-2 Steenrod algebra ${\mathcal A}$ acts on ${\mathbb P}(n)$ according to well known rules. Let ${\mathcal A}^+{\mathbb P}(n)$ denote the image of the action of the positively graded part of ${\mathcal A}.$ A major problem is that of determining a basis for the quotient vector space ${\mathbb Q}(n) = {\mathbb P}(n)/{\mathcal A}^+{\mathbb P}(n).$ Both ${\mathbb P}(n) = \oplus_{d\geq0}{\mathbb P}^{d}(n)$ and ${\mathbb Q}(n)$ are graded, where ${\mathbb P}^{d}(n)$ denotes the set of homogeneous polynomials of degree $d.$ In this paper we show that if $n \geq 2,$ and $d \geq 1$ can be expressed in the form $d = \sum_{i=1}^{n-1} (2^{\lambda_i}-1) \; \mbox{with} \; {\lambda_1}> {\lambda_2} > \ldots >{\lambda_{n-2}} \geq {\lambda_{n-1}}\geq 1,$ then $${\rm {dim}}({\mathbb Q}^{d}(n)) \geq \left (\sum_{q=1}^{{\rm min}\{ {\lambda}_{n-1},n\}} {{n}\choose {q}}\right ) ({\rm {dim}}({\mathbb Q}^{d'}(n-1)) )$$ where $ d'= \sum_{i=1}^{n-1} (2^{\lambda_i - \lambda_{n-1}}-1)$.
You are correct: Bogoliubov transformations are not unitary in general. By definition, Bogoliubov transformations are linear transformations of creation/annihilation operators that preserve the algebraic relations among them. The algebraic relations are mainly the commutation/anticommutation relations which define the bosonic/fermionic operators. Nowhere in the definition did we specify that the transformation should be unitary. In fact, the Bogoliubov transformation (in its most generic form) is symplectic for bosons and orthogonal for fermions. In neither case is the Bogoliubov transformation unitary. The Bogoliubov transformation of bosons corresponds to the linear canonical transformation of oscillators in classical mechanics (because bosons are quanta of oscillators), and we know that linear canonical transformations are symplectic due to the symplectic structure of the classical phase space. So, to be more specific, what are the restrictions on Bogoliubov transformations? Let us consider the case of $n$ single particle modes of either bosons $b_i$ or fermions $f_i$ (where $i=1,2,\cdots,n$ labels the single particle states, such as momentum eigenstates). Both $b_i$ and $f_i$ are not Hermitian operators, which is not quite convenient for a general treatment (because we can't simply treat $b_i$ and $b_i^\dagger$ as an independent basis, since they are still related by the particle-hole transformation). Therefore we choose to rewrite the operators as the following linear combinations (motivated by the idea of decomposing a complex number into two real numbers, like $z=x+\mathrm{i}y$):$$\begin{split}b_i&=a_i+\mathrm{i}a_{n+i}\\b_i^\dagger&=a_i-\mathrm{i}a_{n+i}\end{split}\qquad\begin{split}f_i&=c_i+\mathrm{i}c_{n+i}\\f_i^\dagger&=c_i-\mathrm{i}c_{n+i}\end{split}$$ where $a_i=a_i^\dagger$ and $c_i=c_i^\dagger$ (for $i=1,2,\cdots,2n$) are Hermitian operators (analogous to real numbers).
They must inherit the commutation or anticommutation relations from the "complex" bosons $b_i$ and fermions $f_i$:$$\begin{split}[b_i,b_j^\dagger]=\delta_{ij},[b_i,b_j]=[b_i^\dagger,b_j^\dagger]=0&\Rightarrow[a_i,a_j]=\frac{1}{2}g_{ij}^a \\\{f_i,f_j^\dagger\}=\delta_{ij}, \{f_i,f_j\}=\{f_i^\dagger,f_j^\dagger\}=0&\Rightarrow\{c_i,c_j\}=\frac{1}{2}g_{ij}^c\end{split}$$where $g_{ij}^a$ and $g_{ij}^c$ are sometimes called the quantum metric for bosons and fermions respectively. In matrix forms, they are given by$$g^a=\mathrm{i}\left[\begin{matrix}0&\mathbb{1}_{n\times n}\\-\mathbb{1}_{n\times n}&0\end{matrix}\right] \qquad g^c=\left[\begin{matrix}\mathbb{1}_{n\times n}&0\\0&\mathbb{1}_{n\times n}\end{matrix}\right],$$with $\mathbb{1}_{n\times n}$ being the $n\times n$ identity matrix. So to preserve the algebraic relations among the creation/annihilation operators is to preserve the quantum metric. General linear transformations of the operators $a_i$ and $c_i$ take the form of$$a_i\to \sum_{j}W_{ij}^a a_j\qquad c_i\to \sum_{j}W_{ij}^c c_j,$$where the transformation matrix elements $W_{ij}^a, W_{ij}^c\in\mathbb{R}$ must be real, in order to ensure that the operators $a_i$ and $c_i$ remain Hermitian after the transformation. Then to preserve the quantum metric is to require$$W^a g^a W^{a\intercal}= g^a\qquad W^c g^c W^{c\intercal}= g^c.$$So any real linear transformation satisfying the above conditions is a Bogoliubov transformation in the most general sense. Then depending on the property of the quantum metric, the Bogoliubov transformation is either symplectic or orthogonal. For the bosonic quantum metric, $g^a=-g^{a\intercal}$ is antisymmetric, so the transformation $W^a$ is symplectic. For the fermionic quantum metric, $g^c=g^{c\intercal}$ is symmetric, so the transformation $W^c$ is orthogonal.
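A minimal numerical sketch of these conditions for a single mode ($n=1$; the squeeze parameter $r$ and rotation angle are arbitrary illustrative choices, not from the answer): a one-mode squeeze $b \to \cosh(r)\,b + \sinh(r)\,b^\dagger$ acts on the Hermitian basis as $a_1 \to e^r a_1$, $a_2 \to e^{-r} a_2$ and preserves $g^a$, while any orthogonal rotation preserves $g^c$:

```python
import numpy as np

r = 0.7  # squeezing parameter (arbitrary)

# Bosonic quantum metric for one mode (n = 1) in the Hermitian basis (a_1, a_2)
g_a = 1j * np.array([[0, 1], [-1, 0]])

# The squeeze b -> cosh(r) b + sinh(r) b^dagger acts diagonally on (a_1, a_2)
W_a = np.diag([np.exp(r), np.exp(-r)])
assert np.allclose(W_a @ g_a @ W_a.T, g_a)   # symplectic: metric preserved

# Fermionic metric is the identity, so W_c must be orthogonal, e.g. a rotation
th = 0.3
W_c = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
g_c = np.eye(2)
assert np.allclose(W_c @ g_c @ W_c.T, g_c)   # orthogonal: metric preserved
```

Note that neither $W^a$ nor, in general, the induced action on $(b, b^\dagger)$ is unitary, which is the point of the answer.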
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
Let’s try to identify the $\Psi(n) = n \prod_{p|n}(1+\frac{1}{p})$ points of $\mathbb{P}^1(\mathbb{Z}/n \mathbb{Z})$ with the lattices $L_{M,\frac{g}{h}}$ at hyperdistance $n$ from the standard lattice $L_1$ in Conway’s big picture. Here are all $24=\Psi(12)$ lattices at hyperdistance $12$ from $L_1$ (the boundary lattices): You can also see the $4 = \Psi(3)$ lattices at hyperdistance $3$ (those connected to $1$ with a red arrow) as well as the intermediate $12 = \Psi(6)$ lattices at hyperdistance $6$. The vertices of Conway’s Big Picture are the projective classes of integral sublattices of the standard lattice $\mathbb{Z}^2=\mathbb{Z} e_1 \oplus \mathbb{Z} e_2$. Let’s say our sublattice is generated by the integral vectors $v=(v_1,v_2)$ and $w=(w_1,w_2)$. How do we determine its class $L_{M,\frac{g}{h}}$, where $M \in \mathbb{Q}_+$ is a strictly positive rational number and $0 \leq \frac{g}{h} < 1$? Here’s an example: the sublattice (the thick dots) is spanned by the vectors $v=(2,1)$ and $w=(1,4)$. Well, we try to find a base change matrix in $SL_2(\mathbb{Z})$ such that the new 2nd base vector is of the form $(0,z)$. To do this take coprime $(c,d) \in \mathbb{Z}^2$ such that $cv_1+dw_1=0$ and complete with $(a,b)$ satisfying $ad-bc=1$ via Bezout to a matrix in $SL_2(\mathbb{Z})$ such that \[ \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} v_1 & v_2 \\ w_1 & w_2 \end{bmatrix} = \begin{bmatrix} x & y \\ 0 & z \end{bmatrix} \] then the sublattice is of class $L_{\frac{x}{z},\frac{y}{z}~mod~1}$. In the example, we have \[ \begin{bmatrix} 0 & 1 \\ -1 & 2 \end{bmatrix} \begin{bmatrix} 2 & 1 \\ 1 & 4 \end{bmatrix} = \begin{bmatrix} 1 & 4 \\ 0 & 7 \end{bmatrix} \] so this sublattice is of class $L_{\frac{1}{7},\frac{4}{7}}$. Starting from a class $L_{M,\frac{g}{h}}$ it is easy to work out its hyperdistance from $L_1$: let $d$ be the smallest natural number making the corresponding matrix integral \[ d \cdot
\begin{bmatrix} M & \frac{g}{h} \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} u & v \\ 0 & w \end{bmatrix} \in M_2(\mathbb{Z}) \] then $L_{M,\frac{g}{h}}$ is at hyperdistance $u \cdot w$ from $L_1$. Now that we know how to find the lattice class of any sublattice of $\mathbb{Z}^2$, let us assign a class to any point $[c:d]$ of $\mathbb{P}^1(\mathbb{Z}/n\mathbb{Z})$. As $gcd(c,d)=1$, by Bezout we can find an integral matrix with determinant $1$ \[ S_{[c:d]} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \] But then the matrix \[ \begin{bmatrix} a.n & b.n \\ c & d \end{bmatrix} \] has determinant $n$. Working backwards we see that the class $L_{[c:d]}$ of the sublattice of $\mathbb{Z}^2$ spanned by the vectors $(a.n,b.n)$ and $(c,d)$ is of hyperdistance $n$ from $L_1$. This is how the correspondence between points of $\mathbb{P}^1(\mathbb{Z}/n\mathbb{Z})$ and classes in Conway’s big picture at hyperdistance $n$ from $L_1$ works. Let’s do an example. Take the point $[7:3] \in \mathbb{P}^1(\mathbb{Z}/12\mathbb{Z})$ (see last time), then \[ \begin{bmatrix} -2 & -1 \\ 7 & 3 \end{bmatrix} \in SL_2(\mathbb{Z}) \] so we have to determine the class of the sublattice spanned by $(-24,-12)$ and $(7,3)$. As before we have to compute \[ \begin{bmatrix} -2 & -7 \\ 7 & 24 \end{bmatrix} \begin{bmatrix} -24 & -12 \\ 7 & 3 \end{bmatrix} = \begin{bmatrix} -1 & 3 \\ 0 & -12 \end{bmatrix} \] giving us that the class $L_{[7:3]} = L_{\frac{1}{12},\frac{3}{4}}$ (remember that the second term must be taken $mod~1$). If you do this for all points in $\mathbb{P}^1(\mathbb{Z}/12\mathbb{Z})$ (and $\mathbb{P}^1(\mathbb{Z}/6\mathbb{Z})$ and $\mathbb{P}^1(\mathbb{Z}/3 \mathbb{Z})$) you get this version of the picture we started with. You’ll spot that the preimages of a canonical coordinate of $\mathbb{P}^1(\mathbb{Z}/m\mathbb{Z})$ for $m | n$ are the very same coordinate together with ‘new’ canonical coordinates in $\mathbb{P}^1(\mathbb{Z}/n\mathbb{Z})$.
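The recipe (find coprime $(c,d)$ killing the first column, complete via Bezout) is mechanical enough to code up. Here is a rough Python sketch, with the two worked examples above as checks; the function names are mine, and it assumes the basis $(v,w)$ is positively oriented (determinant $>0$, as in both examples), otherwise swap $v$ and $w$:

```python
from fractions import Fraction
from math import gcd

def ext_gcd(a, b):
    """Extended Euclid: returns (g, s, t) with s*a + t*b == g."""
    if b == 0:
        return (a, 1, 0)
    g, s, t = ext_gcd(b, a % b)
    return (g, t, s - (a // b) * t)

def lattice_class(v, w):
    """Class L_{M, g/h} of the sublattice spanned by integral vectors v, w."""
    v1, v2 = v
    w1, w2 = w
    # coprime (c, d) with c*v1 + d*w1 == 0
    g0 = gcd(v1, w1)
    c, d = w1 // g0, -v1 // g0
    # complete to an SL_2(Z) matrix via Bezout: a*d - b*c == 1
    _, a, negb = ext_gcd(d, c)      # a*d + negb*c == 1
    b = -negb
    x, y = a * v1 + b * w1, a * v2 + b * w2   # first row of the product
    z = c * v2 + d * w2                        # second row is (0, z)
    return Fraction(x, z), Fraction(y, z) % 1
```

Running it on the spanning vectors $(2,1),(1,4)$ and $(-24,-12),(7,3)$ reproduces the classes computed above.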
To see that this correspondence is one-to-one and that the index of the congruence subgroup \[ \Gamma_0(n) = \{ \begin{bmatrix} p & q \\ r & s \end{bmatrix}~|~n|r~\text{and}~ps-qr=1 \} \] in the full modular group $\Gamma = PSL_2(\mathbb{Z})$ is equal to $\Psi(n)$ it is useful to consider the action of $PGL_2(\mathbb{Q})^+$ on the right on the classes of lattices. The stabilizer of $L_1$ is the full modular group $\Gamma$ and the stabilizer of any class is a suitable conjugate of $\Gamma$. For example, for the class $L_n$ (that is, of the sublattice spanned by $(n,0)$ and $(0,1)$, which is of hyperdistance $n$ from $L_1$) this stabilizer is \[ Stab(L_n) = \{ \begin{bmatrix} a & \frac{b}{n} \\ c.n & d \end{bmatrix}~|~ad-bc = 1 \} \] and a very useful observation is that \[ Stab(L_1) \cap Stab(L_n) = \Gamma_0(n) \] This is the way Conway likes us to think about the congruence subgroup $\Gamma_0(n)$: it is the joint stabilizer of the classes $L_1$ and $L_n$ (as well as all classes in the ‘thread’ $L_m$ with $m | n$). On the other hand, $\Gamma$ acts by rotations on the big picture: it only fixes $L_1$ and maps a class to another one of the same hyperdistance from $L_1$. The index of $\Gamma_0(n)$ in $\Gamma$ is then the number of classes at hyperdistance $n$. To see that this number is $\Psi(n)$, first check that the classes at hyperdistance $p^k$, for $p$ a prime number and for all $k$, form the $(p+1)$-valent tree with root $L_1$, so there are exactly $p^{k-1}(p+1)$ classes at hyperdistance $p^k$. To get from this that the number of hyperdistance $n$ classes is indeed $\Psi(n) = \prod_{p|n}p^{v_p(n)-1}(p+1)$ we have to use the prime factorisation of the hyperdistance (see this post). The fundamental domain for the action of $\Gamma_0(12)$ by Moebius transformations on the upper half plane must then consist of $48=2 \Psi(12)$ black or white hyperbolic triangles. Next time we’ll see how to deduce the ‘monstrous’ Grothendieck dessin d’enfant for $\Gamma_0(12)$ from it
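The index $\Psi(n)$ itself is easy to compute from the prime factorisation; a small sketch (the helper name is mine):

```python
def psi(n):
    """Dedekind psi function: n * prod_{p | n} (1 + 1/p)."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result = result // p * (p + 1)   # multiply by (1 + 1/p)
            while m % p == 0:                # strip all powers of p
                m //= p
        p += 1
    if m > 1:                                # leftover prime factor
        result = result // m * (m + 1)
    return result
```

This reproduces the counts used above: $\Psi(3)=4$, $\Psi(6)=12$ and $\Psi(12)=24$.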
Yesterday, there was an interesting post by John Baez at the n-category cafe: The Riemann Hypothesis Says 5040 is the Last. The 5040 in the title refers to the largest known counterexample to a bound for the sum-of-divisors function \[ \sigma(n) = \sum_{d | n} d = n \sum_{d | n} \frac{1}{d} \] In 1983, the French mathematician Guy Robin proved that the Riemann hypothesis is equivalent to \[ \frac{\sigma(n)}{n~log(log(n))} < e^{\gamma} = 1.78107... \] when $n > 5040$. The other known counterexamples to this bound are the numbers 3,4,5,6,8,9,10,12,16,18,20,24,30,36,48,60,72,84,120,180,240,360,720,840,2520. In Baez’ post there is a nice graph of this function made by Nicolas Tessore, with 5040 indicated with a grey line towards the right and the other counterexamples jumping over the bound 1.78107… Robin’s theorem has a remarkable history, starting in 1915 with good old Ramanujan writing a part of his thesis on “highly composite numbers” (numbers having more divisors than any smaller number). His PhD. adviser Hardy liked these results but called them “in the backwaters of mathematics”, and most of it was not published at the time of Ramanujan’s degree ceremony in 1916, due to paper shortage in WW1. When Ramanujan’s paper “Highly Composite Numbers” was first published in 1988 in ‘The lost notebook and other unpublished papers’ it became clear that Ramanujan already had part of Robin’s theorem. Ramanujan states that if the Riemann hypothesis is true, then for $n_0$ large enough we must have for all $n > n_0$ that \[ \frac{\sigma(n)}{n~log(log(n))} < e^{\gamma} = 1.78107... \] When Jean-Louis Nicolas, Robin's PhD. adviser, read Ramanujan's lost notes, he noticed that there was a sign error in Ramanujan's formula which prevented him from seeing Robin's theorem. Nicolas: “Soon after discovering the hidden part, I read it and saw the difference between Ramanujan’s result and Robin’s one.
Of course, I would have bet that the error was in Robin’s paper, but after recalculating it several times and asking Robin to check, it turned out that there was an error of sign in what Ramanujan had written.” If you are interested in the full story, read the paper by Jean-Louis Nicolas and Jonathan Sondow: Ramanujan, Robin, Highly Composite Numbers, and the Riemann Hypothesis. What’s the latest on Robin’s inequality? An arXiv-search for Robin’s inequality shows a flurry of activity. For starters, it has been verified for all numbers smaller than $10^{10^{13}}$… It has been verified, unconditionally, for certain classes of numbers: all odd integers $> 9$, all numbers not divisible by a 25-th power of a prime. Rings a bell? Here’s another hint: According to Xiaolong Wu in A better method than t-free for Robin’s hypothesis one can replace the condition of ‘not divisible by an N-th power of a prime’ by ‘not divisible by an N-th power of 2’. Further, he claims to have an (as yet unpublished) argument that Robin’s inequality holds for all numbers not divisible by $2^{42}$. So, where should we look for counterexamples to the Riemann hypothesis? What about the orders of huge simple groups? The order of the Monster group is too small to be a counterexample (yet, it is divisible by $2^{46}$).
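Robin's bound is easy to explore numerically. The sketch below (search cut-off $N$ is my arbitrary choice, comfortably past 5040) sieves the sum-of-divisors function and recovers exactly the 26 counterexamples quoted above:

```python
import math

N = 20000                      # arbitrary cut-off, well past 5040
sigma = [0] * (N + 1)
for d in range(1, N + 1):      # sieve sigma(n) = sum of divisors of n
    for m in range(d, N + 1, d):
        sigma[m] += d

e_gamma = math.exp(0.5772156649015329)   # e^gamma = 1.78107...
violators = [n for n in range(3, N + 1)
             if sigma[n] / (n * math.log(math.log(n))) > e_gamma]
```

The list `violators` consists of the 25 numbers above together with 5040 itself, and nothing beyond it up to the cut-off.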
Let $q:(X,T)\to (X',T')$ be a continuous and surjective map. If $q$ is open or closed, then it is a quotient map. Suppose that $q$ is open and let's check that $\bar q:\bar X\longrightarrow X'$ is open too (and therefore a homeomorphism, because continuous and open bijections identify the open subsets of each space). Let $\bar U\subset \bar X$ be an open subset: $\bar U=\pi(U)$ for some saturated open set $U\subset X$ ($U=\pi^{-1}(\bar U)$). Then $\bar q(\bar U)=\bar q(\pi(U))=q(U)$, which is open by hypothesis. The case in which $q$ is closed is similar.
Yeah, this software cannot be too easy to install, my installer is very professional looking, currently not tied into that code, but directs the user how to search for their MikTeX and or install it and does a test LaTeX rendering. Somebody like Zeta (on codereview) might be able to help a lot... I'm not sure if he does a lot of category theory, but he does a lot of Haskell (not that I'm trying to conflate the two)... so he would probably be one of the better bets for asking for revision of code. He is usually on the 2nd monitor chat room. There are a lot of people on those chat rooms that help each other with projects. I'm not sure how many of them are adept at category theory though... still, this chat tends to emphasize a lot of small problems and occasionally goes off tangent. Your project is probably too large for an actual question on codereview, but there is a lot of github activity in the chat rooms. gl. In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966). It was also written down as the Fourier transform of $\hat f(z)=\prod_{m=1}^{\infty}(\cos...$ Defined as the probability that $\sum_{n=1}^\infty2^{-n}\zeta_n$ will be less than $x$, where the $\zeta_n$ are chosen randomly and independently from the unit interval @AkivaWeinberger are you familiar with the theory behind Fourier series? anyway here's a food for thought for $f : S^1 \to \Bbb C$ square-integrable, let $c_n := \displaystyle \int_{S^1} f(\theta) \exp(-i n \theta) (\mathrm d\theta/2\pi)$, and $f^\ast := \displaystyle \sum_{n \in \Bbb Z} c_n \exp(in\theta)$. It is known that $f^\ast = f$ almost everywhere. (a) is $-^\ast$ idempotent? i.e. is it true that $f^{\ast \ast} = f^\ast$? @AkivaWeinberger You need to use the definition of $F$ as the cumulative function of the random variables. $C^\infty$ was a simple step, but I don't have access to the paper right now so I don't recall it.
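The probabilistic definition just quoted lends itself to a quick Monte Carlo sketch (my own; sample size, truncation depth and seed are arbitrary choices):

```python
import random

def fabius_cdf(x, terms=25, samples=50_000, seed=1):
    """Estimate F(x) = P(sum_{n>=1} 2^{-n} zeta_n < x) by Monte Carlo,
    truncating the random series after `terms` terms."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        s = sum(rng.random() * 2.0 ** -n for n in range(1, terms + 1))
        hits += s < x
    return hits / samples
```

Sanity checks: the sum is always in $(0,1)$, so $F(0)=0$ and $F(1)=1$, and by symmetry of the distribution about $1/2$ the estimate at $x=1/2$ should hover near $1/2$.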
> In mathematics, a square-integrable function, also called a quadratically integrable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite. I am having some difficulties understanding the difference between simplicial and singular homology. I am aware of the fact that they are isomorphic, i.e. the homology groups are in fact the same (and maybe this doesn't help my intuition), but I am having trouble seeing where in the setup they d... Usually it is a great advantage to consult the notes, as they tell you exactly what has been done. A book will teach you the field, but not necessarily help you understand the style that the prof. (who creates the exam) creates questions in. @AkivaWeinberger having thought about it a little, I think the best way to approach the geometry problem is to argue that the relevant condition (centroid is on the incircle) is preserved by similarity transformations, hence you're free to rescale the sides, and therefore the (semi)perimeter as well, so one may (for instance) choose $s=(a+b+c)/2=1$ without loss of generality. That makes a lot of the formulas simpler, e.g. the inradius is identical to the area. It is asking how many terms of the Euler Maclaurin formula do we need in order to compute the Riemann zeta function in the complex plane? $q$ is the upper summation index in the sum with the Bernoulli numbers. This appears to answer it in the positive: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right-half plane σ > 1 − k, for all k = 1, 2, 3, . . .."
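For concreteness, here is a rough Euler–Maclaurin evaluation of $\zeta(s)$ in the spirit of that question (a sketch of mine, not the paper's code; the truncation parameters $N$ and $q$ are arbitrary, and with $q$ Bernoulli terms the formula continues $\zeta$ to $\sigma > 1-2q$ up to the error term):

```python
from math import factorial

# Bernoulli numbers B_2, B_4, ..., B_12
B = [1/6, -1/30, 1/42, -1/30, 5/66, -691/2730]

def zeta_em(s, N=20, q=5):
    """Euler-Maclaurin approximation of the Riemann zeta function
    (s may be complex, away from the pole at s = 1; q <= 6 here)."""
    total = sum(n ** (-s) for n in range(1, N))
    total += N ** (-s) / 2 + N ** (1 - s) / (s - 1)
    for k in range(1, q + 1):
        rising = 1                    # s (s+1) ... (s + 2k - 2)
        for j in range(2 * k - 1):
            rising *= s + j
        total += B[k - 1] / factorial(2 * k) * rising * N ** (1 - s - 2 * k)
    return total
```

As checks, `zeta_em(2)` agrees with $\pi^2/6$, and `zeta_em(-1)` recovers the continued value $\zeta(-1)=-1/12$.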
We saw that the \(E (N, V, S)\) and \(A (N, V, T)\) could be related by a Legendre transformation. The partition functions \(\Omega (N, V, E)\) and \(Q (N, V, T)\) can be related by a Laplace transform. Recall that the Laplace transform \(\tilde {f} (\lambda)\) of a function \( f (x)\) is given by \[ \tilde {f} (\lambda) = \int _{0}^{\infty} dx e^{- \lambda x} f (x) \] Applying this transform to the microcanonical partition function \(\Omega (N, V, E) = C_N \int dx\, \delta (H (x) - E)\) gives \[ \tilde {\Omega} (N, V, \lambda ) = C_N \int _{0}^{\infty} dE e^{- \lambda E} \int dx \delta ( H (x) - E ) \] Using the \(\delta\)-function to do the integral over \(E\): \[\tilde {\Omega} (N, V, \lambda ) = C_N \int dx e^{- \lambda H (x) } \] By identifying \(\lambda = \beta \), we see that the Laplace transform of the microcanonical partition function gives the canonical partition function \(Q (N, V, T ) \).
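As a concrete illustration (a standard textbook example, not part of the derivation above), take a single classical harmonic oscillator with \(H = p^2/2m + m\omega^2 x^2/2\). The microcanonical partition function is \[ \Omega(E) = C_1 \int dx\, dp\, \delta\!\left(\frac{p^2}{2m} + \frac{m\omega^2 x^2}{2} - E\right) = C_1\, \frac{2\pi}{\omega}, \] independent of \(E\), so its Laplace transform is \[ \tilde{\Omega}(\lambda) = C_1\, \frac{2\pi}{\omega} \int_{0}^{\infty} dE\, e^{-\lambda E} = C_1\, \frac{2\pi}{\lambda \omega}. \] Setting \(\lambda = \beta\), this coincides with the direct canonical phase-space integral \[ Q = C_1 \int dx\, dp\, e^{-\beta H} = C_1 \sqrt{\frac{2\pi m}{\beta}} \sqrt{\frac{2\pi}{\beta m \omega^2}} = C_1\, \frac{2\pi}{\beta \omega}. \]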
Characterization of Strict Positive Definiteness on products of complex spheres Abstract In this paper we consider Positive Definite functions on products \(\Omega _{2q}\times \Omega _{2p}\) of complex spheres, and we obtain a condition, in terms of the coefficients in their disc polynomial expansions, which is necessary and sufficient for the function to be Strictly Positive Definite. The result includes also the more delicate cases in which p and/or q can be 1 or \(\infty \). The condition we obtain states that a suitable set in \({\mathbb {Z}}^2\), containing the indexes of the strictly positive coefficients in the expansion, must intersect every product of arithmetic progressions. Keywords: Strictly Positive Definite functions; Product of complex spheres; Generalized Zernike polynomial. Mathematics Subject Classification: 42A82; 42C10. Acknowledgements Mario H. Castro was supported by: Grant \(\#\)APQ-00474-14, FAPEMIG and CNPq/Brazil. Eugenio Massa was supported by: Grant \(\#\)2014/25398-0, São Paulo Research Foundation (FAPESP) and Grant \(\#\)303447/2017-6, CNPq/Brazil. Ana P. Peron was supported by: Grants \(\#\)2016/03015-7, \(\#\)2016/09906-0 and \(\#\)2014/25796-5, São Paulo Research Foundation (FAPESP).
Regular Pentagon Construction by K. Knop What Is This About? Problem Construction Draw line $OA.\,$ Mark point $R,\,$ the second intersection of the line with the given circle $O(A).$ Draw the circle $R(O)\,$ through $O\,$ centered at $R.\,$ Mark point $N,\,$ one of the intersections of the two circles. Mark point $S,\,$ the second intersection of line $OA\,$ with $R(O).$ Draw the circle $S(O),\,$ through $O\,$ centered at $S.\,$ Mark point $M\,$ - one of the intersections of $S(O)\,$ and $O(A).\,$ Choose the one nearest to $N.$ Draw line $MN\,$ and mark point $Q\,$ of its intersection with $OA.$ Draw circle $R(Q)\,$ and mark its intersections, say, $B\,$ and $E,\,$ with $O(A).$ Draw $BQ\,$ and $EQ\,$ and mark their intersections - $D\,$ and $C\,$ - with $O(A).\,$ $ABDCE\,$ is a regular pentagon. Proof of the construction Assume circle $O(A)\,$ is defined by $x^2+y^2=1.\,$ Then $R(O)\,$ is defined by $(x+1)^2+y^2=1,\,$ and $S(O)\,$ by $(x+2)^2+y^2=4.$ We can find $\displaystyle M=\left(-\frac{1}{4},-\frac{\sqrt{15}}{4}\right)\,$ and $\displaystyle N=\left(-\frac{1}{2},-\frac{\sqrt{3}}{2}\right).\,$ With these $\displaystyle Q=\left(-\frac{3+\sqrt{5}}{2},0\right)\,$ such that $|QR|=\displaystyle \frac{1+\sqrt{5}}{2}=\varphi,\,$ the Golden ratio. The rest of the proof is left as an exercise to the reader. Acknowledgment The above problem comes from an uncommon site euclidea, devoted to the problems of Euclidean construction. The site and the problem have been brought to my attention by Konstantin Knop who also generously shared his construction. Approximate Construction of Regular Pentagon by A. Durer Construction of Regular Pentagon by H. W. Richmond Inscribing a regular pentagon in a circle - and proving it Regular Pentagon Construction by Y. Hirano Regular Pentagon Inscribed in Circle by Paper Mascheroni Construction of a Regular Pentagon
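The coordinate computations in the proof are easy to double-check numerically (a quick sketch of mine; variable names are not from the original):

```python
from math import sqrt

# Points from the proof, with O(A): x^2 + y^2 = 1
M = (-1/4, -sqrt(15)/4)   # lies on S(O): (x+2)^2 + y^2 = 4
N = (-1/2, -sqrt(3)/2)    # lies on R(O): (x+1)^2 + y^2 = 1

# Q = intersection of line MN with the axis OA (the x-axis)
t = M[1] / (M[1] - N[1])
Qx = M[0] + t * (N[0] - M[0])

phi = (1 + sqrt(5)) / 2   # the golden ratio
```

One finds `Qx` equal to $-(3+\sqrt5)/2$, and the distance from $Q$ to $R=(-1,0)$ equal to $\varphi$, as claimed.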
Here we want to give an easy mathematical bootstrap argument why solutions to the time independent 1D Schrödinger equation (TISE) tend to be rather nice. First formally rewrite the differential form$$-\frac{\hbar^2}{2m} \psi^{\prime\prime}(x) + V(x) \psi(x) ~=~ E \psi(x) \tag{1}$$into the int... [Some time travel comments] Since in the previous paragraph we have explained how travelling to the future will not necessarily land you in the future that would have resulted had you never time travelled (via twin paradox), what is the reason that the past you travelled back to has to be the past you learnt from historical records? @0ßelö7 Well, I'd omit the explanation of the notation on the slide itself, and since there seems to be two pairs of formulae, I'd just put one of the two and then say that there's another one with suitable substitutions. I mean, "Hey, I bet you've always wondered how to prove X - here it is" is interesting. "Hey, you know that statement everyone knows how to prove but doesn't bother to write down? Here is the proof written down" significantly less so. Sorry I have a quick question: For questions like this physics.stackexchange.com/questions/356260/… where the accepted answer clearly does not answer the original question what is the best thing to do; downvote, flag or just leave it? So this question says express $u^0$ in terms of $u^j$ where $u$ is the four-velocity and I get what $u^0$ and $u^j$ are but I'm a bit confused how to go about this one? I thought maybe using the space-time interval and evaluating for $\frac{dt}{d\tau}$ but it's not working out for me... :/ Anyone give me a quickie starter please? :p Although a physics question, this is still important to chemistry. The delocalized electric field is related to the force (and therefore the repulsive potential) between two electrons. This in turn is what we need to solve the Schrödinger Equation to describe molecules.
Short answer: You can calculate the expectation value of the corresponding operator, which comes close to the mentioned superposition. — Feodoran, 13 hours ago If we take an electron that's delocalised w.r.t position, how can one evaluate the electric field over some space? Is it some superposition or a sort of field with all the charge at the expectation value of the position? @0ßelö7 I just looked back at chat and noticed Phase's question, I wasn't purposefully ignoring you - do you want me to look over it? Because I don't think I'll gain much personally from reading the slides. Maybe it's just me having not really done much with Eigenbases but I don't recognise where I "put it in terms of M's eigenbasis". I just wrote it down for some vector v, rather than a space that contains all of the vectors v. Honey, I Shrunk the Kids is a 1989 American comic science fiction film. The directorial debut of Joe Johnston and produced by Walt Disney Pictures, it tells the story of an inventor who accidentally shrinks his and his neighbor's kids to a quarter of an inch with his electromagnetic shrinking machine and throws them out into the backyard with the trash, where they must venture into their backyard to return home while fending off insects and other obstacles. Rick Moranis stars as Wayne Szalinski, the inventor who accidentally shrinks his children, Amy (Amy O'Neill) and Nick (Robert Oliveri). Marcia...
The Monster is the largest of the 26 sporadic simple groups and has order 808 017 424 794 512 875 886 459 904 961 710 757 005 754 368 000 000 000 = 2^46 3^20 5^9 7^6 11^2 13^3 17 19 23 29 31 41 47 59 71. It is not so much the size of its order that makes it hard to do actual calculations in the monster, but rather the dimensions of its smallest non-trivial irreducible representations (196 883 for the smallest, 21 296 876 for the next one, and so on). In characteristic two there is an irreducible representation of one dimension less (196 882) which appears to be of great use to obtain information. For example, Robert Wilson used it to prove that The Monster is a Hurwitz group. This means that the Monster is generated by two elements g and h satisfying the relations $g^2 = h^3 = (gh)^7 = 1 $ Geometrically, this implies that the Monster is the automorphism group of a Riemann surface of genus g satisfying the Hurwitz bound 84(g-1)=#Monster. That is, g=9619255057077534236743570297163223297687552000000001=42151199 * 293998543 * 776222682603828537142813968452830193 Or, in analogy with the Klein quartic which can be constructed from 24 heptagons in the tiling of the hyperbolic plane, there is a finite region of the hyperbolic plane, tiled with heptagons, from which we can construct this monster curve by gluing the boundary in a specific way so that we get a Riemann surface with exactly 9619255057077534236743570297163223297687552000000001 holes. This finite part of the hyperbolic tiling (consisting of #Monster/7 heptagons) we’ll call the empire of the monster and we’d love to describe it in more detail. Look at the half-edges of all the heptagons in the empire (the picture above shows that every edge is cut in two by a blue geodesic). There are exactly #Monster such half-edges and they form a dessin d’enfant for the monster-curve.
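The genus claim is a one-liner to verify with exact integer arithmetic (a quick sketch; the only inputs are the prime factorisation of the order and the Hurwitz relation quoted above):

```python
# |M| from the prime factorisation quoted above
order = (2**46 * 3**20 * 5**9 * 7**6 * 11**2 * 13**3
         * 17 * 19 * 23 * 29 * 31 * 41 * 47 * 59 * 71)

# Hurwitz bound with equality: 84(g - 1) = |M|
genus = order // 84 + 1
```

The computed `genus` matches the 52-digit number in the text.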
If we label these half-edges by the elements of the Monster, then multiplication by g in the monster interchanges the two half-edges making up a heptagonal edge in the empire and multiplication by h in the monster takes a half-edge to the one encountered first by going counter-clockwise in the vertex of the heptagonal tiling. Because g and h generate the Monster, the dessin of the empire is just a concrete realization of the monster. Because g is of order two and h is of order three, the two permutations they determine on the dessin, gives a group epimorphism $C_2 \ast C_3 = PSL_2(\mathbb{Z}) \rightarrow \mathbb{M} $ from the modular group $PSL_2(\mathbb{Z}) $ onto the Monster-group. In noncommutative geometry, the group-algebra of the modular group $\mathbb{C} PSL_2(\mathbb{Z}) $ can be interpreted as the coordinate ring of a noncommutative manifold (because it is formally smooth in the sense of Kontsevich-Rosenberg or Cuntz-Quillen) and the group-algebra of the Monster $\mathbb{C} \mathbb{M} $ itself corresponds in this picture to a finite collection of ‘points’ on the manifold. Using this geometric viewpoint we can now ask the question What does the Monster see of the modular group? To make sense of this question, let us first consider the commutative equivalent : what does a point P see of a commutative variety X? Evaluation of polynomial functions in P gives us an algebra epimorphism $\mathbb{C}[X] \rightarrow \mathbb{C} $ from the coordinate ring of the variety $\mathbb{C}[X] $ onto $\mathbb{C} $ and the kernel of this map is the maximal ideal $\mathfrak{m}_P $ of $\mathbb{C}[X] $ consisting of all functions vanishing in P. Equivalently, we can view the point $P= \mathbf{spec}~\mathbb{C}[X]/\mathfrak{m}_P $ as the scheme corresponding to the quotient $\mathbb{C}[X]/\mathfrak{m}_P $. Call this the 0-th formal neighborhood of the point P. This sounds pretty useless, but let us now consider higher-order formal neighborhoods.
Call the affine scheme $\mathbf{spec}~\mathbb{C}[X]/\mathfrak{m}_P^{n+1} $ the n-th formal neighborhood of P, then the first neighborhood, that is with coordinate ring $\mathbb{C}[X]/\mathfrak{m}_P^2 $ gives us tangent-information. Alternatively, it gives the best linear approximation of functions near P. The second neighborhood $\mathbb{C}[X]/\mathfrak{m}_P^3 $ gives us the best quadratic approximation of functions near P, etc. etc. These successive quotients by powers of the maximal ideal $\mathfrak{m}_P $ form a system of algebra epimorphisms $\ldots \frac{\mathbb{C}[X]}{\mathfrak{m}_P^{n+1}} \rightarrow \frac{\mathbb{C}[X]}{\mathfrak{m}_P^{n}} \rightarrow \ldots \ldots \rightarrow \frac{\mathbb{C}[X]}{\mathfrak{m}_P^{2}} \rightarrow \frac{\mathbb{C}[X]}{\mathfrak{m}_P} = \mathbb{C} $ and its inverse limit $\underset{\leftarrow}{lim}~\frac{\mathbb{C}[X]}{\mathfrak{m}_P^{n}} = \hat{\mathcal{O}}_{X,P} $ is the completion of the local ring in P and contains all the infinitesimal information (to any order) of the variety X in a neighborhood of P. That is, this completion $\hat{\mathcal{O}}_{X,P} $ contains all information that P can see of the variety X. In case P is a smooth point of X, then X is a manifold in a neighborhood of P and then this completion $\hat{\mathcal{O}}_{X,P} $ is isomorphic to the algebra of formal power series $\mathbb{C}[[ x_1,x_2,\ldots,x_d ]] $ where the $x_i $ form a local system of coordinates for the manifold X near P. Right, after this lengthy recollection, back to our question what does the monster see of the modular group?
Well, we have an algebra epimorphism $\pi~:~\mathbb{C} PSL_2(\mathbb{Z}) \rightarrow \mathbb{C} \mathbb{M} $ and in analogy with the commutative case, all information the Monster can gain from the modular group is contained in the $\mathfrak{m} $-adic completion $\widehat{\mathbb{C} PSL_2(\mathbb{Z})}_{\mathfrak{m}} = \underset{\leftarrow}{lim}~\frac{\mathbb{C} PSL_2(\mathbb{Z})}{\mathfrak{m}^n} $ where $\mathfrak{m} $ is the kernel of the epimorphism $\pi $ sending the two free generators of the modular group $PSL_2(\mathbb{Z}) = C_2 \ast C_3 $ to the permutations g and h determined by the dessin of the heptagonal tiling of the Monster’s empire. As it is a hopeless task to determine the Monster-empire explicitly, it seems even more hopeless to determine the kernel $\mathfrak{m} $ let alone the completed algebra… But, (surprise) we can compute $\widehat{\mathbb{C} PSL_2(\mathbb{Z})}_{\mathfrak{m}} $ as explicitly as in the commutative case we have $\hat{\mathcal{O}}_{X,P} \simeq \mathbb{C}[[ x_1,x_2,\ldots,x_d ]] $ for a point P on a manifold X. Here are the details: the quotient $\mathfrak{m}/\mathfrak{m}^2 $ has a natural structure of $\mathbb{C} \mathbb{M} $-bimodule. The group-algebra of the monster is a semi-simple algebra, that is, a direct sum of full matrix-algebras of sizes corresponding to the dimensions of the irreducible monster-representations. That is, $\mathbb{C} \mathbb{M} \simeq \mathbb{C} \oplus M_{196883}(\mathbb{C}) \oplus M_{21296876}(\mathbb{C}) \oplus \ldots \ldots \oplus M_{258823477531055064045234375}(\mathbb{C}) $ with exactly 194 components (the number of irreducible Monster-representations).
For any $\mathbb{C} \mathbb{M} $-bimodule $M $ one can form the tensor-algebra $T_{\mathbb{C} \mathbb{M}}(M) = \mathbb{C} \mathbb{M} \oplus M \oplus (M \otimes_{\mathbb{C} \mathbb{M}} M) \oplus (M \otimes_{\mathbb{C} \mathbb{M}} M \otimes_{\mathbb{C} \mathbb{M}} M) \oplus \ldots $ and, applying the formal neighborhood theorem for formally smooth algebras (such as $\mathbb{C} PSL_2(\mathbb{Z}) $) due to Joachim Cuntz and Daniel Quillen, we have an isomorphism of algebras $\widehat{\mathbb{C} PSL_2(\mathbb{Z})}_{\mathfrak{m}} \simeq \widehat{T_{\mathbb{C} \mathbb{M}}(\mathfrak{m}/\mathfrak{m}^2)} $ where the right-hand side is the completion of the tensor-algebra (at the unique graded maximal ideal) of the $\mathbb{C} \mathbb{M} $-bimodule $\mathfrak{m}/\mathfrak{m}^2 $, so we’d better describe this bimodule explicitly. Okay, so what’s a bimodule over a semisimple algebra of the form $S=M_{n_1}(\mathbb{C}) \oplus \ldots \oplus M_{n_k}(\mathbb{C}) $? Well, a simple S-bimodule must be either (1) a factor $M_{n_i}(\mathbb{C}) $ with all other factors acting trivially or (2) the full space of rectangular matrices $M_{n_i \times n_j}(\mathbb{C}) $ with the factor $M_{n_i}(\mathbb{C}) $ acting on the left, $M_{n_j}(\mathbb{C}) $ acting on the right and all other factors acting trivially. Hence any S-bimodule can be represented by a quiver (that is, a directed graph) on k vertices (the number of matrix components) with a loop at vertex i corresponding to each simple factor of type (1) and a directed arrow from i to j corresponding to every simple factor of type (2). That is, for the Monster, the bimodule $\mathfrak{m}/\mathfrak{m}^2 $ is represented by a quiver on 194 vertices, and now we only have to determine how many loops and arrows there are at or between vertices.
Using Morita equivalences and standard representation theory of quivers it isn’t exactly rocket science to determine that the number of arrows between the vertices corresponding to the irreducible Monster-representations $S_i $ and $S_j $ is equal to $dim_{\mathbb{C}}~Ext^1_{\mathbb{C} PSL_2(\mathbb{Z})}(S_i,S_j)-\delta_{ij} $ Now, I’ve been wasting a lot of time already here explaining what representations of the modular group have to do with quivers (see for example here or some other posts in the same series) and for quiver-representations we all know how to compute Ext-dimensions in terms of the Euler-form applied to the dimension vectors. Right, so for every Monster-irreducible $S_i $ we have to determine the corresponding dimension-vector $~(a_1,a_2;b_1,b_2,b_3) $ for the quiver $\xymatrix{ & & & & \vtx{b_1} \\ \vtx{a_1} \ar[rrrru]^(.3){B_{11}} \ar[rrrrd]^(.3){B_{21}} \ar[rrrrddd]_(.2){B_{31}} & & & & \\ & & & & \vtx{b_2} \\ \vtx{a_2} \ar[rrrruuu]_(.7){B_{12}} \ar[rrrru]_(.7){B_{22}} \ar[rrrrd]_(.7){B_{23}} & & & & \\ & & & & \vtx{b_3}} $ Now the dimensions $a_i $ are the dimensions of the +/-1 eigenspaces for the order 2 element g in the representation and the $b_i $ are the dimensions of the eigenspaces for the order 3 element h. So, we have to determine to which conjugacy classes g and h belong, and from Wilson’s paper mentioned above these are classes 2B and 3B in standard Atlas notation. So, for each of the 194 irreducible Monster-representations we look up the character values at 2B and 3B (see below for the first batch of those) and these together with the dimensions determine the dimension vector $~(a_1,a_2;b_1,b_2,b_3) $. For example take the 196883-dimensional irreducible. Its 2B-character is 275 and the 3B-character is 53. 
So we are looking for a dimension vector such that $a_1+a_2=196883, a_1-275=a_2 $ and $b_1+b_2+b_3=196883, b_1-53=b_2=b_3 $, giving us for that representation the dimension vector of the quiver above $~(98579,98304;65663,65610,65610) $. Okay, so for each of the 194 irreducibles $S_i $ we have determined a dimension vector $~(a_1(i),a_2(i);b_1(i),b_2(i),b_3(i)) $; then standard quiver-representation theory asserts that the number of loops at the vertex corresponding to $S_i $ is equal to $dim(S_i)^2 + 1 - a_1(i)^2-a_2(i)^2-b_1(i)^2-b_2(i)^2-b_3(i)^2 $ and that the number of arrows from vertex $S_i $ to vertex $S_j $ is equal to $dim(S_i)dim(S_j) - a_1(i)a_1(j)-a_2(i)a_2(j)-b_1(i)b_1(j)-b_2(i)b_2(j)-b_3(i)b_3(j) $. This data then determines completely the $\mathbb{C} \mathbb{M} $-bimodule $\mathfrak{m}/\mathfrak{m}^2 $ and hence the structure of the completion $\widehat{\mathbb{C} PSL_2(\mathbb{Z})}_{\mathfrak{m}} $ containing all the information the Monster can gain from the modular group. But then, one doesn’t have to go for the full regular representation of the Monster. Any faithful permutation representation will do, so we might as well go for the one of minimal dimension. That one is known to correspond to the largest maximal subgroup of the Monster, which is known to be a two-fold extension $2.\mathbb{B} $ of the Baby Monster. The corresponding permutation representation is of dimension 97239461142009186000 and decomposes into Monster-irreducibles $S_1 \oplus S_2 \oplus S_4 \oplus S_5 \oplus S_9 \oplus S_{14} \oplus S_{21} \oplus S_{34} \oplus S_{35} $ (in standard Atlas-ordering), and hence repeating the arguments above we get a quiver on just 9 vertices! The actual numbers of loops and arrows (I forgot to mention this, but the quivers obtained are actually symmetric) were found after laborious computations mentioned in this post, and the details I’ll make available here.
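The arithmetic in the 196883-example is easy to redo mechanically. A quick Python sanity check, using only the constraints quoted above (character values 275 at 2B and 53 at 3B) and the loop-count formula from the text:

```python
# Sanity check (hedged: this only reproduces the arithmetic in the text,
# with the character values 275 at class 2B and 53 at class 3B quoted
# from the post, not recomputed from the Atlas).
dim, chi2B, chi3B = 196883, 275, 53

# eigenspace dimensions of the order-2 element g: a1 + a2 = dim, a1 - a2 = chi2B
a1 = (dim + chi2B) // 2
a2 = (dim - chi2B) // 2

# eigenspace dimensions of the order-3 element h: b1 + b2 + b3 = dim,
# with b1 - chi3B = b2 = b3 as in the text
b2 = b3 = (dim - chi3B) // 3
b1 = b2 + chi3B

print((a1, a2, b1, b2, b3))  # (98579, 98304, 65663, 65610, 65610)

# number of loops at this vertex, from the quiver formula in the text
loops = dim**2 + 1 - a1**2 - a2**2 - b1**2 - b2**2 - b3**2
print(loops)
```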
Anyone who can spot a relation between the numbers obtained and any other part of mathematics will obtain quantities of genuine (i.e. non-InBev) Belgian beer…
Now showing items 1-10 of 33

- The ALICE Transition Radiation Detector: Construction, operation, and performance (Elsevier, 2018-02). The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
- Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2018-02). In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ...
- First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC (Elsevier, 2018-01). This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
- First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV (Elsevier, 2018-06). The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
- D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV (American Physical Society, 2018-03). The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
- Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV (Elsevier, 2018-05). We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
- Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2018-02). The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
- $\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV (Springer, 2018-03). An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
- J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2018-01). We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
- Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ and 2.76 TeV (Springer Berlin Heidelberg, 2018-07-16). Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ and 2.76 TeV are reported in the pseudorapidity range $|\eta| < 0.8$ ...
The parameter called rate is indeed the one with the usual name $\lambda$. And the mean of the distribution with density function $\lambda e^{-\lambda t}$ (for $t\ge 0$) is $\frac{1}{\lambda}$. This makes the choice of name $\mu$ for the rate surprising, since $\mu$ is a common name for the mean. By the memorylessness property of the exponential, the time $X_0$ until the person currently being served is finished has exponential distribution with mean $\frac{1}{\mu}$. And the times of service $X_1$, $X_2$, $X_3$, and $X_4$ of the people waiting in line have mean $\frac{1}{\mu}$. And the time $X_5$ from the instant you get to the teller to the time you are finished has the same mean. So your expected total time at the bank is $E(X_0+X_1+\cdots +X_4+X_5)$. By the linearity of expectation, this is $\frac{6}{\mu}$.
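A quick Monte Carlo sketch confirms the computation (the rate mu = 2 used here is an arbitrary illustrative value):

```python
# Simulate the bank visit: by memorylessness, the remaining service time of
# the person at the teller is Exp(mu), as are the four people in line and
# your own service, so the expected total is 6/mu.
import random

random.seed(0)
mu = 2.0          # service rate; mean service time is 1/mu
n = 100_000       # number of simulated visits

total = 0.0
for _ in range(n):
    # six independent Exp(mu) stages: X0 (memorylessness!), X1..X4, X5
    total += sum(random.expovariate(mu) for _ in range(6))

estimate = total / n
print(estimate, 6 / mu)  # the estimate should be close to 6/mu = 3.0
```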
Computer Science - Networking and Internet Architecture Abstract Packet classification according to a multi-field ruleset is a key component of many network applications. Emerging software defined networking and cloud computing need to update rulesets frequently for flexible policy configuration. Their success depends on the availability of a new generation of classifiers that can support both fast ruleset updating and high-speed packet classification. However, existing packet classification approaches focus either on high-speed packet classification or on fast rule update, but no known scheme meets both requirements. In this paper, we propose Range-Vector Hash (RVH) to effectively accelerate packet classification with a hash-based algorithm while ensuring fast rule update. RVH is built on our key observation that the number of distinct combinations of field prefix lengths is not evenly distributed. To reduce the number of hash tables for fast classification, we introduce a novel concept, the range-vector, each of which specifies the length range of each field prefix of the projected rules. RVH can overcome the major obstacle that hinders hash-based packet classification by balancing the number of hash tables against the probability of hash collision. Experimental results demonstrate that RVH can achieve classification speed up to 15.7 times and update speed up to 2.3 times that of the state-of-the-art algorithms on average, while consuming 44% less memory. Computer Science - Machine Learning and Statistics - Machine Learning Abstract Deep learning (DL) techniques have demonstrated satisfactory performance in many tasks, even in safety-critical applications. Reliability is hence a critical consideration for DL-based systems. However, the statistical nature of DL makes it quite vulnerable to invalid inputs, i.e., those cases that are not considered in the training phase of a DL model.
This paper proposes to perform a data sanity check to identify invalid inputs, so as to enhance the reliability of DL-based systems. To this end, we design and implement a tool to detect behavior deviation of a DL model when processing an input case, and consider such deviation a symptom of an invalid input case. Via a light, automatic instrumentation of the target DL model, this tool extracts the data flow footprints and conducts an assertion-based validation mechanism. Computer Science - Computation and Language and Computer Science - Computer Vision and Pattern Recognition Abstract This paper presents a new metric called TIGEr for the automatic evaluation of image captioning systems. Popular metrics, such as BLEU and CIDEr, are based solely on text matching between reference captions and machine-generated captions, potentially leading to biased evaluations because references may not fully cover the image content and natural language is inherently ambiguous. Building upon a machine-learned text-image grounding model, TIGEr allows caption quality to be evaluated not only based on how well a caption represents image content, but also on how well machine-generated captions match human-generated captions. Our empirical tests show that TIGEr has a higher consistency with human judgments than alternative existing metrics. We also comprehensively assess the metric's effectiveness in caption evaluation by measuring the correlation between human judgments and metric scores. Gu, Jiazhen, Xu, Huanlin, Zhou, Yangfan, Wang, Xin, Xu, Hui, and Lyu, Michael Subjects Computer Science - Machine Learning, Electrical Engineering and Systems Science - Signal Processing, and Statistics - Machine Learning Abstract Deep neural networks (DNNs) are shown to be promising solutions in many challenging artificial intelligence tasks, including object recognition, natural language processing, and even unmanned driving.
A DNN model, generally based on statistical summarization of in-house training data, aims to predict the correct output given an input encountered in the wild. In general, 100% precision is therefore impossible due to its probabilistic nature. For DNN practitioners, it is very hard, if not impossible, to figure out whether the low precision of a DNN model is an inevitable result, or is caused by defects such as bad network design or an improper training process. This paper aims at addressing this challenging problem. We approach it with a careful categorization of the root causes of low precision. We find that the internal data flow footprints of a DNN model can provide insights to locate the root cause effectively. We then develop a tool, namely DeepMorph (DNN Tomography), to analyze the root cause, which can instantly guide a DNN developer to improve the model. Case studies on four popular datasets show the effectiveness of DeepMorph. Electrical Engineering and Systems Science - Audio and Speech Processing, Computer Science - Computation and Language, Computer Science - Sound, and Statistics - Machine Learning Abstract End-to-end text-to-speech (TTS) synthesis is a method that directly converts input text to output acoustic features using a single network. A recent advance of end-to-end TTS is due to a key technique called the attention mechanism, and all successful methods proposed so far have been based on soft attention mechanisms. However, although network structures are becoming increasingly complex, end-to-end TTS systems with soft attention mechanisms may still fail to learn and to predict accurate alignment between the input and output. This may be because the soft attention mechanisms are too flexible. Therefore, we propose an approach that has more explicit but natural constraints suitable for speech signals to make alignment learning and prediction of end-to-end TTS systems more robust.
The proposed system, with the constrained alignment scheme borrowed from segment-to-segment neural transduction (SSNT), directly calculates the joint probability of acoustic features and alignment given an input text. The alignment is designed to be hard and monotonically increasing by considering the nature of speech, and it is treated as a latent variable and marginalized during training. During prediction, both the alignment and acoustic features can be generated from the probabilistic distributions. The advantages of our approach are that we can simplify many modules needed for soft attention and that we can train the end-to-end TTS model using a single likelihood function. As far as we know, our approach is the first end-to-end TTS without a soft attention mechanism. Comment: To appear at SSW10 Computer Science - Artificial Intelligence and Computer Science - Computation and Language Abstract Learning target side syntactic structure has been shown to improve Neural Machine Translation (NMT). However, incorporating syntax through latent variables introduces additional complexity in inference, as the models need to marginalize over the latent syntactic structures. To avoid this, models often resort to greedy search, which only allows them to explore a limited portion of the latent space. In this work, we introduce a new latent variable model, LaSyn, that captures the co-dependence between syntax and semantics, while allowing for effective and efficient inference over the latent space. LaSyn decouples direct dependence between successive latent variables, which allows its decoder to exhaustively search through the latent syntactic choices, while keeping decoding speed proportional to the size of the latent variable vocabulary. We implement LaSyn by modifying a transformer-based NMT system and design a neural expectation maximization algorithm that we regularize with part-of-speech information as the latent sequences.
Evaluations on four different MT tasks show that incorporating target side syntax with LaSyn improves translation quality and also provides an opportunity to improve diversity. Comment: In proceedings of EMNLP 2019 Ma, Qinglin, Guo, Yiqing, Li, Xiao-Dong, Wang, Xin, Miao, Haitao, Li, Zhigang, Sabiu, Cristiano G., and Park, Hyunbae Subjects Astrophysics - Cosmology and Nongalactic Astrophysics Abstract The tomographic AP method is so far the best method for separating the Alcock-Paczynski (AP) signal from the redshift space distortion (RSD) effects and deriving powerful constraints on cosmological parameters using the $\lesssim40h^{-1}\ \rm Mpc$ clustering region. To guarantee that the method can be safely applied to the future large scale structure (LSS) surveys, we perform a detailed study of the systematics of the method. The major contribution to the systematics comes from the non-zero redshift evolution of the RSD effects. It leads to non-negligible redshift evolution in the clustering anisotropy, which is quantified by $\hat\xi_{\Delta s}(\mu,z)$ in our analysis, and estimated using the BigMultidark N-body and COLA simulation samples. We find about 5\%/10\% evolution when comparing the $\hat\xi_{\Delta s}(\mu,z)$ measured at $z=0.5$/$z=1$ to the measurements at $z=0$. The inaccuracy of COLA is 5-10 times smaller than the intrinsic systematics, indicating that using it to estimate the systematics is good enough. In addition, we find the magnitude of systematics significantly increases if we enlarge the clustering scale to $40-150\ h^{-1}\ \rm Mpc$, indicating that using the $\lesssim40h^{-1}\ \rm Mpc$ region is an optimal choice. Finally, we test the effect of halo bias, and find $\lesssim$1.5\% change in $\hat\xi_{\Delta s}$ when varying the halo mass within the range of $2\times 10^{12}$ to $10^{14}$ $M_{\odot}$. We will perform more studies to achieve an accurate and efficient estimation of the systematics in the redshift range $z=0-1.5$.
Comment: 13 pages, 5 figures Electrical Engineering and Systems Science - Audio and Speech Processing and Computer Science - Sound Abstract Neural source-filter (NSF) models are deep neural networks that produce waveforms given input acoustic features. They use dilated-convolution-based neural filter modules to filter sine-based excitation for waveform generation, which is different from WaveNet and flow-based models. One of the NSF models, called the harmonic-plus-noise NSF (h-NSF) model, uses separate pairs of source and neural filters to generate harmonic and noise waveform components. It is close to WaveNet in terms of speech quality while being superior in generation speed. The h-NSF model can be improved even further. While h-NSF merges the harmonic and noise components using pre-defined digital low- and high-pass filters, it is well known that the maximum voice frequency (MVF) that separates the periodic and aperiodic spectral bands is time-variant. Therefore, we propose a new h-NSF model with a time-variant and trainable MVF. We parameterize the digital low- and high-pass filters as windowed-sinc filters and predict their cut-off frequency (i.e., MVF) from the input acoustic features. Our experiments demonstrated that the new model can predict a good trajectory of the MVF and produce high-quality speech for a text-to-speech synthesis system. Comment: Accepted by Speech Synthesis Workshop 2019 Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning, and Computer Science - Robotics Abstract We propose a self-supervised learning framework for visual odometry (VO) that incorporates correlation of consecutive frames and takes advantage of adversarial learning. Previous methods tackle self-supervised VO as a local structure from motion (SfM) problem that recovers depth from a single image and relative poses from image pairs by minimizing photometric loss between warped and captured images.
As single-view depth estimation is an ill-posed problem, and photometric loss is incapable of discriminating distortion artifacts of warped images, the estimated depth is vague and the pose is inaccurate. In contrast to previous methods, our framework learns a compact representation of frame-to-frame correlation, which is updated by incorporating sequential information. The updated representation is used for depth estimation. Besides, we tackle VO as a self-supervised image generation task and take advantage of Generative Adversarial Networks (GANs). The generator learns to estimate depth and pose to generate a warped target image. The discriminator evaluates the quality of the generated image with high-level structural perception that overcomes the problem of pixel-wise loss in previous methods. Experiments on the KITTI and Cityscapes datasets show that our method obtains more accurate depth with details preserved, and the predicted pose outperforms state-of-the-art self-supervised methods significantly. Comment: Accepted to ICCV 2019 In this paper, we show that the ratio of the effective Jarlskog invariant $\widetilde{\cal J}$ for leptonic CP violation in three-flavor neutrino oscillations in matter to its counterpart ${\cal J}$ in vacuum $\widetilde{\cal J}/{\cal J} \approx 1/(\widehat{C}^{}_{12} \widehat{C}^{}_{13})$ holds as an excellent approximation, where $\widehat{C}^{}_{12} \equiv \sqrt{1 - 2 \widehat{A}^{}_* \cos 2\theta^{}_{12} + \widehat{A}^2_*}$ with $\widehat{A}^{}_* \equiv a\cos^2 \theta^{}_{13}/\Delta^{}_{21}$ and $\widehat{C}^{}_{13} \equiv \sqrt{1 - 2 A^{}_{\rm c} \cos 2\theta^{}_{13} + A^2_{\rm c}}$ with $A^{}_{\rm c} \equiv a/\Delta^{}_{\rm c}$.
Here $\Delta^{}_{ij} \equiv m^2_i - m^2_j$ (for $ij = 21, 31, 32$) stand for the neutrino mass-squared differences in vacuum and $\theta^{}_{ij}$ (for $ij = 12, 13, 23$) are the neutrino mixing angles in vacuum, while $\Delta^{}_{\rm c} \equiv \Delta^{}_{31}\cos^2\theta^{}_{12} + \Delta^{}_{32} \sin^2 \theta^{}_{12}$ and the matter parameter $a \equiv 2\sqrt{2}G^{}_{\rm F} N^{}_e E$ are defined. This result has been explicitly derived by improving the previous analytical solutions to the renormalization-group equations of effective neutrino masses and mixing parameters in matter. Furthermore, as a practical application, such a simple analytical formula has been implemented to understand the existence and location of the extrema of $\widetilde{\cal J}$. Comment: 15 pages, 3 figures
Advanced topics in information theory From CYPHYNETS Revision as of 12:09, 30 July 2009 Reading Group: Advanced Topics in Information Theory Calendar: Summer 2009 Venue: LUMS School of Science & Engineering Organizer: Abubakr Muhammad This group meets every week at LUMS to discuss some advanced topics in information theory. This is a continuation of our formal course at LUMS, CS-683: Information Theory (offered most recently in Spring 2008). We hope to cover some advanced topics in information theory as well as its connections to other fundamental disciplines such as statistics, mathematics, physics and technology. Participants Mubasher Beg Shahida Jabeem Qasim Maqbool Muhammad Bilal Muzammad Baig Hassan Mohy-ud-Din Zartash Uzmi Shahab Baqai Abubakr Muhammad Topics Include the following, but not limited to: Rate distortion theory Network information theory Kolmogorov complexity Quantum information theory Sessions July 7: Organization. Recap of CS-683 Basic organization, presentation assignments. Review of information theory ideas: entropy, AEP, compression and capacity. The entropy of a random variable is given by <math>H(X) = -\sum_x p(x) \log p(x)</math> and the capacity of a channel is defined by <math>C = \max_{p(x)} I(X;Y)</math>. Compression and capacity determine the two fundamental information theoretic limits of data transmission. A review of Gaussian channels and their capacities. Let us take this analysis one step further. How much do you lose when you cross these barriers? We saw one situation when you try to transmit over the capacity (by Fano's inequality). Rate distortion: a theory for lossy data compression.
References/Literature Elements of Information Theory by Cover and Thomas. July 14: Rate distortion theory - I Rate–distortion theory provides the theoretical foundations for lossy data compression. We try to find the answer to the following question: Given an acceptable level of distortion, what is the minimal information that should be sent over a channel, so that the source can be reconstructed (up to that level of distortion) at the receiver? Quantization for a single random variable. Given a distribution for a random variable, what are the optimal choices for quantization? The answer is Lloyd's algorithm (closely related to k-means clustering). What about multiple random variables, treated at the same time? Even if the RVs are IID, quantizing them in sequences can result in better performance (stated without proof). Define the distortion D. When is a rate-distortion pair (R,D) achievable? The rate distortion function R(D) is the minimum rate R such that (R,D) is achievable for a given D. There is also an information theoretic definition, <math>R^I(D) = \min_{p(\hat{x}|x) :\, \sum_{x,\hat{x}} p(x) p(\hat{x}|x) d(x,\hat{x}) \leq D} I(X; \hat{X})</math>, and we can show that both are equivalent. The proof follows closely the treatment of channel capacity.
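A minimal sketch of Lloyd's algorithm as mentioned above, written as plain 1-D k-means; the bimodal test source and all names are illustrative choices, not from the reading-group notes:

```python
# Minimal sketch of Lloyd's algorithm for scalar quantization (essentially
# 1-D k-means): alternate a nearest-neighbor assignment with a centroid
# update until the quantization levels settle.
import random

def lloyd(samples, k, iters=50):
    srt = sorted(samples)
    # spread the initial levels over the sample quantiles
    levels = [srt[(2 * i + 1) * len(srt) // (2 * k)] for i in range(k)]
    for _ in range(iters):
        # nearest-neighbor condition: assign each sample to its closest level
        cells = [[] for _ in range(k)]
        for s in samples:
            i = min(range(k), key=lambda j: abs(s - levels[j]))
            cells[i].append(s)
        # centroid condition: move each level to the mean of its cell
        levels = [sum(c) / len(c) if c else levels[i]
                  for i, c in enumerate(cells)]
    return sorted(levels)

random.seed(1)
# a bimodal source: a good 2-level quantizer should track the two modes
data = [random.gauss(-3, 0.5) for _ in range(500)] + \
       [random.gauss(+3, 0.5) for _ in range(500)]
print(lloyd(data, 2))  # two levels, one near -3 and one near +3
```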
It is not necessary that an electron be described by an eigenfunction of the Hamiltonian operator. Many problems encountered by quantum chemists and computational chemists lead to wavefunctions that are not eigenfunctions of the Hamiltonian operator. Science is like that; interesting problems are not simple to solve. They require adaptation of current techniques, creative energy, and a good set of skills developed by studying solutions to previously solved interesting problems. Consider a free electron in one dimension that is described by the wavefunction \[ \Psi (x) = C_1\psi _1 (x) + C_2 \psi _2 (x) \label {5-21}\] with \[ \begin{align} \psi _1(x) &= \left ( \dfrac {1}{2L} \right )^{1/2} e^{ik_1x} \label {5-22} \\[4pt] \psi _2(x) &= \left ( \dfrac {1}{2L} \right )^{1/2} e^{ik_2x} \label {5-23} \end{align} \] where \(k_1\) and \(k_2\) have different magnitudes. Although such a function is not an eigenfunction of the momentum operator or the Hamiltonian operator, we can calculate the average momentum and average energy of an electron in this state from the expectation value integral. (Note: "in-this-state" means "described-by-this-wavefunction".) Exercise \(\PageIndex{1}\) Show that the function \(Ψ(x)\) defined by Equation \(\ref{5-21}\) is not an eigenfunction of the momentum operator or the Hamiltonian operator for a free electron in one dimension. The function shown in Equation \(\ref{5-21}\) belongs to a class of functions known as superposition functions, which are linear combinations of eigenfunctions. A linear combination of functions is a sum of functions, each multiplied by a weighting coefficient, which is a constant. The adjective linear is used because the coefficients are constants. The constants, e.g. \(C_1\) and \(C_2\) in Equation \(\ref{5-21}\), give the weight of each component (\(\psi_1\) and \(\psi_2\)) in the total wavefunction. 
Notice from the discussion previously that each component in Equation \(\ref{5-21}\) is an eigenfunction of the momentum operator and the Hamiltonian operator although the linear combination function (i.e., \(\Psi(x)\)) is not. The expectation value, i.e. average value, of the momentum operator is found as follows. First, write the integral for the expectation value and then substitute into this integral the superposition function and its complex conjugate as shown below. Since we are considering a free particle in one dimension, the limits on the integration are \(–L\) and \(+L\) with \(L\) going to infinity. \[ \begin{align} \left \langle p \right \rangle &= \int \Psi ^* (x) \left ( -i\hbar \dfrac {d}{dx} \right ) \Psi (x) dx \\[4pt]&= \dfrac {-i\hbar}{2L} \int \limits _{-L}^{+L} \left ( C_1^* e^{-ik_1x} + C_2^* e^{-ik_2x} \right ) \dfrac {d}{dx} \left ( C_1 e^{ik_1x} + C_2 e^{ik_2x} \right ) dx \\[4pt] &= \dfrac {-i\hbar}{2L} \int \limits _{-L}^{+L} \left ( C_1^* e^{-ik_1x} + C_2^* e^{-ik_2x} \right ) \left ( (ik_1)C_1 e^{ik_1x} + (ik_2)C_2 e^{ik_2x} \right ) dx \label {5-24} \end{align} \] Cross-multiplying the two factors in parentheses yields four terms. \[\left \langle p \right \rangle = I_1 + I_2 + I_3 + I_4 \nonumber \] with \[ \begin{align} I_1 &= \dfrac {\hbar k_1}{2L} C^*_1 C_1 \int \limits ^{+L} _{-L} dx = C^*_1 C_1 \hbar k_1 \\[4pt] I_2 &= \dfrac {\hbar k_2}{2L} C^*_2 C_2 \int \limits ^{+L} _{-L} dx = C^*_2 C_2 \hbar k_2 \\[4pt] I_3 &= \dfrac {\hbar k_2}{2L} C^*_1 C_2 \int \limits ^{+L} _{-L} e^{i(k_2 - k_1)x} dx \\[4pt] I_4 &= \dfrac {\hbar k_1}{2L} C^*_2 C_1 \int \limits ^{+L} _{-L} e^{i(k_1 - k_2)x} dx \label {5-25} \end{align} \] An integral of two different functions, e.g. \(\int \psi _1^* \psi _2 dx\), is called an overlap integral or orthogonality integral. When such an integral equals zero, the functions are said to be orthogonal. The integrals in \(I_3\) and \(I_4\) are zero because the functions \(\psi_1\) and \(\psi_2\) are orthogonal.
We know \(\psi_1\) and \(\psi_2\) are orthogonal because of the Orthogonality Theorem, described previously, that states that eigenfunctions of any Hermitian operator, such as the momentum operator or the Hamiltonian operator, with different eigenvalues, which is the case here, are orthogonal. Also, by using Euler's formula and following Example \(\PageIndex{1}\) below, you can see why these integrals are zero. Example \(\PageIndex{1}\) For the integral part of \(I_3\) obtain \[ \int \cos [(k_2 - k_1 ) x ] dx + i \int \sin [(k_2 - k_1)x ] dx \nonumber\] from Euler’s formula. SOLUTION Here we have the integrals of a cosine and a sine function along the x-axis from minus infinity to plus infinity. Since these integrals are the area under the cosine and sine curves, they must be zero because the positive lobes are canceled by the negative lobes when the integration is carried out from \(–∞\) to \(+∞\). As a result of this orthogonality, \(\left \langle p \right \rangle\) is just \(I_1 + I_2\), which is \[ \begin{align} \left \langle p \right \rangle &= C_1^* C_1 \hbar k_1 + C^*_2 C_2 \hbar k_2 \\[4pt] &= C_1^* C_1p_1 + C^*_2 C_2 p_2 \label {5-26} \end{align} \] where \(\hbar k_1\) is the momentum \(p_1\) of state \(\psi_1\), and \(\hbar k_2\) is the momentum \(p_2\) of state \(\psi_2\). As explained in Chapter 3, an average value can be calculated by summing, over all possibilities, the possible values times the probability of each value. Equation \(\ref{5-26}\) has this form if we interpret \(C_1^*C_1\) and \(C_2^*C_2\) as the probability that the electron has momentum \(p_1\) and \(p_2\), respectively. These coefficients therefore are called probability amplitude coefficients, and their absolute value squared gives the probability that the electron is described by \(\psi_1\) and \(\psi_2\), respectively. This interpretation of these coefficients as probability amplitudes is very important.
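The cancellation of the overlap integrals can be checked numerically. The sketch below (units \(\hbar = 1\)) works on a finite box \([-L, L]\) and picks \(k_2 - k_1\) to be a multiple of \(\pi/L\) so the cross terms vanish exactly; the specific values of \(L\), \(k_1\), \(k_2\), \(C_1\), \(C_2\) are arbitrary illustrative choices:

```python
# Numerical check (hedged sketch, hbar = 1) that for the superposition
# psi = (1/sqrt(2L)) (C1 e^{i k1 x} + C2 e^{i k2 x}) on [-L, L], with
# (k2 - k1) L a multiple of pi (exact orthogonality of psi1 and psi2),
# the expectation value is <p> = |C1|^2 k1 + |C2|^2 k2.
import cmath
import math

L = 50.0
k1 = 1.0
k2 = k1 + 10 * math.pi / L        # cross terms integrate to zero exactly
C1, C2 = math.sqrt(0.3), math.sqrt(0.7)
norm = 1.0 / math.sqrt(2 * L)

def psi(x):
    return norm * (C1 * cmath.exp(1j * k1 * x) + C2 * cmath.exp(1j * k2 * x))

def dpsi(x):  # analytic derivative (avoids finite-difference error)
    return norm * (1j * k1 * C1 * cmath.exp(1j * k1 * x)
                   + 1j * k2 * C2 * cmath.exp(1j * k2 * x))

# trapezoidal rule for <p> = integral of psi* (-i d/dx) psi
n = 50_000
h = 2 * L / n
vals = [(psi(-L + i * h).conjugate() * (-1j) * dpsi(-L + i * h)).real
        for i in range(n + 1)]
p_avg = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

print(p_avg, 0.3 * k1 + 0.7 * k2)  # the two agree
```

The printed expectation value matches \(C_1^*C_1 \hbar k_1 + C_2^*C_2 \hbar k_2\), the weighted average of the two component momenta.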
Exercise \(\PageIndex{2}\) Find the expectation value for the energy \(\left \langle E \right \rangle\) for the superposition wavefunction given by Equation \(\ref{5-23}\). Explain why \(C_1^*C_1\) is the probability that the electron has energy \(\dfrac {\hbar ^2 k^2_1}{2m}\) and \(C_2^*C_2\) is the probability that the electron has energy \(\dfrac {\hbar ^2 k^2_2}{2m}\). What is the expectation value for the energy when both components have equal weights in the superposition function, i.e. when \(C_1 = C_2 = 2^{-1/2}\)? Contributors Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski
In quantum physics, a quantum vacuum fluctuation (or quantum fluctuation or vacuum fluctuation) is the temporary change in the amount of energy in a point in space, [1] arising from Werner Heisenberg's uncertainty principle. According to one formulation of the principle, energy and time can be related by the relation [2] \[ \Delta E \, \Delta t \gtrsim \frac{\hbar}{2} \] That means that conservation of energy can appear to be violated, but only for small times. This allows the creation of particle-antiparticle pairs of virtual particles. The effects of these particles are measurable, for example, in the effective charge of the electron, which differs from its "naked" charge. In the modern view, energy is always conserved, but the eigenstates of the Hamiltonian (energy observable) are not the same as (i.e., the Hamiltonian doesn't commute with) the particle number operators. Quantum fluctuations may have been very important in the origin of the structure of the universe: according to the model of inflation, the fluctuations that existed when inflation began were amplified and formed the seed of all currently observed structure. Quantum fluctuations of a field A quantum fluctuation is the temporary appearance of energetic particles out of empty space, as allowed by the uncertainty principle. The uncertainty principle states that for a pair of conjugate variables such as position/momentum or energy/time, it is impossible to have a precisely determined value of each member of the pair at the same time. For example, a particle pair can pop out of the vacuum during a very short time interval. An extension of the principle applies to the "uncertainty in time" and "uncertainty in energy" (including the rest mass energy \(mc^2\)). When the mass is very large, as for a macroscopic object, the uncertainties and thus the quantum effects become very small, and classical physics is applicable.
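A rough order-of-magnitude sketch of this idea: using the energy-time relation \(\Delta E \, \Delta t \gtrsim \hbar/2\) and standard physical constants, one can estimate how long a virtual electron-positron pair (the usual textbook illustration, assumed here rather than taken from this article) can exist.

```python
# Order-of-magnitude estimate (SI units, standard constant values assumed):
# a virtual electron-positron pair "borrows" Delta_E ~ 2 m_e c^2, so the
# energy-time uncertainty relation limits its lifetime to Delta_t ~ hbar/(2 Delta_E).
hbar = 1.0545718e-34        # J*s
m_e = 9.1093837e-31         # kg
c = 2.99792458e8            # m/s

delta_E = 2.0 * m_e * c ** 2        # energy cost of the pair, ~1.6e-13 J
delta_t = hbar / (2.0 * delta_E)    # longest allowed lifetime, ~3e-22 s
print(delta_E, delta_t)
```

The resulting lifetime, a few times \(10^{-22}\) s, shows why such fluctuations are invisible on macroscopic timescales even though their indirect effects (e.g., on the electron's effective charge) are measurable.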
In quantum field theory, fields undergo quantum fluctuations. A reasonably clear distinction can be made between quantum fluctuations and thermal fluctuations of a quantum field (at least for a free field; for interacting fields, renormalization substantially complicates matters). For the quantized Klein–Gordon field in the vacuum state, we can calculate the probability density that we would observe a configuration \(\varphi_t\) at a time \(t\) in terms of its Fourier transform \(\tilde\varphi_t(k)\) to be \[ \rho_0[\varphi_t] \propto \exp\left[-\frac{1}{\hbar}\int\frac{d^3k}{(2\pi)^3}\, \tilde\varphi_t^*(k)\sqrt{|k|^2+m^2}\;\tilde\varphi_t(k)\right]. \] In contrast, for the classical Klein–Gordon field at non-zero temperature, the Gibbs probability density that we would observe a configuration at a time is \[ \rho_E[\varphi_t] \propto \exp\left[-\frac{1}{k_B T}\int\frac{d^3k}{(2\pi)^3}\, \tilde\varphi_t^*(k)\tfrac{1}{2}(|k|^2+m^2)\;\tilde\varphi_t(k)\right]. \] The amplitude of quantum fluctuations is controlled by the amplitude of Planck's constant \(\hbar\), just as the amplitude of thermal fluctuations is controlled by \(k_B T\). Note that the following three points are closely related: Planck's constant has units of action instead of units of energy; the quantum kernel is \(\sqrt{|k|^2+m^2}\) instead of \(\tfrac{1}{2}(|k|^2+m^2)\) (the quantum kernel is nonlocal from a classical heat kernel viewpoint, but it is local in the sense that it does not allow signals to be transmitted); the quantum vacuum state is Lorentz invariant (although not manifestly in the above), whereas the classical thermal state is not (the classical dynamics is Lorentz invariant, but the Gibbs probability density is not a Lorentz invariant initial condition).
We can construct a classical continuous random field that has the same probability density as the quantum vacuum state, so that the principal difference from quantum field theory is the measurement theory (measurement in quantum theory is different from measurement for a classical continuous random field, in that classical measurements are always mutually compatible — in quantum mechanical terms, they always commute). Quantum effects that are consequences only of quantum fluctuations, not of subtleties of measurement incompatibility, can alternatively be modeled by classical continuous random fields.
Finding the most useful single-electron wavefunctions to serve as building blocks for a multi-electron wavefunction is one of the main challenges in finding approximate solutions to the multi-electron Schrödinger equation. The functions must be different for different atoms because the nuclear charge and number of electrons are different. The attraction of an electron for the nucleus depends on the nuclear charge, and the electron-electron interaction depends upon the number of electrons. As we saw in our initial approximation methods, the most straightforward place to start in finding reasonable single-electron wavefunctions for multi-electron atoms is with the atomic orbitals produced in the quantum treatment of hydrogen, the so-called “hydrogenic” spin-orbitals. These traditional atomic orbitals, with a few modifications, give quite reasonable calculated results and are still in wide use for conceptually understanding multi-electron atoms. In this section and in Chapter 10 we will explore some of the many other single-electron functions that also can be used as atomic orbitals. Hydrogenic spin-orbitals used as components of multi-electron systems are identified in the same way as they are for the hydrogen atom. Each spin-orbital consists of a spatial wavefunction, specified by the quantum numbers (n, \(l , m_l\)) and denoted 1s, 2s, 2p, 3s, 3p, 3d, etc., multiplied by a spin function, specified by the quantum number \(m_s\) and denoted \(\alpha\) or \(\beta\). In our initial approximation methods, we ignored the spin components of the hydrogenic orbitals, but they must be considered in order to develop a complete description of multi-electron systems. The subscript on the argument of the spatial function reveals which electron is being described (\(r_1\) is a vector that refers to the coordinates of electron 1, for example). No argument is given for the spin function.
An example of a spin-orbital for electron 2 in a \(3p_z\) orbital: \[ | \varphi _{3p_z} \alpha (r_2) \rangle = \varphi _{3,1,0}(r_2) \alpha \label {9.5.1}\] In the alternative shorthand notation for this spin-orbital shown below, the coordinates for electron 2 in the spatial function are abbreviated simply by the number “2,” and the spatial function is represented by “\(3p_z\)” rather than "\(\varphi _{3,1,0}\)". The argument “2” given for the spin function refers to the unknown spin variable for electron 2. Many slight variations on these shorthand forms are in use in this and other texts, so flexibility and careful reading are important. \[ | \varphi _{3p_z}\alpha (2) \rangle = 3p_z (2) \alpha (2) \label {9.5.2}\] In this chapter we will continue the trend of moving away from writing specific mathematical functions and toward a more symbolic, condensed representation. Your understanding of the material in this and future chapters requires that you keep in mind the form and properties of the specific functions denoted by the symbols used in each equation. Exercise \(\PageIndex{1}\) Write the full mathematical form of \(\varphi _{3pz\alpha}\) using as much explicit functional detail as possible. The basic mathematical functions and thus the general shapes and angular momenta for hydrogenic orbitals are the same as those for hydrogen orbitals. The differences between atomic orbitals for the hydrogen atom and those used as components in the wavefunctions for multi-electron systems lie in the radial parts of the wavefunctions and in the energies. Specifically, the differences arise from the replacement of the nuclear charge Z in the radial parts of the wavefunctions by an adjustable parameter \(\zeta\) that is allowed to vary in approximation calculations in order to model the interactions between the electrons. We discussed such a procedure for helium The Variational Method previously. 
The result is that electrons in orbitals with different values of the angular momentum quantum number, \(l\), have different energies. Figure \(\PageIndex{1}\) shows the results of a quantum mechanical calculation on argon in which the degeneracy of the 2s and 2p orbitals is found to be removed, as is the degeneracy of the 3s, 3p, and 3d orbitals. Figure \(\PageIndex{1}\): Ordering of energy levels for Ar. Energy level differences are not to scale. The energy of each electron now depends not only on its principal quantum number, \(n\), but also on its angular momentum quantum number, \(l\). The presence of \(\zeta\) in the radial portions of the wavefunctions also means that the electron probability distributions associated with hydrogenic atomic orbitals in multi-electron systems are different from the exact atomic orbitals for hydrogen. Figure \(\PageIndex{2}\) compares the radial distribution functions for an electron in a 1s orbital of hydrogen (the ground state), a 2s orbital in hydrogen (an excited configuration of hydrogen) and a 1s orbital in helium that is described by the best variational value of \(\zeta\). Our use of hydrogen-like orbitals in quantum mechanical calculations for multi-electron atoms helps us to interpret our results for multi-electron atoms in terms of the properties of a system we can solve exactly. Figure \(\PageIndex{2}\): Radial distribution functions for 1s of hydrogen (red, \(\zeta\) = 1), 2s of hydrogen (blue, \(\zeta\) = 1) and 1s of helium (black, \(\zeta\) = 1.6875). Exercise \(\PageIndex{2}\) Analyze Figure \(\PageIndex{2}\) and write a paragraph about what you can discern about the relative sizes of ground state hydrogen, excited state hydrogen and ground state helium atoms. While they provide useful starting points for understanding computational results, nothing requires us to use the hydrogenic functions as the building blocks for multi-electron wavefunctions.
In practice, the radial part of the hydrogenic atomic orbital presents a computational difficulty because the radial function has nodes, positive and negative lobes, and steep variations that make accurate evaluation of integrals by a computer slow. Consequently other types of functions are generally used in building multi-electron functions. These usually are related to the hydrogenic orbitals to aid in the analysis of molecular electronic structure. For example, Slater-type atomic orbitals (STO’s), designated below as \(S_{nlm} (r, \theta , \varphi )\), avoid the difficulties imposed by the hydrogenic functions. The STO’s, named after their creator, John Slater, were the first alternative functions that were used extensively in computations. STO’s do not have any radial nodes, but still contain a variational parameter \(\zeta\) (zeta), that corresponds to the effective nuclear charge in the hydrogenic orbitals. In Equation \(\ref{9-36}\) and elsewhere in this chapter, the distance, \(r\), is measured in units of the Bohr radius, \(a_0\). \[ S_{nlm} (r, \theta , \varphi ) = \dfrac {(2 \zeta )^{n+1/2}}{[(2n)!]^{1/2}} r^{n-1} e^{-\zeta r } Y^m_l (\theta , \varphi ) \label {9-36}\] Exercise \(\PageIndex{3}\) Write the radial parts of the 1s, 2s, and 2p atomic orbitals for hydrogen. Write the radial parts of the n = 1 and n = 2 Slater–type orbitals (STO). Check that the above five functions are normalized. Graph these five functions, measuring r in units of the Bohr radius. Graph the radial probability densities for these orbitals. Put the hydrogen orbital and the corresponding STO on the same graph so they can be compared easily. Adjust the zeta parameter \(\zeta\) in each case to give the best match of the radial probability density for the STO with that of the corresponding hydrogen orbital. Comment on the similarities and differences between the hydrogen orbitals and the STOs and the corresponding radial probability densities. 
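A small numerical sketch for part of Exercise \(\PageIndex{3}\): the normalization of the STO radial part in Equation \(\ref{9-36}\) can be checked by quadrature. The specific \(\zeta\) values below are illustrative assumptions (1.6875 echoes the helium value quoted above), \(r\) is measured in Bohr radii (\(a_0 = 1\)), and the spherical harmonic factor is assumed already normalized so only the radial integral is tested.

```python
import math

# Check numerically that the radial part of a Slater-type orbital,
#   R_n(r) = (2 zeta)^(n+1/2) / sqrt((2n)!) * r^(n-1) * exp(-zeta r),
# satisfies the normalization  Int_0^inf R_n(r)^2 r^2 dr = 1,
# and compare with the hydrogen 1s radial function.
def sto_radial(n, zeta, r):
    norm = (2.0 * zeta) ** (n + 0.5) / math.sqrt(math.factorial(2 * n))
    return norm * r ** (n - 1) * math.exp(-zeta * r)

def hydrogen_1s(r):
    return 2.0 * math.exp(-r)    # R_1s for hydrogen with Z = 1, a0 = 1

def radial_norm(R, rmax=60.0, steps=60_000):
    # midpoint-rule approximation of Int_0^rmax R(r)^2 r^2 dr
    dr = rmax / steps
    total = 0.0
    for i in range(steps):
        r = (i + 0.5) * dr
        total += R(r) ** 2 * r ** 2 * dr
    return total

n_h = radial_norm(hydrogen_1s)                              # ~1
n_sto1 = radial_norm(lambda r: sto_radial(1, 1.6875, r))    # ~1 (He-like 1s STO)
n_sto2 = radial_norm(lambda r: sto_radial(2, 0.75, r))      # ~1 (an n = 2 STO)
print(n_h, n_sto1, n_sto2)
```

Plotting `sto_radial` against the corresponding hydrogen radial functions, as the exercise asks, makes the key structural difference visible: the STO has no radial nodes.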
Linear Variational Method An alternative approach to the general problem of introducing variational parameters into wavefunctions is the construction of a single-electron wavefunction as a linear combination of other functions. For hydrogen, the radial function decays, or decreases in amplitude, exponentially as the distance from the nucleus increases. For helium and other multi-electron atoms, the radial dependence of the total probability density does not fall off as a simple exponential with increasing distance from the nucleus as it does for hydrogen. More complex single-electron functions therefore are needed in order to model the effects of electron-electron interactions on the total radial distribution function. One way to obtain more appropriate single-electron functions is to use a sum of exponential functions in place of the hydrogenic spin-orbitals. An example of such a wavefunction created from a sum or linear combination of exponential functions is written as \[ \varphi _{1s} (r_1) = \sum _j c_j e^{-\zeta _j r_1 /a_o} \label{9-37}\] The linear combination permits weighting of the different exponentials through the adjustable coefficients \(c_j\) for each term in the sum. Each exponential term has a different rate of decay through the zeta-parameter \(\zeta _j\). The exponential functions in Equation \(\ref{9-37}\) are called basis functions. Basis functions are the functions used in linear combinations to produce the single-electron orbitals that in turn combine to create the product multi-electron wavefunctions. Originally the most popular basis functions used were the STO’s, but today STO’s are not used in most quantum chemistry calculations. However, they are often the functions to which more computationally efficient basis functions are fitted. Physically, the \(\zeta _j\) parameters account for the effective nuclear charge (often denoted \(Z_{eff}\)).
The use of several zeta values in the linear combination essentially allows the effective nuclear charge to vary with the distance of an electron from the nucleus. This variation makes sense physically. When an electron is close to the nucleus, the effective nuclear charge should be close to the actual nuclear charge. When the electron is far from the nucleus, the effective nuclear charge should be much smaller. See Slater's rules for a rule-of-thumb approach to evaluate \(Z_{eff}\) values. A term in Equation \(\ref{9-37}\) with a small \(\zeta\) will decay slowly with distance from the nucleus. A term with a large \(\zeta\) will decay rapidly with distance and not contribute at large distances. The need for such a linear combination of exponentials is a consequence of the electron-electron repulsion and its effect of screening the nucleus for each electron due to the presence of the other electrons. Exercise \(\PageIndex{4}\) Make plots of \(\varphi\) in Equation \(\ref{9-37}\) using three equally weighted terms with \(\zeta\) = 1.0, 2.0, and 5.0. Also plot each term separately. Computational procedures in which an exponential parameter like \(\zeta\) is varied are more precisely called the Nonlinear Variational Method because the variational parameter is part of the wavefunction and the change in the function and energy caused by a change in the parameter is not linear. The optimum values for the zeta parameters in any particular calculation are determined by doing a variational calculation for each orbital to minimize the ground-state energy. When this calculation involves a nonlinear variational calculation for the zetas, it requires a large amount of computer time. 
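A small numerical sketch related to Exercise \(\PageIndex{4}\): with three equally weighted terms (the coefficients \(c_j = 1/3\) and the values \(\zeta_j \in \{1.0, 2.0, 5.0\}\) are illustrative assumptions, with \(r\) in Bohr radii), one can quantify the point just made, namely that the large-\(\zeta\) term matters only near the nucleus while the small-\(\zeta\) term dominates far away.

```python
import math

# phi(r) = sum_j c_j * exp(-zeta_j * r), Eq. (9-37) with a0 = 1.
coeffs = [1.0 / 3.0] * 3
zetas = [1.0, 2.0, 5.0]

def phi(r):
    return sum(c * math.exp(-z * r) for c, z in zip(coeffs, zetas))

def fraction(j, r):
    # fraction of phi(r) contributed by term j
    return coeffs[j] * math.exp(-zetas[j] * r) / phi(r)

near = fraction(2, 0.0)   # zeta = 5 term at the nucleus: exactly 1/3 of phi
far = fraction(2, 3.0)    # zeta = 5 term at r = 3: essentially nothing
tail = fraction(0, 3.0)   # zeta = 1 term dominates the tail
print(near, far, tail)
```

The steep \(\zeta = 5\) exponential contributes a full third of \(\varphi\) at \(r = 0\) but a vanishing fraction a few Bohr radii out, which is exactly the distance-dependent screening behavior described above.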
The use of the variational method to find values for the coefficients, \(\{c_j\}\), in the linear combination given by Equation \(\ref{9-37}\) above is called the Linear Variational Method because the single-electron function whose energy is to be minimized (in this case \(\varphi _{1s}\)) depends linearly on the coefficients. Although the idea is the same, it usually is much easier to implement the linear variational method in practice. Nonlinear variational calculations are extremely costly in terms of computer time because each time a zeta parameter is changed, all of the integrals need to be recalculated. In the linear variation, where only the coefficients in a linear combination are varied, the basis functions and the integrals do not change. Consequently, an optimum set of zeta parameters was chosen from variational calculations on many small multi-electron systems, and these values, which are given in Table \(\PageIndex{1}\), generally can be used in the STOs for other and larger systems. Atom \(\zeta _{1s}\) \(\zeta _{2s,2p}\) Exercise \(\PageIndex{5}\) Compare the value \(\zeta _{1s}\) = 1.24 in Table \(\PageIndex{1}\) for hydrogen with the value you obtained in Exercise \(\PageIndex{3}\), and comment on possible reasons for any difference. Why are the zeta values larger for 1s than for 2s and 2p orbitals? Why do the \(\zeta _{1s}\) values increase by essentially one unit for each element from He to Ne while the increase for the \(\zeta _{2s, 2p}\) values is much smaller? The discussion above gives us some new ideas about how to write flexible, useful single-electron wavefunctions that can be used to construct multi-electron wavefunctions for variational calculations. Single-electron functions built from the basis function approach are flexible because they have several adjustable parameters, and useful because the adjustable parameters still have clear physical interpretations.
Such functions will be needed in the Hartree-Fock method discussed elsewhere. Contributors Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski
This set of Network Theory Question Paper focuses on “Advanced Problems on Two Port Network – 2”. 1. A network contains linear resistors and an ideal voltage source. If all the resistors are made twice their initial value, then the voltage across each resistor is __________ a) Halved b) Doubled c) Increased by 2 times d) Remains the same View Answer Explanation: With an ideal voltage source, doubling every resistance halves every branch current, but the product of current and resistance is unchanged, so the voltage across each resistor remains the same as it was initially. 2. A voltage waveform V(t) = \(12t^2\) is applied across a 1 H inductor for t ≥ 0, with the initial current through it being zero. The current through the inductor for t ≥ 0 is given by __________ a) \(12t\) b) \(24t\) c) \(12t^3\) d) \(4t^3\) View Answer Explanation: We know that I = \( \frac{1}{L} \int_0^t V \,dt = \int_0^t 12 t^2 \,dt = 4t^3\). 3. The linear circuit element among the following is ___________ a) Capacitor b) Inductor c) Resistor d) Inductor & Capacitor View Answer Explanation: A linear circuit element does not change its value with voltage or current. Among the options given, only the resistor does not change its value with voltage or current. 4. Consider a circuit having resistance 10 kΩ, excited by a 5 V source and an ideal switch S. If the switch is repeatedly closed for 2 ms and opened for 2 ms, the average value of i(t) is ____________ a) 0.25 mA b) 0.35 mA c) 0.125 mA d) 1 mA View Answer Explanation: While the switch is closed, i = 0.25 mA. As the switch is repeatedly closed and opened for equal intervals, i(t) is a square wave, so the average value of the current is \(\frac{0.25}{2}\) = 0.125 mA. Explanation: The circuit is as shown in the figure below. \(R_{eq} = 5 + \frac{10(R_{eq}+5)}{10 + 5 + R_{eq}}\) Or, \(R_{eq}^2 + 15R_{eq} = 5R_{eq} + 75 + 10R_{eq} + 50\) Or, \(R_{eq} = \sqrt{125}\) = 11.18 Ω. 6. A particular electric current is made up of two components: a 10 A dc component and a sine wave of peak value 14.14 A.
The average value of the electric current is __________ a) 0 b) 24.14 A c) 10 A d) 14.14 A View Answer Explanation: The average dc current = 10 A. The average ac current = 0 A since it is alternating in nature. Average current = 10 + 0 = 10 A. 7. Given that R\(_1\) = 36 Ω and R\(_2\) = 75 Ω, each having a tolerance of ±5%, are connected in series. The value of the resultant resistance is ___________ a) 111 ± 0 Ω b) 111 ± 2.77 Ω c) 111 ± 5.55 Ω d) 111 ± 7.23 Ω View Answer Explanation: R\(_1\) = 36 ± 5% = 36 ± 1.8 Ω; R\(_2\) = 75 ± 5% = 75 ± 3.75 Ω. ∴ R\(_1\) + R\(_2\) = 111 ± 5.55 Ω. 8. Consider a circuit in which a charge of 600 C is delivered to a 100 V source in 1 minute. The value of the voltage source V\(_1\) is ___________ a) 30 V b) 60 V c) 120 V d) 240 V View Answer Explanation: In order for 600 C of charge to be delivered to the 100 V source, the current must flow in the counterclockwise direction. Now, I = \(\frac{dQ}{dt}\) = \(\frac{600}{60}\) = 10 A. Applying KVL we get V\(_1\) + 60 – 100 = 10 × 20 ⇒ V\(_1\) = 240 V. 9. The energy required to charge a 10 μF capacitor to 100 V is ____________ a) 0.01 J b) 0.05 J c) 5 × 10\(^{-9}\) J d) 10 × 10\(^{-9}\) J View Answer Explanation: E = \(\frac{1}{2}\)CV\(^2\) = 5 × 10\(^{-6}\) × 100\(^2\) = 0.05 J. Explanation: Hybrid parameter h\(_{11}\) is given by h\(_{11}\) = \(\frac{V_1}{I_1}\) when V\(_2\) = 0. Therefore, short-circuiting the terminal Y-Y’, we get V\(_1\) = I\(_1\)((50||50) + 50) = I\(_1\)\(\left(\left(\frac{50×50}{50+50}\right) + 50\right)\) = 75I\(_1\). ∴ \(\frac{V_1}{I_1}\) = 75. Hence h\(_{11}\) = 75 Ω. Explanation: Hybrid parameter h\(_{21}\) is given by h\(_{21}\) = \(\frac{I_2}{I_1}\) when V\(_2\) = 0. Therefore, short-circuiting the terminal Y-Y’ and applying Kirchhoff’s law, we get -50I\(_2\) – (I\(_2\) – I\(_1\))50 = 0. Or, -I\(_2\) = I\(_2\) – I\(_1\). Or, -2I\(_2\) = -I\(_1\). ∴ \(\frac{I_2}{I_1} = \frac{1}{2}\). Hence h\(_{21}\) = 0.5 (dimensionless). Explanation: Inverse hybrid parameter g\(_{11}\) is given by g\(_{11}\) = \(\frac{I_1}{V_1}\) when I\(_2\) = 0.
Therefore, with the terminal Y-Y’ left open so that I\(_2\) = 0, we get V\(_1\) = I\(_1\)((5||5) + 5) = I\(_1\)\(\left(\left(\frac{5×5}{5+5}\right) + 5\right)\) = 7.5I\(_1\). ∴ \(\frac{I_1}{V_1} = \frac{1}{7.5}\) = 0.133 S. Hence g\(_{11}\) = 0.133 S. 13. A resistor of 10 kΩ with a tolerance of 5% is connected in series with a 5 kΩ resistor of 10% tolerance. What is the tolerance limit for the series network? a) 9% b) 12.04% c) 8.67% d) 6.67% View Answer Explanation: Error in the 10 kΩ resistance = 10 × \(\frac{5}{100}\) = 0.5 kΩ. Error in the 5 kΩ resistance = 5 × \(\frac{10}{100}\) = 0.5 kΩ. Maximum total resistance = 10 + 0.5 + 5 + 0.5 = 16 kΩ. Nominal resistance = 10 + 5 = 15 kΩ. Error = \(\frac{16-15}{15}\) × 100 = \(\frac{1}{15}\) × 100 = 6.67%. 14. A 200 μA ammeter has an internal resistance of 200 Ω. The range is to be extended to 500 μA. The shunt required is of resistance __________ a) 20.0 Ω b) 22.22 Ω c) 25.0 Ω d) 50.0 Ω View Answer Explanation: I\(_{sh}\)R\(_{sh}\) = I\(_m\)R\(_m\) and I\(_{sh}\) = I – I\(_m\), so \(\frac{I}{I_m} – 1 = \frac{R_m}{R_{sh}}\). Now, with m = \(\frac{I}{I_m}\), m – 1 = \(\frac{R_m}{R_{sh}}\). ∴ R\(_{sh}\) = 25 Ω. 15. A voltmeter with a sensitivity of 1000 Ω/V reads 200 V on its 300 V scale. When connected across an unknown resistor in series with a milliammeter, the milliammeter reads 10 mA. The error due to the loading effect of the voltmeter is a) 3.33% b) 6.67% c) 13.34% d) 13.67% View Answer Explanation: R\(_T\) = \(\frac{V_T}{I_T}\). V\(_T\) = 200 V, I\(_T\) = 10 mA, so R\(_T\) = 20 kΩ. Resistance of the voltmeter, R\(_V\) = 1000 × 300 = 300 kΩ. The voltmeter is in parallel with the unknown resistor, so R\(_X\) = \(\frac{R_T R_V}{R_V – R_T} = \frac{20 × 300}{280}\) = 21.43 kΩ. Percentage error = \(\frac{Actual-Apparent}{Actual}\) × 100 = \(\frac{21.43-20}{21.43}\) × 100 = 6.67%. Sanfoundry Global Education & Learning Series – Network Theory.
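The loading-effect arithmetic in Q15 above is easy to cross-check numerically; the sketch below simply replays the numbers from the explanation (a 1000 Ω/V voltmeter on its 300 V range, reading 200 V while the milliammeter reads 10 mA) and is an illustration, not part of the question set.

```python
# Voltmeter loading error: the voltmeter sits in parallel with the unknown
# resistor, so the apparent resistance R_T understates the actual R_X.
V_T, I_T = 200.0, 10e-3            # meter readings: volts, amperes
R_T = V_T / I_T                    # apparent resistance: 20 kOhm
R_V = 1000.0 * 300.0               # voltmeter resistance: 300 kOhm
R_X = R_T * R_V / (R_V - R_T)      # actual resistance, undoing the parallel path
error_pct = (R_X - R_T) / R_X * 100.0
print(R_X, error_pct)              # ~21428.6 Ohm, ~6.67 %
```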
Definition:Ordering/Notation Definition Symbols used to denote a general ordering relation are usually variants on $\preceq$, $\le$ and so on. On $\mathsf{Pr} \infty \mathsf{fWiki}$, to denote a general ordering relation it is recommended to use $\preceq$ and its variants: $\preccurlyeq$ $\curlyeqprec$ $\leqslant$ $\leqq$ $\eqslantless$ The symbol $\subseteq$ is universally reserved for the subset relation. $a \preceq b$ can be read as: $a$ precedes, or is the same as, $b$. Similarly: $b \succeq a$ can be read as: $b$ succeeds, or is the same as, $a$. If, for two elements $a, b \in S$, it is not the case that $a \preceq b$, then the symbols $a \npreceq b$ and $b \nsucceq a$ can be used.
Abstract: The following multiple integral is involved in neutron star theory: \[\tau(\epsilon,\upsilon)=\frac{1}{\omega(\epsilon)}\int_{0}^{\pi/2}d\theta\, \sin\theta \int_{0}^{\infty}dn\, n^{2} \int_{0}^{\infty}dp\, h(n, p, \theta, \epsilon, \upsilon)\] where \[h(n,p,\theta,\epsilon,\upsilon)=\psi(z)\phi(n-\epsilon-z)+\psi(-z)\phi(n-\epsilon+z)-\psi(z)\phi(n+\epsilon-z)-\psi(z)\phi(n+\epsilon+z)\] and \[z=\sqrt{p^{2}+(\upsilon \sin\theta)^{2}},\quad \psi(x)=\frac{1}{\exp x + 1},\quad \phi(x)=\frac{x}{\exp x - 1}.\] Here $\omega(\epsilon)$ is a normalization function. The aim is to build a table of $\tau(\epsilon,\upsilon)$ for some values of $(\epsilon,\upsilon)$ in $[10^{-4},10^{4}]\times[10^{-4},10^{3}]$ and then to interpolate for the others. We present a new strategy, using the Gauss–Legendre quadrature formula, which allows a single code to be used whatever the values of $\upsilon$ and $\epsilon$ are. We guarantee the accuracy of the final result, including both the truncation error and the round-off error, using Discrete Stochastic Arithmetic.
The diffusive flow of particles can be studied by applying a constant force \(f\) to a system using the microscopic equations of motion \[\dot{\textbf r}_i = { {\textbf p}_i \over m_i}\] \[\dot{\textbf p}_i = {\textbf F}_i({\textbf q}_1,..,{\textbf q}_N) + f\hat{\textbf x} \] which have the conserved energy \[ H' = \sum_{i=1}^N {{\textbf p}_i^2 \over 2m_i} + U({\textbf q}_1,...,{\textbf q}_N) -f\sum_{i=1}^Nx_i \] Since the force is applied in the \( \hat{\textbf x} \) direction, there will be a net flow of particles in this direction, i.e., a current \(J_x \). Since this current is a thermodynamic quantity, there is an estimator for it: \[ u_x = \sum_{i=1}^N \dot{x}_i \] and \(J_x = \langle u_x \rangle \). The constant force can be considered as arising from a potential field \[ \phi(x) = -xf \] The potential gradient \( \partial \phi/\partial x \) will give rise to a concentration gradient \(\partial c / \partial x \) which is opposite to the potential gradient and related to it by \[ {\partial c \over \partial x} = -{1 \over kT}{\partial \phi \over \partial x} \] However, Fick's law tells how to relate the particle current \(J_x \) to the concentration gradient \[ J_x = D{\partial c \over \partial x} = -{D \over kT}{\partial \phi \over \partial x}= {D \over kT}f \] where \(D\) is the diffusion constant. Solving for \(D\) gives \[ D = kT{J_x \over f} = kT\lim_{t\rightarrow\infty}{\langle u_x(t)\rangle \over f} \] Let us apply the linear response formula again to the above nonequilibrium average. 
Again, we make the identification: \[ F_e(t) = 1\;\;\;\;\;\;{\textbf D}_i = f\hat{\textbf x}\;\;\;\;\;{\textbf C}_i=0 \] Thus, \[\langle u_x(t) \rangle = \langle u_x\rangle_0 + \beta\int_0^t ds\, f\left\langle \left(\sum_{i=1}^N\dot{x}_i(0)\right)\left(\sum_{i=1}^N\dot{x}_i(t-s)\right)\right\rangle_0 = \langle u_x \rangle_0 + \beta f\int_0^t ds\sum_{i,j}\langle \dot{x}_i(0)\dot{x}_j(t-s)\rangle_0\] In equilibrium, it can be shown that there are no cross correlations between different particles. Consider the initial value of the correlation function. From the virial theorem, we have \[ \langle \dot{x}_i\dot{x}_j\rangle_0 = \delta_{ij}\langle \dot{x}_i^2\rangle_0 \] which vanishes for \(i \ne j \). In general, \[ \langle \dot{x}_i(0)\dot{x}_j(t)\rangle_0 = \delta_{ij}\langle \dot{x}_i(0)\dot{x}_i(t)\rangle_0 \] Thus, \[ \langle u_x(t)\rangle = \langle u_x\rangle_0 + \beta f \int_0^t ds\sum_{i=1}^N\langle\dot{x}_i(0)\dot{x}_i(t-s)\rangle_0 \] In equilibrium, \(\langle u_x \rangle _0 = 0 \), since \(u_x\) is linear in the velocities (hence momenta). Thus, when the limit \( t \rightarrow \infty \) is taken, the diffusion constant is given by \[ D = \int_0^{\infty} dt \sum_{i=1}^N\langle\dot{x}_i(0)\dot{x}_i(t)\rangle_0 \] However, since no spatial direction is preferred, we could also choose to apply the external force in the \(y\) or \(z\) directions and average the result over the three directions. This would give a diffusion constant \[ D = {1 \over 3}\int_0^{\infty} dt \sum_{i=1}^N\langle \dot{\textbf r}_i(0)\cdot\dot{\textbf r}_i(t)\rangle_0\] The quantity \[ \sum_{i=1}^N\langle \dot{\textbf r}_i(0)\cdot\dot{\textbf r}_i(t)\rangle_0 \] is known as the velocity autocorrelation function, a quantity we will encounter again in other contexts.
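As a sketch of the Green-Kubo route to \(D\), one can insert a model velocity autocorrelation function into the final formula. The Langevin-like form \(\langle \dot{\textbf r}(0)\cdot\dot{\textbf r}(t)\rangle = 3(kT/m)e^{-\gamma t}\) for a single particle in three dimensions, and all parameter values, are assumptions for illustration, not results from the text; for this model the time integral gives \(D = kT/(m\gamma)\), the familiar Einstein-Langevin result.

```python
import math

# Green-Kubo evaluation of D = (1/3) * Int_0^inf <v(0).v(t)> dt
# for a single particle with an assumed exponentially decaying VACF.
kT, m, gamma = 1.0, 1.0, 2.0      # reduced units, illustrative values

def vacf(t):
    # model VACF: equipartition value 3 kT/m at t = 0, decay rate gamma
    return 3.0 * (kT / m) * math.exp(-gamma * t)

dt, t_max = 1.0e-4, 20.0          # quadrature step and effective "infinity"
steps = int(t_max / dt)
integral = sum(vacf((i + 0.5) * dt) for i in range(steps)) * dt
D = integral / 3.0
D_exact = kT / (m * gamma)        # analytic value for this model
print(D, D_exact)                 # both ~0.5
```

In a real simulation the model `vacf` would be replaced by the correlation function measured from the trajectory, with the same time integral performed on the data.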
Answer There are no values of x that will make this equation true. Work Step by Step Because 2x + 1 is in the denominator, and the denominator cannot equal 0, 2x + 1 $\ne$ 0 2x $\ne$ -1 x $\ne$ -$\frac{1}{2}$ Because x - 3 is in the denominator, and the denominator cannot equal 0, x - 3 $\ne$ 0 x $\ne$ 3 To solve this equation, we equate the cross products. 8 $\times$ (x - 3) = 4 $\times$ (2x + 1) Use the distributive property. 8x - 24 = 8x + 4 Subtract 8x from both sides. -24 = 4 We can see that this equation will never be true for any value of x.
Strongly Nonlinear p(x)-Elliptic Problems with L¹-Data Abstract In this paper, we study the existence of solutions in the sense of distributions for the quasilinear $p(x)$-elliptic problem $$ Au + g(x,u,\nabla u) = f,$$ where $A$ is a Leray-Lions operator from $W_{0}^{1,p(\cdot)}(\Omega)$ into its dual, and the nonlinear term $g(x,s,\xi)$ satisfies a growth condition with respect to $\xi$ and the sign condition with respect to $s$. The datum $f$ is first assumed to be in the dual space $W^{-1,p'(\cdot)}(\Omega)$, and then in $L^{1}(\Omega)$.
The equations of motion of a system can be cast in the generic form \[\dot {x} = \xi (x) \] where, for a Hamiltonian system, the vector function \(\xi\) would be \[ \xi (x) = \left ( - \dfrac {\partial H}{\partial r_1} , \cdots , - \dfrac {\partial H}{\partial r_N} , \dfrac {\partial H}{\partial p_1}, \cdots , \dfrac {\partial H}{\partial p_N} \right ) \] and the incompressibility condition would be a condition on \(\xi \): \[ \nabla _x \cdot \dot {x} = \nabla _x \cdot \xi = 0 \] A non-Hamiltonian system, described by a general vector function \(\xi \), will not, in general, satisfy the incompressibility condition. That is: \[ \nabla _x \cdot \dot {x} = \nabla _x \cdot \xi \ne 0 \] Non-Hamiltonian dynamical systems are often used to describe open systems, i.e., systems in contact with heat reservoirs or mechanical pistons or particle reservoirs. They are also often used to describe driven systems or systems in contact with external fields. The fact that the compressibility does not vanish has interesting consequences for the structure of the phase space. The Jacobian, which satisfies \[ \dfrac {d J}{ dt} = J \nabla _x \cdot \dot {x} \] will no longer be 1 for all time. Defining \( k = \nabla _x \cdot \dot {x} \), the general solution for the Jacobian can be written as \[ J ( x_t; x_0 ) = J ( x_0 ; x_0) \exp \left ( \int \limits _0 ^t ds\, k (x_s) \right ) \] Note that \(J (x_0; x_0 ) = 1 \) as before. Also, note that \( k = \dfrac {d \ln J}{dt} \). Thus, \(k\) can be expressed as the total time derivative of some function, which we will denote \(W\), i.e., \(k = \dot {W} \).
Then, the Jacobian becomes \[ J (x_t ; x_0) = \exp \left ( \int \limits _0^t ds\, \dot {W} (x_s) \right ) = \exp \left ( W (x_t) - W (x_0) \right ) \] Thus, the volume element in phase space now transforms according to \[ dx_t = \exp \left ( W (x_t) - W (x_0) \right ) dx_0 \] which can be rearranged to read as a conservation law: \[ e^{-W(x_t)} dx_t = e^{-W(x_0)} dx_0 \] Thus, we have a conservation law for a modified volume element, involving a ``metric factor'' \(\exp (-W (x)) \). Introducing the suggestive notation \(\sqrt {g} = \exp (-W(x)) \), the conservation law reads \(\sqrt {g(x_t)}\, dx_t = \sqrt {g(x_0)}\, dx_0 \). This is a generalized version of Liouville's theorem. Furthermore, a generalized Liouville equation for non-Hamiltonian systems can be derived which incorporates this metric factor. The derivation is beyond the scope of this course; however, the result is \[ \dfrac {\partial (f \sqrt {g})}{\partial t} + \nabla _x \cdot (\dot {x} f \sqrt {g} ) = 0 \] We have called this equation the generalized Liouville equation. Finally, noting that \(\sqrt {g} \) evolves according to \[ \dfrac {d \sqrt {g}}{dt} = -k \sqrt {g} \] the presence of \(\sqrt {g} \) in the generalized Liouville equation can be eliminated, resulting in \[ \dfrac {\partial f}{\partial t} + \dot {x} \cdot \nabla _x f = \dfrac {df}{dt} = 0 \] which is the ordinary Liouville equation from before. Thus, we have derived a modified version of Liouville's theorem and have shown that it leads to a conservation law for \(f\) equivalent to the Hamiltonian case. This, then, supports the generality of the Liouville equation for both Hamiltonian and non-Hamiltonian based ensembles, an important fact considering that this equation is the foundation of statistical mechanics.
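The compressibility relation \( \dfrac{dJ}{dt} = J \, \nabla_x \cdot \dot{x} \) can be checked numerically for a concrete non-Hamiltonian flow. The following sketch (an illustrative example, not part of the notes) uses a damped oscillator, whose phase-space divergence is the constant \(-\gamma\), so the Jacobian should decay as \(e^{-\gamma t}\):

```python
import math

# Illustrative check (not from the notes) of dJ/dt = J * (div of xdot) for a
# simple non-Hamiltonian flow: a damped oscillator with xdot = p,
# pdot = -x - gamma*p.  Its phase-space divergence is constant, k = -gamma,
# so W = -gamma*t and the Jacobian should be exp(-gamma*t).
gamma, dt, steps = 0.5, 1e-4, 10_000  # integrate to t = 1

def step(x, p):
    # forward-Euler step of the flow
    return x + dt * p, p + dt * (-x - gamma * p)

# Evolve a point and two nearby displacements to track the area element.
eps = 1e-6
pts = [(1.0, 0.0), (1.0 + eps, 0.0), (1.0, eps)]
for _ in range(steps):
    pts = [step(x, p) for x, p in pts]

# Ratio of the evolved parallelogram area to the initial area = Jacobian.
(ax, ap), (bx, bp), (cx, cp) = pts
J = abs((bx - ax) * (cp - ap) - (cx - ax) * (bp - ap)) / eps**2
print(abs(J - math.exp(-gamma)) < 1e-3)  # True: J(t=1) ~ exp(-gamma)
```

The same construction works for any \(\xi(x)\); only the divergence calculation changes.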
On Jacobi Fields Along Eigenmappings of the Tension Field for Mappings into a Symmetric Riemannian Manifold Abstract We prove that the mean value (for some measure $\mu = \chi\, dx$ with $\chi \geq 0$, $dx =$ Riemannian measure) of the squared norm of the gradient of the unitary direction of a Jacobi field along an eigenmapping $v$ (associated to an eigenvalue $\lambda \geq 0$) of the tension field, for mappings from a compact Riemannian manifold $(M,g)$ into a symmetric Riemannian manifold $(N,h)$ of positive sectional curvature, is smaller than $c\lambda$, where $c>0$ depends only on the diameter and upper and lower curvature bounds of $(N,h)$. For negative $\lambda$, we prove that there is no nonvanishing Jacobi field along the eigenmappings, under the same assumptions on $(M,g)$ and $(N,h)$.
Systematics on lifetime measurements We propose the following text (LaTeX format). LaTeX abbreviations are defined using the standard LHCb LaTeX definition file. The particle decay times are measured from the distance between the primary vertex and the secondary decay vertex in the VELO. The accuracy with which this distance is known depends on the precision with which the relative position along the beam line ($z$ axis) of the LHCb modules is determined.\\ There are two contributions to this systematic uncertainty. First, there is the precision with which the VELO modules were assembled. This was determined during a survey at the time of assembly to be 100~\mum\ for the measurement of the baseplate over the whole length of the VELO \cite{bib:VELOPerformance}. \begin{equation} \sigma_{\text{survey}} = \frac{100\times10^{-3}~\text{mm}}{1000~\text{mm}} = 0.01 \%. \end{equation} The second contribution originates from the track-based alignment \cite{bib:alignKalman, bib:alignVELO, bib:alignVELOResult}. This is mostly determined by the first two modules on the track, since the following modules are weighted down due to multiple-scattering effects. So in principle the $z$-scale uncertainty is obtained by comparing the $z$ module position from the track-based alignment with the metrology (20~\mum), divided by the spacing between two modules (30~mm). However, since the signal tracks have some spread in $z$ within the VELO, they do not all hit the same module first (see Fig.~\ref{fig_zpos}). The RMS of this distribution (100~mm) is a measure of the effective spread of the tracks. Therefore the resulting uncertainty from track-based alignment is given by \begin{equation} \sigma_{\text{track}} = \frac{20\times10^{-3}~\text{mm}}{100~\text{mm}} = 0.02 \%. 
\end{equation} \begin{figure} \centering \includegraphics[width=0.6\linewidth]{./plots/ct/zpos_first_hit.eps} \caption{$z$-position of the first hit on each track used in this analysis.\label{fig_zpos}} \end{figure} For the overall $z$-scale systematic we add the two contributions in quadrature and end up with \begin{equation} \sigma_{z\text{-scale}} = 0.022 \%. \end{equation} This is directly translated into a relative uncertainty on \Dms. Therefore the systematic uncertainty on \Dms\ that we assign to the $z$-scale is $\pm 0.004$ ps$^{-1}$. \bibitem{bib:alignKalman}{W.~Hulsbergen, ``The global covariance matrix of tracks fitted with a Kalman filter and an application in detector alignment'', Nucl.\ Instrum.\ Meth.\ A {\bf 600} (2009) 471 [arXiv:0810.2241 [physics.ins-det]].} \bibitem{bib:alignVELO}{S.~Viret, C.~Parkes and M.~Gersabeck, ``Alignment procedure of the LHCb Vertex Detector'', Nucl.\ Instrum.\ Meth.\ A {\bf 596} (2008) 157 [arXiv:0807.5067 [physics.ins-det]].} \bibitem{bib:alignVELOResult}{S.~Borghi {\it et al.}, ``First spatial alignment of the LHCb VELO and analysis of beam absorber collision data'', Nucl.\ Instrum.\ Meth.\ A {\bf 618} (2010) 108.} \bibitem{bib:VELOPerformance}{LHCb VELO group, ``Performance of the LHCb VELO'', to be submitted to JINST.}
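As a cross-check of the quadrature sum described above, the quoted numbers can be reproduced with a few lines of Python (a simple verification script, not part of the proposed analysis text):

```python
import math

# Recomputing the z-scale numbers from the text: survey precision of 100 um
# over the 1000 mm VELO length, and track-alignment precision of 20 um over
# the 100 mm effective spread of first-hit positions, added in quadrature.
sigma_survey = 100e-3 / 1000.0  # = 1e-4, i.e. 0.01 %
sigma_track = 20e-3 / 100.0     # = 2e-4, i.e. 0.02 %

sigma_z = math.hypot(sigma_survey, sigma_track)  # quadrature sum
print(f"{100 * sigma_z:.3f} %")  # 0.022 %
```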
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ... Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ... Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ... Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2014-06) The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ... Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... 
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01) In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ... Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2014-01) The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ... Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2014-03) A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ... Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider (American Physical Society, 2014-02-26) Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ... Exclusive J/ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV (American Physical Society, 2014-12-05) We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
High Energy Physics - Phenomenology Title: Sub-MeV Bosonic Dark Matter, Misalignment Mechanism and Galactic Dark Matter Halo Luminosities (Submitted on 26 Oct 2016 (v1), last revised 21 Jun 2017 (this version, v3)) Abstract: We explore a scenario in which the dark matter is a boson condensate created by the misalignment mechanism; a spin 0 boson (an axion-like particle) and a spin 1 boson (the dark photon) are considered, respectively. We find that although the sub-MeV dark matter boson is extremely stable, the huge number of dark matter particles in a galaxy halo makes the decay signal detectable. A galaxy halo is a large structure bound by gravity with a typical mass of $\sim10^{12}$ solar masses, and the majority of its components are made of dark matter. For the axion-like particle case, it decays via $\phi\to \gamma\gamma$, so the photon spectrum is monochromatic. For the dark photon case, the decay is a three-body process $A'\to\gamma\gamma\gamma$; however, we find that the photon spectrum is heavily peaked at $M/2$ and thus can facilitate observation. We also suggest a physical explanation for the three-body decay spectrum by comparing it with the physics of orthopositronium decay. In addition, for both cases, the decay photon flux can be measured in some regions of parameter space using current technologies. Submission history From: Qiaoli Yang [view email] [v1] Wed, 26 Oct 2016 15:29:06 GMT (319kb) [v2] Fri, 18 Nov 2016 07:20:13 GMT (318kb) [v3] Wed, 21 Jun 2017 14:03:59 GMT (320kb)
To discuss the electronic states of atoms we need a system of notation for multi-electron wavefunctions. As we saw in Chapter 8, the assignment of electrons to orbitals is called the electron configuration of the atom. One creates an electronic configuration representing the electronic structure of a multi-electron atom or ion in its ground or lowest-energy state as follows. First, obey the Pauli Exclusion Principle, which requires that each electron in an atom or molecule must be described by a different spin-orbital. Second, assign the electrons to the lowest energy spin-orbitals, then to those at higher energy. This procedure is called the Aufbau Principle (which translates from German as build-up principle). The mathematical analog of this process is the construction of the approximate multi-electron wavefunction as a product of the single-electron atomic orbitals. For example, the configuration of the boron atom, shown schematically in the energy level diagram in Figure \(\PageIndex{1}\), is written in shorthand form as \(1s^22s^22p^1\). As we saw previously, the degeneracy of the 2s and 2p orbitals is broken by the electron-electron interactions in multi-electron systems. Figure \(\PageIndex{1}\): Orbital energy level diagram that represents the electron configuration of the boron atom. Orbital energy differences are approximately to scale. Rather than showing the individual spin-orbitals in the diagram or in the shorthand notation, we commonly say that up to two electrons can be described by each spatial orbital, one with spin function \(\alpha\) (electron denoted by an arrow pointing up) and the other with spin function \(\beta\) (arrow pointing down). This restriction is a manifestation of the Pauli Exclusion Principle mentioned above. An equivalent statement of the Pauli Exclusion Principle is that each electron in an atom has a unique set of quantum numbers (\(n, l , m_l , m_s\)). 
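The aufbau filling just described can be sketched programmatically. The following Python fragment is purely illustrative (the filling order list and capacity table are the standard ones, not taken from this text):

```python
# A minimal aufbau sketch: fill subshells in order of increasing energy with
# up to 2*(2l+1) electrons each, then print the shorthand configuration.
# FILL_ORDER below is truncated to the first few subshells for brevity.
FILL_ORDER = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p"]
CAPACITY = {"s": 2, "p": 6, "d": 10}  # 2*(2l+1) spin-orbitals per subshell

def configuration(n_electrons):
    parts = []
    for subshell in FILL_ORDER:
        if n_electrons == 0:
            break
        k = min(n_electrons, CAPACITY[subshell[-1]])  # fill up to capacity
        parts.append(f"{subshell}{k}")
        n_electrons -= k
    return " ".join(parts)

print(configuration(5))  # boron: 1s2 2s2 2p1, as in Figure 1
```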
Since the two spin functions are degenerate in the absence of a magnetic field, the energy of the two electrons with different spin functions in a given spatial orbital is the same, and they are shown on the same line in the energy diagram. Exercise \(\PageIndex{1}\) Write the electronic configuration of the carbon atom and draw the corresponding energy level diagram. Exercise \(\PageIndex{2}\) Write the values for the quantum numbers (n, \(l , m_l , m_s\)) for each of the six electrons in carbon. We can deepen our understanding of the quantum mechanical description of multi-electron atoms by examining the concepts of electron indistinguishability and the Pauli Exclusion Principle in detail. We will use the following statement as a guide to keep our explorations focused on the development of a clear picture of the multi-electron atom: “When a multi-electron wavefunction is built as a product of single-electron wavefunctions, the corresponding concept is that exactly one electron’s worth of charge density is described by each atomic spin-orbital.” A subtle, but important part of the conceptual picture is that the electrons in a multi-electron system are not distinguishable from one another by any experimental means. Since the electrons are indistinguishable, the probability density we calculate by squaring the modulus of our multi-electron wavefunction also cannot change when the electrons are interchanged (permuted) between different orbitals. In general, if we interchange two identical particles, the world does not change. As we will see below, this requirement leads to the idea that the world can be divided into two types of particles based on their behavior with respect to permutation or interchange. For the probability density to remain unchanged when two particles are permuted, the wavefunction itself can change only by a factor of \(e^{i\varphi}\), which represents a complex number, when the particles described by that wavefunction are permuted. 
As we will show below, the \(e^{i\varphi}\) factor is possible because the probability density depends on the absolute square of the function and all expectation values involve \(\psi \psi ^*\). Consequently \(e^{i\varphi}\) disappears in any calculation that relates to the real world because \(e^{i\varphi} e^{-i\varphi} = 1\). We could symbolically write an approximate two-particle wavefunction as \(\psi (r_1, r_2)\). This could be, for example, a two-electron wavefunction for helium. To exchange the two particles, we simply substitute the coordinates of particle 1 (\(r_1\)) for the coordinates of particle 2 (\(r_2\)) and vice versa, to get the new wavefunction \(\psi (r_2, r_1)\). This new wavefunction must have the property that \[|\psi (r_1, r_2)|^2 = \psi (r_2, r_1)^*\psi (r_2, r_1) = \psi (r_1, r_2)^* \psi (r_1, r_2) \label {9-38}\] since the probability density of the electrons in the atom does not change upon permutation of the electrons. Exercise \(\PageIndex{3}\) Permute the electrons in Equation \(\ref{9-13}\) (the product function for the He wavefunction). Equation \(\ref{9-38}\) will be true only if the wavefunctions before and after permutation are related by a factor of \(e^{i\varphi}\), \[\psi (r_2, r_1) = e^{i\varphi} \psi (r_1, r_2) \label {9-39}\] so that \[ \left ( e^{-i\varphi} \psi (r_1, r_2) ^*\right ) \left ( e^{i\varphi} \psi (r_1, r_2) \right ) = \psi (r_1 , r_2 ) ^* \psi (r_1 , r_2) \label {9-40}\] If we exchange or permute two identical particles twice, we are (by definition) back to the original situation. If each permutation changes the wavefunction by \(e^{i \varphi}\), the double permutation must change the wavefunction by \(e^{i\varphi} e^{i\varphi}\). Since we then are back to the original state, the effect of the double permutation must equal 1; i.e., \[e^{i\varphi} e^{i\varphi} = e^{i 2\varphi} = 1 \label {9-41}\] which is true only if \(\varphi = 0 \) or an integer multiple of π. 
The requirement that a double permutation reproduce the original situation limits the acceptable values for \(e^{i\varphi}\) to either +1 (when \(\varphi = 0\)) or -1 (when \(\varphi = \pi\)). Both possibilities are found in nature. Exercise \(\PageIndex{4}\) Wavefunctions for which \(e^{i \varphi} = +1\) are defined as symmetric with respect to permutation, because the wavefunction is identical before and after a single permutation. Wavefunctions that are symmetric with respect to interchange of the particles obey the following mathematical relationship: \[\psi (r_2, r_1) = \psi (r_1, r_2) \label {9-42}\] The behavior of some particles requires that the wavefunction be symmetric with respect to permutation. These particles are called bosons and have integer spin; examples include deuterium nuclei, photons, and gluons. The behavior of other particles requires that the wavefunction be antisymmetric with respect to permutation \((e^{i\varphi} = -1)\). A wavefunction that is antisymmetric with respect to electron interchange is one that changes sign when the electron coordinates are interchanged, as shown below: \[ \psi (r_2 , r_1) = e^{i\varphi} \psi (r_1, r_2) = - \psi (r_1, r_2) \label {9-43}\] These particles, called fermions, have half-integer spin; examples include electrons, protons, and neutrinos. Exercise \(\PageIndex{5}\) In fact, an elegant statement of the Pauli Exclusion Principle is simply “electrons are fermions.” This statement means that any wavefunction used to describe multiple electrons must be antisymmetric with respect to permutation of the electrons, providing yet another statement of the Pauli Exclusion Principle. The requirement that the wavefunction be antisymmetric applies to all multi-electron functions \(\psi (r_1, r_2, \cdots r_i)\), including those written as products of single electron functions \(\varphi _1 (r_1) \varphi _2 (r_2) \cdots \varphi _i (r_i)\). 
Another way to simply restate the Pauli Exclusion Principle is that “electrons are fermions.” The first statement of the Pauli Exclusion Principle was that two electrons could not be described by the same spin orbital. To see the relationship between this statement and the requirement that the wavefunction be antisymmetric for electrons, try to construct an antisymmetric wavefunction for two electrons that are described by the same spin-orbital. We can try to do so for helium. Write the He approximate two-electron wavefunction as a product of identical 1s spin-orbitals for each electron, \(\varphi _{1s_{\alpha}} (r_1) \) and \(\varphi _{1s_{\alpha}} (r_2) \): \[ \psi (r_1, r_2 ) = \varphi _{1s\alpha} (r_1) \varphi _{1s\alpha} (r_2) \label {9-44}\] To permute the electrons in this two-electron wavefunction, we simply substitute the coordinates of electron 1 (\(r_1\)) for the coordinates of electron 2 (\(r_2\)) and vice versa, to get \[ \psi (r_2, r_1 ) = \varphi _{1s\alpha} (r_2) \varphi _{1s\alpha} (r_1) \label {9-45}\] This is identical to the original function (Equation \(\ref{9-44}\)) since the two single-electron component functions commute. The two-electron function has not changed sign, as it must for fermions. We can construct a wavefunction that is antisymmetric with respect to permutation symmetry only if each electron is described by a different function. Exercise \(\PageIndex{6}\) What is meant by the term permutation symmetry? Exercise \(\PageIndex{7}\) Explain why the product function \(\varphi (r_1) \varphi (r_2)\) could describe two bosons (deuterium nuclei) but can not describe two fermions (e.g. electrons). Let’s try to construct an antisymmetric function that describes the two electrons in the ground state of helium. 
Blindly following the first statement of the Pauli Exclusion Principle, that each electron in a multi-electron atom must be described by a different spin-orbital, we try constructing a simple product wavefunction for helium using two different spin-orbitals. Both have the 1s spatial component but one has spin function \(\alpha\) and the other has spin function \(\beta\) so the product wavefunction matches the form of the ground state electron configuration for He, \(1s^2\). \[ \psi (r_1, r_2 ) = \varphi _{1s\alpha} (r_1) \varphi _{1s\beta} (r_2) \label {9-46}\] After permutation of the electrons, this becomes \[ \psi (r_2, r_1 ) = \varphi _{1s\alpha} (r_2) \varphi _{1s\beta} (r_1) \label {9-47}\] which is different from the starting function (Equation \(\ref{9-46}\)) since \(\varphi _{1s\alpha}\) and \(\varphi _{1s\beta}\) are different functions. However, an antisymmetric function must produce the same function multiplied by (–1) after permutation, and that is not the case here. We must try something else. To avoid getting a totally different function when we permute the electrons, we can make a linear combination of functions. A very simple way of taking a linear combination involves making a new function by simply adding or subtracting functions. The function that is created by subtracting the right-hand side of Equation \(\ref{9-47}\) from the right-hand side of Equation \(\ref{9-46}\) has the desired antisymmetric behavior. \[\psi (r_1, r_2) = \dfrac {1}{\sqrt {2}} [ \varphi _{1s\alpha}(r_1) \varphi _{1s\beta}(r_2) - \varphi _{1s\alpha}(r_2) \varphi _{1s\beta}(r_1)] \label {9-48}\] The constant on the right-hand side accounts for the fact that the total wavefunction must be normalized. Exercise \(\PageIndex{8}\) Show that the linear combination in Equation \(\ref{9-48}\) is antisymmetric with respect to permutation of the two electrons. Replace the minus sign with a plus sign (i.e. 
take the positive linear combination of the same two functions) and show that the resultant linear combination is symmetric. Exercise \(\PageIndex{9}\) Write a similar linear combination to describe the \(1s^12s^1\) excited configuration of helium. A linear combination that describes an appropriately antisymmetrized multi-electron wavefunction for any desired orbital configuration is easy to construct for a two-electron system. However, interesting chemical systems usually contain more than two electrons. For these multi-electron systems a relatively simple scheme for constructing an antisymmetric wavefunction from a product of one-electron functions is to write the wavefunction in the form of a determinant. John Slater introduced this idea so the determinant is called a Slater determinant. The Slater determinant for the two-electron wavefunction of helium is \[ \psi (r_1, r_2) = \frac {1}{\sqrt {2}} \begin {vmatrix} \varphi _{1s} (1) \alpha (1) & \varphi _{1s} (1) \beta (1) \\ \varphi _{1s} (2) \alpha (2) & \varphi _{1s} (2) \beta (2) \end {vmatrix} \label {9-49}\] and a shorthand notation for this determinant is \[ \psi (r_1 , r_2) = 2^{-\frac {1}{2}} Det | \varphi _{1s} (r_1) \varphi _{1s} (r_2) | \label {9-50}\] The determinant is written so the electron coordinate changes in going from one row to the next, and the spin orbital changes in going from one column to the next. The advantage of having this recipe is clear if you try to construct an antisymmetric wavefunction that describes the orbital configuration for uranium! Note that the normalization constant is \((N!)^{-\frac {1}{2}}\) for N electrons. Exercise \(\PageIndex{10}\) Show that the determinant form is the same as the form for the helium wavefunction that is given in Equation \(\ref{9-48}\). Exercise \(\PageIndex{11}\) Expand the Slater determinant in Equation \(\ref{9-49}\) for the He atom. 
Exercise \(\PageIndex{12}\) Write and expand the Slater determinant for the electronic wavefunction of the Li atom. Exercise \(\PageIndex{13}\) Write the Slater determinant for the carbon atom. If you expanded this determinant, how many terms would be in the linear combination of functions? Exercise \(\PageIndex{14}\) Write the Slater determinant for the \(1s^12s^1\) excited state orbital configuration of the helium atom. Now that we have seen how acceptable multi-electron wavefunctions can be constructed, it is time to revisit the “guide” statement of conceptual understanding with which we began our deeper consideration of electron indistinguishability and the Pauli Exclusion Principle. What does a multi-electron wavefunction constructed by taking specific linear combinations of product wavefunctions mean for our physical picture of the electrons in multi-electron atoms? Overall, the antisymmetrized product function describes the configuration (the orbitals, regions of electron density) for the multi-electron atom. Because of the requirement that electrons be indistinguishable, we can’t visualize specific electrons assigned to specific spin-orbitals. Instead, we construct functions that allow each electron’s probability distribution to be dispersed across each spin-orbital. The total charge density described by any one spin-orbital cannot exceed one electron’s worth of charge, and each electron in the system is contributing a portion of that charge density. Exercise \(\PageIndex{13}\) Critique the energy level diagram and shorthand electron configuration notation from the perspective of the indistinguishability criterion. Can you imagine a way to represent the wavefunction expressed as a Slater determinant in a schematic or shorthand notation that more accurately represents the electrons? (This is not a solved problem!) Contributors Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski
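As a numerical aside (not part of the original text; the orbital functions phi_a and phi_b below are arbitrary stand-ins, chosen only to be distinct), one can spot-check that the combination in Equation 9-48 changes sign under exchange while a same-orbital product does not, and count the N! terms produced by expanding an N-electron Slater determinant, which is where the \((N!)^{-\frac {1}{2}}\) normalization comes from:

```python
import itertools
import math

# Two stand-in "spin-orbitals" (hypothetical radial forms; any two distinct
# functions serve for the symmetry check).
def phi_a(r):
    return math.exp(-r)

def phi_b(r):
    return r * math.exp(-r / 2)

def psi_antisym(r1, r2):
    # the antisymmetric combination of Equation 9-48
    return (phi_a(r1) * phi_b(r2) - phi_a(r2) * phi_b(r1)) / math.sqrt(2)

r1, r2 = 0.7, 1.9
# Exchanging the electron coordinates flips the sign, as required for fermions,
print(abs(psi_antisym(r2, r1) + psi_antisym(r1, r2)) < 1e-12)  # True
# while a same-spin-orbital product is symmetric under exchange:
print(phi_a(r1) * phi_a(r2) == phi_a(r2) * phi_a(r1))  # True

# Expanding an N x N Slater determinant produces one signed product of
# spin-orbitals per permutation of the electron labels: N! terms in all.
def n_slater_terms(n):
    return sum(1 for _ in itertools.permutations(range(n)))

print(n_slater_terms(2))  # 2 terms, matching Equation 9-48 for helium
```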
I would like to know a reference for Grothendieck duality in a resolution of singularities. More precisely, let $Y$ be a normal, Gorenstein variety with finite quotient singularities, and suppose that $f\colon X \to Y$ is a crepant resolution of singularities, meaning that $f^*\omega_Y \cong \omega_X$. In particular, since $Y$ is normal, we know that $Rf_*\mathcal{O}_X \cong \mathcal{O}_Y$. By general results valid for any projective morphism, there is a functor $f^!\colon D^b(Y) \to D^b(X)$ between the derived categories of $X$ and $Y$ which gives an isomorphism $$ Rf_*R\mathcal{Hom}(F, f^! G ) \cong R\mathcal{Hom}(Rf_*F,G) $$ for any $F \in D^b(X)$ and $G \in D^b(Y)$. My question is: is there a nice description of $f^{!}$ in this situation? I guess it should be related to $Lf^*$ and the relative canonical bundle, but I don't know much about this topic.
Definition:Iterated Binary Operation over Finite Set Definition Let $\left({G, *}\right)$ be a commutative semigroup. Let $S$ be a finite set. Let $f: S \to G$ be a mapping. Let $n \in \N$ be the cardinality of $S$, and let $g: \left\{{0, 1, \ldots, n - 1}\right\} \to S$ be a bijection. Then: $\displaystyle \prod_{s \mathop \in S} f \left({s}\right) = \prod_{i \mathop = 0}^{n - 1} f \left({g \left({i}\right)}\right)$ Commutative Monoid Let $G$ be a commutative monoid. Let $S$ be a non-empty set. Let $f : S \to G$ be a mapping. Also see Definition:Indexed Iterated Binary Operation, as shown at Iteration of Operation over Interval equals Indexed Iteration Definition:Summation Definition:Product over Finite Set
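The definition above can be sketched in code. In this illustrative Python fragment (the names are mine, not ProofWiki's), the enumeration plays the role of the bijection $g$, and commutativity of the operation guarantees the folded result is independent of which enumeration is chosen:

```python
import operator
from functools import reduce

# Sketch of the definition: to apply a commutative, associative operation
# over an unordered finite set S, pick any enumeration g: {0,...,n-1} -> S
# and fold the operation over f(g(0)), ..., f(g(n-1)).  Commutativity and
# associativity make the result independent of the enumeration chosen.
def iterate_over_set(op, f, s, enumeration=sorted):
    return reduce(op, (f(x) for x in enumeration(s)))

S = {3, 1, 4, 5}
forward = iterate_over_set(operator.mul, lambda x: x + 1, S)
backward = iterate_over_set(operator.mul, lambda x: x + 1, S,
                            enumeration=lambda s: sorted(s, reverse=True))
print(forward, backward)  # 240 240 -- same value for both enumerations
```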
SageMath Revision as of 19:46, 10 August 2016 SageMath (formerly Sage) is a program for numerical and symbolic mathematical computation that uses Python as its main language. It is meant to provide an alternative for commercial programs such as Maple, Matlab, and Mathematica. SageMath provides support for the following: Calculus: using Maxima and SymPy. Linear Algebra: using the GSL, SciPy and NumPy. Statistics: using R (through RPy) and SciPy. Graphs: using matplotlib. An interactive shell using IPython. Access to Python modules such as PIL, SQLAlchemy, etc. Installation contains the command-line version; for HTML documentation and inline help from the command line. includes the browser-based notebook interface. Note: Most if not all of the standard sage packages are available as optional dependencies of the package, therefore they have to be installed additionally as normal Arch packages in order to take advantage of their features. Note that there is no need to install them with sage -i; in fact, mixing system and user packages is discouraged. Usage SageMath mainly uses Python as a scripting language with a few modifications to make it better suited for mathematical computations. SageMath command-line SageMath can be started from the command-line: $ sage For information on the SageMath command-line see this page. 
Note, however, that it is not very comfortable for some uses such as plotting. When you try to plot something, for example: sage: plot(sin,(x,0,10)) SageMath opens a browser window with the Sage Notebook. Sage Notebook Note: The SageMath Flask notebook is currently in maintenance mode and will be deprecated in favour of the Jupyter notebook. The Jupyter notebook is recommended for all new worksheets. You can use the application to convert your Flask notebooks to Jupyter. A better suited interface for advanced usage in SageMath is the Notebook. To start the Notebook server from the command-line, execute: $ sage -n The notebook will be accessible in the browser from http://localhost:8080 and will require you to log in. However, if you only run the server for personal use, and not across the internet, the login will be an annoyance. You can instead start the Notebook without requiring login, and have it automatically pop up in a browser, with the following command: $ sage -c "notebook(automatic_login=True)" Jupyter Notebook SageMath also provides a kernel for the Jupyter notebook. To use it, install and , launch the notebook with the command $ jupyter notebook and choose "SageMath" in the drop-down "New..." menu. The SageMath Jupyter notebook supports LaTeX output via the %display latex command and 3D plots if is installed. Cantor Cantor is an application included in the KDE Edu Project. It acts as a front-end for various mathematical applications such as Maxima, SageMath, Octave, Scilab, etc. See the Cantor page on the Sage wiki for more information on how to use it with SageMath. Cantor can be installed with the cantor package or as part of the kde-applications or kdeedu groups, available in the official repositories. Optional additions SageTeX If you have installed TeX Live on your system, you may be interested in using SageTeX, a package that makes the inclusion of SageMath code in LaTeX files possible. TeX Live is made aware of SageTeX automatically, so you can start using it straight away. 
As a simple example, here is how you include a Sage 2D plot in your TeX document (assuming you use pdflatex): include the sagetex package in the preamble of your document with the usual \usepackage{sagetex} create a sagesilent environment in which you insert your code: \begin{sagesilent} dob(x) = sqrt(x^2 - 1) / (x * arctan(sqrt(x^2 - 1))) dpr(x) = sqrt(x^2 - 1) / (x * log( x + sqrt(x^2 - 1))) p1 = plot(dob,(x, 1, 10), color='blue') p2 = plot(dpr,(x, 1, 10), color='red') ptot = p1 + p2 ptot.axes_labels(['$\\xi$','$\\frac{R_h}{\\max(a,b)}$']) \end{sagesilent} create the plot, e.g. inside a float environment: \begin{figure} \begin{center} \sageplot[width=\linewidth]{ptot} \end{center} \end{figure} compile your document with the following procedure: $ pdflatex <doc.tex> $ sage <doc.sage> $ pdflatex <doc.tex> you can have a look at your output document. The full documentation of SageTeX is available on CTAN. Troubleshooting TeX Live does not recognize SageTeX If your TeX Live installation does not find the SageTeX package, you can try the following procedure (as root, or use a local folder): Copy the files to the texmf directory: # cp /opt/sage/local/share/texmf/tex/* /usr/share/texmf/tex/ Refresh TeX Live: # texhash /usr/share/texmf/ texhash: Updating /usr/share/texmf/.//ls-R... texhash: Done.
On p. 10 of these EFT lecture notes, the "relevance" of operators in a Lagrangian is determined by comparing their mass dimension to the spacetime dimension $d$ one considers, such that an operator is

relevant if its dimension is $< d$,
marginal if its dimension is $= d$,
and irrelevant if its dimension is $> d$.

This means, for example, for the action of a scalar field in $d=4$,

\(S[\phi] = \int d^d x\left( \frac{1}{2} \partial_{\mu}\phi \partial^{\mu} \phi - \frac{1}{2} m^2\phi^2- \frac{\lambda}{4!} \phi^4 - \frac{\tau}{6!} \phi^6 + ...\right)\)

that the mass term is relevant, the $\phi^4$ coupling is marginal, the $\phi^6$ coupling is irrelevant, etc.

However, when analyzing the RG flow around a fixed point $S^{*}[\phi]$, the (ir)relevance of an operator is determined by linearizing the RG flow equation around this fixed point ($M$ is the linearized right-hand side of, for example, the Wilson RG flow equation, and $t$ is the RG time),

\(\frac{\partial\, \delta S}{\partial t} = M\, \delta S, \qquad \delta S = S - S^{*},\)

solving the corresponding eigenvalue problem

\(M O_i(\phi) = \lambda_i O_i(\phi)\)

and looking at the sign of each eigenvalue $\lambda_i$. The action around the fixed point can then be approximated as

\(S[\phi] = S^{*}[\phi] + \sum\limits_i \alpha_i e^{\lambda_i t} O_i(\phi)\)

and the operator $O_i$ (or direction in coupling space) is said to be

relevant if $\lambda_i > 0$ (leads away from the fixed point),
marginal if $\lambda_i = 0$,
irrelevant if $\lambda_i < 0$ (these operators describe the fixed point theory).

So my question is: What is the relationship between these two notions/definitions of (ir)relevant and marginal operators in an effective field theory? Are they equivalent, and if so, how can I see this (mathematically)?
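For concreteness, here is how the dimension counting in the first definition works out for the monomials in the action above (my own worked check, not from the notes; it just makes the "compare to $d$" rule explicit):

```latex
% Mass dimension of \phi^n in d spacetime dimensions:
%   the kinetic term fixes [\phi] = (d-2)/2,
%   so [\phi^n] = n(d-2)/2.
% In d = 4 this gives:
%   [\phi^2] = 2 < 4   (relevant:   mass term)
%   [\phi^4] = 4 = 4   (marginal:   \lambda coupling)
%   [\phi^6] = 6 > 4   (irrelevant: \tau coupling)
\[
  [\phi] = \frac{d-2}{2}, \qquad
  [\phi^n] = \frac{n(d-2)}{2}.
\]
```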
Infrared Bandpass Filtered Thinsats

Thermal infrared filtering on the front side of a thinsat reduces mass by half. Adding an optical high-pass filter to the front (sunwards) side of a thinsat allows almost all the high energy sunlight to reach the solar cells, while pushing all the black body thermal emissions to the back side. The infrared emissions create light pressure thrust that balances 2/3 of the front side thrust, reducing orbit-distorting light sail thrust to 1/3 of the thrust caused by light absorption on the solar cells in front.

Backside Infrared Thrust

Solid angle is the two dimensional "angular area" of something projected onto a unit sphere. The solid angle of a sphere is 4π steradians, and the solid angle of a half-sphere is 2π steradians. The solid angle of a 1 degree wide band around any circumference of the sphere is 4π sin( 0.5° ) ≅ 0.1097 ster. A 1° radius circle is 9.570E-4 ster, and the visual size of the earth seen from the m288 orbit, close to 30° in radius, is 2π( 1 - cos(30°) ) ≅ 0.8418 ster.

Black body radiation from a flat surface is Lambertian - the intensity of the radiation per solid angle observed is constant. Perpendicular to the surface, a disk appears round, with maximum solid angle. At angle \theta from perpendicular, the disk appears just as "bright" per solid angle, but the disk appears elliptical and the solid angle is smaller, so less radiation comes at you as the angle increases, dropping to zero edge-on. The radiation intensity at angle \theta is I_0 \cos( \theta ), where I_0 is a constant we shall calculate now. For a thin band at angle \theta with width d\theta, the solid angle receiving the radiation is 2 \pi \sin( \theta ) d\theta and the radiation received is 2 \pi I_0 \cos( \theta ) \sin( \theta ) d\theta , or \pi I_0 \sin( 2 \theta ) d\theta .
The total power P is the integral of this for \theta between 0 and \pi/2, so P = \pi I_0, or I_0 = P/\pi . The light pressure thrust is proportional to power divided by the speed of light c . Perpendicular emissions create a perpendicular thrust, emission at right angles to the perpendicular creates zero perpendicular thrust, and emission at angle \theta from the perpendicular creates \cos( \theta ) times the perpendicular thrust. The perpendicular thrust from the band at angle \theta with width d\theta is dF = ( 2 \pi I_0 / c ) \cos( \theta )^2 \sin( \theta ) d\theta = ( 2 P / c ) \cos( \theta )^2 \sin( \theta ) d\theta . Again integrating for \theta between 0 and \pi/2, the total force is F = 2P/3c , 2/3 of the light pressure thrust there would be if the thermal radiation all came perpendicularly out the back.

While it would be nice to beam backwards ALL the light captured by the front, summing to a total thrust of zero, that is not thermodynamically possible. Reducing the sum of frontside and backside thrust to 1/3, and reducing launch mass in proportion, is still Very Nice.

Note to perpetual motion machinists: A beam of light has a property called etendue (ay-tahn-doo), a measure of the disorganization of the beam. Perfectly collimated light has an etendue of zero; all the rays are parallel. Disorganized light leaves a surface with rays scattering in all directions, and has higher etendue. No combination of mirrors and lenses can reduce the etendue of the whole beam; the quantity always increases. Black body radiation has maximum etendue; there's no way to focus it on a spot smaller than the emitting surface. If you could, that spot could be hotter than the source, violating the second law of thermodynamics.
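The two Lambertian integrals above are easy to check numerically. The sketch below (my own verification, not part of the original text) integrates the band expressions with a simple midpoint rule, confirming P = \pi I_0 and F = (2/3) P/c:

```python
import math

def integrate(f, a, b, n=100000):
    """Midpoint-rule numeric integration of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

I0 = 1.0  # arbitrary Lambertian intensity constant

# Total power: integral of 2*pi*I0*cos(t)*sin(t) dt over [0, pi/2]
P = integrate(lambda t: 2 * math.pi * I0 * math.cos(t) * math.sin(t),
              0.0, math.pi / 2)
print(P / math.pi)   # close to 1.0, i.e. P = pi * I0

# Perpendicular thrust (in units of 1/c):
# integral of 2*pi*I0*cos(t)^2*sin(t) dt over [0, pi/2]
F = integrate(lambda t: 2 * math.pi * I0 * math.cos(t) ** 2 * math.sin(t),
              0.0, math.pi / 2)
print(F / P)         # close to 2/3, the factor quoted in the text
```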
Front side absorption and thrust versus IR filter energy cutoff

  1366    W/m²   sun power
  1347    W/m²   emitted power
  1347    W/m²   absorbed power
  19.1    W/m²   reflected power
  8.9     W/m²   IR power emitted frontside
  1338    W/m²   IR power emitted backside
  499.1   W/m²   forward thrust power
  573.7   W/m²   photovoltaic illumination
  120°C          thinsat temperature
  0.87           thrust/photovoltaic ratio
  3.5 μm         filter wavelength

Midnight Flip Maneuver

Observe the thinsat in the full power, maximum Night Light Pollution case. The white side facing the sun gathers full power when available, going to zero power as it passes into eclipse. But in the 90° to 150° position, and again in the 210° to 270° position, a non-zero-albedo front surface will reflect some light into the night sky. A large telescope tracking the thinsat will probably be able to detect it. What if there are a trillion thinsats? The little dab of light becomes huge. They will add a glow to the night sky, approaching the brightness of the full moon. Nature depends on a dark night sky, and many species use the light from the full moon to navigate or synchronize reproduction. Too much light, and server sky becomes a new environmental hazard, rather than a solution to old ones.

The second animation, zero night light pollution, shows the thinsat turning towards the terminator (the day-night boundary) as it enters the night sky. In this orientation, no light from the front of the thinsat can reach the night sky of the earth. The thinsat will keep turning when it passes into eclipse; with the right rotation rate going in, it will be oriented correctly when it comes out of eclipse. The red line on the big graph to the right shows the angle of the thinsat's "normal" (the perpendicular vector from the front) relative to the center of the earth ("down").
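As a quick sanity check on the table (my own check, not from the original), the power entries should balance: absorbed plus reflected power equals the incident sunlight, and frontside plus backside IR emission equals the total emitted power:

```python
sun       = 1366.0   # W/m^2, incident solar power
emitted   = 1347.0   # W/m^2, total thermal power emitted
absorbed  = 1347.0   # W/m^2, power absorbed by the thinsat
reflected = 19.1     # W/m^2, power reflected off the front
ir_front  = 8.9      # W/m^2, IR emitted from the front side
ir_back   = 1338.0   # W/m^2, IR emitted from the back side

# Incident power splits into absorbed + reflected
print(absorbed + reflected)   # 1366.1, matching 1366 to rounding
# Emitted power splits into frontside + backside IR
print(ir_front + ir_back)     # 1346.9, matching 1347 to rounding
```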
It is not quite a straight line; the Earth's gravity field gets ever-so-slightly stronger towards the earth from the center of the thinsat's orbit, and weaker away from it. That tiny tidal force is enough to accelerate the thinsat towards a 90° or 270° orientation, edge on to the center of the earth. We must add another 16.7% of rotational velocity going into eclipse to compensate.

Perhaps we will never launch a trillion thinsats. Perhaps we will never use up all 4 billion IPv4 internet addresses (Oops. We did!). Best to plan ahead, design the correct architecture into the system from the beginning, and be prepared for success. As you will see, with the right architecture, the flip protects thinsats from thermal shock.

Absorbing heat from the earth during the night side flip

First, let's compute the infrared we get from the 260K earth. That will be a disk with a solid angle radius \rho_E of about 30 degrees. The distance or curvature of the disk does not matter; only the solid angle does. The amount of heat deposited on the Lambertian surface of the thinsat is a function of angle, the dot product of two unit vectors. One unit vector \vec P is perpendicular to the surface of the thinsat; the other unit vector \vec D points toward the center of the disk. The disk can be projected onto a unit sphere, and decomposed into rings with angular radius \phi and width d\phi , then further decomposed into pixel elements at angle \theta and width d\theta , each element with a vector \vec \alpha( \theta, \phi ) . That vector can be decomposed into \vec \alpha( \theta, \phi )_\parallel , a vector parallel to \vec D , and \vec \alpha( \theta, \phi )_\perp , perpendicular to \vec D . | \vec \alpha( \theta, \phi )_\parallel | = \cos( \phi ) and | \vec \alpha( \theta, \phi )_\perp | = \sin( \phi ) .
If we sum the vectors \vec \alpha( \theta, \phi ) and \vec \alpha( \theta + \pi, \phi ) , then the \sin() perpendicular components cancel, so | \vec \alpha( \theta, \phi ) + \vec \alpha( \theta + \pi, \phi ) | = 2 \cos( \phi ) . The same is true for all pairs of elements around the circle, so the effective solid angle of the ring (circumference 2 \pi \sin( \phi ) ) is 2 \pi \sin( \phi ) \cos( \phi ) d\phi = \pi \sin( 2 \phi ) d\phi . We can easily integrate all those rings in the disk into a solid angle vector { \pi \over 2 } ( 1 - \cos( 2 \rho ) ) in the direction of \vec D . The amount of heat radiation reaching the thinsat is proportional to { \pi \over 2 } ( 1 - \cos( 2 \rho ) ) ( \vec P \cdot \vec D ) . If the angle between \vec P and \vec D is e , then \vec P \cdot \vec D = \cos( e ) .

We know that if the thermally one-sided thinsat is pointed into a half-sphere, \rho = \pi / 2 and \vec P = \vec D , then the temperature of the thinsat equals the temperature of the half-sphere, T_s = T_h . The heat radiation would be proportional to \pi { T_h }^4 . If instead we are looking at the \rho_E radius earth ( temperature T_e ), at an angle of e from the thinsat perpendicular, the amount of heat will be proportional to 0.5 ( 1 - \cos( 2 \rho_E ) ) \cos( e ) \approx 0.25 \cos( e ) and the temperature will be the 4th root of that quantity times T_e = 260K:

T_{thinsat} = \left( 0.5 ( 1 - \cos( 2 \rho_E ) ) \cos( e ) \right) ^ {1/4} T_{earth}

\rho_E = angular radius of earth ( approximately 30° for m288 )
e = angle of earth from thinsat perpendicular ( -60° to 60° for m288 )

At the equinoxes, the eclipse time is 1/6 of the 240 minute sidereal orbit, 40 minutes. At the start of eclipse, the thinsat points at the terminator, 30° away from the center of the earth, so e = 60°, the heat is 1/8th of the full surround case, and the temperature drops towards 0.595 times 260K, or 155K.
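The projected solid angle of the disk can be double-checked numerically (my own check, not part of the original): integrating the ring contribution \pi \sin( 2 \phi ) d\phi from 0 to \rho should give { \pi \over 2 } ( 1 - \cos( 2 \rho ) ) , and for \rho_E = 30° the ratio to the half-sphere value \pi comes out to the 0.25 used above:

```python
import math

def projected_solid_angle(rho, n=200000):
    """Integrate pi*sin(2*phi) d(phi) from 0 to rho (midpoint rule)."""
    h = rho / n
    return sum(math.pi * math.sin(2 * (i + 0.5) * h) for i in range(n)) * h

rho_E = math.radians(30)  # angular radius of the earth seen from m288

numeric = projected_solid_angle(rho_E)
closed  = (math.pi / 2) * (1 - math.cos(2 * rho_E))
print(numeric, closed)     # the two values agree
print(closed / math.pi)    # 0.25: the factor relative to a half-sphere
```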
At the middle of the eclipse, 20 minutes later, the backside points directly at the center of the earth, e = 0°, and the temperature increases towards 184K. Over the next 20 minutes, the temperature drops back down towards 155K. The rate at which the temperature changes will be inversely related to the small heat storage in the aluminum substrate; not very much for a thinsat!

An IR-filtered thinsat in max power mode (which doesn't flip) will capture only the slight amount of heat energy from the earth with wavelengths shorter than the 3.5μm filter wavelength, then radiate it out the backside into 2.7K space. Depending on the sharpness of the filter, the thinsat might drop below 20K. Subjecting the thinsat to that kind of thermal shock for a few percent extra power is not worth it. Version 4 thinsats with twice the radiator surface would drop temperature by another 16%, to 130K and 155K respectively with the backside flip. A version 4 thinsat in max power mode will range from 149K to 155K. So in every circumstance, the version 5 IR filtered thinsats stay warmer than version 4 thinsats.

Metal Grid Hot Mirror

A conductive aluminum grid, cell size around 2μm, should be conductive enough to appear as a mirror at 3μm or longer wavelengths, as well as serve as a front side conductor for the InP solar cells. The resist may be printable with contact inking of some sort; we do not need high yield for the links or the holes, as a few missing links or filled holes will not change behavior much.

Problems?

This is based on many assumptions:

A filter that reflects wavelengths longer than 3.5μm over wide angles is possible. I've been told this, but need to verify it.

The flip maneuver requires significant angular velocity going into eclipse, hard to do when the thinsat is perpendicular to the sun. In reality, the thinsat will have some exposed angle at the start of eclipse, and some light-pollution minimizing strategy to get there.
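The eclipse temperatures quoted above follow directly from the T_{thinsat} formula. This short sketch (mine, not from the original) evaluates it for the m288 geometry:

```python
import math

T_EARTH = 260.0              # K, effective earth temperature
RHO_E   = math.radians(30)   # angular radius of earth from m288

def thinsat_temp(e_deg):
    """Equilibrium backside temperature for earth at angle e (degrees)
    from the thinsat perpendicular, per the formula in the text."""
    factor = 0.5 * (1 - math.cos(2 * RHO_E)) * math.cos(math.radians(e_deg))
    return factor ** 0.25 * T_EARTH

print(round(thinsat_temp(60)))  # start/end of eclipse: about 155 K
print(round(thinsat_temp(0)))   # middle of eclipse:    about 184 K
```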
We are in the "triple lambert" zone: the power captured from the sun is proportional to the cosine of the angle from the perpendicular to the sun, and so is the light scattered into the night sky. The sky into which it scatters is nearly perpendicular, with a large Lambert attenuation, too.
I have $n=6*10^4$ people who should be sorted into groups. Each person has a list of up to 20 preferences according to which they should be assigned, and a score for each group (a real number). There are $\approx2*10^3$ groups; each group takes at most $k_{group}$ people, where $5 \leq k_{group} \leq 250$.

Imagine the following scenario: Each person expresses an ordered list of preferences for groups, from 1 up to 20 - a sort of wishlist of the groups they'd like to be admitted to. For each group, each person has a separate score, and each group has its own scale, but points are always nonnegative (e.g. for one group it's 0-100, for another it's 0-400). Just to be clear: preference 1 is 'higher' than preference 20, and a score of 100 is higher than a score of 0.

Each group has a minimum score $s_{group}$, and everyone whose score for that group is $\geq s_{group}$ is admitted. Admitting to a group is done in order of the preference list. Say a person has a preference list $(A, B)$ (so only two preferences) and has scored $p_A$ and $p_B$ points respectively. If $p_A \geq s_A$, the person is admitted to group $A$. Else if $p_B \geq s_B$, the person is admitted to group $B$. (In case of more preferences, more else-ifs go here.) Else, the person is left unassigned.

The problem is: how do I find the highest $s_{group}$ for every group so that it will take exactly $k_{group}$ people? Such an $s_{group}$ may not exist because not enough people have that group on their preference lists, so the group cannot possibly reach $k_{group}$ members; in that case $s_{group}$ remains undefined (-1 or whatever). In any other case, $s_{group}$ will be equal to someone's score, and that person will be admitted to that group. Apart from the mentioned case, assume such an $s_{group}$ exists, i.e. it won't happen that two people share the same score and are exactly at the cutoff point.
Have a look at the example (written in json-like syntax with stripped quotes for readability):

input: {
  groups: [
    A: {k=2}
    B: {k=2}
    C: {k=5}
  ],
  people: [
    person1: { preferences: [A, B, C], scores: [2, 4, 5] },
    person2: { preferences: [B, A],    scores: [1, 5] },
    person3: { preferences: [C, A, B], scores: [1, 5, 3] },
    person4: { preferences: [A, B],    scores: [1, 4] },
    person5: { preferences: [A, B, C], scores: [0.5, 0.1, 5] },
    person6: { preferences: [B, A],    scores: [0, 0.2] }
  ]
};

result: {
  groups: [
    A: {
      admitted: [person1, person4],
      s=1    // if picking s to be highest possible, valid values are 0.5<s<=1 (person5 is 'next in line')
    },
    B: {
      admitted: [person2, person5],
      s=0.1  // if picking s to be highest possible, valid values are 0<s<=0.1 (person6 is 'next in line')
    },
    C: {
      admitted: [person3],
      s=-1   // this remains unfilled; finding s so the group has exactly k members is impossible - there aren't enough people who have C as a preference
    }
  ]
};

--

Creating an instance for each person-preference pair, sorting them by the score tied to the particular preference, and assigning people to groups in that order ignores the preference order, so it's not a correct solution. With an edit to disregard the previous placement when I encounter a higher-positioned preference for the same person, I 'unadmit' the previously (wrongly) admitted person; but by modifying that group I might have changed the outcome for every person that came between making the mistake and recognizing it (by taking up space where I shouldn't), and drifted from the correct solution.

I have tried to reinterpret this as a variation of the Stable Marriage Problem, but it seemed different enough. It's not homework nor a competition problem, and I'm working with real-world data, so handling of pathological cases is of no particular importance to me. Also, I'm not looking for a running time measured in milliseconds, but it should be reasonable (i.e. $O(n^3)$ is probably too slow).
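To pin down the admission rule itself (this is not a solution to the threshold-search problem, just a minimal sketch of the forward procedure given candidate thresholds, treating s = -1 as 'admit everyone'), here is a simulation that reproduces the example result above:

```python
def admit(people, thresholds):
    """Assign each person to the first group on their preference list
    whose threshold they meet (a threshold of -1 admits everyone).
    people: {name: (preferences, scores)}; thresholds: {group: s}."""
    assignment = {}
    for name, (prefs, scores) in people.items():
        for group, score in zip(prefs, scores):
            s = thresholds[group]
            if s == -1 or score >= s:
                assignment[name] = group
                break
    return assignment

# The example instance from the question:
people = {
    'person1': (['A', 'B', 'C'], [2, 4, 5]),
    'person2': (['B', 'A'],      [1, 5]),
    'person3': (['C', 'A', 'B'], [1, 5, 3]),
    'person4': (['A', 'B'],      [1, 4]),
    'person5': (['A', 'B', 'C'], [0.5, 0.1, 5]),
    'person6': (['B', 'A'],      [0, 0.2]),
}
thresholds = {'A': 1, 'B': 0.1, 'C': -1}

print(admit(people, thresholds))
# person1 -> A, person2 -> B, person3 -> C, person4 -> A,
# person5 -> B; person6 meets no threshold and stays unassigned
```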