Optimality conditions for $E$-differentiable vector optimization problems with the multiple interval-valued objective function

1. Faculty of Mathematics and Computer Science, University of Łódź, Banacha 22, 90-238 Łódź, Poland
2. Department of Mathematics, Hadhramout University, P.O. BOX : (50511-50512), Al-Mahrah, Yemen

In this paper, a nonconvex vector optimization problem with a multiple interval-valued objective function and both inequality and equality constraints is considered. The functions constituting it are not necessarily differentiable, but they are $E$-differentiable. The so-called $E$-Karush-Kuhn-Tucker necessary optimality conditions are established for the considered $E$-differentiable vector optimization problem with the multiple interval-valued objective function. Sufficient optimality conditions are also derived for such interval-valued vector optimization problems under appropriate (generalized) $E$-convexity hypotheses.

Keywords: $E$-differentiable function, $E$-differentiable vector optimization problem with multiple interval-valued objective function, $E$-Karush-Kuhn-Tucker necessary optimality conditions, $E$-convex function.

Mathematics Subject Classification: Primary: 90C29, 90C30, 90C46, 90C26.

Citation: Tadeusz Antczak, Najeeb Abdulaleem. Optimality conditions for $E$-differentiable vector optimization problems with the multiple interval-valued objective function. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2019089
Table of Contents: Electric Charge, Gauss Law, Coulomb's Law, Electric Field Intensity, Equipotential Surface, Electric Potential Energy

What is Electrostatics? The study of stationary electric charges at rest is known as electrostatics. An electroscope is used to detect the charge on a body. A pith ball electroscope is used to detect a charge and to know the nature of the charge. The gold leaf electroscope, which was invented by Bennet, detects a charge, identifies the nature of the charge, and determines the quantity of the charge.

Conductors, Insulators, and Semiconductors A body through which electric charge can easily flow is called a conductor (e.g. metals). A body through which electric charge cannot flow is called an insulator or dielectric (e.g. glass, wool, rubber, plastic, etc.). Substances which are intermediate between conductors and insulators are called semiconductors (e.g. silicon, germanium, etc.).

Dielectric Strength: It is the minimum field intensity that must be applied to break down the insulating property of an insulator. Dielectric strength of air = \(3 \times 10^{6}\) V/m; dielectric strength of Teflon = \(60 \times 10^{6}\) V m⁻¹. The maximum charge a sphere can hold depends on its size and on the dielectric strength of the medium in which the sphere is placed. The maximum charge a sphere of radius r can hold in air = \(4\pi\varepsilon_0 r^{2}\,\times\) (dielectric strength of air). When the electric field in air exceeds its dielectric strength, air molecules become ionized, are accelerated by the field, and the air becomes conducting.

Surface Charge Density (\(\sigma\)) The charge per unit area of a conductor is defined as surface charge density. \(\sigma = \frac{q}{A}=\frac{total\;charge}{area}\). When A = 1 m², \(\sigma\) = q. Its unit is coulomb/metre² and its dimensions are [A T L⁻²]. It is used in the formulas for the charged disc, the charged conductor, an infinite sheet of charge, etc.
Surface Charge Density depends on the shape of the conductor and on the presence of other conductors and insulators in the vicinity of the conductor. \(\sigma \propto \frac{1}{r^{2}}\), i.e. \(\frac{\sigma_1}{\sigma_2} = \frac{r_{2}^{2}}{r_{1}^{2}}\). \(\sigma\) is maximum at pointed surfaces and minimum for plane surfaces. Surface charge density is maximum at the corners of rectangular laminas and at the vertex of a conical conductor.

Electric Flux The number of electric lines of force crossing a surface normal to the area gives the electric flux \(\phi_E\). The electric flux through an elementary area ds is defined as the scalar product of area and field: \(d\phi_E = E\,ds \cos\theta\), or \(\phi_E = \int \vec{E}\cdot\vec{ds}\). Electric flux is maximum when the electric field is normal to the area (\(d\phi = E\,ds\)) and zero when the field is parallel to the area (\(d\phi = 0\)). For a closed surface, outward flux is positive and inward flux is negative.

Electric Potential (V) The electric potential at a point in a field is the amount of work done in bringing a unit positive charge from infinity to that point. It is equal to the electric potential energy of a unit positive charge at that point. It is a scalar; its S.I. unit is the volt. The electric potential at a distance d from a point charge q in air or vacuum is V = \(\frac{1}{4\pi \varepsilon_0}\cdot\frac{q}{d}\). Also V = \(-\int \vec{E}\cdot\vec{dx}\) and \(\vec{E}=-\frac{dV}{dx}\) (or) V = Ed. A positive charge left free in a field moves from high potential to low potential, whereas an electron moves from low potential to high potential. Work done in moving a charge q through a potential difference V is W = qV joule. Gain in kinetic energy: \(\frac{1}{2}mv^{2}=qV\). Gain in velocity: \(v=\sqrt{\frac{2qV}{m}}\).

Equipotential Surface A surface on which all points are at the same potential.
The electric field is perpendicular to the equipotential surface. Work done in moving a charge on an equipotential surface is zero.

In the Case of a Hollow Charged Sphere The intensity at any point inside the sphere is zero. The intensity at any point on the surface is the same, and it is maximum: \(\frac{1}{4\pi \varepsilon_0}\cdot\frac{q}{r^{2}}\). Outside the sphere it is \(\frac{1}{4\pi \varepsilon_0}\cdot\frac{q}{d^{2}}\), where d is the distance from the centre. The sphere behaves as if the whole charge were at its centre. Electric field intensity in vector form: \(\vec{E}=\frac{1}{4\pi \varepsilon_0}\cdot\frac{q}{d^{3}}\vec{d}\) or \(\vec{E}=\frac{1}{4\pi \varepsilon_0}\cdot\frac{q}{d^{2}}\hat{d}\). The resultant electric field intensity obeys the principle of superposition: \(\vec{E}=\vec{E}_{1}+\vec{E}_{2}+\vec{E}_{3}+\cdots\)

In the Case of a Solid Charged Sphere The potential at any point inside the sphere is the same as that at any point on its surface, V = \(\frac{1}{4\pi \varepsilon_0}\cdot\frac{q}{r}\); the surface is an equipotential surface. Outside the sphere, the potential varies inversely with the distance of the point from the centre: V = \(\frac{1}{4\pi \varepsilon_0}\cdot\frac{q}{d}\). Note: inside a non-conducting charged sphere an electric field is present. The electric intensity inside the sphere is E = \(\frac{1}{4\pi \varepsilon_0}\cdot\frac{Q}{R^{3}}d\), where d is the distance from the centre of the sphere, so E \(\propto\) d.

Electron Volt This is the unit of energy in particle physics, represented as eV. 1 eV = 1.602 × 10⁻¹⁹ J.

Charged Particle in an Electric Field When a positive test charge is fired in the direction of an electric field, it accelerates, its kinetic energy increases, and hence its potential energy decreases. A charged particle of mass m carrying a charge q and falling through a potential V acquires a speed of \(\sqrt{2qV/m}\).
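Two of the formulas above lend themselves to a quick numerical check: the flux definition \(\phi = EA\cos\theta\) and the speed \(v=\sqrt{2qV/m}\) gained by a charge falling through a potential. A minimal Python sketch (the helper names and the 100 V electron example are mine; the constants are the standard values):

```python
import math

import numpy as np

def flux(E, normal, area):
    """Electric flux E . n * A through a flat surface with unit normal n."""
    return float(np.dot(E, normal)) * area

E = np.array([0.0, 0.0, 5.0])                    # uniform field along z, N/C
print(flux(E, np.array([0.0, 0.0, 1.0]), 2.0))   # field normal to area -> 10.0
print(flux(E, np.array([1.0, 0.0, 0.0]), 2.0))   # field parallel to area -> 0.0

def speed_after_potential(q, V, m):
    """Speed gained by a charge q (mass m) falling through potential difference V."""
    return math.sqrt(2.0 * q * V / m)

e, m_e = 1.602e-19, 9.109e-31                    # elementary charge, electron mass
print(f"{speed_after_potential(e, 100.0, m_e):.3e} m/s")   # an electron through 100 V
```

The electron case gives roughly 5.9 × 10⁶ m/s, a handy order-of-magnitude reference.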
Electric Dipole Two equal and opposite charges separated by a constant distance form an electric dipole. \(\vec{P}=q\cdot 2\vec{l}\)

Dipole Moment It is the product of one of the charges and the distance between the charges. It is a vector directed from the negative charge towards the positive charge along the line joining the two charges. The torque acting on an electric dipole placed in a uniform electric field is given by \(\vec{\tau}=\vec{P}\times\vec{E}\), i.e. \(\tau = PE\sin\theta\), where \(\theta\) is the angle between \(\vec{P}\) and \(\vec{E}\). ⇒ The electric intensity (E) on the axial line at a distance d from the centre of an electric dipole is \(E=\frac{1}{4\pi \varepsilon_0}\cdot \frac{2Pd}{(d^{2}-l^{2})^{2}}\), and on the equatorial line the electric intensity is \(E=\frac{1}{4\pi \varepsilon_0}\cdot \frac{P}{(d^{2}+l^{2})^{3/2}}\). ⇒ For a short dipole, i.e. if \(l^{2}\ll d^{2}\), the electric intensity on the equatorial line is E = \(\frac{1}{4\pi \varepsilon_0}\cdot \frac{P}{d^{3}}\). ⇒ The potential due to an electric dipole on the axial line is V = \(\frac{1}{4\pi \varepsilon_0}\cdot \frac{P}{d^{2}-l^{2}}\), and at any point on the equatorial line it is zero. When two unlike equal charges +Q and −Q are separated by a distance, the net electric potential is zero on the perpendicular bisector of the line joining the charges. The bisector is an equipotential, zero-potential line, and work done in moving a charge along it is zero. The electric intensity at any point on the bisector is perpendicular to the bisector; the component of the electric intensity parallel to the bisector is zero.

Combined Field due to two Point Charges Due to two Similar Charges If charges q₁ and q₂ are separated by a distance r, a null point (where the resultant field intensity is zero) is formed on the line joining the two charges. The null point is formed between the charges.
The null point is located nearer to the weaker charge. If x is the distance of the null point from q₁ (the weaker charge), then \(\frac{q_1}{x^{2}}=\frac{q_2}{(r-x)^{2}} \;\Rightarrow\; x=\frac{r}{\sqrt{q_2/q_1}+1}\). Here q₁ and q₂ are like charges.

Due to two Dissimilar Charges If q₁ and q₂ are unlike charges, the null point is formed on the line joining the two charges, outside the charges, nearer to the weaker charge. If x is the distance of the null point from q₁ (the weaker charge), then \(\frac{q_1}{x^{2}}=\frac{q_2}{(r+x)^{2}} \;\Rightarrow\; x=\frac{r}{\sqrt{q_2/q_1}-1}\). In the above formulae \(q_2/q_1\) is the numerical ratio of the charges.

Zero Potential Point due to two Charges If two unlike charges q₁ and q₂ are separated by a distance r, the net potential is zero at two points on the line joining them: one between them and the other outside the charges. Both points are nearer to the weaker charge (q₁). \(\frac{q_1}{x}=\frac{q_2}{r-x}\) (for point 1, between the charges) and \(\frac{q_1}{y}=\frac{q_2}{r+y}\) (for point 2, outside the charges); here q₂ is the numerical value of the stronger charge. \(\Rightarrow\; x=\frac{r}{\frac{q_2}{q_1}+1}\;;\; y=\frac{r}{\frac{q_2}{q_1}-1}\). Due to two similar charges, no zero-potential point is formed.

Electric Lines of Force A line of force is the path along which a unit positive charge accelerates in the electric field. The tangent at any point to a line of force gives the direction of the field at that point.

Properties of Electric Lines of Force Two lines of force never intersect. The number of lines of force passing normally through a unit area around a point is numerically equal to E, the strength of the field at that point. Lines of force always leave or end normally on a charged conductor.
Electric lines of force can never form closed loops. Lines of force have a tendency to contract longitudinally and exert a lateral force of repulsion on one another. If there is no electric field in a region of space, there are no lines of force there. Inside a conductor, there cannot be any line of force. In a uniform field, lines of force are parallel to one another.

Difference between electric lines of force and magnetic lines of force Electric lines of force never form closed loops, while magnetic lines of force are always closed loops. Electric lines of force do not exist inside a conductor, but magnetic lines of force may exist inside a magnetic material.
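Two of the results above (the short-dipole limit and the null-point position for like charges) can be spot-checked numerically. A minimal sketch (helper names and the example values are mine):

```python
import math

k = 1.0 / (4 * math.pi * 8.854e-12)   # 1/(4*pi*eps0), SI units

# Short-dipole check: the on-axis field approaches twice the equatorial field.
def E_axial(P, d, l):
    return k * 2 * P * d / (d**2 - l**2) ** 2

def E_equatorial(P, d, l):
    return k * P / (d**2 + l**2) ** 1.5

P, l, d = 1e-9, 1e-6, 1.0             # l << d, so the dipole is "short"
print(round(E_axial(P, d, l) / E_equatorial(P, d, l), 6))   # -> 2.0

# Null-point check for like charges q1 < q2 a distance r apart.
def field(q, r):
    """Coulomb field magnitude in units where 1/(4*pi*eps0) = 1."""
    return q / r**2

q1, q2, r = 1.0, 4.0, 3.0
x = r / (math.sqrt(q2 / q1) + 1)      # distance of the null point from q1
print(x)                               # -> 1.0, nearer the weaker charge
assert abs(field(q1, x) - field(q2, r - x)) < 1e-12
```

The final assertion confirms the two Coulomb fields cancel exactly at the computed null point.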
this is a mystery to me, despite having changed computers several times, despite the website rejecting the application, the very first sequence of numbers I entered into its search window which returned the same prompt to submit them for publication appears every time, I mean I've got hundreds of them now, and it's still far too much rope to give a person like me sitting alone in a bedroom the capacity to freely describe any such sequence and their meaning if there isn't any already there my maturity levels are extremely variant in time, that's just way too much rope to give me considering it's only me the pursuits matter to, who knows what kind of outlandish crap I might decide to spam in each of them but still, the first one from well, almost a decade ago shows up as the default content in the search window 1,2,3,6,11,23,47,106,235 well, now there is a bunch of stuff about them pertaining to "trees" and "nodes" but that's what I mean by too much rope you can't just let a lunatic like me start inventing terminology as I go oh well "what would Cotton Mather do?" the chat room unanimously ponders lol I see Secret had a comment to make, is it really a productive use of our time censoring something that is most likely not blatant hate speech? that's the only real thing that warrants censorship, even still, it has its value, in a civil society it will be ridiculed anyway? or at least inform the room as to who is the big brother doing the censoring? No?
just suggestions trying to improve site functionality good sir relax I'm calm we are all calm A104101 is a hilarious entry as a side note, I love that Neil had to chime in in the comment section after the big promotional message in the first part to point out the sequence is totally meaningless as far as mathematics is concerned just to save face for the website's integrity after plugging a TV series with a reference But seriously @BalarkaSen, some of the most arrogant of people will attempt to play the most innocent of roles and accuse you of arrogance yourself in the most diplomatic way imaginable, if you still feel that your point is not being heard, persist until they give up the farce please very general advice for any number of topics for someone like yourself sir assuming gender because you should hate text-based adam long ago if you were female or etc if it's false then I apologise for the statistical approach to human interaction So after having found the polynomial $x^6-3x^4+3x^2-3$ we can just apply Eisenstein to show that this is irreducible over $\mathbb{Q}$ and since it is monic, it follows that this is the minimal polynomial of $\sqrt{1+\sqrt[3]{2}}$ over $\mathbb{Q}$? @MatheinBoulomenos So, in Galois fields, if you have two particular elements you are multiplying, can you necessarily discern the result of the product without knowing the monic irreducible polynomial that is being used to generate the field? (I will note that I might have my definitions incorrect. I am under the impression that a Galois field is a field of the form $\mathbb{Z}/p\mathbb{Z}[x]/(M(x))$ where $M(x)$ is a monic irreducible polynomial in $\mathbb{Z}/p\mathbb{Z}[x]$.)
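Side note on the Eisenstein step above: the criterion for $x^6-3x^4+3x^2-3$ at $p=3$ can be checked mechanically from the coefficient list. A minimal sketch (the helper name is mine):

```python
def eisenstein(coeffs, p):
    """Eisenstein's criterion for coeffs = [a_n, ..., a_1, a_0], leading first:
    p does not divide a_n, p divides every other a_i, and p^2 does not divide a_0."""
    a_n, a_0 = coeffs[0], coeffs[-1]
    return (a_n % p != 0
            and all(a % p == 0 for a in coeffs[1:])
            and a_0 % (p * p) != 0)

# x^6 - 3x^4 + 3x^2 - 3 satisfies the criterion at p = 3,
# so it is irreducible over Q (and, being monic, is a minimal polynomial).
print(eisenstein([1, 0, -3, 0, 3, 0, -3], 3))   # True
```

For contrast, `eisenstein([1, 0, 1], 2)` is `False`: the criterion simply does not apply to $x^2+1$ at $p=2$.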
(which is just the product of the integer and its conjugate) Note that $\alpha = a + bi$ is a unit iff $N\alpha = 1$ You might like to learn some of the properties of $N$ first, because this is useful for discussing divisibility in these kinds of rings (Plus I'm at work and am pretending I'm doing my job) Anyway, particularly useful is the fact that if $\pi \in \Bbb Z[i]$ is such that $N(\pi)$ is a rational prime then $\pi$ is a Gaussian prime (easily proved using the fact that $N$ is totally multiplicative) and so, for example $5 \in \Bbb Z$ is prime, but $5 \in \Bbb Z[i]$ is not prime because it is the norm of $1 + 2i$ and this is not a unit. @Alessandro in general if $\mathcal O_K$ is the ring of integers of $\Bbb Q(\alpha)$, then $\Delta(\Bbb Z[\alpha]) = [\mathcal O_K:\Bbb Z[\alpha]]^2\,\Delta(\mathcal O_K)$, I'd suggest you read up on orders, the index of an order and discriminants for orders if you want to go into that rabbit hole also note that if the minimal polynomial of $\alpha$ is $p$-Eisenstein, then $p$ doesn't divide $[\mathcal{O}_K:\Bbb Z[\alpha]]$ this together with the above formula is sometimes enough to show that $[\mathcal{O}_K:\Bbb Z[\alpha]]=1$, i.e. $\mathcal{O}_K=\Bbb Z[\alpha]$ the proof of the $p$-Eisenstein thing even starts with taking a $p$-Sylow subgroup of $\mathcal{O}_K/\Bbb Z[\alpha]$ (just as a quotient of additive groups, that quotient group is finite) in particular, from what I've said, if the minimal polynomial of $\alpha$ is $p$-Eisenstein for every prime $p$ whose square divides the discriminant of $\Bbb Z[\alpha]$, then $\Bbb Z[\alpha]$ is a ring of integers that sounds oddly specific, I know, but you can also work with the minimal polynomial of something like $1+\alpha$ there's an interpretation of the $p$-Eisenstein results in terms of local fields, too. If a polynomial is $p$-Eisenstein, then it is irreducible over $\Bbb Q_p$ as well.
Now you can apply the Führerdiskriminantenproduktformel (yes, that's an accepted English terminus technicus) @MatheinBoulomenos You once told me a group cohomology story that I forget, can you remind me again? Namely, suppose $P$ is a Sylow $p$-subgroup of a finite group $G$; then there's a covering map $BP \to BG$ which induces chain-level maps $p_\# : C_*(BP) \to C_*(BG)$ and $\tau_\# : C_*(BG) \to C_*(BP)$ (the transfer hom), with the corresponding maps in group cohomology $p : H^*(G) \to H^*(P)$ and $\tau : H^*(P) \to H^*(G)$, the restriction and corestriction respectively. $\tau \circ p$ is multiplication by $|G : P|$, so if I work with $\Bbb F_p$ coefficients that's an injection. So $H^*(G)$ injects into $H^*(P)$. I should be able to say more, right? If $P$ is normal abelian, it should be an isomorphism. There might be easier arguments, but this is what pops to mind first: By the Schur-Zassenhaus theorem, $G = P \rtimes G/P$ and $G/P$ acts trivially on $P$ (the action is by inner auts, and $P$ doesn't have any), there is a fibration $BP \to BG \to B(G/P)$ whose monodromy is exactly this action induced on $H^*(P)$, which is trivial, so we run the Lyndon-Hochschild-Serre spectral sequence with coefficients in $\Bbb F_p$. The $E^2$ page is essentially zero outside the first column, since $H^*(G/P; M) = 0$ in positive degrees if $M$ is an $\Bbb F_p$-module, by order reasons, and the whole first column is $H^*(P; \Bbb F_p)$. This means the spectral sequence degenerates at $E^2$, which gets us $H^*(G; \Bbb F_p) \cong H^*(P; \Bbb F_p)$. @Secret that's a very lazy habit, you should create a chat room for every purpose you can imagine, take full advantage of the website's functionality as I do, and leave the general-purpose room for recommending art related to mathematics @MatheinBoulomenos No worries, thanks in advance. Just to add the final punchline, what I wanted to ask is: what's the general algorithm to recover $H^*(G)$ back from $H^*(P; \Bbb F_p)$'s where $P$ runs over Sylow $p$-subgroups of $G$?
Bacterial growth is the asexual reproduction, or cell division, of a bacterium into two daughter cells, in a process called binary fission. Provided no mutational event occurs, the resulting daughter cells are genetically identical to the original cell. Hence, bacterial growth occurs. Both daughter cells from the division do not necessarily survive. However, if the number surviving exceeds unity on average, the bacterial population undergoes exponential growth. The measurement of an exponential bacterial growth curve in batch culture was traditionally a part of the training of all microbiologists... As a result, there does not exist a single group which lives long enough to belong to, and hence one continues to search for new groups and activity eventually, a social heat death occurred, where no groups will generate creativity and other activity anymore Had this kind of thought when I noticed how many forums etc. have a golden age, and then died away, and at the more personal level, all people who first knew me generate a lot of activity, and then are destined to die away and grow distant roughly every 3 years Well I guess the lesson you need to learn here champ is online interaction isn't something that was inbuilt into the human emotional psyche in any natural sense, and maybe it's time you saw the value in saying hello to your next-door neighbour Or more likely, we will need to start recognising machines as a new species and interact with them accordingly so covert-operations AI may still exist, even as domestic AIs continue to become widespread It seems more likely sentient AI will take similar roles to humans, and then humans will need to either keep up with them with cybernetics, or be eliminated by evolutionary forces But neuroscientists and AI researchers speculate it is more likely that the two types of races are so different we end up complementing each other that is, until their processing power becomes so strong that they can outdo human thinking But, I am not
worried about that scenario, because if the next step is a sentient AI evolution, then humans would know they have to give way However, the major issue right now in the AI industry is not that we will be replaced by machines, but that we are making machines quite widespread without really understanding how they work, and they are still not reliable enough given the mistakes still made by them and their human owners That is, we have become over-reliant on AI, and are not putting enough attention on whether they have interpreted the instructions correctly That's an extraordinary amount of unreferenced rhetorical statements I could find anywhere on the internet! When my mother disapproves of my proposals for subjects of discussion, she prefers to simply hold up her hand in the air in my direction for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many females i have intercourse with are easily replaceable and this can be proven from historical statistical data, but she won't even let my spirit guide elaborate on that premise i feel as if it's an injustice to all child mans that have a compulsive need to lie to shallow women they meet and keep up a farce that they are either fully grown men (if sober) or an incredibly wealthy trust fund kid (if drunk) that's an important binary class dismissed Chatroom troll: A person who types messages in a chatroom with the sole purpose to confuse or annoy. I was just genuinely curious How does a message like this come from someone who isn't trolling: "for example i tried to explain to her that my inner heart chakras tell me that my spirit guide suggests that many ...
with are easily replaceable and this can be proven from historical statistical data, but she won't even let my spirit guide elaborate on that premise" Anyway feel free to continue, it just seems strange @Adam I'm genuinely curious what makes you annoyed or confused yes I was joking in the line that you referenced but surely you can't assume me to be a simpleton of one definitive purpose that drives me each time I interact with another person? Does your mood or experiences vary from day to day? Mine too! so there may be particular moments that I fit your declared description, but only a simpleton would assume that to be the one and only facet of another's character, wouldn't you agree? So, there are some weakened forms of associativity, such as flexibility ($(xy)x=x(yx)$) or alternativity ($(xx)y=x(xy)$ and $(yx)x=y(xx)$, iirc). Though, is there a place a person could look for an exploration of the way these properties inform the nature of the operation? (In particular, I'm trying to get a sense of how a "strictly flexible" operation would behave, i.e. $a(bc)=(ab)c\iff a=c$) @RyanUnger You're the guy to ask for this sort of thing I think: If I want to, by hand, compute $\langle R(\partial_1,\partial_2)\partial_2,\partial_1\rangle$, then I just want to expand out $R(\partial_1,\partial_2)\partial_2$ in terms of the connection, then use linearity of $\langle -,-\rangle$ and then use the Koszul formula? Or is there a smarter way? I realized today that the possible x inputs to Round(x^(1/2)) cover x^(1/2+epsilon). In other words, we can always find an epsilon (small enough) such that x^(1/2) <> x^(1/2+epsilon) but at the same time have Round(x^(1/2)) = Round(x^(1/2+epsilon)). Am I right? We have the following Simpson method $$y^{n+2}-y^n=\frac{h}{3}\left (f^{n+2}+4f^{n+1}+f^n\right ), \quad n=0, \ldots , N-2 \\ y^0, y^1 \text{ given } $$ Show that the method is implicit and state the stability definition of that method. How can we show that the method is implicit?
Do we have to try to solve $y^{n+2}$ as a function of $y^{n+1}$? @anakhro the energy function of a graph is something studied in spectral graph theory. You set up the adjacency matrix of the graph, find its eigenvalues, and then sum their absolute values. For simple graphs, the energy of the graph is defined as this sum of the absolute values of the eigenvalues.
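The graph-energy recipe just described is easy to try out; a minimal sketch with numpy (the example graph is mine):

```python
import numpy as np

def graph_energy(adj):
    """Energy of a simple graph: sum of the absolute values of the
    eigenvalues of its (symmetric) adjacency matrix."""
    eigenvalues = np.linalg.eigvalsh(np.asarray(adj, dtype=float))
    return float(np.sum(np.abs(eigenvalues)))

# Path graph on 3 vertices: eigenvalues are sqrt(2), 0, -sqrt(2),
# so the energy is 2*sqrt(2).
P3 = [[0, 1, 0],
      [1, 0, 1],
      [0, 1, 0]]
print(graph_energy(P3))   # ~2.8284
```

`eigvalsh` is used since an adjacency matrix of a simple graph is symmetric, which also guarantees real eigenvalues.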
Fix an integer $k > 0$; I would like to know the maximum number of different ways that a number $n$ can be expressed as a sum of $k$ squares, i.e. the number of integer solutions to $$ n = x_1^2 + x_2^2 + \dots + x_k^2$$ with $x_1 \ge x_2 \ge \dots \ge x_k$ and $x_i \ge 0$ for every $i$. What I'd really like to know about is the asymptotics as $n \to \infty$. I asked a number theorist once about the case $k=2$, and if I remember correctly, he said that there are numbers $n$ which can be expressed as a sum of two squares in at least $$n^{c/\log \log n}$$ different ways, for some constant $c > 0$, and that this is more-or-less best possible. This is the kind of answer I am seeking for larger $k$. Clarification: What I mean by maximum is the following. I want functions $f_k(x)$, as large as possible, such that there exists some sequence of integers $\{ a_i \}$ with $a_i \to \infty$, and such that $a_i$ can be written as the sum of $k$ squares in at least $f_k(a_i)$ ways.
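For small cases, the count in the question can be brute-forced (this is only for experimentation, not the asymptotics being asked for; the helper name is mine):

```python
def count_reps(n, k, max_x=None):
    """Number of ways n = x_1^2 + ... + x_k^2 with x_1 >= ... >= x_k >= 0."""
    if max_x is None:
        max_x = int(n ** 0.5)
    if k == 0:
        return 1 if n == 0 else 0
    # Pick the largest part x first; the nonincreasing constraint is enforced
    # by capping the remaining parts at x.
    return sum(count_reps(n - x * x, k - 1, x)
               for x in range(min(max_x, int(n ** 0.5)), -1, -1))

print(count_reps(25, 2))    # 2: 5^2 + 0^2 and 4^2 + 3^2
print(count_reps(325, 2))   # 3: 18^2 + 1^2, 17^2 + 6^2, 15^2 + 10^2
```

The nonincreasing convention matches the question, so permutations of the same multiset of $x_i$ are counted once.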
I came across John Duffield on Quantum Computing SE via this hot question. I was curious to see an account with 1 reputation and a question with hundreds of upvotes. It turned out that the reason why he has so little reputation despite a massively popular question is that he was suspended. May I ... @Nelimee Do we need to merge? Currently, there's just one question with "phase-estimation" and another question with "quantum-phase-estimation". Might we as well use just one tag? (say just "phase-estimation") @Blue 'merging', if I'm getting the terms right, is a specific single action that does exactly that and is generally preferable to editing tags on questions. Having said that, if it's just one question, it doesn't really matter, although performing a proper merge is still probably preferable Merging is taking all the questions with a specific tag and replacing that tag with a different one, on all those questions, on a tag level, without permanently changing anything about the underlying tags @Blue yeah, you could do that. It generally requires votes, so it's probably not worth bothering when only one question has that tag @glS "Every hermitian matrix satisfies this property: more specifically, all and only Hermitian matrices have this property" huh? I thought it was only a subset of the set of valid matrices ^^ Thanks for the precision :) @Nelimee if you think about it, it's quite easy to see. Unitary matrices are the ones with phases as eigenvalues, while Hermitians have real eigenvalues. Therefore, if a matrix is not Hermitian (does not have real eigenvalues), then its exponential will not have eigenvalues of the form $e^{i\phi}$ with $\phi\in\mathbb R$.
Although I'm not sure whether there could be exceptions for non-diagonalizable matrices (if $A$ is not diagonalizable, then the above argument doesn't work) This is an elementary question, but a little subtle, so I hope it is suitable for MO. Let $T$ be an $n \times n$ square matrix over $\mathbb{C}$. The characteristic polynomial of $T$ splits into linear factors like $(\lambda - \lambda_i)$, and we have the Jordan canonical form: $$ J = \begin... @Nelimee no! unitarily diagonalizable matrices are all and only the normal ones (satisfying $AA^\dagger = A^\dagger A$). For general diagonalizability, if I'm not mistaken, one characterization is that the sum of the dimensions of the eigenspaces has to match the total dimension @Blue I actually agree with Nelimee here that it's not that easy. You get $UU^\dagger = e^{iA} e^{-iA^\dagger}$, but if $A$ and $A^\dagger$ do not commute it's not straightforward that this doesn't give you an identity I'm getting confused. I remember there being some theorem about one-to-one mappings between unitaries and hermitians provided by the exponential, but it was some time ago and I may be confusing things in my head @Nelimee if there is a $0$ there then it becomes the normality condition. Otherwise it means that the matrix is not normal, therefore not unitarily diagonalizable, but still the product of exponentials is relatively easy to write @Blue you are right indeed. If $U$ is unitary then for sure you can write it as the exponential of a Hermitian (times $i$). This is easily proven because $U$ is ensured to be unitarily diagonalizable, so you can simply compute its logarithm through the eigenvalues. However, logarithms are tricky and multivalued, and there may be logarithms which are not diagonalizable at all. I've actually recently asked some questions on math.SE on related topics @Mithrandir24601 indeed, that was also what @Nelimee showed with an example above. I believe my argument holds for unitarily diagonalizable matrices.
If a matrix is only generally diagonalizable (so it's not normal) then it's not true also probably even more generally without $i$ factors so, in conclusion, it does indeed seem that $e^{iA}$ unitary implies $A$ Hermitian. It therefore also seems that $e^{iA}$ unitary implies $A$ normal, so that also my argument passing through the spectra works (though one has to show that $A$ is ensured to be normal) Now what we need to look for is 1) the exact set of conditions under which the matrix exponential $e^A$ of a complex matrix $A$ is unitary 2) the exact set of conditions under which the matrix exponential $e^{iA}$ of a real matrix $A$ is unitary @Blue fair enough - as with @Semiclassical I was thinking about it with the $t$ parameter, as that's what we care about in physics :P I can possibly come up with a number of non-Hermitian matrices that give unitary evolution for a specific $t$ Or rather, the exponential of which is unitary for $t+n\tau$, although I'd need to check If you're afraid of the density of diagonalizable matrices, simply triangularize $A$. You get $$A=P^{-1}UP,$$ with $U$ upper triangular and the eigenvalues $\{\lambda_j\}$ of $A$ on the diagonal. Then $$\mbox{det}\;e^A=\mbox{det}(P^{-1}e^UP)=\mbox{det}\;e^U.$$ Now observe that $e^U$ is upper ... There's 15 hours left on a bountied question, but the person who offered the bounty is suspended and his suspension doesn't expire until about 2 days, meaning he may not be able to award the bounty himself? That's not fair: it's a 300 point bounty. The largest bounty ever offered on QCSE. Let h...
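One direction of the exchange above is easy to check numerically: for Hermitian $A$, which is unitarily diagonalizable by the spectral theorem, $e^{iA}$ is unitary. A sketch using an eigendecomposition in place of a general matrix exponential (the helper name `exp_i_hermitian` is mine):

```python
import numpy as np

def exp_i_hermitian(A):
    """exp(iA) for Hermitian A via the spectral theorem: A = V diag(w) V^dagger."""
    w, V = np.linalg.eigh(A)          # real eigenvalues w, unitary eigenvector matrix V
    return V @ np.diag(np.exp(1j * w)) @ V.conj().T

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (B + B.conj().T) / 2              # Hermitian by construction
U = exp_i_hermitian(A)
print(np.allclose(U @ U.conj().T, np.eye(4)))  # → True
```

The converse direction discussed in the chat (whether $e^{iA}$ unitary forces $A$ Hermitian, or at least normal) is exactly where the multivalued-logarithm subtleties above come in, and this sketch deliberately does not touch it.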
In our final section we place stronger restrictions on the choice of potential: we assume that the potentials are Gaussian. Although this appears to be restrictive, we may approximate general potentials by Gaussians. When the potentials are chosen to have this nice form, the problem can be rephrased in terms of linear operators, which can be handled via discrete Fourier analysis. In this section we consider Hamiltonians of the form \begin{equation} H(u)=\sum_ja_1(u_j-u_{j-1}+x)^2+4a_2\left(\frac{u_{j-1}-u_{j+1}}{2}+x\right)^2,\end{equation} where \(V_1(r)=a_1r^2\), \(V_2(r)=a_2r^2\) are Gaussian potentials, and the factor 4 is motivated by Taylor expanding a general potential. Writing in matrix form and diagonalising, we obtain the following form for the Hamiltonian \begin{equation} H = U^T Y^T\left( a_1 \Psi_1 + a_2\Psi_2 \right) Y U+ (a_1 + 4a_2)(N+1)x^2,\end{equation} where \(\Psi_1\) and \(\Psi_2\) are diagonal matrices consisting of eigenvalues of the linear operators. Spectral analysis reveals \begin{equation} H=\sum_{j=0}^N\left(a_1\lambda^{(1)}_j+a_2\lambda^{(2)}_j\right)w_j^2+(a_1+4a_2)(N+1)x^2,\end{equation} where \(\lambda^{(1)}_j=\sin^2\left(\pi y_j\right)\), \(\lambda^{(2)}_j=\sin^2\left(2\pi y_j\right)\) and \(y_j=\frac{j}{N+1}\). Substituting into the free energy and simplifying, we obtain the following theorem. Theorem The mixed next nearest neighbour interaction model on the torus with potentials \(V_1(r)=a_1r^2, V_2(r)=a_2r^2\), where \(\min\{a_1,a_1+4a_2\}>0\), has free energy \begin{equation}f(x)=(a_1+4a_2)x^2-\frac{1}{2\beta}\log\frac{\pi}{\beta}+\frac{1}{2\beta}\lim_{N\to\infty}\sum_{j=1}^N\log\left(4a_1\sin^2\left(\pi y_j\right)+4a_2\sin^2\left(2\pi y_j\right)\right)\Delta y,\end{equation} where we set \(y_j=\frac{j}{N+1}\) for \(0 \leq j \leq N\) and \(\Delta y=\frac{1}{N+1}\). We may compute the summation explicitly by developing the summands with double angle formulae, so that we obtain quadrature forms for which we may establish convergence.
Theorem For Gaussian potentials \(V_1(r)=a_1r^2\), \(V_2(r)=a_2r^2\), with \(\min\{a_1,a_1+4a_2\}>0\), the gradient model with periodic boundary conditions has free energy \begin{equation*} f(x) = (a_1+4a_2)x^2-\frac{1}{2\beta}\log\frac{\pi}{\beta}+\frac{1}{2\beta}\int_0^1\log\left(4a_1\sin^2\left(\pi t\right)+4a_2\sin^2\left(2\pi t\right)\right) dt. \end{equation*} Moreover, the rate of convergence is \(O ( N^{-1} \log N) \).
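A quick numerical sanity check of the quadrature term in the theorem, in the special case $a_1=1$, $a_2=0$: there the limit integral vanishes, and the classical identity $\prod_{j=1}^{N} 2\sin(\pi j/(N+1)) = N+1$ gives the finite sum in closed form as $\tfrac{2\log(N+1)}{N+1}$, consistent with the stated $O(N^{-1}\log N)$ rate. The function name `quadrature_term` is mine:

```python
import numpy as np

def quadrature_term(a1, a2, N):
    """The finite-N sum from the theorem, with y_j = j/(N+1) and Delta y = 1/(N+1)."""
    y = np.arange(1, N + 1) / (N + 1)
    summands = np.log(4 * a1 * np.sin(np.pi * y) ** 2
                      + 4 * a2 * np.sin(2 * np.pi * y) ** 2)
    return summands.sum() / (N + 1)

# With a1 = 1, a2 = 0 the product identity prod 2 sin(pi j/(N+1)) = N+1
# gives the sum exactly as 2 log(N+1) / (N+1), which decays like O(N^{-1} log N).
for N in (10, 100, 1000):
    print(N, quadrature_term(1.0, 0.0, N), 2 * np.log(N + 1) / (N + 1))
```

For general $a_1, a_2$ satisfying the hypothesis $\min\{a_1, a_1+4a_2\}>0$ the same routine converges to the integral in the theorem, just without a closed form to compare against.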
Main Page (revision as of 13:44, 12 February 2009) Contents The Problem Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math]. A combinatorial line is a set of three points in [math][3]^n[/math], formed by taking a string with one or more wildcards [math]x[/math] in it, e.g., [math]112x1xx3\ldots[/math], and replacing those wildcards by [math]1, 2[/math] and [math]3[/math], respectively. In the example given, the resulting combinatorial line is: [math]\{ 11211113\ldots, 11221223\ldots, 11231333\ldots \}[/math]. A subset of [math][3]^n[/math] is said to be line-free if it contains no lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math].
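For very small [math]n[/math] the quantity [math]c_n[/math] can be verified by exhaustive search; the brute force below is an illustration of the definition (not part of the project's records) and recovers [math]c_1=2[/math] and [math]c_2=6[/math]:

```python
from itertools import product

def all_lines(n):
    """Combinatorial lines of [3]^n: one per template with at least one wildcard 'x'."""
    for template in product((1, 2, 3, 'x'), repeat=n):
        if 'x' in template:
            yield frozenset(tuple(v if t == 'x' else t for t in template)
                            for v in (1, 2, 3))

def c(n):
    """Size of the largest line-free subset of [3]^n, by brute force (tiny n only)."""
    points = list(product((1, 2, 3), repeat=n))
    lines = list(all_lines(n))
    best = 0
    for mask in range(1 << len(points)):        # enumerate all subsets by bitmask
        subset = {p for i, p in enumerate(points) if mask >> i & 1}
        if len(subset) > best and not any(line <= subset for line in lines):
            best = len(subset)
    return best

print(c(1), c(2))  # → 2 6
```

For [math]n=2[/math] a maximal example is obtained by deleting the diagonal [math]\{11, 22, 33\}[/math], which meets every one of the 7 lines; beyond [math]n=2[/math] the [math]2^{3^n}[/math] subsets make this enumeration infeasible.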
[math]k=3[/math] Density Hales-Jewett (DHJ(3)) theorem: [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math] The original proof of DHJ(3) used arguments from ergodic theory. The basic problem to be considered by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers. Some background to this project can be found here, and general discussion on massively collaborative "polymath" projects can be found here. Threads (1-199) A combinatorial approach to density Hales-Jewett (inactive) (200-299) Upper and lower bounds for the density Hales-Jewett problem (active) (300-399) The triangle-removal approach (inactive) (400-499) Quasirandomness and obstructions to uniformity (final call) (500-599) TBA (600-699) A reading seminar on density Hales-Jewett (active) A spreadsheet containing the latest lower and upper bounds for [math]c_n[/math] can be found here. Unsolved questions Gowers.462: Incidentally, it occurs to me that we as a collective are doing what I as an individual mathematician do all the time: have an idea that leads to an interesting avenue to explore, get diverted by some temporarily more exciting idea, and forget about the first one. I think we should probably go through the various threads and collect together all the unsolved questions we can find (even if they are vague ones like, “Can an approach of the following kind work?”) and write them up in a single post. If this were a more massive collaboration, then we could work on the various questions in parallel, and update the post if they got answered, or reformulated, or if new questions arose. IP-Szemeredi (a weaker problem than DHJ) Solymosi.2: In this note I will try to argue that we should consider a variant of the original problem first. If the removal technique doesn’t work here, then it won’t work in the more difficult setting. If it works, then we have a nice result! Consider the Cartesian product of an IP_d set.
(An IP_d set is generated by [math]d[/math] numbers by taking all the [math]2^d[/math] possible sums. So, if the [math]d[/math] numbers are independent then the size of the IP_d set is [math]2^d[/math]. In the following statements we will suppose that our IP_d sets have size [math]2^d[/math].) Prove that for any [math]c\gt0[/math] there is a [math]d[/math], such that any [math]c[/math]-dense subset of the Cartesian product of an IP_d set (it is a two dimensional pointset) has a corner. The statement is true. One can even prove that the dense subset of a Cartesian product contains a square, by using the density HJ for [math]k=4[/math]. (I will sketch the simple proof later) What is promising here is that one can build a not-very-large tripartite graph where we can try to prove a removal lemma. The vertex sets are the vertical, horizontal, and slope -1 lines having intersection with the Cartesian product. Two vertices are connected by an edge if the corresponding lines meet in a point of our [math]c[/math]-dense subset. Every point defines a triangle, and if you can find another, non-degenerate, triangle then we are done. This graph is still sparse, but maybe it is well-structured for a removal lemma. Finally, let me prove that there is a square if [math]d[/math] is large enough compared to [math]c[/math]. Every point of the Cartesian product has two coordinates, each a 0,1 sequence of length [math]d[/math]. It has a one to one mapping to [math][4]^d[/math]: given a point [math]( (x_1,…,x_d),(y_1,…,y_d) )[/math] where the [math]x_i,y_j[/math] are 0 or 1, it maps to [math](z_1,…,z_d)[/math], where [math]z_i=0[/math] if [math]x_i=y_i=0[/math]; [math]z_i=1[/math] if [math]x_i=1[/math] and [math]y_i=0[/math]; [math]z_i=2[/math] if [math]x_i=0[/math] and [math]y_i=1[/math]; and finally [math]z_i=3[/math] if [math]x_i=y_i=1[/math]. Any combinatorial line in [math][4]^d[/math] defines a square in the Cartesian product, so the density HJ implies the statement.
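Solymosi's encoding of pairs of 01-sequences into [math][4]^d[/math] can be made concrete; the sketch below (function names `encode`/`decode` are mine) checks that a combinatorial line in [math][4]^d[/math] decodes to an axis-aligned square in the Cartesian product:

```python
def encode(x, y):
    """Pair of 0/1 sequences -> point of [4]^d, using the z_i table from the text."""
    table = {(0, 0): 0, (1, 0): 1, (0, 1): 2, (1, 1): 3}
    return tuple(table[p] for p in zip(x, y))

def decode(z):
    """Inverse of encode: point of [4]^d -> pair of 0/1 sequences."""
    inv = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (1, 1)}
    xs, ys = zip(*(inv[c] for c in z))
    return xs, ys

# A combinatorial line in [4]^5, wildcards in positions 0 and 2:
template = ('x', 3, 'x', 0, 1)
line = [tuple(v if t == 'x' else t for t in template) for v in range(4)]
pts = [decode(z) for z in line]

# The four decoded points form a square with side delta = the wildcard indicator:
(x0, y0) = pts[0]
delta = tuple(int(t == 'x') for t in template)
shift = lambda s: tuple(a + b for a, b in zip(s, delta))
assert encode(*pts[0]) == line[0]            # round trip
assert pts[1] == (shift(x0), y0)             # wildcard -> 1 shifts x
assert pts[2] == (x0, shift(y0))             # wildcard -> 2 shifts y
assert pts[3] == (shift(x0), shift(y0))      # wildcard -> 3 shifts both
```

Dropping the fourth point [math](x_0+\delta, y_0+\delta)[/math] leaves exactly the corner [math]\{(x,y),(x+\delta,y),(x,y+\delta)\}[/math] discussed in the surrounding comments.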
Gowers.7: With reference to Jozsef’s comment, if we suppose that the [math]d[/math] numbers used to generate the set are indeed independent, then it’s natural to label a typical point of the Cartesian product as [math](\epsilon,\eta)[/math], where each of [math]\epsilon[/math] and [math]\eta[/math] is a 01-sequence of length [math]d[/math]. Then a corner is a triple of the form [math](\epsilon,\eta)[/math], [math](\epsilon,\eta+\delta)[/math], [math](\epsilon+\delta,\eta)[/math], where [math]\delta[/math] is a [math]\{-1,0,1\}[/math]-valued sequence of length [math]d[/math] with the property that both [math]\epsilon+\delta[/math] and [math]\eta+\delta[/math] are 01-sequences. So the question is whether corners exist in every dense subset of the original Cartesian product. This is simpler than the density Hales-Jewett problem in at least one respect: it involves 01-sequences rather than 012-sequences. But that simplicity may be slightly misleading because we are looking for corners in the Cartesian product. A possible disadvantage is that in this formulation we lose the symmetry of the corners: the horizontal and vertical lines will intersect this set in a different way from how the lines of slope -1 do. I feel that this is a promising avenue to explore, but I would also like a little more justification of the suggestion that this variant is likely to be simpler. Gowers.22: A slight variant of the problem you propose is this. Let’s take as our ground set the set of all pairs [math](U,V)[/math] of subsets of [math][n][/math], and let’s take as our definition of a corner a triple of the form [math](U,V), (U\cup D,V), (U,V\cup D)[/math], where both the unions must be disjoint unions. This is asking for more than you asked for because I insist that the difference [math]D[/math] is positive, so to speak. It seems to be a nice combination of Sperner’s theorem and the usual corners result. 
But perhaps it would be more sensible not to insist on that positivity and instead ask for a triple of the form [math](U,V), ((U\cup D)\setminus C,V), (U, (V\cup D)\setminus C)[/math], where [math]D[/math] is disjoint from both [math]U[/math] and [math]V[/math] and [math]C[/math] is contained in both [math]U[/math] and [math]V[/math]. That is your original problem I think. I think I now understand better why your problem could be a good toy problem to look at first. Let’s quickly work out what triangle-removal statement would be needed to solve it. (You’ve already done that, so I just want to reformulate it in set-theoretic language, which I find easier to understand.) We let all of [math]X[/math], [math]Y[/math] and [math]Z[/math] equal the power set of [math][n][/math]. We join [math]U\in X[/math] to [math]V\in Y[/math] if [math](U,V)\in A[/math]. Ah, I see now that there’s a problem with what I’m suggesting, which is that in the normal corners problem we say that [math](x,y+d)[/math] and [math](x+d,y)[/math] lie in a line because both points have the same coordinate sum. When should we say that [math](U,V\cup D)[/math] and [math](U\cup D,V)[/math] lie in a line? It looks to me as though we have to treat the sets as 01-sequences and take the sum again. So it’s not really a set-theoretic reformulation after all. O'Donnell.35: Just to confirm I have the question right… There is a dense subset [math]A[/math] of [math]\{0,1\}^n \times \{0,1\}^n[/math]. Is it true that it must contain three nonidentical strings [math](x,x’), (y,y’), (z,z’)[/math] such that for each [math]i = 1…n[/math], the 6 bits [math]\begin{matrix}[\, x_i\ x'_i \,]\\ [\, y_i\ y'_i \,]\\ [\, z_i\ z'_i \,]\end{matrix}[/math] are equal to one of the following columns: [math]\begin{matrix}[\,0\ 0\,] & [\,0\ 0\,] & [\,0\ 1\,] & [\,1\ 0\,] & [\,1\ 1\,] & [\,1\ 1\,] \\ [\,0\ 0\,] & [\,0\ 1\,] & [\,0\ 1\,] & [\,1\ 0\,] & [\,1\ 0\,] & [\,1\ 1\,] \\ [\,0\ 0\,] & [\,1\ 0\,] & [\,0\ 1\,] & [\,1\ 0\,] & [\,0\ 1\,] & [\,1\ 1\,]\end{matrix}[/math] ?
McCutcheon.469: IP Roth: Just to be clear on the formulation I had in mind (with apologies for the unprocessed code): for every [math]\delta\gt0[/math] there is an [math]n[/math] such that any [math]E\subset [n]^{[n]}\times [n]^{[n]}[/math] having relative density at least [math]\delta[/math] contains a corner of the form [math]\{a, a+(\sum_{i\in \alpha} e_i ,0),a+(0, \sum_{i\in \alpha} e_i)\}[/math]. Here [math](e_i)[/math] is the coordinate basis for [math][n]^{[n]}[/math], i.e. [math]e_i(j)=\delta_{ij}[/math]. Presumably, this should be (perhaps much) simpler than DHJ, [math]k=3[/math]. High-dimensional Sperner Kalai.29: There is an analogue of Sperner with high dimensional combinatorial spaces instead of "lines", but I do not remember the details (Kleitman(?) Katona(?) those are the usual suspects.) Fourier approach Kalai.29: A sort of generic attack one can try with Sperner is to look at [math]f=1_A[/math] and express using the Fourier expansion of [math]f[/math] the expression [math]\int f(x)f(y)1_{x\lt y}[/math] where [math]x\lt y[/math] is the partial order (=containment) for 0-1 vectors. Then one may hope that if [math]f[/math] does not have a large Fourier coefficient then the expression above is similar to what we get when [math]A[/math] is random, and otherwise we can raise the density on subspaces. (OK, you can try it directly for the [math]k=3[/math] density HJ problem too, but Sperner would be easier ;) This is not unrelated to the regularity philosophy. Gowers.31: Gil, a quick remark about Fourier expansions and the [math]k=3[/math] case. I want to explain why I got stuck several years ago when I was trying to develop some kind of Fourier approach. Maybe with your deep knowledge of this kind of thing you can get me unstuck again. The problem was that the natural Fourier basis in [math][3]^n[/math] was the basis you get by thinking of [math][3]^n[/math] as the group [math]\mathbb{Z}_3^n[/math].
And if that’s what you do, then there appear to be examples that do not behave quasirandomly, but which do not have large Fourier coefficients either. For example, suppose that [math]n[/math] is a multiple of 7, and you look at the set [math]A[/math] of all sequences where the numbers of 1s, 2s and 3s are all multiples of 7. If two such sequences lie in a combinatorial line, then the set of variable coordinates for that line must have cardinality that’s a multiple of 7, from which it follows that the third point of the line automatically lies in [math]A[/math]. So this set [math]A[/math] has too many combinatorial lines. But I’m fairly sure — perhaps you can confirm this — that [math]A[/math] has no large Fourier coefficient. You can use this idea to produce lots more examples. Obviously you can replace 7 by some other small number. But you can also pick some arbitrary subset [math]W[/math] of [math][n][/math] and just ask that the numbers of 1s, 2s and 3s inside [math]W[/math] are multiples of 7. DHJ for dense subsets of a random set Tao.18: A sufficiently good Varnavides type theorem for DHJ may have a separate application from the one in this project, namely to obtain a “relative” DHJ for dense subsets of a sufficiently pseudorandom subset of [math][3]^n[/math], much as I did with Ben Green for the primes (and which now has a significantly simpler proof by Gowers and by Reingold-Trevisan-Tulsiani-Vadhan). There are other obstacles though to that task (e.g. understanding the analogue of “dual functions” for Hales-Jewett), and so this is probably a bit off-topic. Bibliography M. Elkin, "An Improved Construction of Progression-Free Sets", preprint. H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem for k=3“, Graph Theory and Combinatorics (Cambridge, 1988). Discrete Math. 75 (1989), no. 1-3, 227–241. H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem“, J. Anal. Math. 57 (1991), 64–119. B. Green, J.
Wolf, "A note on Elkin's improvement of Behrend's construction", preprint. K. O'Bryant, "Sets of integers that do not contain long arithmetic progressions", preprint. R. McCutcheon, “The conclusion of the proof of the density Hales-Jewett theorem for k=3“, unpublished.
also, if you are in the US, the next time anything important publishing-related comes up, you can let your representatives know that you care about this and that you think the existing situation is appalling @heather well, there's a spectrum so, there are things like New Journal of Physics and Physical Review X, which are the open-access branches of existing academic-society publishers As far as the intensity of a single photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density $$u=\frac{\hbar\omega}{V}$$ is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) di... Minor terminology question. A physical state corresponds to an element of a projective Hilbert space: an equivalence class of vectors in a Hilbert space that differ by a constant multiple - in other words, a one-dimensional subspace of the Hilbert space. Wouldn't it be more natural to refer to these as "lines" in Hilbert space rather than "rays"? After all, gauging the global $U(1)$ symmetry results in the complex line bundle (not "ray bundle") of QED, and a projective space is often loosely referred to as "the set of lines [not rays] through the origin."
— tparker 3 mins ago > A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers “serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service” for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @EmilioPisanty @0celo7 but the bosses are forced because they must continue purchasing journals to keep up the copyright, and they want their employees to publish in journals they own, and journals that are considered high-impact factor, which is a term basically created by the journals. @BalarkaSen I think one can cheat a little. I'm trying to solve $\Delta u=f$. In coordinates that's $$\frac{1}{\sqrt g}\partial_i(\sqrt g\, g^{ij}\partial_j u)=f.$$ Buuuuut if I write that as $$\partial_i(\sqrt g\, g^{ij}\partial_j u)=\sqrt g f,$$ I think it can work... @BalarkaSen Plan: 1. Use functional analytic techniques on global Sobolev spaces to get a weak solution. 2. Make sure the weak solution satisfies weak boundary conditions. 3. Cut up the function into local pieces that lie in local Sobolev spaces. 4. Make sure this cutting gives nice boundary conditions. 5. Show that the local Sobolev spaces can be taken to be Euclidean ones. 6. Apply Euclidean regularity theory. 7. Patch together solutions while maintaining the boundary conditions. Alternative Plan: 1. Read Vol 1 of Hormander. 2.
Read Vol 2 of Hormander. 3. Read Vol 3 of Hormander. 4. Read the classic papers by Atiyah, Grubb, and Seeley. I am mostly joking. I don't actually believe in revolution as a plan of making the power dynamic between the various classes and economies better; I think of it as a want of a historical change. Personally I'm mostly opposed to the idea. @EmilioPisanty I have absolutely no idea where the name comes from, and "Killing" doesn't mean anything in modern German, so really, no idea. Googling its etymology is impossible, all I get are "killing in the name", "Kill Bill" and similar English results... Wilhelm Karl Joseph Killing (10 May 1847 – 11 February 1923) was a German mathematician who made important contributions to the theories of Lie algebras, Lie groups, and non-Euclidean geometry.Killing studied at the University of Münster and later wrote his dissertation under Karl Weierstrass and Ernst Kummer at Berlin in 1872. He taught in gymnasia (secondary schools) from 1868 to 1872. He became a professor at the seminary college Collegium Hosianum in Braunsberg (now Braniewo). He took holy orders in order to take his teaching position. He became rector of the college and chair of the town... @EmilioPisanty Apparently, it's an evolution of ~ "Focko-ing(en)", where Focko was the name of the guy who founded the city, and -ing(en) is a common suffix for places. Which...explains nothing, I admit.
I came up with this proof for my number theory class. Is it valid? Proposition: $u\in U_m \Rightarrow u^{\varphi(m)}=1$ (Where $U_m$ is the multiplicative group of integers modulo $m$) Attempted Proof: Lemma:$\forall a,b\in U_m$, $(a\cdot b)\in U_m$ Consider two numbers $a,b\in U_{m}$. Being in $U_m$, we know that they are coprime to $m$; in other words: $\exists p,q\in\mathbb{Z}: (mp+1=a)\land(mq+1=b)$. $(a\cdot b)=(mp+1)(mq+1) \tag{Substitution Equality}$ $(mp+1)(mq+1)={m}^{2}pq+mp+mq+1 \tag{Simplifying}$ ${m}^{2}pq+mp+mq+1=m(mpq+p+q)+1 \tag{Simplifying}$ $(mpq+p+q)\in\mathbb{Z} \tag{Closure}$ $\therefore [m(mpq+p+q)+1]\in U_m \tag{By definition of $U_m$}$ $\therefore (a\cdot b)\in U_m \tag{Substitution Equality}$ $$\Box$$ I know my teacher will be fine with everything up to this point. What comes next is where I would really like input. Actual Proof: $u\in U_m \tag{Given}$ $|U_m|=\varphi(m) \tag{Proven Before}$ We can, therefore write $U_m$ as the set $\{{c}_{1},{c}_{2},\ldots,{c}_{\varphi(m)}\}$, where $c$ is just a placeholder variable. $$A:=\{x|(d\in {U}_{m})\land(x=u\cdot d)\}$$ To clarify, $A$ is simply a set of every element of $U_m$ multiplied by $u$ (i.e. $A=\{uc_{1},uc_{2},\ldots,uc_{\varphi(m)}\}$). $\forall n\in A, n\in U_{m} \tag{Lemma}$ $|A|=|U_m| \tag{By Definition}$ Based upon these last two statements, we can assert that $A$ is a permutation of $U_m$. Consequently: $$\prod^{\varphi(m)}_{x\in U_m}x \pmod m \equiv \prod^{\varphi(m)}_{x\in A}x \pmod m\tag{Commutativity}$$ $$\prod^{\varphi(m)}_{x\in U_m}x \pmod m \equiv \prod^{\varphi(m)}_{x\in U_m}(u\cdot x) \pmod m \tag{Substitution Equality}$$ $$\prod^{\varphi(m)}_{x\in U_m}x \pmod m \equiv {u}^{\varphi(m)}\prod^{\varphi(m)}_{x\in U_m} x \pmod m $$ $\tag{$\uparrow$Property of Multiplication}$ $$1\pmod m \equiv {u}^{\varphi(m)}\pmod m \tag{Cancelling}$$ $$\therefore {u}^{\varphi(m)} \pmod m \equiv 1 \tag{$1\bmod m \equiv 1$}$$ $$\blacksquare$$ Does this work? Thanks for reading!
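A quick numeric check of the two statements in question, closure of $U_m$ under multiplication and $u^{\varphi(m)}\equiv 1 \pmod m$; this tests the claims themselves, not the intermediate steps of the attempted proof:

```python
from math import gcd

def phi(m):
    """Euler's totient, by direct count of residues coprime to m."""
    return sum(1 for a in range(1, m + 1) if gcd(a, m) == 1)

for m in (5, 12, 30, 97):
    U = [a for a in range(1, m) if gcd(a, m) == 1]   # the units mod m
    assert len(U) == phi(m)                          # |U_m| = phi(m)
    assert all((a * b) % m in U for a in U for b in U)   # lemma: closure
    assert all(pow(u, phi(m), m) == 1 for u in U)        # u^phi(m) = 1 mod m
print("all checks passed")
```

The three-argument `pow(u, phi(m), m)` does the modular exponentiation directly, so even moduli with large totients stay cheap.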
A Matrix Commuting With a Diagonal Matrix with Distinct Entries is Diagonal Problem 492 Let \[D=\begin{bmatrix}d_1 & 0 & \dots & 0 \\0 &d_2 & \dots & 0 \\\vdots & & \ddots & \vdots \\0 & 0 & \dots & d_n\end{bmatrix}\] be a diagonal matrix with distinct diagonal entries: $d_i\neq d_j$ if $i\neq j$. Let $A=(a_{ij})$ be an $n\times n$ matrix such that $A$ commutes with $D$, that is, \[AD=DA.\] Then prove that $A$ is a diagonal matrix. We prove that the $(i,j)$-entry of $A$ is $a_{ij}=0$ for $i\neq j$. We compare the $(i,j)$-entries of both sides of $AD=DA$. Let $D=(d_{ij})$. That is, $d_{ii}=d_i$ and $d_{ij}=0$ if $i\neq j$. The $(i,j)$-entry of $AD$ is \begin{align*}(AD)_{ij}=\sum_{k=1}^n a_{ik}d_{kj}=a_{ij}d_j.\end{align*} The $(i,j)$-entry of $DA$ is \[(DA)_{ij}=\sum_{k=1}^n d_{ik}a_{kj}=d_ia_{ij}.\] Hence we have \[a_{ij}d_j=d_ia_{ij},\] or equivalently \[a_{ij}(d_j-d_i)=0.\] Since $d_i\neq d_j$, we have $d_j-d_i\neq 0$. Thus, we must have $a_{ij}=0$ for $i\neq j$. Symmetric Matrices and the Product of Two Matrices Let $A$ and $B$ be $n \times n$ real symmetric matrices. Prove the following. (a) The product $AB$ is symmetric if and only if $AB=BA$. (b) If the product $AB$ is a diagonal matrix, then $AB=BA$. Hint. A matrix $A$ is called symmetric if $A=A^{\mathsf{T}}$. In […] If the Matrix Product $AB=0$, then is $BA=0$ as Well? Let $A$ and $B$ be $n\times n$ matrices. Suppose that the matrix product $AB=O$, where $O$ is the $n\times n$ zero matrix. Is it true that the matrix product with opposite order $BA$ is also the zero matrix? If so, give a proof. If not, give a […] Simple Commutative Relation on Matrices Let $A$ and $B$ be $n \times n$ matrices with real entries. Assume that $A+B$ is invertible. Then show that \[A(A+B)^{-1}B=B(A+B)^{-1}A.\] (University of California, Berkeley Qualifying Exam) Proof. Let $P=A+B$. Then $B=P-A$. Using these, we express the given […]
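The entrywise identities used in the proof, $(AD)_{ij}=a_{ij}d_j$ and $(DA)_{ij}=d_i a_{ij}$, are easy to confirm numerically; the specific matrices below are illustrative choices of mine:

```python
import numpy as np

D = np.diag([1.0, 2.0, 3.0])     # distinct diagonal entries d_1, d_2, d_3

# AD scales the columns of A by d_j; DA scales the rows of A by d_i:
A = np.arange(9, dtype=float).reshape(3, 3)
assert np.allclose(A @ D, A * np.diag(D))
assert np.allclose(D @ A, np.diag(D)[:, None] * A)

# A diagonal A commutes with D, and a single off-diagonal entry breaks
# commutation by exactly a_{ij}(d_j - d_i), as in the proof:
A = np.diag([5.0, -1.0, 2.5])
assert np.allclose(A @ D, D @ A)
A[0, 2] = 7.0
C = A @ D - D @ A
assert C[0, 2] == 7.0 * (3.0 - 1.0)   # a_{02}(d_3 - d_1) = 14
```

The commutator `C` is zero everywhere except the single entry where `A` fails to be diagonal, which mirrors the statement $a_{ij}(d_j-d_i)=0$ entry by entry.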
Hey guys! I built the voltage multiplier with an alternating square wave from a 555 timer as a source (which my multimeter measures as 4.5V) but the voltage multiplier doesn't seem to work. I first tried making a voltage doubler and it showed 9V (which is correct, I suppose) but when I try a quadrupler, for example, the voltage starts from like 6V and goes down around 0.1V per second. Oh! I found a mistake in my wiring and fixed it. Now it seems to show 12V and instantly starts to go down by 0.1V per sec. But you really should ask the people in Electrical Engineering. I just had a quick peek, and there was a recent conversation about voltage multipliers. I assume there are people there who've made high voltage stuff, like rail guns, which need a lot of current, so a low current circuit like yours should be simple for them. So what did the guys in the EE chat say... The voltage multiplier should be OK on a capacitive load. It will drop the voltage on a resistive load, as mentioned in various Electrical Engineering links on the topic. I assume you have thoroughly explored the links I have been posting for you... A multimeter is basically an ammeter. To measure voltage, it puts a stable resistor into the circuit and measures the current running through it. Hi all! There is a theorem that links the imaginary and the real part in a time dependent analytic function. I forgot its name. It's named after some Dutch(?) scientist and is used in solid state physics; who can help? The Kramers–Kronig relations are bidirectional mathematical relations, connecting the real and imaginary parts of any complex function that is analytic in the upper half-plane. These relations are often used to calculate the real part from the imaginary part (or vice versa) of response functions in physical systems, because for stable systems, causality implies the analyticity condition, and conversely, analyticity implies causality of the corresponding stable physical system.
The relation is named in honor of Ralph Kronig and Hans Kramers. In mathematics these relations are known under the names... I have a weird question: the output of an astable multivibrator will be shown on a multimeter as half the input voltage (for example we have 9V-0V-9V-0V... and the multimeter averages it out and displays 4.5V). But then if I put that output to a voltage doubler, the voltage should be 18V, not 9V, right? Since the voltage doubler will output DC. I've tried hooking up a transformer (9V to 230V, 0.5A) to an astable multivibrator (which operates at 671Hz) but something starts to smell burnt and the components of the astable multivibrator get hot. How do I fix this? I checked it after that and the astable multivibrator works. I searched the whole god damn internet, asked every god damn forum and I can't find a single schematic that converts 9V DC to 1500V DC without using giant transformers and power stage devices that weigh 1 billion tons.... something so "simple" turns out to be hard as duck In Peskin's book on QFT the sum over zero point energy modes is an infinite c-number; fortunately, its experimental evidence doesn't appear, since experimentalists measure the difference in energy from the ground state. According to my understanding the zero point energy is the same as the ground state, isn't it? If so, it is always possible to subtract a finite number (a higher excited state, e.g.) from this zero point energy (which is infinite); it follows that, experimentally, we would always obtain an infinite spectrum. @AaronStevens Yeah, I had a good laugh to myself when he responded back with "Yeah, maybe they considered it and it was just too complicated". I can't even be mad at people like that.
They are clearly fairly new to physics and don't quite grasp yet that most "novel" ideas have been thought of to death by someone; likely 100+ years ago if it's classical physics. I have recently come up with a design of a conceptual electromagnetic field propulsion system which should not violate any conservation laws, particularly the Law of Conservation of Momentum and the Law of Conservation of Energy. In fact, this system should work in conjunction with these two laws ... I remember that Gordon Freeman's thesis was "Observation of Einstein-Podolsky-Rosen Entanglement on Supraquantum Structures by Induction Through Nonlinear Transuranic Crystal of Extremely Long Wavelength (ELW) Pulse from Mode-Locked Source Array" @ACuriousMind What confuses me is Peskin's interpretation of this infinite c-number versus the experimental fact. He said the second term is the sum over zero point energy modes, which is infinite, as you mentioned. He added, "fortunately, this energy cannot be detected experimentally, since the experiments measure only differences from the ground state of H". @ACuriousMind Thank you, I understood your explanations clearly.
However, regarding what Peskin mentioned in his book, there is a contradiction between what he said about the infinity of the zero point energy/ground state energy, and the fact that this energy is not detectable experimentally because the measurable quantity is the difference in energy between the ground state (which is infinite, and this is the confusion) and a higher level. It's just the first encounter with something that needs to be renormalized. Renormalizable theories are not "incomplete", even though you can take the Wilsonian standpoint that renormalized QFTs are effective theories cut off at a scale. According to the author, the energy difference is always infinite, for two reasons: first, the ground state energy is infinite; second, the energy difference is defined by subtracting a higher-level energy from the ground state one. @enumaris That is an unfairly pithy way of putting it. There are finite, rigorous frameworks for renormalized perturbation theories following the work of Epstein and Glaser (buzzword: causal perturbation theory). Just like in many other areas, the physicist's math sweeps a lot of subtlety under the rug, but that is far from unique to QFT or renormalization. The classical electrostatics formula $H = \int \frac{\mathbf{E}^2}{8 \pi} dV = \frac{1}{2} \sum_a e_a \phi(\mathbf{r}_a)$ with $\phi_a = \sum_b \frac{e_b}{R_{ab}}$ allows for $R_{aa} = 0$ terms, i.e. dividing by zero to get infinities; the problem stems from the fact that $R_{aa}$ can be zero due to using point particles. Overall it's an infinite constant added to the particle's energy that we throw away, just as in QFT. @bolbteppa I understand the idea that we need to drop such terms to be consistent with experiments. But I cannot understand why experiment doesn't show the infinities that arise in the theory?
These $e_a^2/R_{aa}$ terms in the big sum are called self-energy terms, and they are infinite. Taken seriously, this means a relativistic electron would also have to have infinite mass; and since relativity forbids the notion of a rigid body, we have to model electrons as point particles and can't avoid these $R_{aa} = 0$ values.
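This divergent self-energy also sets the scale of the classical electron radius: the $R$ at which the electrostatic self-energy of a point charge equals the electron's rest energy $mc^2$. A quick back-of-envelope check (our own sketch, computed in SI units rather than the Gaussian units of the formula above):

```python
import math

# Radius R at which the electrostatic self-energy e^2/(4*pi*eps0*R)
# equals the electron rest energy m*c^2 (CODATA values, SI units)
e = 1.602176634e-19       # elementary charge, C
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
m = 9.1093837015e-31      # electron mass, kg
c = 2.99792458e8          # speed of light, m/s

r_e = e**2 / (4 * math.pi * eps0 * m * c**2)
print(f"classical electron radius ~ {r_e:.3e} m")  # ~ 2.818e-15 m
```

Below this radius the classical theory would assign the electron more electrostatic energy than its entire rest mass, which is one way of seeing why the point-particle self-energy has to be subtracted away.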
I am studying the moduli space of a 2-monopole system from Harvey's notes and Manton's paper. In both of these (Harvey section 6.2), after constructing the Lagrangian for a two-dyon system, the author replaces the electric charge $e$ with $\dot{\chi}$, $$e \to \dot{\chi},$$ to obtain the equation of motion in geodesic form, from which the Taub-NUT metric can be seen. Why is this identification made? Why does the rate of change of the gauge parameter give the electric charge? A theory describing a charged particle on a configuration space $M$ can be obtained from the reduction of a theory of a particle moving on an extended configuration space $M \times S^1$ (to be precise, symplectic reduction). This is the simplest example of the Kaluza-Klein approach. Please see Marsden and Ratiu, Introduction to Mechanics and Symmetry, section 7.6 (page 196), treating the case of a nonrelativistic charged particle. The principle is that a metric can be chosen on the extended space such that the free geodesic equations of motion imply that the canonical momentum of the extended coordinate is a constant of motion, which can be interpreted as the electric charge on the original configuration space. In other words, when the solution of the equation of motion of the extra coordinate is substituted into the remaining equations of motion, one gets the standard equation of motion of a particle on the original configuration space with the electric charge equal to the (constant) canonical momentum of the extended coordinate. This approach offers a semiclassical explanation of the quantization of the electric charge: since the extra dimension is circular, the corresponding momentum should be quantized.
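A minimal sketch of this mechanism (our own notation, not Harvey's or Manton's): take free motion on $M \times S^1$ with the simplest Kaluza-Klein metric, so that

```latex
% Free motion on M x S^1 with the Kaluza-Klein metric
L \;=\; \tfrac{1}{2}\, g_{ij}\,\dot{x}^{i}\dot{x}^{j}
   \;+\; \tfrac{1}{2}\,\bigl(\dot{\chi} + A_{i}\,\dot{x}^{i}\bigr)^{2}.
% chi is cyclic, so its canonical momentum is conserved:
e \;\doteq\; p_{\chi} \;=\; \frac{\partial L}{\partial \dot{\chi}}
   \;=\; \dot{\chi} + A_{i}\,\dot{x}^{i}.
% Substituting the constant p_chi = e back into the x-equations yields the
% Lorentz-force (charged-particle) equation on M:
g_{ij}\,\ddot{x}^{j} + \Gamma_{i,jk}\,\dot{x}^{j}\dot{x}^{k}
   \;=\; e\,F_{ij}\,\dot{x}^{j},
\qquad F_{ij} = \partial_{i}A_{j} - \partial_{j}A_{i}.
```

So the conserved momentum conjugate to the gauge angle plays the role of $e$, and identifying the charge with $\dot{\chi}$ (up to the $A_i\,\dot{x}^i$ term) is exactly this statement.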
After delivering a short introductory paragraph, Rubini delved right into formulas for special determinants and relations between them. Intriguingly, he did this without providing the reader with any of the basic properties of determinants or any explanation of his notation. In order for his readers to understand this "Note", it seems as if Rubini assumed they would be knowledgeable about determinants or have access to Brioschi's textbook, La teorica dei determinanti e le sue applicazioni, published in 1854, which was the first Italian work about determinants available to mathematicians in the Kingdom of Two Sicilies. Rubini opened his first section with a reference to this influential work and provided his readers with formulas which are given on page 44 of Brioschi's work [1854]. Although the modern-day reader may be perplexed upon viewing the first determinant Rubini supplied, an individual with knowledge of expansion by minors and properties of determinants would be able to understand the majority of the mathematics presented. After reading a couple of sections of Rubini's article, the reader will quickly discover that the author established few connections between the formulas he provided. Most ideas do not lead to the successive idea. Furthermore, where connections could be made within the paper, Rubini chose to format his article in such a way that these ideas were presented disjointly. For instance, in Section 3 of his article, Rubini supplied formulas (8) and (9), respectively: \[{\begin{vmatrix}1 & 1 & \ldots & 1\\ 1 & 1+x & \ldots & 1\\\vdots & \vdots & \ddots & \vdots\\1 & 1 & \ldots & 1+x \end{vmatrix}}_{n-1}= x^{n-1}\quad\quad\quad(8)\] and \[{\begin{vmatrix}1+x & 1 & \ldots & 1\\1 & 1+x & \ldots & 1\\\vdots & \vdots & \ddots & \vdots\\1 & 1 & \ldots & 1+x \end{vmatrix}} _{n} = nx^{n-1} + x^{n},\quad\quad(9)\] with the indices of \(n\) and \(n-1\) indicating the number of \(x\)'s along the main diagonal, but with both matrices of the \(n\)th order. 
Formulas (8) and (9) are very similar; the difference is that the matrix of formula (9) has \(x\)'s in every position on the main diagonal, while the matrix of formula (8) has its first row and column composed only of ones. Rubini's notation may seem confusing at first, but after comparing the matrices of formulas (8) and (9), it can be understood that these matrices are of the same order. As he provided no concrete examples in his article, we provide such an example here. Using \(2\times 2\) matrices with \(x = 3\) and computing these determinants by the Laplace expansion, the first formula can be illustrated by the following example: \[{\begin{vmatrix}1 & 1\\1 & 4 \end{vmatrix}}_{1} = (1)(4) - (1)(1) = 3= 3^1,\] and the second formula by the example below: \[{\begin{vmatrix}4 & 1\\1 & 4 \end{vmatrix}}_{2} = (4)(4) - (1)(1) = 15= 2\cdot{3^1} + 3^2.\] Note that, according to the first formula, for a \(2\times 2\) matrix, the determinant will always be equal to the value of \(x.\) Now, substituting \(x\) for \(1 + x\) and \(x - 1\) for \(x\) (that is, replacing \(x\) by \(x-1\) throughout) in formulas (8) and (9), we obtain formulas (10) and (11): \[{\begin{vmatrix}1 & 1 & 1 & \ldots & 1\\1 & x & 1 & \ldots & 1\\\vdots & \vdots & \vdots & \ddots & \vdots\\1 & 1 & \cdots & \cdots & x \end{vmatrix}}_{n-1}= (x-1)^{n-1}={\begin{vmatrix}x & 1\\1 & 1\end{vmatrix}}^{n-1}\quad\quad(10)\] and \[{\begin{vmatrix}x & 1 & 1 & \ldots & 1\\1 & x & 1 & \ldots & 1\\\vdots & \vdots & \vdots & \ddots & \vdots\\1 & 1 & 1 & \ldots & x \end{vmatrix}}_{n}= n(x-1)^{n-1} + (x-1)^{n}.\quad\quad(11)\] Rubini then supplied the reader with three additional formulas, labeled (12), (13), and (14), which he never utilized further in his paper; he possibly wanted to show the reader the various results that the analytic theory of determinants permitted mathematicians to find.
Rubini's equation (15) is his formula (11) written completely in terms of determinants: \[{\begin{vmatrix}x & 1 & 1 & \ldots & 1\\1 & x & 1 & \ldots & 1\\\vdots & \vdots & \vdots & \ddots & \vdots\\1 & 1 & 1 & \ldots & x \end{vmatrix}}_{n}={\begin{vmatrix}1 & 1 & 1 & \ldots & 1\\1 & x & 1 & \ldots & 1\\\vdots & \vdots & \vdots & \ddots & \vdots\\1 & 1 & 1 & \ldots & x \end{vmatrix}}_{n}+n\,{\begin{vmatrix}1 & 1 & 1 & \ldots & 1\\1 & x & 1 & \ldots & 1\\\vdots & \vdots & \vdots & \ddots & \vdots\\1 & 1 & 1 & \ldots & x \end{vmatrix}}_{n-1}.\quad(15)\] The reader would have been able to follow this section more easily had Rubini presented equation (15) sequentially after formulas (10) and (11). Furthermore, it would have been of great assistance had Rubini given the order of these matrices, because it is not apparent that these matrices are of different orders upon looking at them for the first time. For clarity in our explanation, equation (15) will be denoted as \(A_n = A_n' + n\,A_{n - 1}.\) The term \(A_n\) (which appears on the left-hand side of the equation) is an \(n\times n\) matrix with only \(x\)'s appearing along the main diagonal. The matrix \(A_n\) is a submatrix of the \((n+1) \times (n+1)\) matrix \(A_n'\) (which appears on the right-hand side of the equation); \(A_n'\) contains an additional row and column of ones. Rubini obtained the term \(n\,A_{n - 1}\) by substituting in formula (10); this is an \(n\times n\) matrix with \(n - 1\) \(x\)'s on the main diagonal.
The reader would have also been better able to see the beauty of this equation through a numerical example, like the following one with \(x = 3\): \[{\begin{vmatrix}3 & 1\\1 & 3\end{vmatrix}}_{2} = {\begin{vmatrix}1 & 1 & 1\\1 & 3 & 1\\1 & 1 & 3\end{vmatrix}}_{2} +2{\begin{vmatrix}1 & 1\\1 & 3\end{vmatrix}}_{1}.\] The reader can check that each side of the equation equals \( 8.\) This equation can be written as: \[{\begin{vmatrix}3 & 1\\1 & 3\end{vmatrix}}_{2} - 2{\begin{vmatrix}1 & 1\\1 & 3\end{vmatrix}}_{1} = {\begin{vmatrix}1 & 1 & 1\\1 & 3 & 1\\1 & 1 & 3\end{vmatrix}}_{2}.\] This version of the example illustrates that formula (15) provides the reader with a shortcut to the Laplace expansion for \(n\times n\) matrices of this form. It is unfortunate that part of this beauty is lost in Rubini's article due to the lack of structure and explanation of his formulas. Although several of the formulas he presented would have been clearer with some concrete examples, like the one supplied above, this lack may demonstrate Rubini's intention to impress rather than teach. The article's lack of structure is also seen in Rubini's decision to merely present formulas (16) and (17), the only two formulas or equations of any kind provided in Section 4, and then to reproduce these formulas in slightly different form four sections later, at the beginning of Section 8. It makes one wonder why he decided to place these formulas in their own section and why he placed them so far in advance of the section in which he utilized them. He may have done this to emphasize the equations' importance, but it would have been easier for the reader to follow if he had presented these formulas for the first time in Section 8. The disorganized structure makes Rubini's work a bit difficult to follow, but also supports the conjecture that Rubini's work is a compilation of various mathematicians' previous works on determinants.
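Rubini's identities are also easy to spot-check by machine. The following sketch (ours, not Rubini's; `det`, `full_diag`, and `bordered` are our own helper names) verifies formulas (8), (9), and (15) in exact rational arithmetic for several orders:

```python
from fractions import Fraction

def det(m):
    # Laplace expansion along the first row; fine for small orders
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def full_diag(n, d):
    # order-n matrix with d on the main diagonal, 1 elsewhere (formulas (9)/(11))
    return [[d if i == j else Fraction(1) for j in range(n)] for i in range(n)]

def bordered(n, d):
    # order-(n+1) matrix: first row and column all ones, d on the remaining
    # diagonal, 1 elsewhere (formulas (8)/(10); n counts the d's)
    m = full_diag(n + 1, d)
    for k in range(n + 1):
        m[0][k] = m[k][0] = Fraction(1)
    return m

x = Fraction(3)
for n in range(2, 6):
    # formula (9): diagonal entries 1+x, determinant n x^(n-1) + x^n
    assert det(full_diag(n, 1 + x)) == n * x ** (n - 1) + x ** n
    # formula (8): bordered matrix with n-1 entries 1+x, determinant x^(n-1)
    assert det(bordered(n - 1, 1 + x)) == x ** (n - 1)
    # formula (15): A_n = A_n' + n A_(n-1), all with x on the diagonal
    assert det(full_diag(n, x)) == det(bordered(n, x)) + n * det(bordered(n - 1, x))
print("formulas (8), (9), (15) check out for x = 3, n = 2..5")
```

Exact `Fraction` arithmetic avoids any floating-point doubt about equalities between determinants and the closed forms.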
Number Theory Seminar 2019-10-14 Title: Cycles on Shimura varieties via Geometric Satake Speaker: Liang Xiao (Peking University) Abstract: I will explain a joint work with Xinwen Zhu on constructing algebraic cycles on special fibers of Shimura varieties using geometric Satake theory. The talk will focus on explaining the key construction, which upgrades the geometric Satake theory to a functor that relates the category of coherent sheaves on the stack [Gσ / G] to the category of sheaves on local Shtukas with cohomological correspondences as morphisms. 2019-10-15 Tue 13:30-15:00 Title: A cris-comparison of the A_inf-cohomology Speaker: Zijian Yao 姚子建 (Harvard University) Abstract: A major goal of p-adic Hodge theory is to relate arithmetic structures coming from various cohomologies of p-adic varieties. Such comparisons are usually achieved by constructing intermediate cohomology theories. A recent successful theory, namely the A_inf-cohomology, was invented by Bhatt-Morrow-Scholze, originally via perfectoid spaces. In this talk, I will describe a simpler approach to prove the comparison between A_inf-cohomology and absolute crystalline cohomology, using the de Rham comparison and flat descent of cotangent complexes. 2019-10-21 at Conference Room 3, Jin Chun Yuan West Bldg. Title: Orientations of MW-Motives Speaker: Nanjun Yang (YMSC, Tsinghua University) Abstract: The category of (stable) MW-motives (defined by B. Calmès, F. Déglise and J. Fasel) is a refined version of Voevodsky's big motives, which provides a better approximation to the stable homotopy category of Morel and Voevodsky. A significant characteristic of this theory is that the projective bundle theorem doesn't hold. In this talk, we introduce Milnor-Witt K-theory and Chow-Witt rings, which lead to the definition of (stable/effective) MW-motives over smooth bases. Then we discuss their quaternionic projective bundle theorem and Gysin triangles.
As an application, we compute the Hom-groups between proper smooth schemes in the category of MW-motives. -------------------------------History------------------------------- 2019-9-23 Title: Introduction to the GKZ-systems Speaker: Jiangxue Fang (Capital Normal University) Abstract: In this talk, I will review the theory of GKZ-systems discovered by Gelfand, Kapranov and Zelevinsky. In particular, I will study the composition series of GKZ-systems. 2019-9-16 Title: Modularity and Cuspidality Criterions Speaker: Wang Song 王崧 (Chinese Academy of Sciences) Abstract: We will survey cuspidality criteria for several cases of functoriality lifts for automorphic forms for $GL(N)$. Here is one important case for which we will sketch the proof: let $\pi, \pi'$ be cuspidal automorphic representations for $GL(2), GL(3)$, and $\Pi = \pi \boxtimes \pi'$ the Kim-Shahidi lift from $GL(2) \times GL(3)$ to $GL(6)$. Then $\Pi$ is cuspidal unless two exceptional cases occur. In particular, a modular form of Galois type which is associated to an odd icosahedral Galois representation must be cuspidal. 2019-9-11 Title: The automorphic discrete spectrum of Mp(4) Speaker: Atsushi Ichino (Kyoto University) Abstract: In his 1973 paper, Shimura established a lifting from half-integral weight modular forms to integral weight modular forms. After that, Waldspurger studied this in the framework of automorphic representations and classified the automorphic discrete spectrum of the metaplectic group Mp(2), which is a nonlinear double cover of SL(2), in terms of that of PGL(2). We discuss a generalization of his classification to the metaplectic group Mp(4) of rank 2. This is joint work with Wee Teck Gan. 2019-9-9 Title: Generalized zeta integrals on real prehomogeneous vector spaces Speaker: Li Wenwei 李文威 (Peking University) Abstract: The Godement-Jacquet zeta integrals and Sato's prehomogeneous zeta integrals share a common feature: they both involve Schwartz functions and Fourier transforms on prehomogeneous vector spaces.
In this talk I will sketch a common generalization in the local Archimedean case. Specifically, for a reductive prehomogeneous vector space which is also a spherical variety, I will define the zeta integrals of generalized matrix coefficients of admissible representations against Schwartz functions, prove their convergence and meromorphic continuation, and establish the local functional equation. Our arguments are based on various estimates on generalized matrix coefficients and Knop's work on invariant differential operators. -------------------------------History------------------------------- 2019-7-11 Title: Arithmetic of automorphic L-functions and cohomological test vectors Speaker: Sun Binyong 孙斌勇 (AMSS) Abstract: It was known to Euler that $\zeta(2k)$ is a rational multiple of $\pi^{2k}$, where $\zeta$ is the Euler-Riemann zeta function, and $k$ is a positive integer. Deligne conjectured that similar results hold for motives over number fields, and an automorphic analogue of Deligne's conjecture was also expected. I will explain the automorphic conjecture, as well as some recent progress on it. The Archimedean theory of cohomological representations and cohomological test vectors will also be explained, as they play a key role in the proof. 2019-7-4 Title: Characteristic Cycles and Semi-canonical Basis Speaker: Taiwang Deng 邓太旺 (Max Planck Institute for Mathematics) Abstract: Twenty years ago Lusztig introduced the semi-canonical basis for the enveloping algebra U(n), where n is a maximal unipotent sub-Lie algebra of some simple Lie algebra of type A, D, E. Later on B. Leclerc found a counter-example to some conjecture of Bernstein-Zelevinsky and related it to the difference between the dual canonical basis and the dual semi-canonical basis. He further introduced a condition (the open orbit conjecture of Geiss-Leclerc-Schröer) under which the dual canonical basis and the dual semi-canonical basis coincide.
In this talk we explain in detail the above relations and show a relation between the two bases through micro-local analysis. 2019-6-27 Title: Torsions in Cohomology of arithmetic groups and congruence of modular forms Speaker: Taiwang Deng 邓太旺 (Max Planck Institute for Mathematics) Abstract: In this talk I will discuss the torsion classes in the cohomology of $SL_2(\mathbb{Z})$ as well as its variant with compact support. As a consequence, we show how to deduce congruences of cuspidal forms with Eisenstein classes modulo small primes. This generalizes the previous result on Ramanujan tau functions. 2019-6-25 10:00-11:30 Title: Current methods versus expectations in the asymptotic of uniform boundedness Speaker: Loïc Merel (Université de Paris) Abstract: The torsion primes for elliptic curves over algebraic number fields of degree $d$ are bounded, according to the best current knowledge, exponentially in $d$. This is a disappointing result, as polynomial bounds are expected. We will discuss what can be expected, and see how the use of the derived modular group can help clarify the limits of the current methods. 13:30-15:00 Title: Mathematical logic and its applications in number theory Speaker: Jinbo Ren 任金波 (University of Virginia) Abstract: A large family of classical problems in number theory, such as: a) finding rational solutions of the so-called trigonometric Diophantine equation $F(\cos 2\pi x_i, \sin 2\pi x_i)=0$, where $F$ is an irreducible multivariate polynomial with rational coefficients; b) determining all $\lambda \in \mathbb{C}$ such that $(2,\sqrt{2(2-\lambda)})$ and $(3, \sqrt{6(3-\lambda)})$ are both torsion points of the elliptic curve $y^2=x(x-1)(x-\lambda)$; can be regarded as special cases of the Zilber-Pink conjecture in Diophantine geometry. In this talk, I will explain how we use tools from mathematical logic to attack this conjecture.
In particular, I will present a series of partial results toward the Zilber-Pink conjecture, including those proved by Christopher Daw and myself. This talk is an expanded version of the one I gave during ICCM. 2019-6-24 Title: Steenrod operations and the Artin-Tate Pairing Speaker: Tony Feng (Stanford University) Abstract: In 1966 Artin and Tate constructed a canonical pairing on the Brauer group of a surface over a finite field, and conjectured it to be alternating. This duality has analogous incarnations across arithmetic and topology, namely the Cassels-Tate pairing for a Jacobian variety, and the linking form on a 5-manifold. I will explain a proof of the conjecture, which is based on a surprising connection to Steenrod operations. 2019-6-6 Title: Slopes of modular forms Speaker: Bin Zhao (MCM, AMSS) Abstract: In this talk, I will first explain the motivation to study the slopes of modular forms. It has an intimate relation with the geometry of eigencurves. I will mention two conjectures on the geometry of eigencurves: the halo conjecture concerning the boundary behavior, and the ghost conjecture concerning the central behavior. I will then explain some known results towards these conjectures. The former is joint work with Rufei Ren, which generalizes a previous work of Ruochuan Liu, Daqing Wan and Liang Xiao. The latter is joint work in progress with Ruochuan Liu, Nha Truong and Liang Xiao. 2019-5-30 Title: Epsilon dichotomy for linear models Speaker: Hang Xue (University of Arizona) Abstract: I will explain what linear models are and their relation with automorphic forms. I will explain how to relate the existence of linear models to the local constants. This extends a classical result of Saito-Tunnell. I gave a talk here last year on the implication in one direction; I will explain my recent idea on the implication in the other direction.
2019-5-23 Title: Quadratic twists of central L-values for automorphic representations of GL(3) Speaker: Didier Lesesvre (Sun Yat-Sen University) Abstract: A cuspidal automorphic representation of GL(3) over a number field, subject to mild extra assumptions, is determined by the quadratic twists of its central L-values. Beyond the result itself, its proof is an archetypical argument in the world of multiple Dirichlet series, and therefore a perfect excuse to introduce these objects in this talk. 2019-5-16 Title: Level-raising for automorphic forms on $GL_n$ over a CM field Speaker: Aditya Karnataki (BICMR, Peking University) Abstract: Let $E$ be a CM number field and $p$ be a prime unramified in $E$. In this talk, we explain a level-raising result at $p$ for regular algebraic conjugate self-dual cuspidal automorphic representations of $GL_n(\mathbf{A}_E)$. This generalizes previously known results of Jack Thorne. 2019-4-4 Title: Curve counting and modular forms: elliptic curve case Speaker: Jie Zhou (YMSC, Tsinghua University) Abstract: In this talk, I will start with a gentle introduction to Gromov-Witten theory, which roughly is a theory of the enumeration of holomorphic maps from complex curves to a fixed target space, focusing on the example of the elliptic curve (as the target space). Then I will explain some ingredients from mirror symmetry, as well as a Hodge-theoretic description of quasi-modular and modular forms and their relations to periods of elliptic curves. After that I will show how the enumeration of holomorphic maps is related to modular and quasi-modular forms, following the approach developed by Yefeng Shen and myself. Finally I will discuss the Taylor expansions near elliptic points of the resulting quasi-modular forms and their enumerative meanings.
If time permits, I will also talk about some interesting works by Candelas-de la Ossa-Rodriguez-Villegas regarding the counting of points on, and the counting of holomorphic maps to, elliptic curves over finite fields. 2019-3-28 Title: Integral period relations for base change Speaker: Eric Urban (Columbia University) Abstract: Under relatively mild and natural conditions, we establish an integral period relation for the (real or imaginary) quadratic base change of an elliptic cusp form. This answers a conjecture of Hida regarding the {\it congruence number} controlling the congruences between this base change and other eigenforms which are not base change. As a corollary, we establish the Bloch-Kato conjecture for adjoint modular Galois representations twisted by an even quadratic character. In the odd case, we formulate a conjecture linking the degree two topological period attached to the base change Bianchi modular form, the cotangent complex of the corresponding Hecke algebra, and the archimedean regulator attached to some Beilinson-Flach element. This is a joint work with Jacques Tilouine. 2019-3-21 Title: Geometry of Drinfeld modular varieties Speaker: Chia-Fu Yu (Institute of Mathematics, Academia Sinica) Abstract: I will describe the current status of what we know about the geometry of Drinfeld moduli schemes. The main part of this talk will explain the construction of the arithmetic Satake compactification, and the geometry of the compactified Drinfeld period domain over finite fields due to Pink and Schieder. We also plan to explain local and global properties of the stratification of the reduction modulo v of a Drinfeld moduli scheme. This is joint work with Urs Hartl.
An Example of a Linear Operator with a Closed Graph that is Unbounded Recall from The Closed Graph Theorem that if $X$ and $Y$ are Banach spaces and if $T : X \to Y$ is a linear operator then $T$ is bounded if and only if $\mathrm{Gr}(T)$ is closed, that is, if $(x_n)$ is a sequence in $X$ that converges to $x \in X$ and $(T(x_n))$ converges to $y$ in $Y$ then $T(x) = y$. Let $C[0, 1]$ denote the space of continuous real-valued functions on $[0, 1]$ and let $C^1[0, 1]$ denote the space of continuously differentiable functions on $[0, 1]$. Equip both spaces with the supremum norm. Let $T : C^1[0, 1] \to C[0,1]$ be defined for all $f \in C^1[0, 1]$ by: (1) $T(f) = f'$. It is easy to show that $T$ is linear. Let $f, g \in C^1[0, 1]$ and let $\alpha \in \mathbb{R}$. Then: (2) $T(f + \alpha g) = (f + \alpha g)' = f' + \alpha g' = T(f) + \alpha T(g)$. We now show that $\mathrm{Gr}(T)$ is closed. Let $(f_n)$ be a sequence in $C^1[0, 1]$ that converges to $f$ and let $(T(f_n)) = (f_n')$ converge to $g \in C[0,1]$. Then: (3) $f_n \to f$ and $f_n' \to g$ uniformly on $[0, 1]$. Since the convergence is uniform, the standard theorem on differentiating a uniform limit gives $f \in C^1[0, 1]$ and $f' = g$. So $T(f) = f' = g$. Hence $\mathrm{Gr}(T)$ is closed. We will now show that $T$ is unbounded. Let $(f_n) = (x^n)$. For each $n \in \mathbb{N}$ we have that: (4) $\| f_n \| = \sup_{x \in [0,1]} |x^n| = 1$. Now observe that $(f_n') = (nx^{n-1})$. We have that: (5) $\| T(f_n) \| = \sup_{x \in [0,1]} |n x^{n-1}| = n$. So for every $n \in \mathbb{N}$ there exists an $f_n \in C^1[0, 1]$ with $\| f_n \| = 1$ and $\| T(f_n) \| = n$. Hence $T$ cannot be bounded! Note that this does not contradict the Closed Graph theorem since $C^1[0, 1]$ with the supremum norm is not a Banach space!
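A quick numerical illustration of the unboundedness (our sketch, not part of the original page): on a fine grid of $[0,1]$ the sup norm of $f_n = x^n$ stays at $1$ while the sup norm of $T(f_n) = n x^{n-1}$ grows like $n$, so no constant $C$ can satisfy $\|T(f)\| \leq C \|f\|$:

```python
# Grid approximation of the sup norm on [0, 1]
N = 100_001
xs = [i / (N - 1) for i in range(N)]

for n in (5, 50, 500):
    sup_f = max(x ** n for x in xs)              # ||f_n|| = 1, attained at x = 1
    sup_Tf = max(n * x ** (n - 1) for x in xs)   # ||T f_n|| = n, attained at x = 1
    print(n, sup_f, sup_Tf, sup_Tf / sup_f)
```

Both suprema are attained at the grid point $x = 1$, so the ratio $\|T(f_n)\| / \|f_n\| = n$ is exact here, not a discretization artifact.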
I'm confused about the definitions of black-box zero knowledge and non-black-box zero knowledge. I have searched and found explanations but am still a bit confused about it. From the context, their definition (with auxiliary input) of a black-box zero knowledge proof system between the prover $P$ and a (malicious) verifier $V^*$ can be described as follows: $\{\langle P(\omega), V^*(z)(x)\rangle\}_{(x,\omega) \in R, z \in \{0,1\}^{p(|x|)}}$ and $\{S^{V^*(z,x,\cdot)}(x)\}_{(x,\omega) \in R, z \in \{0,1\}^{p(|x|)}}$ are (perfectly/statistically/computationally) indistinguishable, where $S$ is a simulator which has access to an oracle function $V^*(z,x,\cdot)$. In my opinion, the only difference between the definition of non-black-box zero knowledge and the definition of black-box zero knowledge is that non-black-box zero knowledge allows that for every $V^*$ there exists an unbounded simulator $S$; is this correct?
My main problem is that I'm unsure whether I can add $\cos\phi_1 + \cos\phi_2 = \cos\phi$. I calculated them both, one for the RC branch and the second for the RL branch. Here is my overall (exhaustive) attempt on this problem (please correct it if you think something is wrong). Data given: $$R_1=20\,\Omega; R_2=3\,\Omega; L=12.73\cdot 10^{-3}\,H; C=212.2\cdot 10^{-6}\,F; f=50\,Hz; U_{R1}=40\,V$$ Solution: $$X_L=2\pi fL=4\,\Omega$$$$X_C=(2\pi fC)^{-1}=15\,\Omega$$$$Z_1=\sqrt {R_1^2+X_C^2}=25\,\Omega$$$$Z_2=\sqrt {R_2^2+X_L^2}=5\,\Omega$$$$I_1=\frac{U_{R1}}{R_1}=2\,A$$$$\cos\phi_1=\frac{R_1}{Z_1}=0.8$$$$\cos\phi_2=\frac{R_2}{Z_2}=0.6$$$$U_{XC}=I_1 X_C=30\,V$$$$U_1=\sqrt {U_{R1}^2+U_{XC}^2}=50\,V$$ $$U_1=U_2=50\,V \quad (\text{see image})$$ $$I_2=\frac{U_2}{R_2+X_L}=7.14\,A$$ And now calculating the total $I$ with the total $U$ being 50 V: $$I=\sqrt {I_1^2+I_2^2}=7.41\,A$$ $$P=UI(\cos\phi_1+\cos\phi_2)= 50\cdot 7.14\cdot(0.8+0.6)=499.8\,W$$ I get 500 W, which is not listed in the results. Can anyone pinpoint my error? EDIT: Thanks to Spehro Pefhany (answer below) I've found my answer: $$I_2=\frac{U}{Z_2}=\frac{50}{5}=10\,A$$$$P_1=I_1^2 R_1=4\cdot 20=80\,W$$$$P_2=I_2^2 R_2=100\cdot 3=300\,W$$
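The corrected result can be checked directly with complex phasors (a sketch assuming the two branches sit in parallel across $U = 50$ V, as in the corrected solution). Power factors of different branches cannot simply be added, but complex branch currents can:

```python
# Phasor solution: branch 1 is R1 in series with C, branch 2 is R2 in
# series with L, both across the same 50 V source.
U = 50 + 0j
Z1 = 20 - 15j                  # R1 - j*X_C, |Z1| = 25 ohm
Z2 = 3 + 4j                    # R2 + j*X_L, |Z2| = 5 ohm
I1 = U / Z1                    # 1.6 + 1.2j  -> |I1| = 2 A
I2 = U / Z2                    # 6.0 - 8.0j  -> |I2| = 10 A
I = I1 + I2                    # total current, 7.6 - 6.8j
P = (U * I.conjugate()).real   # total real (active) power
print(abs(I1), abs(I2), abs(I), P)   # approx. 2.0, 10.0, 10.2 A, 380.0 W
```

This reproduces $P_1 + P_2 = 80 + 300 = 380$ W, and shows where the arithmetic sum $R_2 + X_L$ went wrong: impedances add as complex numbers, not as magnitudes.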
Compact Sets in Hausdorff Topological Spaces Recall from the Compactness of Sets in a Topological Space page that if $X$ is a topological space and $A \subseteq X$ then $A$ is said to be compact in $X$ if every open cover of $A$ has a finite subcover. On the Closed Sets in Compact Topological Spaces page we proved a very nice theorem which showed us that if $X$ is a compact topological space then every closed set $A$ in $X$ is also compact in $X$. We will now look at a similar theorem which says that every compact set in a Hausdorff topological space is closed. Theorem 1: Let $X$ be a Hausdorff topological space and let $A \subseteq X$. If $A$ is compact in $X$ then $A$ is closed in $X$. Proof: Let $A$ be a compact set in $X$. Let $y \in A^c$ be a fixed point. Since $X$ is a Hausdorff space, for all $x \in A$ there exist open neighbourhoods $U_x$ of $x$ and $V_{(y, x)}$ of $y$ such that: $$U_x \cap V_{(y, x)} = \emptyset.$$ Then we have that $\{ U_x \}_{x \in A}$ is an open cover of $A$. Since $A$ is compact in $X$ there exists a finite subcover, $\{ U_{x_1}, U_{x_2}, ..., U_{x_n} \}$ where $x_i \in A$ for all $i \in \{ 1, 2, ..., n \}$ and: $$A \subseteq \bigcup_{i=1}^{n} U_{x_i}.$$ Correspondingly, the set $\{ V_{(y, x_1)}, V_{(y, x_2)}, ..., V_{(y, x_n)} \}$ of open sets is such that $U_{x_i} \cap V_{(y, x_i)} = \emptyset$ for all $i \in \{1, 2, ..., n \}$. Let $V$ denote the following intersection: $$V = \bigcap_{i=1}^{n} V_{(y, x_i)}.$$ Then $V$ is a finite intersection of open sets and is hence open. Moreover, $V$ is an open neighbourhood of $y$. Additionally, $A \cap V = \emptyset$: every point of $A$ lies in some $U_{x_i}$, and $U_{x_i}$ is disjoint from $V_{(y, x_i)} \supseteq V$. Since $A \cap V = \emptyset$, this implies that $V \subseteq A^c$. So $V$ is an open neighbourhood of $y$ that is fully contained in $A^c$, i.e., $y \in \mathrm{int} (A^c)$. Since $y \in A^c$ was arbitrary, $\mathrm{int}(A^c) = A^c$, which shows that $A^c$ is open in $X$. Therefore $A$ is closed in $X$.
Assume $M$ to be a compact $n$-dimensional manifold, endowed with a complete metric. Let us consider the space $C^\infty(M)$ endowed with the standard $C^\infty$ topology, i.e. the topology generated by the seminorms $$\sup_K\left|\partial^\alpha f\right|,$$ over the compact subsets $K$ of $M$ and the multi-indices $\alpha$. Now consider the following set: let $1\in C^\infty(M)$ be the constant function and $0<\varepsilon\ll 1$. Define $$A\doteq\left\{a\in C^\infty(M)\;\colon\;\|a-1\|_{C^1(M)}<\varepsilon\right\}\subset C^\infty(M).$$ It seems to me that Ascoli-Arzelà applies here; in particular, $A$ is relatively compact in $C(M)$. The question is as follows: is it possible to choose an open covering of $A$ by balls such that all the balls are centered at smooth functions? The consequence of Ascoli-Arzelà is that such balls exist, but in principle they may be centered just at continuous functions, not even differentiable ones. I was thinking about this stronger result because of the compactness of $M$ and (maybe) something along the lines of Stone-Weierstrass, which would allow one to slightly move the centers of the balls of the covering so that they are indeed smooth functions. Is this possible? Thanks for the attention.
I was reading Chapter 14 of the textbook by Philip Phillips on Advanced Solid State Physics. When he introduced the mean-field treatment of the quantum rotor model, he used the method first employed in this paper: Quantum fluctuations in two-dimensional superconductors. My question is related to the Hubbard-Stratonovich transformation (H-S transformation) used here to decouple the "spin" interaction: $$ T_{\tau}\exp \left( \int_0^{\beta} d\tau\ \sum_k J_k \mathbf{S}_k(\tau)\cdot\mathbf{S}_k^*(\tau) \right)$$ Here $\mathbf{S}_k(\tau)$ is the operator written in the interaction picture. He then introduced the auxiliary field $\psi_k$: $$T_{\tau}\exp \left( \int_0^{\beta} d\tau\ \sum_k J_k \mathbf{S}_k(\tau)\cdot\mathbf{S}_k^*(\tau) \right)=T_{\tau}\int D \psi_k(\tau) \ e^{-\int d\tau \sum_k \psi_k^*(\tau)\psi_k(\tau)-2\int_0^{\beta} d \tau \sum_k \psi_k(\tau)\cdot\mathbf{S}_{-k}(\tau) } $$ I learned the H-S transformation in the context of path integrals, where we write the partition function as a functional integral over fields ("numbers"). So here I want to ask: Is the H-S transformation mathematically well defined when applied on top of operators? It seems that the imaginary-time ordering operator does not affect the auxiliary field $\psi_k(\tau)$ (as can be seen from later derivations in the book); what is the reason for that?
I actually don't think that this view of light being in a quantum superposition is anything new: what Discover magazine is describing (I believe) is the stock-standard picture of how one would describe a system of cells, molecules, chloroplasts, fluorophores, whatever interacting with the quantised electromagnetic field. My simplified account here (answer to the Physics SE question "How does the Ocean polarize light?") addresses a very similar question. The quantised electromagnetic field is always in superposition before the absorption happens and, as light reaches a plant, it becomes a superposition of free photons and excited matter states of many chloroplasts at once. To learn more about this kind of thing, I would recommend M. Scully and M. Zubairy, "Quantum Optics". Read the first chapter; the mathematical technology for what you are trying to describe is to be found in chapters 4, 5 and 6. The truth is, photons do not bounce from cell to cell like ping-pong balls, so that theory happens to be incorrect. Further questions and Edits: But this is about the energy FROM the photon... Would whatever you are saying still work for that? Plus, I would like to see some math... Energy is simply a property of photons (or whatever is carrying it): there has to be a carrier to make any interaction happen. All interactions we see are ultimately described by this formalism. See eq. (1) and (2) here; this is for the reverse process (emission), but you are ultimately going to write equations like this. To get a handle on this quickly, look into the Wikipedia article on Quantization of the electromagnetic field and then read Chapter 1 from Scully and Zubairy. Ultimately, you're going to need to write down a one-photon Fock state and add excited atom states to the superposition.
The neater way to do this is with creation operators acting on the universal, unique quantum ground state $\left|\left.0\right>\right.$: we define $a_L^\dagger(\vec{k},\,\omega),\,a_R^\dagger(\vec{k},\,\omega)$ to be the creation operators for the quantum harmonic oscillators corresponding to left and right handed plane waves with wavenumber $\vec{k}$ and frequency $\omega$. Then a one-photon state in the oscillator corresponding to the classical solution of Maxwell's equations with complex amplitudes $A_L(\vec{k},\,\omega), A_R(\vec{k},\,\omega)$ in the left and right handed classical modes is: $$\left|\left.\psi\right>\right.=\int d^3k\,d\omega\left(A_L(\vec{k},\,\omega)\,a_L^\dagger(\vec{k},\,\omega)+A_R(\vec{k},\,\omega)\,a_R^\dagger(\vec{k},\,\omega)\right)\,\left|\left.0\right>\right.$$ To describe an absorption, Scully and Zubairy show that the probability amplitude for an absorption at time $t$ and position $\vec{r}$ is proportional to: $$\left<\left.0\right.\right| \hat{E}^+(\vec{r},t)\left|\left.\psi\right>\right.$$ where $\hat{E}$ is the electric field observable and $\hat{E}^+$ its positive frequency part (the part with only annihilation operators, all the creation operators thrown away).
Alternatively you can in principle model absorption by writing down the Hamiltonian, which is going to look something like: $$\int d^3k\,d \omega\left(a_L^\dagger(\vec{k},\,\omega)\,a_L(\vec{k},\,\omega)+a_R^\dagger(\vec{k},\,\omega)\,a_R(\vec{k},\,\omega) \right)+\sum\limits_{\text{all chloroplasts }j} \int d^3k\,d\omega\,\sigma^\dagger_j\left(\kappa_{j,L}(\vec{k},\,\omega)\,a_L(\vec{k},\,\omega)+\kappa_{j,R}(\vec{k},\,\omega)\,a_R(\vec{k},\,\omega) \right)+\\\sum\limits_{\text{all chloroplasts }j} \int d^3k\,d\omega\,\left(\kappa^*_{j,L}(\vec{k},\,\omega)\,a^\dagger_L(\vec{k},\,\omega)+\kappa^*_{j,R}(\vec{k},\,\omega)\,a^\dagger_R(\vec{k},\,\omega) \right)\sigma_j$$ where $\sigma_j^\dagger$ is the creation operator for an excited chromophore at site $j$ and the $\kappa$s measure the strength of coupling. This is complicated stuff and takes more than a simple tutorial to write down.
Glioblastoma, or glioblastoma multiforme, is a particularly aggressive and almost invariably fatal type of brain cancer. It is infamous for causing the deaths of U.S. Senators John McCain and Ted Kennedy, as well as former U.S. Vice President Joe Biden’s son Beau. Though glioblastoma is the second-most common type of brain tumor—affecting roughly three out of every 100,000 people—medicine has struggled to find effective remedies; the U.S. Food and Drug Administration has approved only four drugs and one device to counter the condition in 30 years of research. Median survival is less than two years, and only about five percent of all patients survive five years beyond the initial diagnosis. Given these terrible odds, medical researchers strive for anything that can extend the effectiveness of treatment. The nature of glioblastoma itself is responsible for many obstacles; brain tumors are difficult to monitor noninvasively, making it challenging for physicians to determine the adequacy of a particular course of therapy. Figure 1. Magnetic resonance imaging scan of the brain. Public domain image. Kristin Rae Swanson and her colleagues at the Mayo Clinic believe that mathematical models can help improve patient outcomes. Using magnetic resonance imaging (MRI) data for calibration (see Figure 1), they constructed the proliferation-invasion (PI) model — a simple deterministic equation to estimate how cancer cells divide and spread throughout the brain. Rather than pinpoint every cell’s location, the model aims to categorize the general behavior of each patient’s cancer to guide individualized treatment. During her presentation at the American Association for the Advancement of Science 2019 Annual Meeting, which took place in Washington, D.C., earlier this year, Swanson noted that every glioblastoma patient reacts differently to the same treatment. She hopes that use of the PI model might help predict patient response to a given regimen.
“The model is able to provide a sort of virtual control,” Andrea Hawkins-Daarud, Swanson’s collaborator at the Mayo Clinic, said. “With a virtual control, you can consider how the size of the tumor changes over time. Then you can begin thinking through a lot of different possible response metrics.” The team discovered that absolute tumor size was a less important metric than tumor position on the growth curve. Swanson and her colleagues use the term “days gained” to describe the result: does the treatment turn back the clock on cancer proliferation and buy the patient more time? Estimating days gained requires an understanding of the time-dependent growth kinetics pertaining to the individual’s cancer, which is precisely what the PI model attempts to do. A Model for Tumor Growth As for many other tumors, neurosurgeons commonly begin glioblastoma treatment by surgically removing as much of the cancer as possible before following up with chemotherapy and radiation. However, glioblastoma is more diffuse than most cancers; because the tumor extends into healthy tissue, it is nearly impossible for surgeons to remove all cancer cells without damaging the brain. To make matters worse, the degree of diffusivity varies widely among patients, and MRI scans alone are not particularly good at distinguishing the nuances of these cases. “Doctors don’t really have a clean way of knowing the difference between one patient’s tumor being really diffuse and another patient’s tumor being really nodular, or which tumor is growing faster than another,” Hawkins-Daarud said. “MRI detects what the cancer cells have done to the environment, but it can’t specifically say ‘this is a tumor.’ It can’t identify the boundary [of the glioblastoma].” The uncertainty in measuring that boundary means that clinicians struggle to determine which treatments are working and which require adjustment. 
However, glioblastoma’s diffusivity also makes it amenable to a reaction-diffusion model — a common type of equation in mathematical biology. The PI model approximates the tumor’s growth in space and time by treating it as a continuous fluid [3]: \[\frac{\partial c}{\partial t}=\nabla \cdot (D\nabla c)+\rho c(1-c),\] where \(c\) is the tumor cell density. The free parameters \(D\) and \(\rho\) respectively quantify the cancer cells’ diffusion and rate of proliferation. Assuming a spherical tumor, the solution to the PI equation far from the tumor center takes the form of a traveling wave with velocity \(2 \sqrt{D\rho}\) and steepness \(D/ \rho\). Figure 2. Estimated radial size of a tumor before and after treatment, where the treated tumor size corresponds to an earlier stage of growth according to the model. This allows researchers to estimate the days gained with a particular treatment. Figure courtesy of [2]. These parameters are not directly measurable. To infer their values, Swanson’s team used MRI measurements for 160 glioblastoma patients [1]. They obtained an estimate of tumor growth and proliferation by comparing two MRI scans for each patient, then applied a Bayesian framework [2] to quantify uncertainties in both the data and model. These efforts yielded a means of classifying patient responses to treatment in terms of days gained (see Figure 2). “I don’t think [the PI model] is good at giving precise boundaries of tumor cell density throughout the brain,” Hawkins-Daarud said. “However, it is good with helping us conceptually ‘bin’ patients into categories.” The days-gained metric identified via the PI model proved to be a much better predictor than tumor size alone, thanks to incorporation of cancer kinetics. “The difference in overall survival for patients with a larger days-gained value was statistically significant over those who had the smaller days-gained value,” Hawkins-Daarud continued.
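To see how the PI equation behaves, here is a minimal one-dimensional finite-difference sketch. The parameter values, grid, and seeding below are purely illustrative assumptions, not values calibrated from patient MRI data as in Swanson's work:

```python
# Minimal 1D explicit finite-difference sketch of the PI model
#   dc/dt = D * d2c/dx2 + rho * c * (1 - c)
# D and rho are illustrative values, not patient-derived.

D, rho = 0.05, 0.5        # diffusion and proliferation rates (assumed)
dx, dt = 0.1, 0.01        # grid spacing and time step (dt < dx^2/(2*D) for stability)
n = 200                   # 20-unit spatial domain

c = [0.0] * n
c[0] = 1.0                # tumor seeded at the left boundary

for _ in range(3_000):    # integrate for 30 time units
    new = c[:]
    for i in range(1, n - 1):
        lap = (c[i - 1] - 2 * c[i] + c[i + 1]) / dx ** 2
        new[i] = c[i] + dt * (D * lap + rho * c[i] * (1 - c[i]))
    new[0], new[-1] = new[1], new[-2]   # no-flux boundaries
    c = new

# The density profile approaches a traveling wave with speed ~ 2*sqrt(D*rho);
# locate the wavefront as the first grid point where density drops below 1/2.
front = next((i for i, v in enumerate(c) if v < 0.5), n) * dx
print(front)
```

Comparing two such simulated "scans" at different times is, conceptually, how the wave speed \(2\sqrt{D\rho}\) and the invasiveness ratio \(D/\rho\) would be read off from data.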
“Our hope is that [the model] will be able to identify when a therapy is truly failing and you should change it, or when a therapy is being useful and you should stay on it — even though it looks like it may not be as good as you might expect.” Hope is a Thing with Equations The PI model is deterministic and treats tumors as continuous fluids, whereas real glioblastoma consists of discrete cells that spread more haphazardly. For this reason, Swanson, Hawkins-Daarud, and many other researchers are combining forces to create better models that incorporate cancer kinetics, machine learning, and cellular automata, along with a wider range of medical data. The preliminary results of these efforts are not yet published, but Hawkins-Daarud believes that they hold a great deal of promise. Even so, the problem can still seem insurmountable. Cancer is not a single disease, but rather a large set of conditions with many causes and a number of common features. The PI model enables better understanding of glioblastoma’s specific traits; however, this does not work for most cancers, which metastasize and are non-diffuse. Yet hope is a relative thing in cancer research — for mathematical oncologists as much as for doctors and patients. “The math isn’t going to cure the cancer,” Hawkins-Daarud said. “But I think that math can certainly help optimize the process of finding a cure. We are actually in the midst of talking to various drug companies to try and incorporate our response metrics into the clinical trials to see if we can speed up the proceedings.” Even a few months of extra time acquired from improved treatments is significant to glioblastoma patients and their loved ones. While math alone will not provide this time, the PI model shows that it can help gain some valuable days. References [1] Baldock, A.L., Ahn, S., Rockne, R., Johnston, S., Neal, M., Corwin, D., …Swanson, K.R. (2014). 
Patient-specific Metrics of Invasiveness Reveal Significant Prognostic Benefit of Resection in a Predictable Subset of Gliomas. PLoS One, 9(10), e99057. [2] Hawkins-Daarud, A., Johnston, S.K., & Swanson, K.R. (2019). Quantifying Uncertainty and Robustness in a Biomathematical Model-Based Patient-Specific Response Metric for Glioblastoma. JCO Clin. Cancer Inform., 3, 1-8. [3] Rockne, R., Alvord Jr., E.C., Rockhill, J.K., & Swanson, K.R. (2009). A mathematical model for brain tumor response to radiation therapy. J. Math. Biol., 58(4-5), 561-578.
I'm building regression models. As a preprocessing step, I scale my feature values to have mean 0 and standard deviation 1. Is it necessary to normalize the target values also? Let's first analyse why feature scaling is performed. Feature scaling improves the convergence of steepest-descent algorithms, which do not possess the property of scale invariance. In stochastic gradient descent, training examples inform the weight updates iteratively like so: $$w_{t+1} = w_t - \gamma\nabla_w \ell(f_w(x),y)$$ where $w$ are the weights, $\gamma$ is a stepsize, $\nabla_w$ is the gradient with respect to the weights, $\ell$ is a loss function, $f_w$ is the function parameterized by $w$, $x$ is a training example, and $y$ is the response/label. Compare the following convex functions, representing proper scaling and improper scaling. A step through one weight update of size $\gamma$ will yield a much better reduction in the error in the properly scaled case than in the improperly scaled case. Shown below is the direction of $\nabla_w \ell(f_w(x),y)$ of length $\gamma$. Normalizing the output will not affect the shape of $f$, so it's generally not necessary. The only situation I can imagine scaling the outputs having an impact is if your response variable is very large and/or you're using f32 variables (which is common with GPU linear algebra). In this case it is possible to get a floating-point overflow of an element of the weights. The symptom is either an Inf value or a wrap-around to the other extreme representation. Generally, it is not necessary. Scaling inputs helps to avoid the situation where one or several features dominate others in magnitude; as a result, the model hardly picks up the contribution of the smaller-scale variables, even if they are strong. But if you scale the target, your mean squared error is automatically scaled. An MSE greater than 1 then automatically means that you are doing worse than a constant (naive) prediction. No, linear transformations of the response are never necessary.
They may, however, be helpful to aid in interpretation of your model. For example, if your response is given in meters but is typically very small, it may be helpful to rescale to, e.g., millimeters. Note also that centering and/or scaling the inputs can be useful for the same reason. For instance, you can roughly interpret a coefficient as the effect on the response per unit change in the predictor when all other predictors are set to 0. But 0 often won't be a valid or interesting value for those variables. Centering the inputs lets you interpret the coefficient as the effect per unit change when the other predictors assume their average values. Other transformations (e.g., log or square root) may be helpful if the response is not linear in the predictors on the original scale. If this is the case, you can read about generalized linear models to see if they're suitable for you. It does affect gradient descent in a bad way. Check the formula for gradient descent: $$ x_{n+1} = x_{n} - \gamma\nabla F(x_n) $$ Let's say that $x_2$ is a feature that is 1000 times greater than $x_1$. For $ F(\vec{x})=\|\vec{x}\|^2 $ we have $ \nabla F(\vec{x})=2\vec{x} $. The optimal way to reach $(0,0)$, which is the global optimum, is to move across the diagonal, but if one of the features dominates the other in terms of scale that won't happen. To illustrate: if you do the transformation $\vec{z}= (x_1,1000 x_1)$, assume a uniform learning rate $ \gamma $ for both coordinates and calculate the gradient, then $$ \vec{z}_{n+1} = \vec{z}_{n} - \gamma\nabla F(z_1,z_2). $$ The functional form is the same, but the learning rate for the second coordinate has to be adjusted to 1/1000 of that for the first coordinate to match it. If not, coordinate two will dominate and the gradient vector will point more towards that direction. As a result it biases the update to point across that direction only and makes the convergence slower. Yes, you do need to scale the target variable.
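This effect is easy to demonstrate numerically. The sketch below (all values illustrative) runs plain gradient descent on a well-scaled quadratic bowl and on one whose second coordinate is scaled by 1000, as in the example above:

```python
# Gradient descent on a well-scaled vs. badly scaled quadratic bowl.
# Illustrative values; the second coordinate is "1000x larger", as in the text.

def gd(scale, lr, steps=1000, start=(1.0, 1.0)):
    x1, x2 = start
    for _ in range(steps):
        # F(x) = x1^2 + (scale*x2)^2, so the gradient is (2*x1, 2*scale^2*x2)
        g1, g2 = 2 * x1, 2 * scale ** 2 * x2
        x1, x2 = x1 - lr * g1, x2 - lr * g2
    return x1, x2

# Well scaled: a moderate learning rate drives both coordinates to the optimum (0, 0).
good = gd(scale=1.0, lr=0.1)

# Badly scaled: the same rate would diverge along x2, so lr must shrink by ~1000^2,
# and then x1 barely moves at all after the same number of steps.
bad = gd(scale=1000.0, lr=0.1 / 1000 ** 2)

print(good, bad)
```

The badly scaled run converges along the dominant coordinate but leaves the other essentially untouched, which is exactly the slow, axis-biased convergence described above.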
I will quote this reference: A target variable with a large spread of values, in turn, may result in large error gradient values causing weight values to change dramatically, making the learning process unstable. In the reference there is also a demonstration in code where the model weights exploded during training given the very large errors and, in turn, the error gradients calculated for the weight updates also exploded. In short, if you don't scale the data and you have very large values, make sure to use very small learning-rate values. This was mentioned by @drSpacy as well.
We claim that the vectors $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_r, \mathbf{v}_{r+1}$ are linearly dependent. Since the vectors $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_r$ are linearly dependent, there exist scalars (real numbers) $a_1, a_2, \dots, a_r$ such that\[a_1 \mathbf{v}_1+a_2\mathbf{v}_2+\cdots +a_r\mathbf{v}_r=\mathbf{0} \tag{*}\]and not all of $a_1, \dots, a_r$ are zero, that is, $(a_1, \dots, a_r) \neq (0, \dots, 0)$. Consider the equation\[x_1\mathbf{v}_1+x_2 \mathbf{v}_2+\cdots +x_r \mathbf{v}_r+x_{r+1} \mathbf{v}_{r+1}=\mathbf{0}.\]If this equation has a nonzero solution $(x_1, \dots, x_r, x_{r+1})$, then the vectors $\mathbf{v}_1, \dots, \mathbf{v}_{r+1}$ are linearly dependent. In fact,\[(x_1,x_2,\dots, x_r, x_{r+1})=(a_1, a_2, \dots, a_r, 0)\]is a nonzero solution of the above equation. To see this, first note that since not all of $a_1, a_2, \dots, a_r$ are zero, we have\[(a_1, a_2, \dots, a_r, 0)\neq (0, 0, \dots, 0, 0).\] Plugging these values into the equation, we have\begin{align*}&a_1 \mathbf{v}_1+a_2\mathbf{v}_2+\cdots +a_r\mathbf{v}_r+0\mathbf{v}_{r+1}\\&=a_1 \mathbf{v}_1+a_2\mathbf{v}_2+\cdots +a_r\mathbf{v}_r=\mathbf{0} \text{ by (*).}\end{align*}Therefore, we conclude that the vectors $\mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_r, \mathbf{v}_{r+1}$ are linearly dependent.
Now showing items 1-10 of 19 J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ... Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ... Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ... Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2014-06) The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ... Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01) In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ... Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2014-01) The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ... Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2014-03) A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ... Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider (American Physical Society, 2014-02-26) Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ... Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV (American Physical Society, 2014-12-05) We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
In this article I will try to introduce the complete derivation behind the Kalman Filter, one of the most popular filtering algorithms for noisy environments. In addition, the following article will be about the Extended Kalman Filter and how it’s used in localisation algorithms, when we have known and unknown correspondences. This article will be a mouthful, so first let’s quickly draft what we will speak about: Table of contents The history surrounding the Kalman Filter The state space Weighted Gaussian Measurements The Kalman Filter The Extended Kalman Filter Bibliography The history surrounding the Kalman Filter In 1960, Rudolf E. Kálmán published the seminal paper presenting the eponymous technique. He proposed this technique in the context of extracting the actual value of a measurement (or, better said, the most likely value) from a long list of noisy measurements. The state space But what is probably more important is Kalman’s heavy use of what’s called a state space. A state space is generally an abstract space; in other words, a mathematical construct which has no direct mapping to what we would call a real-world space (like the 3D space). It usually arises naturally when working with differential equations. For instance, the Lotka-Volterra equations that describe predator-prey population dynamics: Lotka-Volterra equations I will briefly speak about the famous predator-prey model and how a state space naturally arises here. I find this example very palpable, in the sense that we can relate it to an everyday thing in the animal world. Predator-free population model Firstly, suppose we have a population \(x\), and the environment doesn’t impose a maximum limit on this population. In this scenario, the population would grow exponentially (solving \(\frac{dx}{dt} = \alpha x\) we get \(x=Ce^{\alpha t}\)). Predator population model Now, suppose we introduce a population of predators, \(y\), that feeds on \(x\).
We can intuitively say that there will be a decrease in the population \(x\), proportional to the number of predators and to the population itself. Why the latter? Because in this simplistic model, any predator has the potential to catch any prey (i.e. there are \(xy\) possible interactions, thus we define a killing rate depending on that). The predators however, if left without a source of food, would all die. In the same manner, we can define a birth rate in the predators depending on the number of interactions they have with the prey. After all this talk we can finally define a model of the form: Predator – Prey mathematical model $$\begin{cases}\frac{dx}{dt} = \alpha x - \beta xy \\ \frac{dy}{dt} = -\gamma y + \delta xy \end{cases} \\ \text{Looking at the variation of } y \text{ with } x \text{ (so we divide the equations above): } \\ \frac{dy}{dx} = -\frac{y}{x} \frac{\delta x - \gamma}{\beta y - \alpha} \\ \text{Separating the variables: } \frac{\beta y - \alpha}{y}dy = -\frac{\delta x - \gamma}{x}dx \\ \text{And by integrating on both sides: } \beta y - \alpha \ln(y) + const_1 = -\delta x + \gamma \ln(x) + const_2 \\ \text{We can now define: } V=\delta x - \gamma \ln(x) + \beta y - \alpha \ln(y) + const$$ \(V\) depends on the initial conditions (which affect the constant), but most importantly, it’s a constant quantity on the closed curves defined by the equation above. Some images from Wikipedia, for a clearer picture: The last picture you saw is a nice example of a STATE SPACE. State spaces are useful because they can describe graphically the dynamics of a system of differential equations. With the drawing above you can quickly see the evolution for various initial conditions, which otherwise wouldn’t have been so obvious. The thing to remember in our story: Kalman heavily used state spaces. Weighted Gaussian Measurements Before talking about Kalman filters, I want to speak a little about weighted Gaussian measurements.
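As a quick sanity check that \(V\) really is conserved along trajectories, here is a small Runge-Kutta integration of the system; the parameter values and initial conditions are made up for illustration:

```python
import math

# Lotka-Volterra with illustrative parameters (not tied to any real population)
alpha, beta, gamma, delta = 1.0, 0.5, 1.0, 0.2

def f(state):
    x, y = state
    return (alpha * x - beta * x * y, -gamma * y + delta * x * y)

def rk4_step(state, h):
    # classic 4th-order Runge-Kutta step
    k1 = f(state)
    k2 = f((state[0] + h / 2 * k1[0], state[1] + h / 2 * k1[1]))
    k3 = f((state[0] + h / 2 * k2[0], state[1] + h / 2 * k2[1]))
    k4 = f((state[0] + h * k3[0], state[1] + h * k3[1]))
    return (state[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def V(x, y):
    # the conserved quantity derived above
    return delta * x - gamma * math.log(x) + beta * y - alpha * math.log(y)

state = (10.0, 5.0)
v0 = V(*state)
for _ in range(10_000):          # integrate to t = 10 with step h = 0.001
    state = rk4_step(state, 0.001)
drift = abs(V(*state) - v0)
print(drift)                     # stays tiny: the orbit lies on a level curve of V
```

The small drift confirms that each trajectory traces out a closed level curve of \(V\) in the \((x, y)\) state space, which is exactly the phase portrait described above.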
Let’s suppose we have two independent measurements of a certain quantity, made either by us or by some sort of measuring instrument. Either way, neither we nor the instrument are 100% sure about the value we think or display (the instrument too, because it might have some sort of intrinsic error). Encoding uncertainty with the standard deviation What both measurements have in common is a level of uncertainty, and we will encode it using the standard deviation. We can attach (conceptually at least) a normal distribution to every measurement we make. The ones that have a high degree of certainty will have a low uncertainty (i.e. a low variance!) and vice versa. With that in mind, back to our measurements. We will write them as \(x_1 \pm \sigma_1\) and \(x_2 \pm \sigma_2\). But now that we have them comes the question: what’s the best estimator for \(x\)? We will try answering in the simplest manner – a linear combination of the two measurements: \(x_{12} = w_1 x_1 + w_2x_2\), with \(w_1+w_2=1\). Now the question is: what are the best values for \(w_1\) and \(w_2\)? Fortunately, there are two different ways to reach the same result: using Lagrange multipliers or probabilistically. Finding the optimal weights with Lagrange multipliers We will choose the weights that minimise the uncertainty, \(\sigma^2 = \sum_i w_i^2\sigma_i^2\), subject to \(1 - \sum_i w_i = 0\). Leaving the constraint aside, why this formula? Short answer: this is the actual variance of the random variable defined as \(x\). We can quickly see why if we make the assumption that the two random variables are independent – an assumption which in the real world should hold most of the time. I mean, if one odometer messes around with another odometer then we’re in big trouble…far bigger than our philosophical assumptions (-: Deriving the mean and variance of a linear combination This is Bienaymé’s formula.
Let’s quickly derive it here [2]: First, the mean of a linear combination of independent random variables: $$\mathbb{E}(\sum_i w_i x_i) = \sum_i w_i \mathbb{E}(x_i) = \sum_i w_i \mu_i $$ Secondly, the variance: $$\begin{align}\sigma(x)^2 &= Var(x) = \mathbb{E}[(x-\mu_x)^2]=\mathbb{E}[(\sum_i w_i x_i - \sum_i w_i \mu_i)^2] \\ &= \mathbb{E}[(\sum_i w_i(x_i - \mu_i))^2] = \mathbb{E}[(\sum_i w_i(x_i-\mu_i))(\sum_j w_j(x_j - \mu_j))]\\ &= \mathbb{E}[\sum_i\sum_j w_i w_j (x_i-\mu_i)(x_j-\mu_j)] = \sum_i\sum_j w_i w_j \mathbb{E}[(x_i-\mu_i)(x_j-\mu_j)]\end{align}$$ Using the independence property we can see that \(\mathbb{E}[(x_i-\mu_i)(x_j-\mu_j)]\) is actually \(0 \; \forall \; i\neq j\). Thus the above relation simplifies to: $$ \sigma^2 = \sum_i w_i^2 \mathbb{E}[(x_i-\mu_i)^2] = \sum_i w_i^2 \sigma_i^2$$ Solving the Lagrangian $$\mathcal{L}(\textbf{w}, \lambda) = \sum_i w_i^2\sigma_i^2 - \lambda (1 - \sum_i w_i)\\ \frac{\partial \mathcal{L}}{\partial w_i} = 2w_i\sigma_i^2 + \lambda = 0 \Rightarrow w_i \propto \frac{1}{\sigma_i^2} \\ \text{and because of the constraint: } \quad w_i = \frac{1/\sigma_i^2}{\sum_j 1/\sigma_j^2} \\ \text{for the example: } \quad \hat{x} = \frac{\sigma_2^2 x_1 + \sigma_1^2 x_2}{\sigma_2^2 + \sigma_1^2}; \quad \hat{\sigma}^2 = \frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2}$$ The hat (\(\hat{\ }\)) is a notation for ESTIMATED values. As a sidenote, look how the Lagrangian formulation led us to something very intuitive: the weights are directly proportional to the inverse of the variance, which in statistics is commonly referred to as the precision (in other words, it encodes the importance of the measurement). Finding the optimal weights probabilistically We interpret the measurements in terms of a probability distribution centered around the measured values, e.g. \(x_1\) (which becomes the mean of a normal distribution). Thus, for the first measurement we can assume that \(P(x| x_1) = \mathcal{N}(x;x_1, \sigma_1^2)\).
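These closed forms are straightforward to check in code; the measurement values below are arbitrary illustrations:

```python
def fuse(x1, var1, x2, var2):
    """Inverse-variance weighted fusion of two independent measurements.

    Implements w_i proportional to 1/sigma_i^2 with w_1 + w_2 = 1, giving
    x_hat = (var2*x1 + var1*x2)/(var1 + var2) and
    var_hat = var1*var2/(var1 + var2), as derived above.
    """
    w1 = (1 / var1) / (1 / var1 + 1 / var2)
    w2 = 1 - w1
    x_hat = w1 * x1 + w2 * x2
    var_hat = var1 * var2 / (var1 + var2)
    return x_hat, var_hat

# Equal uncertainties: the fusion is a plain average and the variance is halved.
print(fuse(10.0, 1.0, 12.0, 1.0))   # (11.0, 0.5)

# A precise measurement (variance 0.01) dominates an imprecise one (variance 1).
x_hat, var_hat = fuse(10.0, 1.0, 12.0, 0.01)
```

Note that the fused variance is always smaller than either input variance: combining two independent measurements never makes the estimate worse.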
Because the measurements are independent, we can apply the same reasoning for the second measurement. $$P(x|x_1, x_2) \propto P(x|x_1)P(x|x_2) \\ P(x) \propto \exp \left(-\frac{1}{2}\left(\frac{(x-x_1)^2}{\sigma_1^2} + \frac{(x-x_2)^2}{\sigma_2^2}\right)\right)\\ \text{And after some algebra we reach a form of the type: } P(x) \propto \exp\left(-\frac{(x-\hat{x})^2}{2\hat{\sigma}^2}\right)$$ [3] The Kalman Filter The Kalman Filter is actually a systematization brought to the method of weighted Gaussian measurements, in the context of systems theory. Practically, this filter is used in equipment that needs to tune the next estimated state based on its current internal state (or belief), along with the new information that comes from measurements. Therefore, this filter is recursive, having a feedback control loop. Deriving the Kalman Gain To capture the recursive nature of the Kalman filter, in our 1D example, we have to make some changes to the equations that describe the estimated mean and variance. $$\begin{align} \hat{x} = \dfrac{\sigma_2^2 x_1 + \sigma_1^2 x_2}{\sigma_2^2 + \sigma_1^2} &\rightarrow \hat{x} = x_1 + \frac{\sigma_1^2 (x_2-x_1)}{\sigma_1^2 + \sigma_2^2} \\ \hat{\sigma}^2 = \frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2} &\rightarrow \hat{\sigma}^2 = \sigma_1^2 - \frac{\sigma_1^4}{\sigma_1^2+\sigma_2^2} \end{align}\\ \text{Introducing: } \mathbf{k} = \frac{\sigma_1^2}{\sigma_1^2+\sigma_2^2} \text{ we obtain: } \\ \hat{x}=x_1 + k(x_2 - x_1) \quad (1)\\ \hat{\sigma}^2 = \sigma_1^2 - k\sigma_1^2 \quad (2)$$ \(\mathbf{k}\) is the 1D version of the Kalman Gain. However, Kalman didn’t write it like above. He was more general and worked with the N-dimensional version of the Kalman gain. For the sake of simplicity, I will draw a simple parallel from the 1D forms of equations (1) and (2) and the Kalman Gain to the N-dimensional ones.
Because of this, the variance becomes a covariance matrix, and the usual division becomes multiplication by the inverse: $$x_i \rightarrow \vec x_i \land \sigma_i^2 \rightarrow \Sigma_i, \quad i=1,2 \\ \mathbf{K} = \Sigma_1(\Sigma_1 + \Sigma_2)^{-1} \quad (4)\\ \vec{\hat{x}} = \vec x_1 + \mathbf{K}(\vec x_2 - \vec x_1) \quad (5) \\ \hat{\Sigma} = \Sigma_1 - \mathbf{K} \Sigma_1 \quad (6)$$ \(\mathbf{K}\) is the N-dimensional version of the Kalman Gain. And now, before diving into the full derivation of the Kalman Filter equations, we need to introduce some notation and do a quick math recap.

Multiplication of the covariance matrix by a constant matrix

$$\mathrm{Cov}(x) = \Sigma \\ \mathrm{Cov}(Ax) = A \Sigma A^T, \text{ because: } \\ \mathrm{Cov}(Ax) = \mathbb{E}[(Ax - \mathbb{E}[Ax])(Ax - \mathbb{E}[Ax])^T] \\ = \mathbb{E}[(Ax - A \mathbb{E}[x])(Ax - A\mathbb{E}[x])^T] \\ = \mathbb{E}[A (x - \mathbb{E}[x])(x - \mathbb{E}[x])^T A^T] \\ = A \underbrace{\mathbb{E}[(x - \mathbb{E}[x])(x - \mathbb{E}[x])^T]}_{=\Sigma} A^T$$

Deriving the Kalman Filter equations

It's important to know that the Kalman Filter works in two distinct phases: the estimation phase (which offers predictions regarding the next state, based on the current state and some external factors) and the correction phase, which refines the predictions using actual measurements.

The estimation phase

To capture the changes the current state (at time \(k-1\)) will bring to the next state prediction (at time \(k\)), we will use the matrix \(\mathbf{F_k}\). Analogously, to capture the influence of the external factors, we will use a different matrix, \(\mathbf{B_k}\), with \(\mathbf{u_k}\) for the factors themselves. To capture the uncertainty induced by the external factors, we add Gaussian noise with covariance \(\mathbf{Q_k}\). Using the above notation, the equations for the predicted state are: $$\mathbf{\hat{x}_k^\text{-}} = F_k \mathbf{\hat{x}_{k-1}} + B_k \mathbf{u_k} \\ \Sigma_k^\text{-} = F_k \Sigma_{k-1} F_k^T + Q_k$$ The superscript minus sign "\(\textbf{-}\)" denotes that we speak about predictions (a-priori estimates).
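The identity \(\mathrm{Cov}(Ax) = A\Sigma A^T\) also holds exactly for sample covariances, which makes it easy to verify; a small check with arbitrary random data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 500))   # 500 samples of a 3-vector (one per column)
A = rng.normal(size=(3, 3))     # an arbitrary linear map

Sigma = np.cov(X)               # sample covariance of x
Sigma_Ax = np.cov(A @ X)        # sample covariance of A x

# Cov(Ax) = A Sigma A^T, up to floating-point roundoff:
print(np.allclose(Sigma_Ax, A @ Sigma @ A.T))  # True
```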
The correction phase

Another key point concerns the sensors that make the measurements: although they may track the same state, they may be doing so on a different measuring scale. As a consequence, we need to bring the predictions into the sensor's space (so everyone is on the same page). For this, we will introduce a new transformation, via the matrix \(\mathbf{H_k}\). As I've previously stated, the sensors have limited precision; to express this, we will introduce a covariance matrix, \(\mathbf{R_k}\). The average of the measurements will be the final value used (the mean is the best estimator), \(\mathbf{z_k}\). In effect, by applying the transformation from the prediction space to the sensor space we obtain: $$\begin{align} \mu_{expected} &= H_k \mathbf{\hat{x}_k}^\text{-} \\ \Sigma_{expected} &= H_k {\Sigma_k^\text{-}}H_k^T \end{align}$$ (The three figures that accompany this section in the original post come from a great article, also about the Kalman Filter, and intuitive too [4].)

The Kalman Filter equations

$$\text{Having the predictions: } (\mu_1, \Sigma_1) = (H_k \mathbf{\hat{x}_k^\text{-}}, H_k\Sigma_k^\text{-} H_k^T) \\ \text{and the measured values: } (\mu_2, \Sigma_2) = (\mathbf{z_k}, R_k)\\ \text{substituting them into equations (4), (5), (6) we obtain: } \\ H_k \mathbf{\hat{x}_k} = H_k \mathbf{\hat{x}_k}^\text{-} + K (\mathbf{z_k} - H_k \mathbf{\hat{x}_k^\text{-}})\\ H_k \Sigma_k H_k^T = H_k \Sigma_k^\text{-} H_k^T - K H_k \Sigma_k^\text{-} H_k^T \\ K = H_k \Sigma_k^\text{-} H_k^T ( H_k \Sigma_k^\text{-} H_k^T + R_k)^{-1} \Rightarrow \\ \color{blue}{K' = \Sigma_k^\text{-} H_k^T ( H_k \Sigma_k^\text{-} H_k^T + R_k)^{-1}}\\ \color{blue}{\mathbf{\hat{x}_k} = \mathbf{\hat{x}_k^\text{-}} + K'(\mathbf{z_k} - H_k \mathbf{\hat{x}_k^\text{-}})}\\ \color{blue}{\Sigma_k = (I - K'H_k)\Sigma_k^\text{-}}$$

The Extended Kalman Filter (EKF)

You might have noticed that everything we've discussed so far is basically just a fancy LINEAR model.
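The blue correction equations translate directly into a few lines of linear algebra. Here is a minimal sketch of one correction step; the state, matrices, and measurement below are toy values, not from any particular system:

```python
import numpy as np

def kalman_correct(x_pred, P_pred, z, H, R):
    """One correction step: blend the a-priori state (x_pred, P_pred)
    with the measurement z using the Kalman gain K'."""
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # K' = Sigma_k^- H^T S^-1
    x_new = x_pred + K @ (z - H @ x_pred)    # corrected state
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred  # corrected covariance
    return x_new, P_new

# Toy example: 2D state (position, velocity), sensor observes position only.
x_pred = np.array([1.0, 0.5])
P_pred = np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[1.0]])
x_new, P_new = kalman_correct(x_pred, P_pred, np.array([2.0]), H, R)
print(x_new)  # [1.5 0.5] -- position pulled halfway toward the measurement
```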
As shown above, we work only with linear transformations, transforming one state space into another and so on. Unfortunately, the vast majority of real-world problems are inherently nonlinear. Fortunately for us, we can do a bunch of fancy stuff with a nonlinear model so that, at least on small intervals, we can approximate it with its simpler linear cousin. One trick is to use the first-order Taylor approximation (because it's linear). We will use the multivariate version of the Taylor expansion, so the first-order coefficient will actually be the Jacobian matrix (instead of the ordinary derivative). Having said that, it's pretty simple to generalize the basic Kalman Filter to the Extended Kalman Filter (EKF). More precisely, we replace the linear transformations \(F_k, B_k, H_k\) with nonlinear functions \(f_k, h_k\). $$\text{For the nonlinear model}: \\ \begin{cases}&\mathbf{x_k} = f(\mathbf{x_{k-1}}, \mathbf{u_k}) + \mathbf{w_k}, \quad w_k \text{ model uncertainty} \\ & \mathbf{z_k} = h(\mathbf{x_k}) + \mathbf{v_k}, \quad v_k \text{ measurement uncertainty} \end{cases} \\ \text{We have the predictions: } \\ \mathbf{\hat{x}_k^\text{-}} = f(\mathbf{\hat{x}_{k-1}}, \mathbf{u_k}) \\ \Sigma_k^\text{-} = J_f(\mathbf{\hat{x}_{k-1}})\Sigma_{k-1}J_f^T(\mathbf{\hat{x}_{k-1}}) + Q_{k-1} \\ \text{And their corrections: } \\ \color{blue}{K_k' = \Sigma_k^\text{-} J_h^T(\mathbf{\hat{x}_k^\text{-}})(J_h(\mathbf{\hat{x}_k^\text{-}})\Sigma_k^\text{-} J_h^T(\mathbf{\hat{x}_k^\text{-}}) + R_k)^{-1}}\\ \color{blue}{\mathbf{\hat{x}_k} \approx \mathbf{\hat{x}_k^\text{-}} + K_k'(\mathbf{z_k} - h(\mathbf{\hat{x}_k^\text{-}}))} \\ \color{blue}{\Sigma_k = (I-K_k' J_h(\mathbf{\hat{x}_k^\text{-}}))\Sigma_k^\text{-}}$$

Conclusion

So that was it! A complete derivation of the equations behind the Kalman Filter. As you might have noticed, I tried to avoid the dense mathematical formalism. For a rigorous derivation I suggest looking at the original paper, by the master himself [5].
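As a small appendix, the EKF correction step from the blue equations above can be sketched in code as well. The scalar state and the observation function \(h(x) = x^2\) (with Jacobian \(h'(x) = 2x\)) are made up purely for illustration:

```python
import numpy as np

def ekf_correct(x_pred, P_pred, z, h, Jh, R):
    """One EKF correction step: like the linear case, but H_k is
    replaced by the Jacobian of h evaluated at the prediction."""
    H = Jh(x_pred)                          # linearize around the prediction
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))    # correct with the true nonlinear h
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new

h = lambda x: np.array([x[0] ** 2])         # made-up nonlinear observation
Jh = lambda x: np.array([[2.0 * x[0]]])     # its Jacobian

x_new, P_new = ekf_correct(np.array([3.0]), np.array([[1.0]]),
                           np.array([10.0]), h, Jh, np.array([[4.0]]))
print(x_new)  # state nudged upward, since z = 10 > h(3) = 9
```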
And if you have any suggestions, or you see any mistakes, please let me know in the comments.

Bibliography

[1] https://en.wikipedia.org/wiki/Lotka–Volterra_equations
[2] https://newonlinecourses.science.psu.edu/stat414/node/166/
[3] https://indico.cern.ch/category/6015/attachments/192/632/Statistics_Gaussian_I.pdf
[4] https://www.bzarg.com/p/how-a-kalman-filter-works-in-pictures/
[5] https://www.cs.unc.edu/~welch/kalman/media/pdf/Kalman1960.pdf
Bernoulli actions of type III

Speaker: Stefaan Vaes
Time: Fri 16:00-17:00, 2019-9-13
Venue: Lecture Hall, Jin Chun Yuan West Bldg.

In this lecture, I focus on the ergodic theory of group actions. We consider the translation action of a discrete group G on the product space {0,1}^G equipped with the product of probability measures \mu_g on {0,1}. When all \mu_g are equal, these are the classical Bernoulli actions, which are probability measure preserving. When the \mu_g are distinct, non-measure-preserving actions of Krieger type III may appear. I will explain an intricate connection to L^2-cohomology. In particular, I will show that a group G admits a Bernoulli action of type III_1 if and only if G has nonzero first L^2-cohomology. I will also explain why the group of integers does not admit a Bernoulli action of type II_\infty and why type III_\lambda only arises when G has more than one end. This is joint work with J. Wahl, and with M. Björklund and Z. Kosloff.

Stefaan Vaes is a full professor and the head of the Analysis section at KU Leuven (Belgium). His research focuses on operator algebras and their connections to group theory and ergodic theory. He was an invited speaker at the International Congress of Mathematicians in 2010 and at the European Congress of Mathematics in 2016. In 2015, he was awarded the Francqui Prize, which is the highest scientific distinction in Belgium. In the spring of 2017, he was a Rothschild Fellow at the Newton Institute in Cambridge. Stefaan Vaes is a member of the Royal Academy of Belgium (KVAB). He is one of the editors-in-chief of the Journal of Functional Analysis.

Slide: beijing-chern-lecture-4
I heard a couple of times that there is no dynamics in 3D (2+1) GR, that it's something like a topological theory. I got the argument in the 2D case (the metric is conformally flat, the Einstein equations are trivially satisfied, and the action is just a topological number) but I don't get how it is still true, or partially similar, with one more space dimension.

The absence of physical excitations in 3 dimensions has a simple reason: the Riemann tensor may be fully expressed via the Ricci tensor. Because the Ricci tensor vanishes in the vacuum due to Einstein's equations, the Riemann tensor vanishes (whenever the equations of motion are imposed), too: the vacuum has to be flat (no nontrivial Schwarzschild-like curved vacuum solutions). So there can't be any gravitational waves, and there are no gravitons (quanta of gravitational waves). In other words, Ricci flatness implies flatness.

Counting components of tensors

The reason why the Riemann tensor is fully determined by the Ricci tensor is not hard to see. The Riemann tensor is $R_{abcd}$, but it is antisymmetric in $ab$ and in $cd$ and symmetric under the exchange of the index pairs $ab$, $cd$. In 3 dimensions, one may dualize the antisymmetric index pairs $ab$ and $cd$ to single indices $e,f$ using the antisymmetric $\epsilon_{abe}$ tensor, and the Riemann tensor is symmetric in these new consolidated $e,f$ indices, so it has 6 independent components, just like the Ricci tensor $R_{gh}$. Because the Ricci tensor is built from the Riemann tensor and the two have the same number of independent components at each point in $D=3$, the opposite relationship must exist as well. It is $$ R_{abcd} = \alpha(R_{ac}g_{bd} - R_{bc}g_{ad} - R_{ad}g_{bc} + R_{bd}g_{ac} )+\beta R(g_{ac}g_{bd}-g_{ad}g_{bc}) $$ I leave it as homework to calculate the right values of $\alpha,\beta$ from the condition that the $ac$-contraction of the object above produces $R_{bd}$, as expected from the Ricci tensor's definition.
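(Spoiler for the homework: in $D=3$ the answer is $\alpha=1$, $\beta=-1/2$.) One can check the contraction condition numerically by building the right-hand side from an arbitrary symmetric matrix playing the role of the Ricci tensor, with the flat metric $g=\delta$ chosen purely for convenience:

```python
import numpy as np

rng = np.random.default_rng(0)
Ric = rng.normal(size=(3, 3))
Ric = Ric + Ric.T            # an arbitrary symmetric "Ricci tensor"
g = np.eye(3)                # flat metric, just to simplify the contraction
R = np.trace(Ric)            # Ricci scalar

alpha, beta = 1.0, -0.5
Riem = alpha * (np.einsum('ac,bd->abcd', Ric, g)
                - np.einsum('bc,ad->abcd', Ric, g)
                - np.einsum('ad,bc->abcd', Ric, g)
                + np.einsum('bd,ac->abcd', Ric, g)) \
     + beta * R * (np.einsum('ac,bd->abcd', g, g)
                   - np.einsum('ad,bc->abcd', g, g))

# Contracting the first and third indices must give back the Ricci tensor:
contraction = np.einsum('abad->bd', Riem)
print(np.allclose(contraction, Ric))  # True
```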
Counting polarizations of gravitons (or linearized gravitational waves)

An alternative way to prove that there are no physical polarizations in $D=3$ is to count them using the usual formula. The physical polarizations in $D$ dimensions form a traceless symmetric tensor in $(D-2)$ dimensions. For $D=3$, you have $D-2=1$, so the symmetric tensor has only a single component, e.g. $h_{22}$, and the traceless condition eliminates this last component, too. So just like you have 2 physical graviton polarizations in $D=4$ and 44 polarizations in $D=11$, to mention two examples, there are 0 of them in $D=3$. The general number is $(D-2)(D-1)/2-1$.

In 2 dimensions, the whole Riemann tensor may be expressed in terms of the Ricci scalar curvature $R$ (indeed, the Ricci tensor itself is $R_{ab}=Rg_{ab}/2$), which is directly imprinted into the component $R_{1212}$ etc.; Einstein's equations become vacuous in 2D. The number of components of the gravitational field is formally $(-1)$ in $D=2$; the local dynamics of the gravitational sector is not only vacuous but even imposes constraints on the remaining matter.

Other effects of gravity in 3D

While there are no gravitational waves in 3 dimensions, it doesn't mean that there are absolutely no gravitational effects. One may create point masses. Their gravitational field is Riemann-flat almost everywhere but produces a deficit angle.

Approximately equivalent theories

Due to the absence of local excitations, this is formally a topological theory, and there are maps to other topological theories in 3D, especially Chern-Simons theory with a gauge group. However, this equivalence only holds in some perturbative approximations and under extra assumptions, and for most purposes it is a vacuous relationship anyway.
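The counting formula $(D-2)(D-1)/2-1$ quoted above, evaluated for the dimensions mentioned in the answer:

```python
def graviton_polarizations(D):
    """Physical graviton polarizations in D spacetime dimensions:
    components of a traceless symmetric tensor in the D-2 transverse dims."""
    return (D - 2) * (D - 1) // 2 - 1

for D in (3, 4, 11):
    print(D, graviton_polarizations(D))
# 3 -> 0, 4 -> 2, 11 -> 44, matching the examples in the text
```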
Under the auspices of the Computational Complexity Foundation (CCF) We revisit the problem of hardness amplification in $\NP$, as recently studied by O'Donnell (STOC `02). We prove that if $\NP$ has a balanced function $f$ such that any circuit of size $s(n)$ fails to compute $f$ on a $1/\poly(n)$ fraction of inputs, then $\NP$ has a function $f'$ such that any circuit of size $s'(n)=s(\sqrt{n})^{\Omega(1)}$ fails to compute $f'$ on a $1/2 - 1/s'(n)$ fraction of inputs. In particular, - If $s(n)=n^{\omega(1)}$, we amplify to hardness $1/2-1/n^{\omega(1)}$. - If $s(n)=2^{n^{\Omega(1)}}$, we amplify to hardness $1/2-1/2^{n^{\Omega(1)}}$. - If $s(n)=2^{\Omega(n)}$, we amplify to hardness $1/2-1/2^{\Omega(\sqrt{n})}$. These improve the results of O'Donnell, which only amplified to $1/2-1/\sqrt{n}$. O'Donnell also proved that no construction of a certain general form could amplify beyond $1/2-1/n$. We bypass this barrier by using both {\em derandomization} and {\em nondeterminism} in the construction of $f'$. We also prove impossibility results demonstrating that both our use of nondeterminism and the hypothesis that $f$ is balanced are necessary for ``black-box'' hardness amplification procedures (such as ours).
Following the ideas in the suggested reading by Trurl, here is an outline of how to go about it in your case. The main complication in comparison with the linked random walk question is that the backward step is not $-1$. I'm going to go ahead and divide your amounts by 6 to simplify them, so if you win, you win $1$, and if you lose, you lose $m$. The specific example you give has $m = 5$, but it will work with any positive integer (I haven't tried to adapt to non-integer loss/win amount ratio). Let's say that you start with $x$, and we want to find the probability of ruin if you play forever; call that probability $f(x)$. There are two ways the first round can go: either you win it, with probability $p$ (in your example $p = 0.9$), followed by ruin from the new capital of $x+1$ with probability $f(x+1)$, or you lose it, with probability $1-p$, followed by ruin from capital $x-m$ with probability $f(x-m)$. So$$ f(x) = p f(x+1) + (1-p)f(x-m) $$This is valid so long as $x > 0$ so that you can actually play the first round. For $x \leq 0$, $f(x) = 1$ (the reserve is already exhausted, so ruin is certain). If you rearrange the above equation, you can get a recursive formula for the function:$$ f(x+1) = \frac{f(x) - (1-p)f(x-m)}{p}. $$But the problem is, while we know the values of $f(x)$ for $x \leq 0$, we can't use them to initiate the recursion, because the formula isn't valid for $x = 0$ (no round can be played with no capital). So we need to find $f(1)$ in some other way, and this is where the random walk comes in. Imagine starting with capital of $1$, and let $r_i$ denote the probability of (eventually reaching) ruin by reaching the amount $-i$, but without reaching any value between $-i$ and $1$ before that. For example, $r_2$ is the probability of (sooner or later) getting from $1$ to exactly $-2$ (this might happen by winning a few rounds to get to $m-2$ and then losing the next round, for instance). $r_0$ is the probability of ruin by getting to $0$ exactly. So, how can that last event happen?
Losing the first round would jump over $0$ straight to $1-m$, so the only possibility is winning (probability $p$), followed by either: a ruin from $2$ straight to $0$ (over possibly many rounds; "straight" here refers to never passing through $1$ on the way), which is the same as from $1$ straight to $-1$ (probability $r_1$); or a "ruin" from $2$ to $1$ followed by ruin from $1$ to $0$ (probability $r_0 \cdot r_0 = r_0^2$). In other words, we have this equation:$$ r_0 = p(r_1 + r_0^2) $$Similarly, by considering the possible "paths" that can take us from $1$ to $-i$, we get for each $i < m-1$$$ r_i = p(r_{i+1} + r_0r_i) $$The $i = m-1$ case is slightly different from the others: ruin from $1$ to $-(m-1)$ can happen either by losing the first round directly ($1-p$), or by winning the first round to get to $2$ and then (eventually) dropping down from $2$ to $1$ and (again, eventually) ruin from $1$ to $-(m-1)$: $$ r_{m-1} = (1-p) + p \cdot r_0 \cdot r_{m-1}. $$ In principle, one can solve all these equations for the $r_i$'s simultaneously, but we can use a trick to avoid that. Let $s = \sum_{i=0}^{m-1} r_i$ (this is the total probability of ruin when starting from $1$, that is, $f(1)$, which we wanted to find). Add up all the equations, and you get $$ s = 1-p + p(s - r_0) +p r_0 s $$Solving for $s$,$$ (1-p-pr_0) s = 1-p-pr_0 $$which means that $s$ will have to be 1, unless $1-p-pr_0 = 0$. As that explanation of biased random walk proves rigorously, $s$ cannot be equal to 1 (your expectation value is positive, therefore statistically you must be moving away from $0$, not returning to it with certainty). We must conclude then that $1-p-pr_0 = 0$, which gives $r_0 = (1-p)/p$. It is easy to check that this leads to $r_i =(1-p)/p$ for all $i$, and indeed these values satisfy all the $r_i$ equations above. Thus finally,$$ f(1) = s = r_0 + r_1 + \dots + r_{m-1} = \frac{m(1-p)}{p}.
$$ Using this as a starting point, now you can use the recursive equation we got in the beginning to find $f(x)$ for any $x$. With bets of \$6, \$2000 rescales to $x=334$, so $f(334)$ gives you the risk of ruin. Or, first find which $x$ gives you a tolerable risk, and from that determine the appropriate size of the bets, $\$2000/x$.
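The recursion above is easy to run in code. A sketch using exact rational arithmetic, since the forward recursion divides by $p$ at every step and degrades badly in floating point; the boundary value used is $f(x)=1$ for $x \leq 0$, i.e. certain ruin once the reserve is gone:

```python
from fractions import Fraction

def ruin_probability(x, p=Fraction(9, 10), m=5):
    """f(x) = probability of eventual ruin starting from integer capital x,
    winning 1 with probability p and losing m with probability 1-p.
    Uses f(1) = m(1-p)/p (valid when the expected gain is positive,
    i.e. p > m/(m+1)) and f = 1 for non-positive capital."""
    if x <= 0:
        return Fraction(1)
    f = {n: Fraction(1) for n in range(1 - m, 1)}   # f(n) = 1 for n <= 0
    f[1] = m * (1 - p) / p
    for n in range(1, x):
        f[n + 1] = (f[n] - (1 - p) * f[n - m]) / p
    return f[x]

# For the question's numbers (bets of $6, so m = 5, p = 0.9):
print(float(ruin_probability(1)))    # 0.555... = 5/9
print(float(ruin_probability(334)))  # the risk of ruin with $2000
```

With these parameters the risk of ruin at $x=334$ turns out to be astronomically small.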
The Real Number Line

One way to represent the real numbers $\mathbb{R}$ is on the real number line as depicted below. We will now state the important geometric representation of the absolute value with respect to the real number line.

Definition: If $a$ and $b$ are real numbers, then we say that the distance from $a$ to the origin $0$ is the absolute value of $a$, $\mid a \mid$. We say that the distance between $a$ and $b$ is the absolute value of their difference, namely $\mid a - b \mid$.

For example, consider the numbers $-2$ and $2$. There is a distance of $4$ between these numbers because $\mid -2 - 2 \mid = \mid -4 \mid = 4$.

Epsilon Neighbourhood of a Real Number

Definition: Let $a$ be a real number and let $\epsilon > 0$. The $\epsilon$-neighbourhood of the number $a$ is the set denoted $V_{\epsilon} (a) := \{ x \in \mathbb{R} : \: \mid x - a \mid < \epsilon \}$. Alternatively we can define $V_{\epsilon}(a) := \{x \in \mathbb{R} : a - \epsilon < x < a + \epsilon \}$.

For example, consider the point $1$, and let $\epsilon_0 = 2$. Then $V_{\epsilon_0} (1) = \{ x \in \mathbb{R} : \mid x - 1 \mid < 2 \} = (-1, 3)$. We will now look at a simple theorem regarding the epsilon-neighbourhood of a real number.

Theorem 1: Let $a$ be a real number. If $\forall \epsilon > 0$, $x \in V_{\epsilon} (a)$, then $x = a$.

Proof of Theorem 1: Suppose that for some $x$, $\forall \epsilon > 0$, $\mid x - a \mid < \epsilon$. Since $\mid x - a \mid \geq 0$ and $\mid x - a \mid$ is smaller than every positive $\epsilon$, we must have $\mid x - a \mid = 0$ (otherwise choosing $\epsilon = \mid x - a \mid > 0$ would give the contradiction $\mid x - a \mid < \mid x - a \mid$). But $\mid x - a \mid = 0$ if and only if $x - a = 0$, and therefore $x = a$. $\blacksquare$
NTS Abstracts Spring 2019

Jan 23 Yunqing Tang Reductions of abelian surfaces over global function fields

For a non-isotrivial ordinary abelian surface $A$ over a global function field, under mild assumptions, we prove that there are infinitely many places modulo which $A$ is geometrically isogenous to the product of two elliptic curves. This result can be viewed as a generalization of a theorem of Chai and Oort. This is joint work with Davesh Maulik and Ananth Shankar.

Jan 24 Hassan-Mao-Smith-Zhu The diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$

Abstract: Assume a polynomial-time algorithm for factoring integers, Conjecture~\ref{conj}, $d\geq 3,$ and $q$ and $p$ prime numbers, where $p\leq q^A$ for some $A>0$. We develop a polynomial-time algorithm in $\log(q)$ that lifts every $\mathbb{Z}/q\mathbb{Z}$ point of $S^{d-2}\subset S^{d}$ to a $\mathbb{Z}[1/p]$ point of $S^d$ with the minimum height. We implement our algorithm for $d=3 \text{ and }4$. Based on our numerical results, we formulate a conjecture which can be checked in polynomial time and gives the optimal bound on the diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$.

Jan 31 Kyle Pratt Breaking the $\frac{1}{2}$-barrier for the twisted second moment of Dirichlet $L$-functions

Abstract: I will discuss recent work, joint with Bui, Robles, and Zaharescu, on a moment problem for Dirichlet $L$-functions. By way of motivation I will spend some time discussing the Lindelöf Hypothesis, and work of Bettin, Chandee, and Radziwiłł. The talk will be accessible, as I will give lots of background information and will not dwell on technicalities.

Feb 7 Shamgar Gurevich Harmonic Analysis on $GL_n$ over finite fields

Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters.
For evaluating or estimating these sums, one of the most salient quantities to understand is the {\it character ratio}: $$\mathrm{trace}(\rho(g))/\dim(\rho),$$ for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant {\it rank}. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (TAMU).

Feb 14 Tonghai Yang The Lambda invariant and its CM values

Abstract: The Lambda invariant, which parametrizes elliptic curves with two-torsion ($X_0(2)$), has some interesting properties, some similar to those of the j-invariant, and some not. For example, $\lambda(\frac{d+\sqrt d}2)$ is sometimes a unit. In this talk, I will briefly describe some of these properties. This is joint work with Hongbo Yin and Peng Yu.

Feb 28 Brian Lawrence Diophantine problems and a p-adic period map

Abstract: I will outline a proof of Mordell's conjecture / Faltings's theorem using p-adic Hodge theory. Joint with Akshay Venkatesh.

March 7 Masoud Zargar Sections of quadrics over the affine line

Abstract: Suppose we have a quadratic form Q(x) in d\geq 4 variables over F_q[t] and f(t) is a polynomial over F_q. We consider the affine variety X given by the equation Q(x)=f(t) as a family of varieties over the affine line A^1_{F_q}. Given finitely many closed points in distinct fibers of this family, we ask when there exists a section passing through these points. We study this problem using the circle method over F_q((1/t)). Time permitting, I will mention connections to Lubotzky-Phillips-Sarnak (LPS) Ramanujan graphs. Joint with Naser T.
Sardari.

March 14 Elena Mantovan p-adic automorphic forms, differential operators and Galois representations

A strategy pioneered by Serre and Katz in the 1970s yields a construction of p-adic families of modular forms via the study of Serre's weight-raising differential operator Theta. This construction is a key ingredient in Deligne-Serre's theorem associating Galois representations to modular forms of weight 1, and in the study of the weight part of Serre's conjecture. In this talk I will discuss recent progress towards generalizing this theory to automorphic forms on unitary and symplectic Shimura varieties. In particular, I will introduce certain p-adic analogues of Maass-Shimura weight-raising differential operators, and discuss their action on p-adic automorphic forms, and on the associated mod p Galois representations. In contrast with Serre's classical approach, where q-expansions play a prominent role, our approach is geometric in nature and is inspired by earlier work of Katz and Gross. This talk is based on joint work with Eischen, and also with Fintzen-Varma, and with Flander-Ghitza-McAndrew.

March 28 Adebisi Agboola Relative K-groups and rings of integers

Abstract: Suppose that F is a number field and G is a finite group. I shall discuss a conjecture in relative algebraic K-theory (in essence, a conjectural Hasse principle applied to certain relative algebraic K-groups) that implies an affirmative answer both to the inverse Galois problem for F and G and to an analogous problem concerning the Galois module structure of rings of integers in tame extensions of F. It also implies the weak Malle conjecture on counting tame G-extensions of F according to discriminant. The K-theoretic conjecture can be proved in many cases (subject to mild technical conditions), e.g. when G is of odd order, giving a partial analogue of a classical theorem of Shafarevich in this setting.
While this approach does not, as yet, resolve any new cases of the inverse Galois problem, it does yield substantial new results concerning both the Galois module structure of rings of integers and the weak Malle conjecture.

April 4 Wei-Lun Tsai Hecke L-functions and $\ell$-torsion in class groups

Abstract: The canonical Hecke characters in the sense of Rohrlich form a set of algebraic Hecke characters with important arithmetic properties. In this talk, we will explain how one can prove quantitative nonvanishing results for the central values of their corresponding L-functions using methods of an arithmetic-statistical flavor. In particular, the methods used rely crucially on recent work of Ellenberg, Pierce, and Wood concerning bounds for $\ell$-torsion in class groups of number fields. This is joint work with Byoung Du Kim and Riad Masri.

April 11 Taylor McAdam Almost-prime times in horospherical flows

Abstract: Equidistribution results play an important role in dynamical systems and their applications in number theory. Often in such applications it is desirable for equidistribution to be effective (i.e. the rate of convergence is known). In this talk I will discuss some of the history of effective equidistribution results in homogeneous dynamics and give an effective result for horospherical flows on the space of lattices. I will then describe an application to studying the distribution of almost-prime times in horospherical orbits and discuss connections of this work to Sarnak's Möbius disjointness conjecture.

April 18 Ila Varma Malle's Conjecture for octic $D_4$-fields

Abstract: We consider the family of normal octic fields with Galois group $D_4$, ordered by their discriminant. In forthcoming joint work with Arul Shankar, we verify the strong Malle conjecture for this family of number fields, obtaining the order of growth as well as the constant of proportionality.
In this talk, we will discuss and review the combination of techniques from analytic number theory and geometry-of-numbers methods used to prove these results.

April 25 Michael Bush Interactions between group theory and number theory

Abstract: I'll survey some of the ways in which group theory has helped us understand extensions of number fields with restricted ramification, and why one might care about such things. Some of Nigel's contributions will be highlighted. A good portion of the talk should be accessible to those other than number theorists.

April 25 Rafe Jones Eventually stable polynomials and arboreal Galois representations

Abstract: Call a polynomial defined over a field K eventually stable if its nth iterate has a uniformly bounded number of irreducible factors (over K) as n grows. I'll discuss some far-reaching conjectures on eventual stability, and recent work on various special cases. I'll also describe some natural connections between eventual stability and arboreal Galois representations, which Nigel Boston introduced in the early 2000s.

April 25 Jen Berg Rational points on conic bundles over elliptic curves with positive rank

Abstract: Varieties that fail to have rational points despite having local points for each prime are said to fail the Hasse principle. A systematic tool for accounting for these failures is the Brauer-Manin obstruction, which uses the Brauer group, Br X, to preclude the existence of rational points on a variety X. In this talk, we'll explore the arithmetic of conic bundles over elliptic curves of positive rank over a number field k. We'll discuss the insufficiency of the known obstructions to explain the failures of the Hasse principle for such varieties over a number field. We'll further consider questions on the distribution of the rational points of X with respect to the image of X(k) inside of the rational points of the elliptic curve E.
In the process, we'll discuss results on a local-to-global principle for torsion points on elliptic curves over Q. This is joint work in progress with Masahiro Nakahara.

April 25 Judy Walker Derangements of Finite Groups

Abstract: In the early 1990s, Nigel Boston taught an innovative graduate-level group theory course at the University of Illinois that focused on derangements (fixed-point-free elements) of transitive permutation groups. The course culminated in the writing of a 7-authored paper that appeared in Communications in Algebra in 1993. This paper contained a conjecture that was eventually proven by Fulman and Guralnick, with that result appearing in the Transactions of the American Mathematical Society just last year.

May 2 Melanie Matchett Wood Unramified extensions of random global fields

Abstract: For any finite group Gamma, I will give a "non-abelian Cohen-Martinet conjecture," i.e. a conjectural distribution on the "good part" of the Galois group of the maximal unramified extension of a global field K, as K varies over all Galois Gamma-extensions of the rationals or of a rational function field over a finite field. I will explain the motivation for this conjecture based on what we know about these maximal unramified extensions (very little), and how we prove, in the function field case, as the size of the finite field goes to infinity, that the moments of the Galois groups of these maximal unramified extensions match our conjecture. This talk covers work in progress with Yuan Liu and David Zureick-Brown.

May 9 David Zureick-Brown Arithmetic of stacks

Abstract: I'll discuss several diophantine problems that naturally lead one to study algebraic stacks, and discuss a few results.
LPC Hands-on Advanced Tutorials Session (HATS) on MET

Instructions for the HATS on MET at the FNAL LPC in March 2014. Contact: Tai Sakuma

Introduction

This topic contains the instructions for the LPC HATS on MET, which takes place in March 2014.

Hands-on 1: Check out CMSSW and the MET recipe

Computing Environment at FNAL LPC

This exercise uses the LPC cluster cmslpc-sl5.fnal.gov. After logging into cmslpc-sl5.fnal.gov, you need to set up the CMS environment by sourcing one of two files, depending on the shell that you are using. For bash users:

source /uscmst1/prod/sw/cms/shrc prod # for bash

For tcsh users:

source /uscmst1/prod/sw/cms/cshrc prod # for tcsh

More information about cmslpc-sl5.fnal.gov can be found at CMSPublic.WorkBookRemoteSiteSpecifics.

CMSSW Environment

You need to move to a directory in which you would like to practice this exercise:

mkdir -p ~/your/work/dir
cd ~/your/work/dir

Please replace ~/your/work/dir with a path to the directory of your choice. Then, you can check out a CMSSW release. This exercise uses CMSSW_5_3_15:

scramv1 project CMSSW CMSSW_5_3_15

Move down two directories and enter the CMSSW runtime environment:

cd CMSSW_5_3_15/src
cmsenv

MET Recipe

In order to work through this exercise, you need to check out several extra files from github repositories. A git environment needs to be properly set up in order to be able to check out extra packages from github repositories. The instructions can be found at http://cms-sw.github.io/cmssw/faq.html. Check out the extra files with the following commands:

git cms-addpkg PhysicsTools/PatAlgos # PAT Recipe
git cms-merge-topic cms-analysis-tools:5_3_15-addCSCTightHaloFilter # PAT Recipe
git cms-merge-topic -u TaiSakuma:53X-met-140217-01

These commands are for CMSSW_5_3_15.
The commands for other CMSSW versions are listed on Build.

Now, you can build with the scram command:

scram build -j 9

More information about scram can be found at

Exercise files

This exercise uses files in the branch hats_2014_mar_fnal at the github repo TaiSakuma/WorkBookMet. Clone this repo to your local directory:

git clone git@github.com:TaiSakuma/WorkBookMet

If the above command doesn't work, you can try instead:

git clone git://github.com/TaiSakuma/WorkBookMet

Check out the branch hats_2014_mar_fnal with the following commands:

cd WorkBookMet/
git checkout hats_2014_mar_fnal
cd ..

Hands-on 2: Explore datasets

Sample AOD files

This exercise uses the following sample AOD files stored in EOS at FNAL LPC:

255M /eos/uscms/store/user/cmsdas/2014/MET/ZeroBias_Run2012C_22Jan2013-v1_AOD_numEvent1000.root
62M /eos/uscms/store/user/cmsdas/2014/MET/MET120_Run2012C_22Jan2013-v1_AOD_numEvent201.root
41M /eos/uscms/store/user/cmsdas/2014/MET/TTJets_AODSIM_532_numEvent100.root

The 1st file contains 1000 events which were triggered by HLT_ZeroBias_* in the run 200491 in Run2012C, which were certified (json), and which were stored in (DAS) /MinimumBias/Run2012C-22Jan2013-v1/AOD. The 2nd file contains 201 events which were triggered by HLT_MET120_* in the run 200491 in Run2012C, which were certified (json), and which were stored in (DAS) /MET/Run2012C-22Jan2013-v1/AOD. The 3rd file contains 100 MC simulated events from (DAS) /TTJets_MassiveBinDECAY_TuneZ2star_8TeV-madgraph-tauola/Summer12_DR53X-PU_S10_START53_V7A-v1/AODSIM.

These are the commands used to create the 1st and 2nd files:

cmsrel CMSSW_5_3_15
cd CMSSW_5_3_15/src
cmsenv
git clone git@github.com:TaiSakuma/WorkBookMet
git checkout hats_2014_mar_fnal
cd ..
cmsRun WorkBookMet/copyPickMerge_cfg.py \
  inputFiles=/store/data/Run2012C/MinimumBias/AOD/22Jan2013-v1/20001/0434D6A7-FB73-E211-9655-003048F01140.root,/store/data/Run2012C/MinimumBias/AOD/22Jan2013-v1/20001/0C8412FC-BD73-E211-B041-003048F1C9DA.root \
  outputFile=ZeroBias_Run2012C_22Jan2013-v1_AOD.root \
  certFile=WorkBookMet/Cert_20140114_01_200491_JSON.txt \
  triggerConditions=HLT_ZeroBias_\* \
  maxEvents=2057 # <- chosen so that the number of events becomes 1000 after the trigger conditions
mv ZeroBias_Run2012C_22Jan2013-v1_AOD_numEvent2057.root ZeroBias_Run2012C_22Jan2013-v1_AOD_numEvent1000.root
cp ZeroBias_Run2012C_22Jan2013-v1_AOD_numEvent1000.root /eos/uscms/store/user/cmsdas/2014/MET

cmsRun WorkBookMet/copyPickMerge_cfg.py \
  inputFiles=/store/data/Run2012C/MET/AOD/22Jan2013-v1/10000/00C31E85-BE91-E211-8AEC-003048FFD720.root,/store/data/Run2012C/MET/AOD/22Jan2013-v1/10000/08537154-A091-E211-9DA7-003048678E92.root \
  outputFile=MET120_Run2012C_22Jan2013-v1_AOD.root \
  certFile=WorkBookMet/Cert_20140114_01_200491_JSON.txt \
  triggerConditions=HLT_MET120_\* \
  maxEvents=2000
mv MET120_Run2012C_22Jan2013-v1_AOD_numEvent2000.root MET120_Run2012C_22Jan2013-v1_AOD_numEvent201.root
cp MET120_Run2012C_22Jan2013-v1_AOD_numEvent201.root /eos/uscms/store/user/cmsdas/2014/MET

Exercises

The AOD sample files contain data collected in the run 200491. Find answers to the following questions by using CMS web services: CADI, iCMS, TWiki, DAS, WBM, PREP, DocDB, LXR, HLT Config Browser.

When were the data collected?
What was the HLT key used in the run?
Which versions of HLT_ZeroBias_* and HLT_MET120_* were used? (What were the '*'s in the HLT paths in this run?)
What were the pre-scale columns for these HLT paths? Which pre-scale column was actually used?
In which PDs are the data triggered by these HLT paths stored? How do you locate the data? Are the data currently stored at Fermilab?
Can you find the same information from the command line?
Hands-on 3: Access MET objects in AOD

Here, we will access pfMet, the particle-flow MET, which is the negative of the vector sum of the pT of all reconstructed particle-flow candidates in the event.

\[{\hspace{0.1ex}\not\mathrel{\hspace{-0.1ex}\vec{E}}}_\textrm{T}^\textrm{raw}=-\sum_{i\in \textrm{all}} \vec{p}_{\textrm{T}i} \]

We sometimes call it raw pfMet to distinguish it from corrected pfMet, which we will produce later.

Note: In CMSSW_5_3_X, which was used to produce the sample files, pfMet is the negative of the vector sum of ET instead of pT. We switched from ET to pT at CMSSW_6_0_X.

We will use the python script printMet_AOD.py. This script uses FWLite.Python, introduced at WorkBookFWLitePython. We will use this script to access pfMet in the AOD sample file introduced above. Execute the script:

./WorkBookMet/printMet_AOD.py --inputPath=/eos/uscms/store/user/cmsdas/2014/MET/ZeroBias_Run2012C_22Jan2013-v1_AOD_numEvent1000.root

The script will print event contents as follows:

run lumi event met.pt met.px met.py met.phi
200491 212 255053363 7.186 -1.839 6.946 104.83
200491 212 255247499 13.971 13.532 3.472 14.39
200491 212 255280539 32.086 9.459 30.660 72.85
200491 212 255343555 31.557 27.650 15.210 28.81
200491 212 255501891 23.574 19.861 12.700 32.60
200491 212 255649379 1.585 1.414 -0.716 -26.84
200491 212 255881363 18.739 13.624 -12.866 -43.36
200491 212 255988467 23.115 22.483 5.366 13.42
200491 212 256203515 13.184 -7.333 -10.956 -123.80
200491 212 256238275 27.825 -16.356 -22.511 -126.00
200491 212 256326827 31.791 -12.983 -29.019 -114.10
200491 212 256323987 20.441 19.904 4.654 13.16
200491 212 256351491 16.441 -2.489 -16.252 -98.71
200491 212 256352459 22.466 -19.864 -10.495 -152.15
200491 212 255259541 13.226 -7.177 -11.109 -122.87
200491 212 255304645 27.764 27.641 -2.611 -5.40

run, lumi, and event are the run number, the luminosity section, and the event id. met.pt is the magnitude of MET. MET is, in principle, a vector on the px-py plane. However, we often casually call its magnitude MET as well. met.px and met.py are the x and y components of MET, respectively. met.phi is the azimuth of MET.

Exercises

Modify WorkBookMet/printMet_AOD.py and make histograms of met.px, met.py, and met.pt. Why are they distributed in the way they are? What are possible distribution functions? Are met.px and met.py Gaussian with mean zero and the same variance? If so, what will be the distribution of met.pt?

To create a file in python:

outFile = ROOT.TFile('filename.root', 'RECREATE')

To create a histogram in python:

hist = ROOT.TH1D("hist_name", "hist_title", 80, -40, 40)

To fill a histogram:

hist.Fill(value)

To save the histogram in the file and close the file:

outFile.Write()
outFile.Close()

Hands-on 4: Apply MET filters

Large MET is caused not only by interesting physics processes in collisions, such as the production of invisible particles. In fact, large MET more often has uninteresting causes, such as detector noise, cosmic rays, and beam-halo particles. MET with uninteresting causes is called false MET, anomalous MET, or fake MET. For an accurate reconstruction of MET, it is, therefore, not sufficient to reconstruct all visible particles produced in collisions. We developed several algorithms to identify false MET. These algorithms, for example, use the timing, pulse shape, and topology of signals. After the identified false MET is removed, the agreement of the MET spectrum with MC, in which causes of false MET are not explicitly simulated, will typically improve significantly.

Here, we will apply a set of MET filters. We will use the python configuration file met_filters_cfg.py. In the configuration file, we use the Global Tag FT_R_53_V21::All at L25:

process.GlobalTag.globaltag = cms.string("FT_R_53_V21::All")

The Global Tag specifies a set of alignment and calibration constants stored in the database to be used in cmsRun.
If you use the configuration file met_filters_cfg.py other than for this HATS, you might need to look up the correct Global Tag.

The MET group recommends a set of MET filters to be used for physics analyses. The recommendation is documented on the MET group's twiki.

Run the filters on a sample AOD file:

cmsRun WorkBookMet/met_filters_cfg.py inputFiles=file:/eos/uscms/store/user/cmsdas/2014/MET/MET120_Run2012C_22Jan2013-v1_AOD_numEvent201.root

Exercises

The sample AOD file contains 201 events triggered by HLT_MET120_*.
How many events are rejected by which filters? How many events are not rejected by any filter?
Look at events with cmsShow:

cmsShow /eos/uscms/store/user/cmsdas/2014/MET/MET120_Run2012C_22Jan2013-v1_AOD_numEvent201.root

Hands-on 5: Apply MET corrections

The MET objects accessed above are called raw MET. The raw MET is systematically different from the true MET, i.e., the transverse momentum carried by invisible particles, for many reasons, including the non-compensating nature of the calorimeters and detector misalignment. To make MET a better estimate of the true MET, we will apply MET corrections with the python configuration file corrMet_cfg.py:

cmsRun ./WorkBookMet/corrMet_cfg.py inputFiles=file:/eos/uscms/store/user/cmsdas/2014/MET/TTJets_AODSIM_532_numEvent100.root

This will produce a file corrMet.root, which contains various MET collections, each with a different combination of the MET corrections, as summarized in the table:

module name      description
pfMetT0rt        pfMET + Type-0RT
pfMetT0rtT1      pfMET + Type-0RT + Type-I
pfMetT0pc        pfMET + Type-0PC
pfMetT0pcT1      pfMET + Type-0PC + Type-I
pfMetT0rtTxy     pfMET + Type-0RT + xy-Shift
pfMetT0rtT1Txy   pfMET + Type-0RT + Type-I + xy-Shift
pfMetT0pcTxy     pfMET + Type-0PC + xy-Shift
pfMetT0pcT1Txy   pfMET + Type-0PC + Type-I + xy-Shift
pfMetT1          pfMET + Type-I
pfMetT1Txy       pfMET + Type-I + xy-Shift

The python script printMet_corrMet.py shows an example of how to access the corrected METs in corrMet.root.
./WorkBookMet/printMet_corrMet.py --inputPath=./corrMet.root This will simply print the contents as follows. run lumi event module met.pt met.px met.py met.phi 1 34734 10417901 pfMet 36.837 18.857 -31.645 -59.21 1 34734 10417901 pfMetT0rt 32.819 9.912 -31.286 -72.42 1 34734 10417901 pfMetT0rtT1 37.309 -0.433 -37.307 -90.67 1 34734 10417901 pfMetT0pc 32.296 9.728 -30.796 -72.47 1 34734 10417901 pfMetT0pcT1 36.822 -0.618 -36.816 -90.96 1 34734 10417901 pfMetT1 38.615 8.511 -37.665 -77.27 1 34734 10417901 pfMetT0rtTxy 29.873 10.417 -27.998 -69.59 1 34734 10417901 pfMetT0rtT1Txy 34.018 0.072 -34.018 -89.88 1 34734 10417901 pfMetT0pcTxy 29.349 10.233 -27.508 -69.59 1 34734 10417901 pfMetT0pcT1Txy 33.528 -0.113 -33.528 -90.19 1 34734 10417901 pfMetT1Txy 35.539 9.016 -34.377 -75.30 1 34734 10417902 pfMet 19.955 -16.211 -11.636 -144.33 1 34734 10417902 pfMetT0rt 15.390 -9.634 -12.002 -128.76 1 34734 10417902 pfMetT0rtT1 12.675 -5.521 -11.410 -115.82 1 34734 10417902 pfMetT0pc 16.339 -13.267 -9.537 -144.29 1 34734 10417902 pfMetT0pcT1 12.799 -9.153 -8.945 -135.66 1 34734 10417902 pfMetT1 16.381 -12.098 -11.044 -137.61 1 34734 10417902 pfMetT0rtTxy 14.504 -9.535 -10.929 -131.10 1 34734 10417902 pfMetT0rtT1Txy 11.672 -5.421 -10.337 -117.68 1 34734 10417902 pfMetT0pcTxy 15.653 -13.167 -8.465 -147.27 1 34734 10417902 pfMetT0pcT1Txy 11.998 -9.054 -7.873 -138.99 1 34734 10417902 pfMetT1Txy 15.601 -11.998 -9.971 -140.2 Hands-on 6: Learn MET in pile-up interactions Exercises In the Exercises in Hands-on 3, we modified WorkBookMet/printMet_AOD.py and made histograms of met.px, met.py, and met.pt. Here we study how these distributions change with the number of the pile-up interactions. Modify WorkBookMet/printMet_AOD.py further and make the same histograms of met.px, met.py, and met.pt for each of the four different ranges of the number of the reconstructed vertices: [1, 9], [10, 19], [20, 26], and [27, ∞]. 
You can make a handle for vertices as

handleVertex = Handle("std::vector<reco::Vertex>")

With this handle, the number of the reconstructed vertices can be obtained with the following piece of code:

event.getByLabel(("offlinePrimaryVertices", "", ""), handleVertex)
vertices = handleVertex.product()
nvtx = len(vertices)

-- TaiSakuma - 18 Mar 2014
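The histogram exercise in Hands-on 3 asks what distribution met.pt should follow if met.px and met.py are Gaussian. Here is a minimal pure-python sketch (not part of the official tutorial; the resolution value sigma is an arbitrary illustrative assumption) that checks the expected answer, a Rayleigh distribution:

```python
import math
import random

random.seed(42)
sigma = 10.0  # assumed (illustrative) Gaussian resolution of met.px, met.py

# If met.px and met.py are independent Gaussians with mean 0 and equal
# variance sigma^2, then met.pt = sqrt(px^2 + py^2) is Rayleigh-distributed,
# with mean sigma * sqrt(pi / 2).
n = 200_000
pt = [math.hypot(random.gauss(0.0, sigma), random.gauss(0.0, sigma))
      for _ in range(n)]

mean_pt = sum(pt) / n
expected = sigma * math.sqrt(math.pi / 2)
print("simulated mean met.pt:", round(mean_pt, 2),
      " Rayleigh prediction:", round(expected, 2))
```

The same comparison can then be made against the real met.pt histograms produced with printMet_AOD.py.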
The Fundamental Theorem of Algebra

Recall from the Properties of Polynomials page that a polynomial is a function in the form $p(x) = a_0 + a_1x + a_2x^2 + ... + a_nx^n$, and that if $a_n \neq 0$ then $\mathrm{deg} (p) = n$. We will now look at some more theorems regarding polynomials, the first of which is extremely important and is known as The Fundamental Theorem of Algebra.

Theorem 1 (The Fundamental Theorem of Algebra): If $p(x) = a_0 + a_1x + a_2x^2 + ... + a_nx^n$ is a polynomial with $\mathrm{deg} (p) = n$ and where $a_0, a_1, ..., a_n \in \mathbb{C}$ are complex coefficients of $p$, then $p$ has exactly $n$ roots (counted with multiplicity), $\lambda_1, \lambda_2, ..., \lambda_n \in \mathbb{C}$.

We will not prove The Fundamental Theorem of Algebra; however, we should note that $\mathbb{R} \subset \mathbb{C}$, and as a result, the coefficients $a_0, a_1, ..., a_n \in \mathbb{C}$ may in fact be real numbers themselves, as every real number is a complex number in the form $a + 0i$ where $a \in \mathbb{R}$.

For example, the polynomial $p(x) = x^2 - 9$ is of degree $2$, and so by The Fundamental Theorem of Algebra, this polynomial has $2$ roots. We can easily find these roots by factoring this polynomial as $p(x) = (x - 3)(x + 3)$, and so $\lambda_1 = 3$ and $\lambda_2 = -3$ are the roots of $p$.

Another example is the polynomial $q(x) = x^2 + x + 1$. Once again, this polynomial is of degree $2$, and so by The Fundamental Theorem of Algebra, this polynomial has $2$ roots. This polynomial cannot be factored nicely, so we must use the quadratic formula to find these roots, which is $\frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$, where $a$ is the coefficient on the $x^2$ term, $b$ is the coefficient on the $x$ term, and $c$ is the constant term. Therefore we have that,(1)
\begin{align} \lambda = \frac{-1 \pm \sqrt{1^2 - 4(1)(1)}}{2(1)} = \frac{-1 \pm \sqrt{-3}}{2} = -\frac{1}{2} \pm \frac{\sqrt{3}}{2} i \end{align}
Therefore $\lambda_1 = -\frac{1}{2} + \frac{\sqrt{3}}{2} i$ and $\lambda_2 = -\frac{1}{2} - \frac{\sqrt{3}}{2} i$ are the two roots of $q$. Notice that in this example, neither $\lambda_1$ nor $\lambda_2$ is real.
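The second example above can also be checked numerically. A small illustrative sketch (not part of the original page) using Python's cmath module applies the quadratic formula to $q(x) = x^2 + x + 1$ and verifies that both values are indeed roots:

```python
import cmath

# q(x) = x^2 + x + 1: apply the quadratic formula with a = b = c = 1.
a, b, c = 1, 1, 1
disc = cmath.sqrt(b * b - 4 * a * c)   # sqrt(-3) = i*sqrt(3)
lam1 = (-b + disc) / (2 * a)           # -1/2 + (sqrt(3)/2) i
lam2 = (-b - disc) / (2 * a)           # -1/2 - (sqrt(3)/2) i

# Both values should make q vanish (up to floating-point rounding).
for lam in (lam1, lam2):
    print(lam, abs(a * lam ** 2 + b * lam + c))
```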
Contraction and Dilation Transformation Operators

We will now begin to look at some more interesting aspects of matrices and vectors. One such use arises in linear transformations or linear maps. We will look more into these later on, but for now, we will outline a few common linear transformations before moving onto the more abstract topic of vector spaces.

Definition: For any vector $\vec{x} \in \mathbb{R}^n$ and any scalar $k$ such that $0 ≤ k ≤ 1$, the transformation $T: \mathbb{R}^n \to \mathbb{R}^n$ uniformly contracts all vectors $\vec{x}$ by $k$ towards the origin. If $k ≥ 1$, the transformation $T: \mathbb{R}^n \to \mathbb{R}^n$ uniformly dilates all vectors $\vec{x}$ by $k$ away from the origin. In both cases, $T(\vec{x}) = k\vec{x}$.

The following images illustrate both a contraction and a dilation transformation $T: \mathbb{R}^2 \to \mathbb{R}^2$. Contracted vectors move towards the origin, while dilated vectors move away from the origin.

In the example above, we note that for any vector $\vec{x} = (x_1, x_2)$ the following equations represent the image of the contraction/dilation:(1)
\begin{align} T(x_1, x_2) = (kx_1, kx_2) \end{align}
We can write this in matrix notation in the following manner:(2)
\begin{align} T(\vec{x}) = \begin{bmatrix} k & 0 \\ 0 & k \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = kI\vec{x} \end{align}
Thus $k$ multiplied by the identity matrix $I$ is the standard matrix for this transformation.
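Since the standard matrix of $T$ is $kI$, applying the transformation is just a matrix-vector product. A short illustrative sketch with plain Python lists (not from the original page):

```python
# Contraction/dilation T(x) = k*x realized as multiplication by the matrix k*I.
def scale_matrix(k, n):
    """Return the n x n standard matrix k*I as nested lists."""
    return [[k if i == j else 0.0 for j in range(n)] for i in range(n)]

def apply(matrix, vec):
    """Ordinary matrix-vector multiplication."""
    return [sum(row[j] * vec[j] for j in range(len(vec))) for row in matrix]

x = [4.0, -2.0]
print(apply(scale_matrix(0.5, 2), x))  # contraction toward the origin: [2.0, -1.0]
print(apply(scale_matrix(3.0, 2), x))  # dilation away from the origin: [12.0, -6.0]
```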
Algebraic Geometry Seminar Spring 2017

The seminar meets on Fridays at 2:25 pm in Van Vleck B113.

Contents

Algebraic Geometry Mailing List

Please join the AGS Mailing List to hear about upcoming seminars, lunches, and other algebraic geometry events in the department (it is possible you must be on a math department computer to use this link).

Spring 2017 Schedule

Abstracts

Sam Raskin
W-algebras and Whittaker categories

Affine W-algebras are a somewhat complicated family of (topological) associative algebras associated with a semisimple Lie algebra, quantizing functions on the algebraic loop space of Kostant's slice. They have attracted a great deal of attention because of Feigin-Frenkel's duality theorem for them, which identifies W-algebras for a Lie algebra and for its Langlands dual through a subtle construction. The purpose of this talk is threefold: 1) to introduce a "stratification" of the category of modules for the affine W-algebra, 2) to prove an analogue of Skryabin's equivalence in this setting, realizing the category of (discrete) modules over the W-algebra in a more natural way, and 3) to explain how these constructions help understand Whittaker categories in the more general setting of local geometric Langlands. These three points all rest on the same geometric observation, which provides a family of affine analogues of Bezrukavnikov-Braverman-Mirkovic. These results lead to a new understanding of the exactness properties of the quantum Drinfeld-Sokolov functor.

Nick Salter
Mapping class groups and the monodromy of some families of algebraic curves

In this talk we will be concerned with some topological questions arising in the study of families of smooth complex algebraic curves. Associated to any such family is a monodromy representation valued in the mapping class group of the underlying topological surface. The induced action on the cohomology of the fiber has been studied for decades; the more refined topological monodromy is largely unexplored.
In this talk, I will discuss some theorems concerning the topological monodromy groups of families of smooth plane curves, as well as families of curves in CP^1 x CP^1. This will involve a blend of algebraic geometry, singularity theory, and the mapping class group, particularly the Torelli subgroup. Robert Laudone The Spin-Brauer diagram algebra Schur-Weyl duality is an important result in representation theory which states that the actions of [math]\mathfrak{S}_n[/math] and [math]\mathbf{GL}(N)[/math] on [math]\mathbf{V}^{\otimes n}[/math] generate each others' commutants. Here [math]\mathfrak{S}_n[/math] is the symmetric group and [math]\mathbf{V}[/math] is the standard complex representation. In this talk, we investigate the Spin-Brauer diagram algebra, which arises from studying an analogous form of Schur-Weyl duality for the action of the spinor group on [math]\mathbf{V}^{\otimes n} \otimes \Delta[/math]. Here [math]\mathbf{V}[/math] is again the standard [math]N[/math]-dimensional complex representation of [math]{\rm Pin}(N)[/math] and [math]\Delta[/math] is the spin representation. We will give a general construction of the Spin-Brauer diagram algebra, discuss its connection to [math]{\rm End}_{{\rm Pin}(N)}(V^{\otimes n} \otimes \Delta)[/math] and time permitting we will mention some interesting properties of the algebra, in particular its cellularity. Nathan Clement Parabolic Higgs bundles and the Poincare line bundle We work with some moduli spaces of (parabolic) Higgs bundles which come in infinite families indexed by rank. I'll give some motivation for the study of parabolic Higgs bundles, but the main problem will be to describe the moduli spaces. By applying some integral transforms, most importantly the Fourier-Mukai transform associated to the Poincare line bundle, we are able to reduce the rank of the problem and eventually get a good presentation of the moduli spaces. 
One fun technique involved in the argument deals with the spectrum of a one-parameter family of linear operators. When such an operator degenerates to one that is diagonalizable with repeated eigenvalues, the spectrum of the operator admits a scheme-theoretic refinement in a certain blowup which carries more information than simply the eigenvalues with multiplicity.

Amy Huang
Equations of Kalman Varieties

Given a subspace L of a vector space V, the Kalman variety consists of all matrices on V that have a nonzero eigenvector in L. We will discuss how to apply the Kempf vanishing technique, with some more explicit constructions, to get a long exact sequence involving the coordinate ring of the Kalman variety, its normalization, and some other related varieties in characteristic zero. This long exact sequence was first conjectured by Sam in 2011. Time permitting, we will also discuss how to extract more information from the long exact sequence, including the minimal defining equations for Kalman varieties.

Jie Zhou
Gromov-Witten invariants of elliptic curves and moments of the Weierstrass P-function

I will talk about a joint work with Si Li on the computation of higher genus Gromov-Witten invariants of elliptic curves using mirror symmetry. The Gromov-Witten theory for elliptic curves was proved by Si Li, building on the works of Bershadsky-Cecotti-Ooguri-Vafa and Costello-Li, to be equivalent to a quantum field theory on the mirror elliptic curve. Taking the Feynman graph integrals as the definition of the quantum field theory, I will explain the computations of the integrals (which are closely related to moments of the Weierstrass P-function). I will also discuss the quasi-modularity and the modular completion of the integrals. The Hodge-theoretic interpretations of all of these will also be explained.

Vladimir Dokchitser
Arithmetic of hyperelliptic curves over local fields

Let C:y^2 = f(x) be a hyperelliptic curve over a local field K of odd residue characteristic.
We show how several arithmetic invariants of the curve and its Jacobian, including its potential stable reduction, Galois representation, and (in the semistable case) Tamagawa numbers, can be simply extracted from combinatorial data coming from the roots of f(x).

Laurentiu Maxim
Characteristic classes of complex hypersurfaces and multiplier ideals

I will discuss two different ways to measure the complexity of singularities of a (globally-defined) complex hypersurface. The first is derived via (Hodge-theoretic) characteristic classes of singular complex algebraic varieties, while the second is provided by the multiplier ideals. I will also point out a natural connection between these two points of view. (Joint work with Morihiko Saito and Joerg Schuermann.)

Vladimir Sotirov
Cohomology of compactified Jacobians of singular curves

Qingyuan Jiang
Categorical Plücker formula and Homological Projective Duality

The talk will be based on joint work with Prof. Conan Leung and Mr. Ying Xie (arXiv:1704.01050). We will be mainly interested in the question of how derived categories of coherent sheaves of two varieties behave under intersections, and how they are related to those of the original varieties. For the study of derived categories of linear sections of projective varieties, Kuznetsov introduced the concept of Homological Projective Duality (HPD). Since its introduction, HPD theory has become one of the most powerful frameworks in the homological study of algebraic geometry. The main result (HPD theorem) of the theory gives complete descriptions of bounded derived categories of coherent sheaves of (dual) linear sections of HP-dual varieties. For general intersections beyond linear sections, we show that results of the same type hold. More precisely, our results are twofold: i) Decomposition part.
For any two varieties $X$, $T$ with maps to projective space $\mathbb{P}$ and Lefschetz decompositions, there is a semiorthogonal decomposition of $D(X\times_{\mathbb{P}} T)$ into an ‘ambient’ part (contributions from the ambient product $X \times T$) and a ‘primitive’ part, as long as the fiber product $X\times_{\mathbb{P}} T$ has expected dimension; ii) Comparison part. If $Y$, $S$ are the respective HP-duals of $X$, $T$, then the ‘primitive’ parts of the derived categories of the two fiber products $D(X\times_{\mathbb{P}} T)$ and $D(Y \times_{\mathbb{P}} S)$ are equivalent, provided that the two pairs intersect properly. In the case when one pair of HP-dual varieties (say $(S,T)$) is given by dual linear subspaces, our method provides a more direct proof of the original HPD theorem.
@Mathphile I found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet, and neither a reason why the expression cannot be prime for odd $n$, although there are far more even cases without a known factor than odd cases.

@TheSimpliFire That's what I'm thinking about. I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it. It is really "too elementary", but I like surprises, if they're good.

It is in fact difficult; I did not understand all the details either. But the ECM method is analogous to the p-1 method, which works well when there is a factor p such that p-1 is smooth (has only small prime factors).

Brocard's problem is a problem in mathematics that asks to find integer values of n and m for which $n!+1=m^2$, where n! is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan. == Brown numbers == Pairs of the numbers (n, m) that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers: (4,5), (5,11...

$\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur for $n$ satisfying either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474.

Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, $m\ne 10^k\pm1$ for $n>10$.
If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function. The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation} Thus $n!$ has $k$ zeroes for some $n\in(4k,\infty)$. Since $m=2\cdot5^{2^k}-1\pmod{10^k}$ and $m=2\cdot16^{5^k}-1\pmod{10^k}$ have at most $k$ digits, $m^2-1$ has at most $2k$ digits by the conditions in the Corollary. The Corollary therefore follows if $n!$ has more than $2k$ digits for $n>10$. From equation $(4)$, $n!$ has at least the same number of digits as $(4k)!$.
Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\end{equation} Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$ where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation} Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation} Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Then after considerable algebra it is found that \begin{align*}f'(x)&=2x(8\pi x)^{\frac1{4x}}+\frac14(8\pi x)^{\frac1{4x}}(1-\ln(8\pi x))\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x-\frac18\ln(8\pi x)\right)>0\end{align*} for $x>0$ as $\min\{x-\frac18\ln(8\pi x)\}>0$ in the domain. Thus $f$ is monotonically increasing in $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the inequality in equation $(8)$ holds. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$ We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction - which is better) @TheSimpliFire Hey! with $4\pmod {10}$ and $0\pmod 4$ then this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have that $5m_1=2(m_2-m_1)$ which means $m_1$ is even. 
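The digit-counting step in the Corollary's proof above (that $k<n/4$ and that $n!$ has more than $2k$ digits for $n>10$, where $k$ is the number of trailing zeros of $n!$) is easy to verify numerically. A quick sketch (the upper bound 300 is an arbitrary choice):

```python
import math

def trailing_zeros(n):
    """Trailing zeros of n!, by Legendre's formula (the base-5 count)."""
    k, p = 0, 5
    while p <= n:
        k += n // p
        p *= 5
    return k

for n in range(11, 301):
    k = trailing_zeros(n)
    digits = len(str(math.factorial(n)))
    assert k < n / 4       # the bound from the geometric series
    assert digits > 2 * k  # n! has more than 2k digits for n > 10
print("verified for 11 <= n <= 300")
```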
We get $4\pmod {20}$ now :P

Yet again a conjecture! Motivated by Catalan's conjecture and a recent question of mine, I conjecture that:

For distinct, positive integers $a,b$, the only solution to this equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5)$.

It is anticipated that there will be much fewer solutions for incr...
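A brute-force scan supports the conjecture in a small window (a sketch only; the cut-off 60 is arbitrary, and of course this proves nothing beyond that window):

```python
# Search a^b - b^a = a + b over distinct positive integers a, b < 60.
solutions = [(a, b)
             for a in range(1, 60)
             for b in range(1, 60)
             if a != b and a ** b - b ** a == a + b]
print(solutions)  # (2, 5) works: 2^5 - 5^2 = 32 - 25 = 7 = 2 + 5
```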
Having had enough of the different forms of the Riemann Curvature Tensor, I finally decided to recommend the "most correct" one here. In mathematics books, we say \(\text{Operator }R:\Gamma(TM)\times\Gamma(TM)\times\Gamma(TM)\rightarrow\Gamma(TM)\) is \(\text{the curvature of connection } D\) (here the connection is a general one which may not be torsion-free), if it satisfies: $$R(X,Y)Z=D_{X}D_{Y}Z-D_{Y}D_{X}Z-D_{[X,Y]}Z.$$ By the definition of the \(\text{Affine Connection Coefficient}\), we have \(D_{e_{i}}e_{j}=\Gamma_{ij}^{k}e_{k}\) for any tangent frame field \(e_{1},\cdots,e_{m}\), thus \begin{align*}R\left(\dfrac{\partial}{\partial x^{i}},\dfrac{\partial}{\partial x^{j}}\right)\dfrac{\partial}{\partial x^{k}}&=:R_{kij}^{l}\dfrac{\partial}{\partial x^{l}}\\&=D_{\partial/\partial x^{i}}\left(\Gamma_{jk}^{l}\dfrac{\partial}{\partial x^{l}}\right)-D_{\partial/\partial x^{j}}\left(\Gamma_{ik}^{l}\dfrac{\partial}{\partial x^{l}}\right)\\&=\left(\partial_{i}\Gamma_{jk}^{l}-\partial_{j}\Gamma_{ik}^{l}\right)\dfrac{\partial}{\partial x^{l}}+\Gamma_{jk}^{m}\Gamma_{im}^{n}\delta_{n}^{l}\dfrac{\partial}{\partial x^{l}}-\Gamma_{ik}^{m}\Gamma_{jm}^{n}\delta_{n}^{l}\dfrac{\partial}{\partial x^{l}}\\&=\left(\partial_{i}\Gamma_{jk}^{l}-\partial_{j}\Gamma_{ik}^{l}+\Gamma_{jk}^{m}\Gamma_{im}^{l}-\Gamma_{ik}^{m}\Gamma_{jm}^{l}\right)\dfrac{\partial}{\partial x^{l}}\end{align*} Here I list some typical torsion-free Riemann curvature tensors for readers to compare and memorize: Lu(卢建新) & Chen(陈维恒): $$R_{\text{(陈.) }kij}^{l}=\partial_{i}\Gamma_{kj}^{l}-\partial_{j}\Gamma_{ki}^{l}+\Gamma_{kj}^{h}\Gamma_{hi}^{l}-\Gamma_{ki}^{h}\Gamma_{hj}^{l}.$$ S. Weinberg: $$R_{\text{(W.) }\mu\nu k}^{\lambda}=-R_{\text{(陈.) }\mu\nu k}^{\lambda}$$
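The coordinate formula derived above is easy to sanity-check numerically. A small Python sketch (illustrative only; it uses the well-known Christoffel symbols of the round unit 2-sphere and finite-difference derivatives) recovers \(R^{\theta}{}_{\varphi\theta\varphi}=\sin^{2}\theta\):

```python
import math

# Nonzero Christoffel symbols of the unit 2-sphere, indices (0, 1) = (theta, phi):
# Gamma^theta_{phi phi} = -sin(theta)cos(theta),  Gamma^phi_{theta phi} = cot(theta).
def gamma(l, i, j, theta):
    if l == 0 and i == 1 and j == 1:
        return -math.sin(theta) * math.cos(theta)
    if l == 1 and {i, j} == {0, 1}:
        return math.cos(theta) / math.sin(theta)
    return 0.0

def curvature(l, k, i, j, theta, h=1e-6):
    """R^l_{kij} from the coordinate formula above; derivatives are central
    finite differences (nothing depends on phi, so d/dphi = 0)."""
    def d(idx, f):
        if idx == 1:
            return 0.0
        return (f(theta + h) - f(theta - h)) / (2 * h)

    r = d(i, lambda t: gamma(l, j, k, t)) - d(j, lambda t: gamma(l, i, k, t))
    for m in range(2):
        r += gamma(m, j, k, theta) * gamma(l, i, m, theta)
        r -= gamma(m, i, k, theta) * gamma(l, j, m, theta)
    return r

theta = 0.8
print(curvature(0, 1, 0, 1, theta), math.sin(theta) ** 2)  # these should agree
```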
Background

When labels are shown with feynmp, they extend beyond the box of the graph. One can use \fmfframe to allocate an invisible frame to accommodate these, but it seems like you need to manually adjust all of the parameters.

Question

Is there an easy way of automatically placing one's Feynman graphs in an appropriate frame? Specifically, I would like to be able to use graphs naturally in equations with minimal manual intervention. The following example works, but has the following issues:

I use \raisebox{-0.5\height}{} to vertically center the graph, but this does not quite work properly. How do I fix this?

One can use \fmfframe(<left>,<top>)(<right>,<bottom>){} to "pad" the graph, but the only way I can see to get these values is to manually adjust them (using a frame with \fbox so I can see what I am doing). Note: All the numbers in () are in terms of the \unitlength defined by feynmp (default is 1pt).

One ugly "feature" of \fmfframe is that it precludes blank lines in its argument (which can be useful for spacing out one's graph definitions). Any easy fix?

Here is a minimal working example:

\documentclass{minimal}
\usepackage{feynmp}
% Needed to interpret generated *.1, *.2 etc. as ps files.
\DeclareGraphicsRule{*}{mps}{*}{}

\setlength{\fboxsep}{0pt}
\begin{document}
\begin{fmffile}{fgraphs}
\begin{equation}
  5\times
  \raisebox{-0.5\height}{ % 1: center vertically -- does not quite work.
  \fbox{ % Draw frame so we can tweak the fmfframe
  \fmfframe(5,17)(20,17){ % 2: Had to manually guess these.
  \begin{fmfgraph*}(40,30) % Note that the size is given in normal parentheses
                           % instead of curly brackets in units of \unitlength
                           % (1pt by default)
    \fmfleft{i1,i2} % Define external vertices from bottom to top
    \fmfright{o1,o2}
    \fmf{fermion}{i1,v1,o1}
    \fmf{fermion}{i2,v2,o2}
    \fmf{photon,tension=0.3}{v1,v2}
    % 3: Blank lines not allowed in fmfframe!
    \fmflabel{$\vec{p}$}{i1}
    \fmflabel{$\vec{q}$}{i2}
    \fmflabel{$\vec{p}+\vec{k}$}{o1}
    \fmflabel{$\vec{p}-\vec{k}$}{o2}
  \end{fmfgraph*}
  }
  }
  }
  = 5i V_k.
\end{equation}
\end{fmffile}
\end{document}

Compile this with:

pdflatex tst
mpost fgraphs
pdflatex tst
pdflatex tst

to get:
Given that for a known mean $\mu$ and unknown variance $\sigma^2$ the normal distribution is $$X_i|\sigma^2 \sim \mathcal{N}(\mu, \sigma^2) = \frac{1}{\displaystyle\sigma\sqrt{2\pi}}\exp\left[-\displaystyle\frac{1}{2}\left(\frac{x_i - \mu}{\sigma}\right)^2\right]$$ with prior $$\sigma^2 \sim \text{Inv-}\chi^2(\nu_p, \sigma_p^2).$$ Since it is a double-parameter $\chi^2$ distribution, I used the one given below for calculations. $$\text{Inv-}\chi^2(x;\nu_p,\sigma_p^2) = \frac{(\sigma_p^2\ \nu_p/2)^{\nu_p/2}}{\Gamma(\nu_p/2)}x^{-(\nu_p/2\ \ + \ \ 1)}\exp\left[-\frac{\nu_p\sigma_p^2}{2x}\right].$$ Here's what I've done so far. I calculated the likelihood as $L(x|\mu, \sigma^2) \propto (\sigma^2)^{-n/2}\exp\left[-\displaystyle\frac{ns^2}{2\sigma^2}\right]$ and the prior as $p(\sigma^2) \propto (\sigma^2)^{-(\nu_p/2\ \ +\ \ 1)}\exp\left[-\displaystyle\frac{\nu_p\sigma_p^2}{2\sigma^2}\right]$, which should give me a posterior of $$\text{posterior} \propto (\sigma^2)^{-(\nu_p/2\ \ +\ \ 1\ \ +\ \ n/2)}\exp\left[-\displaystyle\frac{ns^2 + \nu_p\sigma_p^2}{2\sigma^2}\right],$$ which should be close to $\text{Inv-}\chi^2(\nu_p+\frac{n}{2},\frac{ns^2 + \nu_p\sigma_p^2}{2\sigma^2})$ if I'm not wrong. But what I'm asked for is $$\sigma^2 | x_1, x_2, \cdots, x_n \sim \text{Inv-}\chi^2\left(\nu_p + n, \frac{\nu_p\sigma_p^2 + ns^2}{\nu_p+n}\right),$$ where $ns^2 = \sum_i^n (x_i - \mu)^2$. I've tried multiple times, but I get to the same closed form as I've discussed, not the one I'm asked for. So I'd like to know where I am going wrong or what I am doing wrong. Also, I'm not quite comfortable with the exact signs/notations used with posterior/prior/likelihood, so if that can also be cleared up, that would be an added benefit for me.
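A numeric sanity check (not an answer from the original thread; the hyperparameter values are arbitrary illustrations) shows that the posterior kernel written above is exactly the scaled-inverse-$\chi^2$ kernel with $\nu = \nu_p + n$ and scale $\tau^2 = (\nu_p\sigma_p^2 + ns^2)/(\nu_p + n)$, since $\nu\tau^2 = \nu_p\sigma_p^2 + ns^2$ and $\nu/2 + 1 = \nu_p/2 + 1 + n/2$:

```python
import math

# Unnormalized posterior kernel from the derivation:
#   (sigma2)^{-(nu_p/2 + 1 + n/2)} * exp(-(n*s2 + nu_p*sp2) / (2*sigma2))
# Scaled-inverse-chi-square kernel with nu = nu_p + n, scale tau2:
#   (sigma2)^{-(nu/2 + 1)} * exp(-nu*tau2 / (2*sigma2))

nu_p, sp2 = 5.0, 2.0   # assumed prior hyperparameters (illustrative)
n, s2 = 10, 3.0        # assumed data summary: n*s2 = sum (x_i - mu)^2

def log_posterior_kernel(sigma2):
    return (-(nu_p / 2 + 1 + n / 2) * math.log(sigma2)
            - (n * s2 + nu_p * sp2) / (2 * sigma2))

def log_scaled_inv_chi2_kernel(sigma2, nu, tau2):
    return -(nu / 2 + 1) * math.log(sigma2) - nu * tau2 / (2 * sigma2)

nu = nu_p + n
tau2 = (nu_p * sp2 + n * s2) / (nu_p + n)
diffs = [log_posterior_kernel(x) - log_scaled_inv_chi2_kernel(x, nu, tau2)
         for x in (0.5, 1.0, 2.0, 5.0)]
print(diffs)  # a constant log-ratio: the two kernels are proportional
```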
Evaluating Limits of Sequences

We will now look at some examples of evaluating limits of sequences using various approaches and theorems that we have already looked at. More examples can be found on the Evaluating Limits of Sequences Examples 1 and Evaluating Limits of Sequences Examples 2 pages.

Example 1

Evaluate the limit $\lim_{n \to \infty} \frac{n^2 - 4}{n + 5}$. This limit should be relatively easy to evaluate since all we have to do is divide each part of the fraction by $n$ to get that:

(1) $\lim_{n \to \infty} \frac{n^2 - 4}{n + 5} = \lim_{n \to \infty} \frac{n - 4/n}{1 + 5/n} = \infty$, so the sequence diverges to infinity.

Example 2

Evaluate the limit $\lim_{n \to \infty} \frac{e^n - e^{-n}}{e^n + e^{-n}}$. Once again, we will divide each part of the fraction, but this time by $e^n$, and noting that $\lim_{n \to \infty} e^{-2n} = 0$, we get that:

(2) $\lim_{n \to \infty} \frac{e^n - e^{-n}}{e^n + e^{-n}} = \lim_{n \to \infty} \frac{1 - e^{-2n}}{1 + e^{-2n}} = \frac{1 - 0}{1 + 0} = 1$.

Example 3

Evaluate the limit $\lim_{n \to \infty} \sqrt{n^2 + n} - \sqrt{n^2 - 1}$. This time we will multiply the equation of our sequence by the conjugate of our sequence and divide by the conjugate to obtain:

(3) $\sqrt{n^2 + n} - \sqrt{n^2 - 1} = \frac{(n^2 + n) - (n^2 - 1)}{\sqrt{n^2 + n} + \sqrt{n^2 - 1}} = \frac{n + 1}{\sqrt{n^2 + n} + \sqrt{n^2 - 1}}$.

Now we will factor an $n$ out of the radicals of the denominator, divide each term by $n$, and note that $\lim_{n \to \infty} \frac{1}{n} = 0$, to get that:

(4) $\lim_{n \to \infty} \frac{n + 1}{\sqrt{n^2 + n} + \sqrt{n^2 - 1}} = \lim_{n \to \infty} \frac{1 + 1/n}{\sqrt{1 + 1/n} + \sqrt{1 - 1/n^2}} = \frac{1}{1 + 1} = \frac{1}{2}$.

Example 4

Evaluate the limit $\lim_{n \to \infty} (-1)^n \frac{n}{n^3 + 1}$. Dividing each part of the fraction by $n^3$, we get that:

(5) $\lim_{n \to \infty} (-1)^n \frac{n}{n^3 + 1} = \lim_{n \to \infty} (-1)^n \frac{1/n^2}{1 + 1/n^3} = 0$, since the absolute value of the terms is bounded above by $\frac{1/n^2}{1 + 1/n^3} \to 0$.
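The four limits above are easy to check numerically; here is a quick Python sketch (the index values are chosen ad hoc, not taken from the original page):

```python
import math

# Example 1: (n^2 - 4)/(n + 5) grows without bound (the sequence diverges)
n = 10**4
a1 = (n*n - 4) / (n + 5)

# Example 2: (e^n - e^{-n})/(e^n + e^{-n}) -> 1
n = 100
a2 = (math.exp(n) - math.exp(-n)) / (math.exp(n) + math.exp(-n))

# Example 3: sqrt(n^2 + n) - sqrt(n^2 - 1) -> 1/2
n = 10**6
a3 = math.sqrt(n*n + n) - math.sqrt(n*n - 1)

# Example 4: (-1)^n * n/(n^3 + 1) -> 0
n = 1001
a4 = (-1)**n * n / (n**3 + 1)
```

Evaluating a term at a large index is of course not a proof, but it is a useful way to catch algebra mistakes in limit computations.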
Why negative interest rates might not work
Matthew Martin, 9/04/2014 01:58:00 PM

Miles Kimball has proposed phasing out paper currency[1] so that we can eliminate liquidity traps by implementing negative interest rates. Here's why I'm uncertain whether negative interest rate policy is expansionary. To start, let's go through the standard logic: the Fisher equation relates nominal interest, real interest, and inflation: [$$]r_t=\tilde{r}_t+\pi_t[$$] where [$]r_t[$] is the nominal interest rate controlled by the Fed, [$]\tilde{r}_t[$] is the real interest rate, and [$]\pi_t[$] is the inflation rate, in period [$]t[$]. The consumption/savings tradeoff depends on the returns to saving rather than consuming, so that consumption, and therefore output and employment, is decreasing in [$]\tilde{r}_t[$]. If prices are sticky, then [$]\pi[$] will respond to policy relatively slowly, so that in the short run reducing the nominal interest rate [$]r_t[$] leads to a reduction in [$]\tilde{r}_t[$] and thus an increase in consumption and a reduction in saving. This is the liquidity effect, which is expansionary. What this analysis has left out, however, are the feedback effects of inflation on output. These happen through two additional mechanisms: the New Keynesian Phillips curve and the Euler consumption equation. We won't worry about exact analytical specifications for each; just let [$]f_t[$] describe how the household choice of [$]C_t[$] depends on the real interest rate and inflation (the consumption Euler equation), and [$]g_t[$] describe the New Keynesian Phillips curve relationship between current and expected inflation, so that we have: [update] \begin{align} r_t&=\tilde{r}_t+\pi_t\\ C_t&=f_t\left(\tilde{r}_t,\pi_{t+1}\right) \\ \pi_t&=g_t\left(\pi_{t+1}\right) \end{align} where [$]g_t[$] is an increasing function and [$]f_t[$] is decreasing in [$]\tilde{r}_t[$] but increasing in [$]\pi_{t+1}[$].[2] It is now apparent that the liquidity analysis in the preceding paragraph was incomplete--inflation is actually a free variable!
Conventional wisdom says that lowering the nominal interest rate causes inflation to increase, because we typically think of lowering the interest rate as being achieved by increasing the supply of money. But strictly from the mathematics, this is actually ambiguous--there are two paths to lower nominal interest rates: a monetary expansion that lowers real interest rates via sticky prices, or a monetary contraction that lowers expected inflation. Suppose we have a monetary expansion; then expected inflation [$]\pi_{t+1}[$] rises via the money market (as I showed here), which induces more consumption via [$]f_t[$] and raises current inflation via the Phillips curve [$]g_t[$], which further reinforces the liquidity effect by lowering the real interest rate [$]\tilde{r}_t[$] for the given nominal rate target via the Fisher equation. However, when we instead assume that lower nominal rates are achieved via monetary contraction, all of these reinforcing effects reverse signs: expected inflation falls, which reduces current inflation via [$]g_t[$], reduces current consumption via [$]f_t[$], and puts reverse pressure on the real interest rate in the Fisher equation since, holding [$]r_t[$] at the target, a decrease in [$]\pi_t[$] implies higher, not lower, real interest [$]\tilde{r}_t[$]. So despite the simplistic logic of the first paragraph, lowering the Fed funds rate can potentially be contractionary if it is associated with a decrease in monetary aggregates. For positive rates, the empirical evidence overwhelmingly suggests that lowering the Fed funds rate increases the money supply. This is not obvious from theory alone--lower nominal interest rates do reduce "money printing" in some respects, for example by reducing interest on reserves. But the net effect is pretty unambiguous, because the Fed typically engages in ample open market operations that increase base money by far more than the reduction in interest payments.
In this respect, what passes for conventional wisdom is quite wrong in saying that the Fed balance sheet doesn't matter--this neutrality is an illusion that arises from taking too many modelling shortcuts (like most New Keynesian papers, Christiano, Eichenbaum, and Rebelo (2011), for example, do not explicitly model the money market that drives their key assumptions; see footnote). But under Miles Kimball's proposal, the Fed would lower interest rates to below zero by taxing away balances of e-currency. This is a reduction in the monetary base, just like the case of IOR, and by itself would be contractionary, not expansionary. The expansionary effects of Kimball's policy depend on the assumption that households will increase consumption in response to the taxing of their cash savings, rather than letting their savings depreciate. That needn't be the case--it depends on the relative magnitudes of the income and substitution effects for real money balances. The substitution effect is what Kimball has in mind--raising the price of real money balances will induce substitution out of money and into consumption. But there's also an income effect, whereby the loss of wealth induces less consumption and more savings. Thus, negative interest rate policy can be contractionary even though positive interest rate policy is expansionary. Indeed, what Kimball has proposed amounts to a reverse Bernanke helicopter--imagine a giant vacuum flying around the country sucking money out of people's pockets. Why would we assume that this would be inflationary? [1] To be clear, this is something we should do regardless of whether we also enact Kimball's negative interest rate policy. Any business with an internet connection already has everything it needs to conduct payments electronically. Paper is costly and inefficient and should be killed. [2] See Christiano, Eichenbaum, and Rebelo (2011). The consumption Euler is equation (11), while the New Keynesian Phillips curve is equation (9).
While Christiano et al do not explicitly model the money market, the NK model is equivalent to a money-in-utility model with nominal rigidities (as in this post, but with monopolistic competition and Calvo pricing), where interest rate policy is enacted by targeting the money supply. This equivalence is invoked when Christiano et al assume the direction of causality from inflation to policy rate in equation (6). [update] What follows here is meant to provide intuition, not a formal proof. An earlier version omitted subscripts from [$]g_t, f_t[$], which was a bit misleading--these functions do have other time-dependent arguments that have been suppressed here for simplicity. For a proof that reversing the causal assumption embedded in the NK Taylor rule implies that lowering rates can be contractionary, see Schmitt-Grohé and Uribe (2012), which has also been covered by David Andolfatto here.
The only trick here is getting used to how discrete sums are turned into integrals. Suppose you let the energy be a function of momentum $p$ and position $q$. Then you can rewrite the discrete quantum partition function as $Z_{quantum}=\sum_{p,q}e^{- \beta E(p,q)},$ where the sum is over each of the $N$ positions and $N$ momenta, and the only challenge is how to find the appropriate constants for the continuum limit. This is easiest if you take the system to be in a box of length $L$ and volume $V=L^3$. For position, you want to normalize to the size of the box, i.e. $$\sum_q \rightarrow \frac1V\int d^3q$$ for each particle. For $k$, notice that the allowed wave numbers in a box are $2\pi n/L$ in each direction, so their spacing is $2\pi/L$. This tells you that the correspondence here is $$\sum_{k} \rightarrow \frac{V}{(2\pi)^3}\int d^3k$$ for each particle. Put it all together and you get $$Z_{quantum}=\sum_{p,q}e^{- \beta E(p,q)} \rightarrow \left(\frac{V}{(2\pi)^3}\frac1V\right)^N \int\!\!\int e^{- \beta E(p,q)}\, d^{3N}q\, d^{3N}k,$$ which, when you substitute $p=\hbar k$ for each of the $3N$ $k$'s and collect factors, gives you the standard expression. Note that no special classical approximation was taken here. In fact, classical statistical mechanics is, at least in my view, a misnomer, since you need to use all sorts of things like the discretization of phase space, Planck's constant, the occasional $N!$ factor to avoid the Gibbs paradox, etc., that make no sense without quantum physics. When using this to derive something like the ideal gas law, the only real classical assumption you make is that Fermi or Bose statistics can be neglected. (This claim seems to be quite disputed in the comments, I'll note, so I will give the disclaimer that this hinges on my personal and somewhat arbitrary consideration of what is considered a 'classical' limit and what is not.) edit: a bit more on the first continuum limit... Let's take a 1-d discrete system with $M$ sites.
Then $\sum_q e^{-\beta E}$ is better written as $\sum_{i=1}^M e^{-\beta E_i}$, which sums the exponential of the energy on each site. Suppose that the distance between the sites is $a$. Then $L=Ma$. Furthermore, normalizing by the number of sites (as with the $\frac1V$ above), $$\frac{1}{M}\sum_{i=1}^M =\frac{1}{L}\sum_{i=1}^M a.$$ You can probably guess what you want to do now: take $a\to 0$ while increasing the number of sites such that $L$ is constant. At this point we can rename $a$ as $dx$ and replace our sum with an integral over it, for the identification $\sum_q=\frac1L\int dx$, which when extended to three dimensions and $N$ particles gives the above result. I certainly won't pretend this is rigorous, but at the same time I think that if you think along these lines you should be able to convince yourself that it couldn't be anything otherwise. Scaling arguments like this come up all over the place, both in statistical mechanics and other areas of physics. edit2: As Peter rightly points out in the comments, one cannot expand a Hamiltonian simultaneously in the basis of $x$ and $p$, making it unclear how this classical correspondence should be carried out. The limit that we are taking is clear enough, I think. In real quantum mechanics, due to noncommutativity, each state cannot be thought of as occupying a point in phase space, but rather a probability distribution. In our limit we are assuming that these phase-space volumes are small enough to be taken as points--this is another restatement of the continuum limit above. However, one might reasonably ask for a prescription for how to expand the wavefunction in a basis that treats position and momentum equally, to take this limit. This can be done.
The tool used is the Wigner function: $$W_n(x,p)=\frac1h \int_{-\infty}^{\infty} \psi_n^*(x+y)\, \psi_n(x-y)\, e^{2ipy/\hbar}\, dy.$$ The expectation value of an operator in this formalism is $\int \hat{A}(x,p)\, W(x,p)\, dx\, dp$. So if we think of the partition function as $Z_{quantum}=\mathrm{tr}\!\left(e^{- \beta \hat{H}(p,q)}\right)$ with this formalism in mind and take the limit as before, I think this provides a plausible way to think of the relationship between the classical and quantum partition functions.
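To make the continuum expression concrete, take a single free particle, $E(p,q)=p^2/2m$: the position integral gives $V$ and the momentum integral is Gaussian, $\int e^{-\beta p^2/2m}\,dp=\sqrt{2\pi m/\beta}$ per direction. A small numerical check of that Gaussian factor (the values of $\beta$ and $m$ are arbitrary illustrative choices, not from the answer above):

```python
import math

beta, m = 2.0, 1.5   # arbitrary illustrative values

# trapezoidal quadrature of exp(-beta p^2 / 2m) over a window wide enough
# that the integrand has decayed to zero at the endpoints
P, N = 20.0, 40_000
h = 2 * P / N
xs = [-P + i * h for i in range(N + 1)]
f = [math.exp(-beta * x * x / (2 * m)) for x in xs]
numeric = h * (sum(f) - 0.5 * (f[0] + f[-1]))

analytic = math.sqrt(2 * math.pi * m / beta)   # closed-form Gaussian integral
```

With this factor per momentum direction, the continuum formula reduces the free-particle partition function to the familiar $V^N$ times powers of the thermal wavelength.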
First, notice that, if you use some properties of the trace operator, \begin{align*}p(\mu, \Sigma) &\propto \lvert \Sigma\rvert^{-((\nu_0+d)/2+1)}\exp\Big(-\frac{1}{2}\text{tr}(\Lambda_0\Sigma^{-1})-\frac{\kappa_0}{2}(\mu-\mu_0)'\Sigma^{-1}(\mu-\mu_0)\Big) \\&= \lvert \Sigma\rvert^{-((\nu_0+d)/2+1)}\exp\Big(-\frac{1}{2}\text{tr}(\Lambda_0\Sigma^{-1})-\text{tr}\left(\frac{\kappa_0}{2}(\mu-\mu_0)(\mu-\mu_0)'\Sigma^{-1}\right)\Big) \\&= \lvert \Sigma\rvert^{-((\nu_0+d)/2+1)}\exp\Big(-\frac{1}{2}\text{tr}\left\{\Lambda_0 + \kappa_0(\mu-\mu_0)(\mu-\mu_0)' \right\} \Sigma^{-1}\Big) \\&= \lvert \Sigma\rvert^{-\left(\frac{[\nu_0+1] + d+1}{2}\right)}\exp\Big[-\frac{1}{2}\text{tr}\left( \Psi \Sigma^{-1} \right)\Big] \\&= \lvert \Sigma\rvert^{-\left(\frac{\nu_1 + d+1}{2}\right)}\exp\Big[-\frac{1}{2}\text{tr}\left( \Psi \Sigma^{-1} \right)\Big]\end{align*} if you set $\Psi$ matrix to $\Lambda_0 + \kappa_0(\mu-\mu_0)(\mu-\mu_0)' $ and $\nu_1 = \nu_0 + 1$. So if you integrate with respect to $d \Sigma$, which is kind of misleading because all of the elements aren't unique, you'll get $$p(\mu) \propto \int \lvert \Sigma\rvert^{-\left(\frac{\nu_1 + d+1}{2}\right)}\exp\Big[-\frac{1}{2}\text{tr}\left( \Psi \Sigma^{-1} \right)\Big] d \Sigma = \frac{\Gamma_p(\nu_1/2) 2^{\nu_1p/2} }{\text{det}[\Psi]^{\nu_1/2}}.$$ Technically, $d \Sigma$ is short hand for integrating with respect to the diagonal and the lower-half elements of $\Sigma$. This is because you're integrating over the space of positive definite, symmetric matrices. I can skip a lot of the details because I am just recognizing the Inverse-Wishart density and then looking up the normalizing constant on Wikipedia. Edit: Just re-reading your question now and it looks like you're more interested in the marginal posterior. This is the marginal prior. You can use the same move, but I'll add the details in a little bit. Edit Number 2: There are a few extra difficulties that I wanted to mention with the recognition of the $t$ density. 
This site doesn't have many answered questions like this, so I figured I could do it all out. \begin{align*}p(\mu) &\propto \text{det}\left(\Lambda_0 + \kappa_0(\mu-\mu_0)(\mu-\mu_0)'\right)^{-\frac{\nu_1}{2}} \\&= \text{det}[\Lambda_0]^{-\frac{\nu_1}{2}} \text{det}\left(1 + \kappa_0(\mu-\mu_0)' \Lambda_0^{-1} (\mu-\mu_0) \right)^{-\frac{\nu_1}{2}} \tag{*} \\&\propto \text{det}\left(1 + \kappa_0(\mu-\mu_0)' \Lambda_0^{-1} (\mu-\mu_0) \right)^{-\frac{\nu_1}{2}} \\&= \text{det}\left(1 + \frac{1}{\nu_1 - p}(\mu-\mu_0)' \left[\frac{1}{[\nu_1 - p] \kappa_0}\Lambda_0\right]^{-1} (\mu-\mu_0) \right)^{-\frac{[\nu_1-p]+p}{2}} \\&\propto t_{\nu_1 - p}\left(\mu_0, \frac{1}{[\nu_1 - p] \kappa_0}\Lambda_0\right).\end{align*}where the line marked by the * follows by the matrix determinant lemma. You might think that it's weird you lost a few degrees of freedom, but you can convince yourself it's true if you check the variance is what it should be. The variance of a $t_{\nu_1 - p}\left(\mu_0, \frac{1}{[\nu_1 - p] \kappa_0}\Lambda_0\right)$ is $$\frac{1}{[\nu_1 - p] \kappa_0}\Lambda_0 \frac{\nu_1 - p}{\nu_1 - p-2} = \frac{1}{[\nu_1 - p-2] \kappa_0}\Lambda_0$$which is the same as the one we find with the law of total variance:$$E\left(\text{Var}\left[\mu \mid \Sigma \right]\right) = E[\Sigma]/\kappa_0= \Lambda_0 \frac{1}{\kappa_0[\nu_0 - p - 1]}.$$
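The step marked (*) relies on the matrix determinant lemma, $\det(\Lambda_0+\kappa_0 vv')=\det(\Lambda_0)\left(1+\kappa_0 v'\Lambda_0^{-1}v\right)$. Here is a quick numerical check in the $2\times 2$ case (the matrix and vector below are made-up values, worked out with hand-rolled $2\times 2$ helpers rather than any linear algebra library):

```python
def det2(m):
    # determinant of a 2x2 matrix given as nested lists
    return m[0][0]*m[1][1] - m[0][1]*m[1][0]

def inv2(m):
    # inverse of a 2x2 matrix via the adjugate formula
    d = det2(m)
    return [[ m[1][1]/d, -m[0][1]/d],
            [-m[1][0]/d,  m[0][0]/d]]

Lam = [[2.0, 0.3], [0.3, 1.5]]   # plays the role of Lambda_0
v = [0.7, -1.2]                   # plays the role of (mu - mu_0)
kap = 0.9                         # plays the role of kappa_0

# left-hand side: det(Lambda_0 + kappa_0 v v')
M = [[Lam[i][j] + kap*v[i]*v[j] for j in range(2)] for i in range(2)]
lhs = det2(M)

# right-hand side: det(Lambda_0) * (1 + kappa_0 v' Lambda_0^{-1} v)
Li = inv2(Lam)
quad = sum(v[i]*Li[i][j]*v[j] for i in range(2) for j in range(2))
rhs = det2(Lam) * (1 + kap*quad)
```

The rank-one update turns a $p\times p$ determinant into a scalar, which is exactly what lets the quadratic form in $\mu$ be pulled out and recognized as a $t$ kernel.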
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
The term functional is used in at least two different meanings. One meaning is in the mathematical topic of functional analysis, where one in particular studies linear functionals. This meaning is not relevant for the discussion on page 299 in Ref. 1. Another meaning is in the topics of calculus of variations and (classical) field theory. This is the sense that is relevant here. Since we are only discussing the classical action $S$ and not the full path integral, let us for simplicity forget about quantum aspects, such as, e.g., $\hbar$, Hilbert spaces, expectation values, etc. Let us for simplicity assume that there is only one field $q$ (which we for semantic reasons will call a position field), and that it lives in $n$ spatial dimensions and one temporal dimension. The field $q$ is then a function $q:\mathbb{R}^{n+1}\to \mathbb{R}$. There is also a velocity field $v:\mathbb{R}^{n+1}\to \mathbb{R}$. The Lagrangian is a local functional $$L[q(\cdot,t),v(\cdot,t);t]~=~\int d^nx~{\cal L}\left(q(x,t),\partial q(x,t),\partial^2q(x,t), \ldots,\partial^Nq(x,t);\right. $$$$\left. v(x,t),\partial v(x,t),\partial^2 v(x,t), \ldots,\partial^{N-1} v(x,t);x,t\right). $$ The Lagrangian density ${\cal L}$ is a function of these variables. Here $N\in\mathbb{N}$ is some finite order. Moreover, $\partial$ denotes a partial derivative wrt. the spatial variables $x$ (but not wrt. the temporal variable $t$). Time $t$ plays the role of a passive spectator parameter, i.e., we may consider a specific Cauchy surface, where time $t$ has some fixed value, and where it makes sense to specify $q(\cdot,t)$ and $v(\cdot,t)$ independently. (If we consider more than one time instant, then the $q$ and $v$ profiles are not independent. See also e.g. this and this Physics.SE post.) Weinberg is using the word functional because of the spatial dimensions.
[In particular, if Weinberg had considered just point mechanics (corresponding to $n=0$ with no spatial dimensions), then he would have called the Lagrangian $L(q(t),v(t);t)$ a function of the instantaneous position $q(t)$ and the instantaneous velocity $v(t)$.] It is important to treat $q(\cdot,t)$ (which Weinberg calls $\Psi(\cdot,t)$) and $v(\cdot,t)$ (which Weinberg calls $\dot{\Psi}(\cdot,t)$) for fixed time $t$ as two independent functions in order to make sense of the definition of the conjugate/canonical momentum $p(\cdot,t)$ (which Weinberg calls $\Pi(\cdot,t)$). The definition involves a functional/variational derivative wrt. the velocity field, cf. eq. (7.2.1) in Ref. 1, $$\tag{7.2.1} p(x,t)~:=~\frac{\delta L[q(\cdot,t),v(\cdot,t);t]}{\delta v(x,t)}.$$ Let us finally integrate over time $t$. The action $S$ (which Weinberg calls $I$) is $$\tag{7.2.3} S[q]~:=~\int dt~ \left. L[q(\cdot,t),v(\cdot,t);t]\right|_{v=\dot{q}}.$$ The corresponding Euler-Lagrange equation becomes $$\tag{7.2.2} \left.\frac{d}{dt} \left(\frac{\delta L[q(\cdot,t),v(\cdot,t);t]}{\delta v(x,t)}\right|_{v=\dot{q}}\right)~=~\left. \frac{\delta L[q(\cdot,t),v(\cdot,t);t]}{\delta q(x,t)}\right|_{v=\dot{q}}.$$ References: S. Weinberg, The Quantum Theory of Fields, Vol. 1, Section 7.2, p. 299.
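As a concrete illustration (my example, not Weinberg's; a free massless scalar field with $N=1$ is assumed), take the local functional

```latex
L[q(\cdot,t),v(\cdot,t)]
~=~\int d^n x~\left(\tfrac{1}{2}\,v(x,t)^2
  -\tfrac{1}{2}\,\partial q(x,t)\cdot\partial q(x,t)\right).
```

Then eq. (7.2.1) gives $p(x,t)=\frac{\delta L}{\delta v(x,t)}=v(x,t)$, and eq. (7.2.2) reproduces the wave equation $\ddot{q}=\partial^2 q$, since $\frac{\delta L}{\delta q(x,t)}=\partial^2 q(x,t)$ after integrating by parts.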
If a body is acted on by a force which obeys the inverse-square law, does that mean the body follows an ellipse? If so, is equating Newton's law of universal gravitation $f=GMm/r^2$ to the centripetal force $mv^2/r$ erroneous? Yes, if you are considering uniform circular motion, then you can use $$ mv^2\,r^{-1}=GMm\,r^{-2}. $$ And I imagine most textbooks try to make it clear that this is valid only in that case. However, what you want to solve is the sum of the forces: $$\sum F=F_g$$ That is, $$m\frac{d^2r}{dt^2}-mr\omega^2=-\frac{GMm}{r^2}\tag{1}$$ where the $r\omega^2=v^2/r$ term is the centripetal acceleration and $M$ the mass of the larger body (i.e., the star). If you let $L=mr^2\omega$ be the angular momentum (which is conserved here, so it is a constant in time), then (1) can become $$\frac{d^2r}{dt^2}=-\frac{GM}{r^2}+r\left(\frac{L}{mr^2}\right)^2$$ which, with some substitutions and work (see the section labeled "Kepler's First Law"), can be reduced to $$\frac{a\left(1-e^2\right)}{r}=1+e\cos\theta$$ where $a$ is the semi-major axis of the ellipse, $e$ the eccentricity, and $r,\,\theta$ are the usual polar coordinates. This is the standard equation for an ellipse. Let the interaction potential be $V=-k/r$. The eccentricity of the orbit is then given by the expression (see Goldstein, Classical Mechanics, 2nd edition, Chapter 3) $$e=\sqrt{1+\frac{2El^2}{mk^2}}$$ If $e>1$, the shape of the orbit is a hyperbola; if $e=1$, a parabola; if $e<1$, an ellipse; and if $e=0$, a circle. So, for such an attractive inverse-square force, the shape of the orbit--ellipse, hyperbola, parabola, or circle--depends on the total energy (and angular momentum) of the system. The total energy of the system is given by $$E=\frac{1}{2}m\dot{r}^2+V_{eff}$$ where $V_{eff}=V+\frac{l^2}{2mr^2}$ and $l$ is the angular momentum, which is a constant for a central-force problem.
From the virial theorem, it is known that $$\langle T\rangle=-\frac{1}{2}\langle V\rangle.$$ So $E=\frac{-k}{2r_{0}}$; if $r_{0}=\frac{l^2}{mk}$, the orbit is circular. For an elliptical orbit, $E=\frac{-k}{2a}$, where $a$ is the semi-major axis. Finally, only for uniform circular motion can you equate the centripetal force with the gravitational force.
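These relations can be cross-checked numerically. For a bound orbit in $V=-k/r$, the turning points are the roots of $E=-k/r+L^2/(2mr^2)$, and from them one can recover the semi-major axis and eccentricity and compare with $E=-k/(2a)$ and $e=\sqrt{1+2El^2/(mk^2)}$. The parameter values below are arbitrary illustrative choices:

```python
import math

m, k = 1.0, 1.0      # particle mass and force constant (V = -k/r)
E, L = -0.3, 0.8     # bound orbit requires E < 0

# turning points: roots of E r^2 + k r - L^2/(2m) = 0 (where dr/dt = 0)
disc = math.sqrt(k*k + 2*E*L*L/m)
r_min, r_max = sorted(((-k + disc) / (2*E), (-k - disc) / (2*E)))

a_geometric = 0.5 * (r_min + r_max)             # semi-major axis from turning points
e_geometric = (r_max - r_min) / (r_max + r_min)  # eccentricity from turning points

a_energy  = -k / (2*E)                           # from E = -k/(2a)
e_formula = math.sqrt(1 + 2*E*L*L/(m*k*k))       # Goldstein's expression
```

Both routes give the same $a$ and $e$, which is a handy consistency check when working central-force problems.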
What is an assumption of a statistical procedure? I am not a statistician, and so this might be wrong, but I think the word "assumption" is often used quite informally and can refer to various things. To me, an "assumption" is, strictly speaking, something that only a theoretical result (theorem) can have. When people talk about assumptions of linear regression (see here for an in-depth discussion), they are usually referring to the Gauss-Markov theorem, which says that under assumptions of uncorrelated, equal-variance, zero-mean errors, the OLS estimate is BLUE, i.e. it is unbiased and has minimum variance among linear unbiased estimators. Outside of the context of the Gauss-Markov theorem, it is not clear to me what a "regression assumption" would even mean. Similarly, the assumptions of a, say, one-sample t-test refer to the assumptions under which the $t$-statistic is $t$-distributed and hence the inference is valid. It is not called a "theorem", but it is a clear mathematical result: if the $n$ samples are normally distributed, then the $t$-statistic will follow Student's $t$-distribution with $n-1$ degrees of freedom.

Assumptions of penalized regression techniques

Consider now any regularized regression technique: ridge regression, lasso, elastic net, principal components regression, partial least squares regression, etc. The whole point of these methods is to make a biased estimate of the regression parameters, hoping to reduce the expected loss by exploiting the bias-variance trade-off. All of these methods include one or several regularization parameters, and none of them has a definite rule for selecting the values of these parameters. The optimal value is usually found via some sort of cross-validation procedure, but there are various methods of cross-validation and they can yield somewhat different results. Moreover, it is not uncommon to invoke some additional rules of thumb in addition to cross-validation.
As a result, the actual outcome $\hat \beta$ of any of these penalized regression methods is not actually fully defined by the method, but can depend on the analyst's choices. It is therefore not clear to me how there can be any theoretical optimality statement about $\hat \beta$, and so I am not sure that talking about "assumptions" (presence or absence thereof) of penalized methods such as ridge regression makes sense at all.

But what about the mathematical result that ridge regression always beats OLS?

Hoerl & Kennard (1970), in Ridge Regression: Biased Estimation for Nonorthogonal Problems, proved that there always exists a value of the regularization parameter $\lambda$ such that the ridge regression estimate of $\beta$ has a strictly smaller expected loss than the OLS estimate. It is a surprising result -- see here for some discussion -- but it only proves the existence of such a $\lambda$, which will be dataset-dependent. This result does not actually require any assumptions and is always true, but it would be strange to claim that ridge regression does not have any assumptions.

Okay, but how do I know if I can apply ridge regression or not?

I would say that even if we cannot talk of assumptions, we can talk about rules of thumb. It is well known that ridge regression tends to be most useful in the case of multiple regression with correlated predictors. It is well known that it tends to outperform OLS, often by a large margin. It will tend to outperform it even in the case of heteroscedasticity, correlated errors, or whatever else. So the simple rule of thumb says that if you have multicollinear data, ridge regression and cross-validation are a good idea. There are probably other useful rules of thumb and tricks of the trade (such as e.g. what to do with gross outliers). But they are not assumptions. Note that for OLS regression one needs some assumptions for the $p$-values to hold. In contrast, it is tricky to obtain $p$-values in ridge regression.
If this is done at all, it is done by bootstrapping or some similar approach, and again it would be hard to point at specific assumptions here because there are no mathematical guarantees.
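As a toy illustration of the multicollinearity rule of thumb (made-up data and a hand-rolled two-predictor solver, not any particular library): ridge solves $(X'X+\lambda I)\hat\beta = X'y$, and for $\lambda>0$ it shrinks the coefficient vector relative to OLS, which is where the variance reduction comes from:

```python
import math

# toy data: two highly correlated predictors (multicollinearity)
X = [(1.0, 0.9), (2.0, 2.1), (3.0, 2.9), (4.0, 4.2), (5.0, 4.8)]
y = [2.1, 4.0, 6.2, 7.9, 10.1]

def solve_ridge(X, y, lam):
    # normal equations (X'X + lam*I) beta = X'y for p = 2, solved by hand
    a = sum(x1*x1 for x1, _ in X) + lam
    b = sum(x1*x2 for x1, x2 in X)
    d = sum(x2*x2 for _, x2 in X) + lam
    g1 = sum(x1*yi for (x1, _), yi in zip(X, y))
    g2 = sum(x2*yi for (_, x2), yi in zip(X, y))
    det = a*d - b*b
    return ((d*g1 - b*g2)/det, (a*g2 - b*g1)/det)

beta_ols = solve_ridge(X, y, 0.0)    # lambda = 0 recovers OLS
beta_ridge = solve_ridge(X, y, 1.0)  # lambda > 0 shrinks the estimate
norm = lambda b: math.hypot(*b)
```

Note that nothing in this sketch required distributional assumptions about the errors; the choice of $\lambda=1.0$ here is arbitrary, standing in for whatever cross-validation would pick.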
Cauchy problem of semilinear inhomogeneous elliptic equations of Matukuma-type with multiple growth terms
1. School of Mathematics and Information Science, Shaanxi Normal University, Xi'an, Shaanxi 710062, China
2. Department of Mathematics and Computer Science, John Jay College of Criminal Justice, CUNY, New York, NY 10019, USA
3. School of Science, Hangzhou Dianzi University, Hangzhou, Zhejiang 310018, China
$ \Delta u+\sum\limits_{i = 1}^{k}K_i(|x|)u^{p_i}+\mu f(|x|) = 0, \quad x\in\mathbb{R}^n, $
Keywords: Cauchy problem, positive radial solutions, stability, sub- and super-solutions, Matukuma-type equation.
Mathematics Subject Classification: Primary: 35J10, 35J20; Secondary: 35J65.
Citation: Yunfeng Jia, Yi Li, Jianhua Wu, Hong-Kun Xu. Cauchy problem of semilinear inhomogeneous elliptic equations of Matukuma-type with multiple growth terms. Discrete & Continuous Dynamical Systems - A, doi: 10.3934/dcds.2019227
Tagged: determinant of a matrix Problem 718 Let \[ A= \begin{bmatrix} 8 & 1 & 6 \\ 3 & 5 & 7 \\ 4 & 9 & 2 \end{bmatrix} . \] Notice that $A$ contains every integer from $1$ to $9$ and that the sums of each row, column, and diagonal of $A$ are equal. Such a grid is sometimes called a magic square. Compute the determinant of $A$. Problem 686 In each of the following cases, can we conclude that $A$ is invertible? If so, find an expression for $A^{-1}$ as a linear combination of positive powers of $A$. If $A$ is not invertible, explain why not. (a) The matrix $A$ is a $3 \times 3$ matrix with eigenvalues $\lambda=i , \lambda=-i$, and $\lambda=0$. (b) The matrix $A$ is a $3 \times 3$ matrix with eigenvalues $\lambda=i , \lambda=-i$, and $\lambda=-1$. Problem 582 A square matrix $A$ is called nilpotent if some power of $A$ is the zero matrix. Namely, $A$ is nilpotent if there exists a positive integer $k$ such that $A^k=O$, where $O$ is the zero matrix. Suppose that $A$ is a nilpotent matrix and let $B$ be an invertible matrix of the same size as $A$. Is the matrix $B-A$ invertible? If so, prove it. Otherwise, give a counterexample. Problem 571 The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes. This post is Part 2 and contains Problems 4, 5, and 6. Check out Part 1 and Part 3 for the rest of the exam problems. Problem 4. Let \[\mathbf{a}_1=\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, \mathbf{a}_2=\begin{bmatrix} 2 \\ -1 \\ 4 \end{bmatrix}, \mathbf{b}=\begin{bmatrix} 0 \\ a \\ 2 \end{bmatrix}.\] Find all the values of $a$ so that the vector $\mathbf{b}$ is a linear combination of the vectors $\mathbf{a}_1$ and $\mathbf{a}_2$. Problem 5.
Find the inverse matrix of \[A=\begin{bmatrix} 0 & 0 & 2 & 0 \\ 0 &1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 \end{bmatrix}\] if it exists. If you think there is no inverse matrix of $A$, then give a reason. Problem 6. Consider the system of linear equations \begin{align*} 3x_1+2x_2&=1\\ 5x_1+3x_2&=2. \end{align*} (a) Find the coefficient matrix $A$ of the system. (b) Find the inverse matrix of the coefficient matrix $A$. (c) Using the inverse matrix of $A$, find the solution of the system. (Linear Algebra Midterm Exam 1, the Ohio State University) Problem 546 Let $A$ be an $n\times n$ matrix. The $(i, j)$ cofactor $C_{ij}$ of $A$ is defined to be \[C_{ij}=(-1)^{i+j}\det(M_{ij}),\] where $M_{ij}$ is the $(i,j)$ minor matrix obtained from $A$ by removing the $i$-th row and $j$-th column. Then consider the $n\times n$ matrix $C=(C_{ij})$, and define the $n\times n$ matrix $\Adj(A)=C^{\trans}$. The matrix $\Adj(A)$ is called the adjoint matrix of $A$. When $A$ is invertible, its inverse can be obtained by the formula \[A^{-1}=\frac{1}{\det(A)}\Adj(A).\] For each of the following matrices, determine whether it is invertible, and if so, then find the inverse matrix using the above formula. (a) $A=\begin{bmatrix} 1 & 5 & 2 \\ 0 &-1 &2 \\ 0 & 0 & 1 \end{bmatrix}$. (b) $B=\begin{bmatrix} 1 & 0 & 2 \\ 0 &1 &4 \\ 3 & 0 & 1 \end{bmatrix}$. Problem 509 Using the numbers appearing in \[\pi=3.1415926535897932384626433832795028841971693993751058209749\dots\] we construct the matrix \[A=\begin{bmatrix} 3 & 14 &1592& 65358\\ 97932& 38462643& 38& 32\\ 7950& 2& 8841& 9716\\ 939937510& 5820& 974& 9 \end{bmatrix}.\] Prove that the matrix $A$ is nonsingular. Problem 505 Let $A$ be a singular $2\times 2$ matrix such that $\tr(A)\neq -1$ and let $I$ be the $2\times 2$ identity matrix.
Then prove that the inverse matrix of the matrix $I+A$ is given by the following formula: \[(I+A)^{-1}=I-\frac{1}{1+\tr(A)}A.\] Using the formula, calculate the inverse matrix of $\begin{bmatrix} 2 & 1\\ 1& 2 \end{bmatrix}$. Problem 486 Determine whether there exists a nonsingular matrix $A$ if \[A^4=ABA^2+2A^3,\] where $B$ is the following matrix. \[B=\begin{bmatrix} -1 & 1 & -1 \\ 0 &-1 &0 \\ 2 & 1 & -4 \end{bmatrix}.\] If such a nonsingular matrix $A$ exists, find the inverse matrix $A^{-1}$. (The Ohio State University, Linear Algebra Final Exam Problem) Eigenvalues of Orthogonal Matrices Have Length 1. Every $3\times 3$ Orthogonal Matrix Has 1 as an Eigenvalue Problem 419 (a) Let $A$ be a real orthogonal $n\times n$ matrix. Prove that the length (magnitude) of each eigenvalue of $A$ is $1$. (b) Let $A$ be a real orthogonal $3\times 3$ matrix and suppose that the determinant of $A$ is $1$. Then prove that $A$ has $1$ as an eigenvalue.
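As a quick numerical sanity check of the formula in Problem 505 (this script is an illustration, not part of the original problems), one can verify $(I+A)^{-1}=I-\frac{1}{1+\tr(A)}A$ for the matrix given there, taking $A = \begin{bmatrix} 1 & 1\\ 1& 1 \end{bmatrix}$ so that $I+A = \begin{bmatrix} 2 & 1\\ 1& 2 \end{bmatrix}$:

```python
# Numerical check of (I + A)^{-1} = I - A / (1 + tr(A)) for the singular
# matrix A = [[1, 1], [1, 1]] (tr(A) = 2 != -1), so that I + A = [[2, 1], [1, 2]].
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1.0, 1.0], [1.0, 1.0]]
I = [[1.0, 0.0], [0.0, 1.0]]
tr_A = A[0][0] + A[1][1]

# Candidate inverse from the formula: I - A / (1 + tr(A))
inv = [[I[i][j] - A[i][j] / (1.0 + tr_A) for j in range(2)] for i in range(2)]

I_plus_A = [[I[i][j] + A[i][j] for j in range(2)] for i in range(2)]
product = mat_mul(I_plus_A, inv)   # should be the 2x2 identity
```

The computed inverse is $\begin{bmatrix} 2/3 & -1/3\\ -1/3 & 2/3 \end{bmatrix}$, matching the answer the formula predicts.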
Corollaries to the Equivalence of Norms in a Finite-Dimensional Linear Space Recall from the Equivalence of Norms in a Finite-Dimensional Linear Space page that if $X$ is a finite-dimensional linear space then any two norms $\| \cdot \|_1$ and $\| \cdot \|_2$ on $X$ are equivalent, that is, there exist positive numbers $C, D > 0$ such that \[ C \| x \|_1 \leq \| x \|_2 \leq D \| x \|_1 \] for all $x \in X$. We now state some important corollaries to this result. Corollary 1: Let $X$ be a normed linear space and let $M$ be a linear subspace of $X$. If $M$ is finite-dimensional then $M$ is closed. Proof: Let $M$ be a finite-dimensional subspace of $X$, say $\dim (M) = n$. Let $\{ e_1, e_2, ..., e_n \}$ be a basis of $M$. Then every $m \in M$ can be uniquely written in the form $m = a_1e_1 + a_2e_2 + ... + a_ne_n$, which we will denote by $m = (a_1, a_2, ..., a_n)$. Define a function $\| \cdot \|_{\infty} : M \to [0, \infty)$ for all $m \in M$ by $\| m \|_{\infty} = \max \{ |a_k| : 1 \leq k \leq n \}$. Then $\| \cdot \|_{\infty}$ is clearly a norm on $M$, and $M$ is complete with respect to it: if $(m_n)$ is a Cauchy sequence in $M$ then $(m_n)$ is Cauchy in each coordinate, each coordinate converges in $\mathbb{R}$, and so $(m_n)$ converges in $M$. Since all norms on $M$ are equivalent, $M$ is also complete with respect to the norm inherited from $X$. Now if a sequence in $M$ converges in $X$, it is Cauchy, hence converges in $M$, so its limit lies in $M$; therefore $M$ is closed. $\blacksquare$ Corollary 2: Let $(X, \| \cdot \|_X)$ and $(Y, \| \cdot \|_Y)$ be normed linear spaces. If $X$ is finite-dimensional and $T : X \to Y$ is a linear map then $T$ is bounded. That is, every linear operator whose domain is a finite-dimensional normed linear space is a bounded linear operator. Proof: Let $X$ be finite-dimensional with $\dim (X) = n$ and let $\{e_1, e_2, ..., e_n \}$ be a basis of $X$. For each $x \in X$ write $x = a_1e_1 + a_2e_2 + ... + a_ne_n$. Let $\| \cdot \|_{\infty} : X \to \mathbb{R}$ be defined for all $x \in X$ by $\| x \|_{\infty} = \max \{ |a_k| : 1 \leq k \leq n \}$.
Since $X$ is finite-dimensional, $\| \cdot \|_X$ and $\| \cdot \|_{\infty}$ are equivalent, and so there exist constants $C, D > 0$ such that \[ C \| x \|_{\infty} \leq \| x \|_X \leq D \| x \|_{\infty} \] for all $x \in X$. Let $M = \sum_{k=1}^{n} \| T(e_k) \|_Y$. Since $T : X \to Y$ is linear we have that for all $x \in X$: \[ \| T(x) \|_Y = \Big\| \sum_{k=1}^{n} a_k T(e_k) \Big\|_Y \leq \sum_{k=1}^{n} |a_k| \, \| T(e_k) \|_Y \leq M \| x \|_{\infty} \leq \frac{M}{C} \| x \|_X. \] Therefore $T$ is a bounded linear operator. $\blacksquare$
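To make the bound in Corollary 2 concrete, here is a small numerical illustration (the map $T$ and the dimensions are my own toy choices, not from the page): with $M = \sum_k \| T(e_k) \|_Y$, every vector satisfies $\| T(x) \|_Y \leq M \| x \|_{\infty}$.

```python
import math
import random

# Illustration of the key estimate in Corollary 2: for a linear map T on a
# finite-dimensional space, ||T(x)||_Y <= M * ||x||_inf with
# M = sum_k ||T(e_k)||_Y.  T below is an arbitrary 2x2 example.
T = [[1.0, 2.0], [3.0, 4.0]]

def apply_T(x):
    return [T[0][0] * x[0] + T[0][1] * x[1],
            T[1][0] * x[0] + T[1][1] * x[1]]

def norm_Y(v):
    # Euclidean norm on the codomain Y
    return math.sqrt(sum(c * c for c in v))

# M = ||T(e_1)||_Y + ||T(e_2)||_Y
M = norm_Y(apply_T([1.0, 0.0])) + norm_Y(apply_T([0.0, 1.0]))

random.seed(0)
samples = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(1000)]
bound_holds = all(norm_Y(apply_T(x)) <= M * max(abs(c) for c in x) + 1e-9
                  for x in samples)
```

The check passes for every sample, as the triangle inequality in the proof guarantees.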
I'm looking for strategies for evaluating the following sums for given $z$ and $m$: $$ \mathcal{S}_m(z):=\sum_{n=1}^\infty \frac{H_n^{(m)}z^n}{n}, $$ where $H_n^{(m)}$ is the generalized harmonic number, and $|z|<1$, $m \in \mathbb{R}$. Using the generating function of the generalized harmonic numbers, an equivalent problem is to evaluate the following integral: $$ \mathcal{S}_m(z) = \int_0^z \frac{\operatorname{Li}_m(t)}{t(1-t)}\,dt, $$ where $\operatorname{Li}_m(t)$ is the polylogarithm function, and $|z|<1$, $m \in \mathbb{R}$. Question 1: Is there a way to reduce the sum to Euler sum values, given by the Flajolet–Salvy paper? Question 2: Is there a way to reduce the integral to the integrals given in the Freitas paper? The case $m=1$ and $z=1/2$ was problem 1240 in Mathematics Magazine, Vol. 60, No. 2, pp. 118–119 (Apr., 1987) by Coffman, S. W.: $$ \mathcal{S}_1\left(\tfrac12\right)=\sum_{n=1}^\infty \frac{H_n}{n2^n} = \frac{\pi^2}{12}. $$ There are several solutions in the linked paper. The more interesting case $m=2$ and $z=1/2$ is listed at Harmonic Number, MathWorld, eq. $(41)$: $$ \mathcal{S}_2\left(\tfrac12\right)=\sum_{n=1}^\infty \frac{H_n^{(2)}}{n2^n} = \frac{5}{8}\zeta(3). $$ We know less about this evaluation. At MathWorld it is marked as "B. Cloitre (pers. comm., Oct. 4, 2004)". This value is also listed at pi314.net, eq. $(701)$. Unfortunately, I don't know of any paper/book reference for this value. It would be nice to see some. Question 3: How could we evaluate the case $m=2$, $z=1/2$? It would be nice to see a solution for the sum form, but solutions for the integral form are also welcome.
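Both closed forms quoted above are easy to confirm numerically; the following sketch (truncation length chosen ad hoc — the series converges like $2^{-n}$) sums the series directly and compares against $\pi^2/12$ and $\frac{5}{8}\zeta(3)$:

```python
import math

def S(m, z, terms=200):
    """Partial sum of sum_{n>=1} H_n^(m) z^n / n; converges like z^n for |z| < 1."""
    H = 0.0
    total = 0.0
    for n in range(1, terms + 1):
        H += 1.0 / n**m          # generalized harmonic number H_n^(m)
        total += H * z**n / n
    return total

s1 = S(1, 0.5)                   # should equal pi^2 / 12
s2 = S(2, 0.5)                   # should equal (5/8) * zeta(3)
zeta3 = 1.2020569031595943       # Apery's constant, zeta(3)
```

Both partial sums agree with the stated closed forms to machine precision.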
I have asked about this before and have really been struggling to identify what makes something a model parameter versus a latent variable. Looking at various threads on this topic on this site, the main distinction seems to be: latent variables are not observed but have an associated probability distribution, as they are variables, while parameters are also not observed but have no distribution associated with them; I understand this as meaning that parameters are constants with a fixed but unknown value that we are trying to find. Also, we can put priors on the parameters to represent our uncertainty about them, even though there is only one true value associated with them, or at least that is what we assume. I hope I am correct so far? Now, I have been looking at this example of Bayesian weighted linear regression from a journal paper and have been really struggling to understand what is a parameter and what is a variable: $$ y_i = \beta^T x_i + \epsilon_{y_i} $$ Here $x$ and $y$ are observed, but only $y$ is treated as a variable, i.e. has a distribution associated with it. Now, the modelling assumption is: $$ y_i \sim N(\beta^T x_i, \sigma^2/w_i) $$ So, the variance of $y$ is weighted. There are also prior distributions on $\beta$ and $w$, which are normal and gamma distributions respectively. So, the full log joint probability is given by: $$ \log p(y, w, \beta |x) = \sum_i \log p(y_i|w_i, \beta, x_i) + \log p(\beta) + \sum_i \log p(w_i) $$ Now, as I understand it, both $\beta$ and $w$ are model parameters. However, in the paper they keep referring to them as latent variables. My reasoning is that $\beta$ and $w$ are both part of the probability distribution for the variable $y$, and they are model parameters. However, the authors treat them as latent random variables. Is that correct? If so, what would be the model parameters? The paper can be found here (http://www.jting.net/pubs/2007/ting-ICRA2007.pdf).
The paper is Automatic Outlier Detection: A Bayesian Approach by Ting et al.
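For concreteness, here is a minimal sketch of the likelihood in question with the weights held fixed (all names and data below are invented; the paper itself treats $\beta$ and $w$ as random and infers them): for known $w_i$, maximizing $\sum_i \log p(y_i|w_i,\beta,x_i)$ reduces to weighted least squares.

```python
import random

# Sketch of y_i ~ N(beta * x_i, sigma^2 / w_i) in one dimension (data and
# weights are synthetic): for *fixed* weights the MLE of beta is the
# weighted least-squares estimate beta_hat = sum(w x y) / sum(w x^2).
random.seed(1)
beta_true = 2.0
data = []
for _ in range(500):
    x = random.uniform(-1.0, 1.0)
    w = random.choice([1.0, 4.0])                  # hypothetical per-point weights
    y = beta_true * x + random.gauss(0.0, 1.0 / w ** 0.5)
    data.append((x, y, w))

beta_hat = (sum(w * x * y for x, y, w in data) /
            sum(w * x * x for x, y, w in data))    # close to beta_true
```

The Bayesian version adds the prior terms $\log p(\beta)$ and $\sum_i \log p(w_i)$ on top of this, which is exactly where the parameter-versus-latent-variable question arises.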
I came across the following in Kevin Murphy's "a probabilistic perspective on machine learning". I am struggling to understand the derivation of the conditional probability for $z_i$. I tried different forms of Bayes' formula to derive this line (24.10), but have failed so far. Any hints are much appreciated. $z_i$ is the indicator variable that records which Gaussian component the data point $\mathbf{x}_i$ belongs to. (24.10) is the conditional probability of assigning data point $\mathbf{x}_i$ to component $k$, given the data point (of course), the components' weight vector $\boldsymbol{\pi}$, and the components' parameters $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$. By applying Bayes' rule, the conditional probability we are seeking is: $$ p(z_i=k|\mathbf{x}_i,\boldsymbol{\pi},\boldsymbol{\mu},\boldsymbol{\Sigma}) = \frac{p(z_i=k|\boldsymbol{\pi})\;p(\mathbf{x}_i|z_i=k,\boldsymbol{\mu}_k,\boldsymbol{\Sigma}_k)}{p(\mathbf{x}_i|\boldsymbol{\pi},\boldsymbol{\mu},\boldsymbol{\Sigma})}. $$ The denominator (as usual) does not depend on the value of $z_i$, so it's just a normalization factor. The first term of the numerator is just $\pi_k$ and the second term is the pdf of the Gaussian with parameters $(\boldsymbol{\mu}_k,\boldsymbol{\Sigma}_k)$ evaluated at $\mathbf{x}_i$, thus obtaining (24.10).
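As a concrete illustration of (24.10), a toy one-dimensional sketch (the mixture parameters below are made up) computes the numerator $\pi_k\,\mathcal{N}(\mathbf{x}_i|\boldsymbol{\mu}_k,\boldsymbol{\Sigma}_k)$ for each $k$ and then normalizes:

```python
import math

def gauss_pdf(x, mu, var):
    return math.exp(-(x - mu)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Toy 1-D mixture: weights pi, means mu, variances var (invented numbers).
pi  = [0.5, 0.5]
mu  = [0.0, 4.0]
var = [1.0, 1.0]

def responsibilities(x):
    """p(z_i = k | x_i, pi, mu, Sigma) via Bayes' rule, as in eq. (24.10)."""
    numer = [pi[k] * gauss_pdf(x, mu[k], var[k]) for k in range(2)]
    Z = sum(numer)               # the normalizer p(x_i | pi, mu, Sigma)
    return [nk / Z for nk in numer]

r = responsibilities(2.0)        # a point equidistant from both means
```

For the point $x = 2$, symmetry gives responsibilities $[0.5, 0.5]$; a point near one mean is assigned to that component with probability close to one.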
There are trivial examples that arise just by taking products of irreducible Einstein metrics with different Einstein constants. Whether an irreducible example exists is a much harder question. I do not know the answer to that, but I can think about it. (However, see below, where I answer this question.) Of course, no such irreducible example can be Riemannian, and I guess, from your statement about the Lorentzian case, it can't be Lorentzian either, though I don't see that immediately. In light of this, I guess the first case to try would be to see whether or not there could be one in dimension $4$ that is of type $(2,2)$. (See Addition 1.) Addition 1: Indeed, there is an irreducible example in dimension $4$ of type $(2,2)$. Consider the $6$-dimensional Lie group $G$ with a basis for left-invariant forms satisfying the structure equations $$\begin{aligned}d\omega^1 &= - \alpha\wedge\omega^1 - \beta \wedge \omega^2 \\d\omega^2 &= \phantom{-} \beta\wedge\omega^1 - \alpha \wedge \omega^2\\d\omega^3 &= \phantom{-} \alpha\wedge\omega^3 - \beta \wedge \omega^4 \\d\omega^4 &= \phantom{-} \beta\wedge\omega^3 + \alpha \wedge \omega^4\\d\alpha &= c\ \bigl(\omega^3\wedge\omega^1+\omega^4\wedge\omega^2\bigr)\\d\beta &= c\ \bigl(\omega^4\wedge\omega^1-\omega^3\wedge\omega^2\bigr)\\\end{aligned}$$ where $c\not=0$ is a constant. Let $H$ be the subgroup defined by $\omega^1=\omega^2=\omega^3=\omega^4=0$ and let $M^4 = G/H$. Then the above structure equations define a torsion-free affine connection $\nabla$ on $M^4$ that satisfies $$\mathrm{Ric}(\nabla) = -4c\ (\omega^1\circ\omega^3+\omega^2\circ\omega^4)$$ while both $h = \omega^1\circ\omega^3+\omega^2\circ\omega^4$ and $g = \omega^1\circ\omega^4-\omega^2\circ\omega^3$ are $\nabla$-parallel. Thus, $g + \lambda h$ is an example of the kind you want for any nonzero constant $\lambda$.
Addition 2: Upon further reflection, I realized that this example points the way to a large number of other examples, all of split type, and most having irreducibly acting holonomy as soon as the (real) dimension gets bigger than $4$. The reason is that the above example is essentially a holomorphic Riemannian surface of nonzero (but real) constant curvature regarded as a real Riemannian manifold by taking the real part of the holomorphic quadratic form, i.e., $Q = (\omega^1+i\ \omega^2)\circ(\omega^3-i\ \omega^4) = h - i\ g$. (The group $G$ in the above example turns out to just be $\mathrm{SL}(2,\mathbb{C})$ and $H\simeq \mathbb{C}^\times$ is just a Cartan subgroup.) Now, the same phenomenon occurs in all dimensions: Let $(M^{2n},Q)$ be a holomorphic Einstein manifold with a nonzero real Einstein constant and write $Q = h - i\ g$ where $h$ and $g$ are real quadratic forms. Then $(M,h)$ will be an Einstein manifold of type $(n,n)$ (with a nonzero Einstein constant) and $g$ will be parallel with respect to the Levi-Civita connection of $h$. Thus, all of the split metrics $g+\lambda\ h$ for $\lambda$ real will have the same Levi-Civita connection as $h$ and none of them will be Einstein. If the (holomorphic) holonomy of $Q$ is $\mathrm{SO}(n,\mathbb{C})$, then the holonomy of $h$ will be $\mathrm{SO}(n,\mathbb{C})\subset\mathrm{SO}(n,n)$, which acts $\mathbb{R}$-irreducibly on $\mathbb{R}^{2n}=\mathbb{C}^n$ when $n\ge3$. (Constructing examples of non-split type might be interesting$\ldots$) Addition 3: By examining the Berger classification (suitably corrected by later work), one can see that, if $M$ is simply connected and if $h$ is a non-symmetric pseudo-Riemannian metric on $M$ with irreducibly acting holonomy whose space of $\nabla$-parallel symmetric $2$-tensors has dimension greater than $1$, then the dimension of $M$ must be even, say, $2n$, and the holonomy of the metric $h$ must lie in $\mathrm{SO}(n,\mathbb{C})$. 
Of the possible irreducible holonomies in this case, only the subgroups $\mathrm{SO}(n,\mathbb{C})$ and $\mathrm{Sp}(m,\mathbb{C})\cdot\mathrm{SL}(2,\mathbb{C})$ (when $n=2m$) can occur if the metric is to be Einstein with a nonzero Einstein constant. Both of these cases do occur, and, in each case, the space of $\nabla$-parallel symmetric $2$-tensors has dimension exactly $2$. Thus, the construction outlined in Addition 2 gives all of the examples of desired pairs $(h,g)$ for which the holonomy is irreducible and that are not locally symmetric. To make sure that we get the full list of examples with irreducible holonomy, we'd have to examine Berger's list of irreducible pseudo-Riemannian symmetric spaces for other possible candidates. (I suspect that, even there, the examples will turn out to be holomorphic metrics in disguise, but I have not yet checked Berger's list to be sure.) The case in which the metric is irreducible but the holonomy is not remains, and it may not be easy to resolve with known technology.
Now showing items 1-10 of 26 Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ... $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ... Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ... Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ... Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
GolfScript (23 chars) {:^((1${\.**2^?%}+*}:f; The sentinel result for a non-existent inverse is 0. This is a simple application of Euler's theorem. \$x^{\varphi(2^n)} \equiv 1 \pmod {2^n}\$, so \$x^{-1} \equiv x^{2^{n-1}-1} \pmod {2^n}\$ Unfortunately that's rather too big an exponential to compute directly, so we have to use a loop and do modular reduction inside the loop. The iterative step is \$x^{2^k-1} = \left(x^{2^{k-1}-1}\right)^2 \times x\$ and we have a choice of base case: either k=1 with {1\:^(@{\.**2^?%}+*}:f; or k=2 with {:^((1${\.**2^?%}+*}:f; I'm working on another approach, but the sentinel is more difficult. The key observation is that we can build the inverse up bit by bit: if \$xy \equiv 1 \pmod{2^{k-1}}\$ then \$xy \in \{ 1, 1 + 2^{k-1} \} \pmod{2^k}\$, and if \$x\$ is odd we have \$x(y + xy-1) \equiv 1 \pmod{2^k}\$. (If you're not convinced, check the two cases separately). So we can start at any suitable base case and apply the transformation \$y' = (x+1)y - 1\$ a suitable number of times. Since \$0x \equiv 1 \pmod {2^0}\$ we get, by induction \$x\left(\frac{1 - (x+1)^n}{x}\right) \equiv 1 \pmod {2^n}\$ where the inverse is the sum of a geometric sequence. I've shown the derivation to avoid the rabbit-out-of-a-hat effect: given this expression, it's easy to see that (given that the bracketed value is an integer, which follows from its derivation as a sum of an integer sequence) the product on the left must be in the right equivalence class if \$x+1\$ is even. That gives the 19-char function {1$)1$?@/~)2@?%}:f; which gives correct answers for inputs which have an inverse. However, it's not so simple when \$x\$ is even. One potentially interesting option I've found is to add x&1 rather than 1. {1$.1&+1$?@/~)2@?%}:f; This seems to give sentinel values of either \$0\$ or \$2^{n-1}\$, but I haven't yet proved that. 
Taking that one step further, we can ensure a sentinel of \$0\$ for even numbers by changing the expression \$1 - (x+1)^n\$ into \$1 - 1^n\$: {1$.1&*)1$?@/~)2@?%}:f; That ties with the direct application of Euler's theorem for code length, but is going to have worse performance for large \$n\$. If we take the arguments the other way round, as n x f, we can save one character and get to 22 chars: {..1&*)2$?\/~)2@?%}:f;
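For readers who don't speak GolfScript, here is a plain Python transcription (the function name is mine) of the bit-by-bit recurrence $y' = (x+1)y - 1$ starting from $y \equiv 0 \pmod{2^0}$:

```python
def inv_mod_pow2(x, n):
    """Inverse of odd x modulo 2**n via the update y' = (x+1)*y - 1.

    Each application gains one bit of precision: if x*y == 1 (mod 2**(k-1))
    and x is odd, then x*((x+1)*y - 1) == 1 (mod 2**k).  Starting from
    y = 0 (a valid inverse mod 2**0), n applications suffice.
    """
    mod = 1 << n
    y = 0
    for _ in range(n):
        y = ((x + 1) * y - 1) % mod
    return y

# 3 * 171 = 513 = 2*256 + 1, so 171 is the inverse of 3 mod 2**8
```

For even $x$ this plain version does not produce the sentinel behavior discussed above; it is only a check of the odd case.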
While computer simulations have a wide range of uses, their goals are generally similar: find the simplest model that recreates the properties of the system under investigation. For scientific systems, this involves matching observed or experimental phenomena as precisely as necessary. But what about movie simulations? Should they match the processes they replicate so closely? Computer-generated imagery (CGI) is a common feature in both animated and live-action films. For these CGI systems, creating visuals that look right is an important task. However, Joseph Teran of the University of California, Los Angeles believes that starting from physical models is still a good idea. During his invited address at the 2018 SIAM Annual Meeting, held in Portland, Ore., this July, Teran pointed out that beginning with a mathematical system is often easier than drawing from real life. Many movies model a system’s various forces and internal structures with partial differential equations (PDEs) for this reason. While solving these equations to produce CGI is computationally expensive, such methods have become powerful tools for creating realistic visual cinematic effects. Teran and his collaborators utilized a general physical model for a wide range of movie phenomena, such as smoke, sand, snow, water and other fluids, and even clothing (see Figure 1). Teran noted that modeling everyday occurrences—such as pouring coffee or the behavior of clothes on a human body—in a convincing manner is much more difficult than simulating exotic things like exploding spaceships. The very familiarity of ordinary systems frequently exposes inconsistencies; this is in contrast to esoteric things, akin to the “uncanny valley effect” wherein attempts at realistic human faces fall short. Figure 1. The coupling of elastic cloth with seven million colored grains of sand displays dazzling flow patterns. Image courtesy of [1]. 
From Jell-O to Snow During his presentation, Teran focused on a particular model known as “elastoplasticity,” which allows animators to treat a wide range of visual phenomena with a few equations, governed by a reasonable number of parameters that can be adjusted until things look right. Elastoplastic theory describes materials that both spring back when deformed (hence, elastic) and retain some of their altered shape (plastic). For example, snow is granular on one level because it comprises small crystals that are visible to the human eye. However, a large-scale view shows that it is an elastoplastic material, as anyone who has ever made a snowball knows. How well a snowball holds together depends on its texture and “wetness,” among other things. And how well the initial handful packs together partly depends on snow’s plasticity. The crumbliness of “dry” snow—which renders it unsuitable for snowballs—also means that it blows more readily in the wind, making for easier cleanup. The varying elastoplastic properties of snow dictate whether or not it flows, thus determining the manner in which it drifts and the dangers of possible avalanches. Based on this theoretical framework, Teran and his colleagues consulted with Walt Disney Animation Studios to generate realistic-looking snow for the computer-animated film Frozen. Animators must create movie special effects without having to produce simulations of various phenomena from scratch. This is when PDEs become useful, as does reduction of the physical model’s parameters, which can be adjusted based on a film’s visual needs. The general conservation laws for mass and momentum govern these physics-based models. 
For materials, these equations are PDEs that describe changes in the materials’ velocity vector field \(v (x, t)\) and density \(\rho (x, t)\): \[ \frac{Dv}{Dt} = \frac{1}{\rho} \: \bigtriangledown\:\cdot \: \underline{\sigma} + g, \qquad \frac{D \rho}{Dt} + \rho \bigtriangledown\:\cdot \: v = 0,\] where \(\textbf{g}\) is the gravitational force vector, \(\underline{\sigma} (x, t)\) is the material’s stress tensor, and \[ \frac{D}{Dt} = \frac{\partial}{\partial t} + v \: \cdot \: \bigtriangledown\] is the convective derivative operator. The choice of stress tensor determines which specific physical system is described. To simulate snow, Teran and collaborators animated cubes of a Jell-O-like substance, adjusting elastic and plastic parameters to visualize how the cubes bounced or stuck together. These cubes—though very unlike snow in a broad sense—formed the basis of the mathematical description of snow’s flow, incorporating frictional forces between snow grains. Once a software PDE solver fast enough for animation became available after several years of development, the elastoplastic formulation reduced rendering time by a massive amount. Frames of the film that would have previously entailed 40 minutes of generation time with other methods required only three to four minutes using PDEs. Elastoplastic models are general enough to describe other materials that are useful for CGI. Teran showed his audience simulations of water interacting with sand to demonstrate how the water gradually wears away a sand barrier until it collapses. Like snow, sand is granular and exhibits small-scale behavior governed by moisture content, grain size, and frictional interactions between grains. Wet sand can also be packed (into sandcastles, for instance), though less durably than snow. Cloth and Deformed Potatoes Frozen aside, most movies do not require many snow scenes. However, the majority of animated films have human characters who sport hair and wear clothing.
These systems are both extremely familiar and very complicated to visually simulate (compare the characters’ blocky hair in early animated films like Toy Story or Shrek to modern movies like Moana that use advanced physics models). Elastoplastic models can also visually describe these phenomena, despite hair and clothing’s dissimilarity to snow or sand. Teran noted that these systems can employ the same PDE solver as snow simulations. Unfortunately, clothes are not intrinsically granular, which makes them computationally much more expensive. If one treats them as a mesh of particles, the fabric texture constrains the relative positions of those particles. In addition, the external forces change constantly as different patches of cloth come in contact with other cloth, skin, and various objects. From a modeling perspective, cloth is almost always deformed; it creases, flaps in the wind, and clings when wet (see Figure 2). Figure 2. A sphere pushes three pieces of cloth—with approximately 1.4 million triangles—back and forth. This yields complex folds and contact. Image courtesy of [1]. Teran described the geometrical process as “mapping a potato onto a deformed potato.” The system’s physics is encapsulated in the Jacobian or “deformation gradient” \({F}\) and its determinant \({J}\): \[F(X, t) = \frac{\partial \phi}{\partial X}, \qquad J(X, t) = \textrm{det}\big( F (X, t) \big).\] The conservation laws and material properties in the elastoplastic model are connected to this mapping. One can linearize the system to simplify the math during each step of the deformation. The model applies all external and internal forces acting on the cloth, mapping the motion and constraints on each grain. If the forces acting on a particle are physically unreasonable, the calculation employs constraints to restore the particle to an allowable configuration.
In other words, every particle that begins in the fabric must end in the fabric in more or less the same position relative to its neighbors; this prevents unphysical deformations to the material. The resulting elastoplastic model is amazingly powerful, allowing realistic simulation of fabrics from heavy carpets and cable-knit sweaters to light silk cloth. Teran displayed animations of sand pouring over fabric that used the elastoplastic model for both materials (see Figure 3). Figure 3. Two-way coupling between a piece of elastic cloth and seven million grains of sand. Image courtesy of [1]. As with many numerical approximations to continuous systems, the accuracy of the elastoplastic simulation depends on the coarseness of the mesh that models the fabric. If the mesh is too coarse or too fine, the simulated fabric behaves incorrectly. Similarly, the types of constraints necessary to make the fabric behave appropriately are similar to the unrealistic imaginary springs that some simulations utilize for similar tasks in other animations. Nevertheless, the ability of real physics to produce more realistic animations with lower computational costs, even when the particular physics does not naïvely seem to describe the system at hand, is intriguing. With future advances in graphics processing, animators will have an even greater ability to simulate the world, paving the way for increasingly imaginative stories. The uncanny valley effect is the unsettling feeling that people experience upon encountering faces on robots or in digital art that are very nearly human in appearance but not quite convincingly realistic. 1 References [1] Jiang, C., Gast, T., & Teran, J. (2017). Anisotropic elastoplasticity for cloth, knit and hair frictional contact. ACM Trans. Graph., 36(4), 152:1-152:14.
The Radius of Convergence of a Power Series Recall from the Power Series page that we saw that a power series will converge at its center of convergence $c$, and that it is possible that a power series can converge for all $x \in \mathbb{R}$ or on some interval centered at the center of convergence. If a power series converges on some interval centered at the center of convergence, then the distance from the center of convergence to either endpoint of that interval is known as the radius of convergence, which we more precisely define below. Definition: The Radius of Convergence, $R$, is a non-negative number or $\infty$ such that the interval of convergence for the power series $\sum_{n=0}^{\infty} a_n(x - c)^n$ is one of $[c - R, c + R]$, $(c - R, c + R)$, $[c - R, c + R)$, or $(c - R, c + R]$. For example, in the case that a power series $\sum_{n=0}^{\infty} a_n(x - c)^n$ is convergent only at $x = c$, the radius of convergence for this power series is $R = 0$ since the interval of convergence is $[c - 0, c + 0] = [c, c]$. Similarly, if the power series is convergent for all $x \in \mathbb{R}$ then the radius of convergence of the power series is $R = \infty$ since the interval of convergence is $(-\infty, \infty)$. Determining the Radius of Convergence of a Power Series We will now look at a technique for determining the radius of convergence of a power series using the Ratio Test for Positive Series. Theorem 1: If $\lim_{n \to \infty} \biggr \rvert \frac{a_{n+1}}{a_n} \biggr \rvert = L$ where $L$ is a positive real number or $L = 0$ or $L = \infty$, then the power series $\sum_{n=0}^{\infty} a_n(x - c)^n$ has a radius of convergence $R = \frac{1}{L}$, where if $L = 0$ then $R = \infty$ and if $L = \infty$ then $R = 0$. Let's now look at some examples of finding the radius of convergence of a power series. Example 1 Determine the radius of convergence of the power series $\sum_{n=0}^{\infty} \frac{1}{1 + n^3} (x + 6)^n$.
We note that the center of convergence is $c = -6$. Now we want to find the radius of convergence using the ratio test. Let $a_n = \frac{1}{1 + n^3}$. Thus:

$$L = \lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| = \lim_{n \to \infty} \frac{1 + n^3}{1 + (n+1)^3} = 1$$

So $L = 1$, and so the radius of convergence is $R = \frac{1}{L} = 1$. Example 2 Determine the radius of convergence of the power series $\sum_{n=0}^{\infty} \frac{n}{n!}(x - 3)^n$. Once again we note that the center of convergence is $c = 3$. We now want to find the radius of convergence using the ratio test once again:

$$L = \lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right| = \lim_{n \to \infty} \frac{n+1}{(n+1)!} \cdot \frac{n!}{n} = \lim_{n \to \infty} \frac{1}{n} = 0$$

Since $L = 0$ we get that our radius of convergence is $R = \infty$.
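The two limits can also be sanity-checked numerically. Below is a quick sketch (the helper and its name are mine, not from the page); evaluating the ratio at a single large $n$ is only a heuristic stand-in for the limit, but it agrees with both examples:

```python
from fractions import Fraction
from math import factorial

def ratio_limit(a, N):
    """Estimate L = lim |a_{n+1} / a_n| by evaluating the ratio at one large n."""
    return abs(a(N + 1) / a(N))

# Example 1: a_n = 1/(1 + n^3)  ->  L = 1, so R = 1/L = 1.
L1 = ratio_limit(lambda n: Fraction(1, 1 + n**3), N=10_000)

# Example 2: a_n = n/n!  ->  the ratio simplifies exactly to 1/n, so L = 0
# and R = infinity.
L2 = ratio_limit(lambda n: Fraction(n, factorial(n)), N=100)
```

Using `Fraction` keeps the arithmetic exact: `L2` comes out as exactly $1/100$ at $N = 100$, consistent with the ratio $\frac{1}{n} \to 0$.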
I've been struggling with a detail in second quantization which I really need to clear out of my head. If I expand the S-matrix of a theory with an interaction Hamiltonian $ H_I(x) $ then I have $$ S - 1 = -i \int^{+\infty}_{-\infty} d^4x\, H_I(x) + \frac{(-i)^2}{2!} \int^{+\infty}_{-\infty} \int^{+\infty}_{-\infty} d^4 x\, d^4 y\, T[ H_I(x) \, H_I(y) ] + \dots $$ where the T operator is unnecessary in the first term. Now, if I choose a $ \overline{\psi}(x)\psi(x) $ theory for example, the first term gives some contributions which I can calculate most easily by doing the expansion $ \psi(x) = \psi^+(x) + \psi^-(x) $, which is the essence of Wick's theorem. I know the contributions in this example will be trivial, but the point is that Wick's theorem is not defined for equal spacetime points. Everywhere I look, what everyone says is that since the T operator can harmlessly be inserted in the first term, when substituting $ H_I(x) $ we simply retain the operator, use Wick's theorem as if the spacetime points were different, and impose $ x = y $ at the end. But this doesn't make sense to me, since the T operators are different. Basically it is assumed implicitly that $$ \overline{\psi}(x)\,\psi(x) = T[ \overline{\psi}(x)\,\psi(x) ] $$ in the first term of $ S-1 $, but the time-ordering operators aren't even the same, since the fermionic one has a minus sign in its definition. The point is, the first term will always be $$ -i \int^{+\infty}_{-\infty} d^4x\, H_I(x) $$ regardless of whether I choose to put the T operator (without the minus sign) there or not, so for fermions I should have $\overline{\psi}(x)\,\psi(x)$ there and not $T[ \overline{\psi}(x)\,\psi(x) ]$, since they are not the same as far as I can see.
The constrained CSBM definition is as follows. There is a distribution $D = D_x \times D_{\omega|x} \times D_{r|\omega,x}$, where $r: A \to [0, 1] \cup \{ -\infty \}$ takes values in the unit interval augmented with $-\infty$, and the components of $r$ which are $-\infty$-valued for a particular instance are revealed as part of the problem instance via $\omega \in \mathcal{P} (A)$ (i.e., $\omega$ is a subset of $A$). Allowed outputs in response to a problem instance are subsets of $A$ of size $m$, denoted \[ S_m = \{ S \mid S \subseteq A, |S| = m \}. \] The regret of a particular deterministic policy $h: X \times \mathcal{P} (A) \to S_m$ is given by \[ v (h) = E_{(x, \omega) \sim D_x \times D_{\omega|x}} \left[ \max_{s \in S_m}\; E_{r \sim D_{r|\omega,x}} \left[ \sum_{a \in s} r (a) \right] - E_{r \sim D_{r|\omega,x}} \left[ \sum_{a \in h (x, \omega)} r (a) \right] \right]. \] Note that when $|A \setminus \omega| < m$, any strategy achieves zero regret (every choice incurs $-\infty$ reward); therefore the "interesting" part of the problem space is when $|A \setminus \omega| \geq m$.

There are two plausible scenarios for CSBM with partial feedback:
1. Only the total reward associated with the set of actions chosen is observed. There is actually a version of this problem at my current gig, since there is a page on the site whose elements are designed to act in concert to elicit a single response.
2. The reward associated with each action chosen is observed. For instance, advertisements are generally chosen in a set, but provide individualized feedback.

The reduction works as follows: first the highest-reward choice is chosen, then its reward is adjusted to $-\infty$, and the process is repeated until a set of size $m$ has been achieved. The individual steps are posed as constrained CSMC with partial feedback (CSMC-PF) problems, which is essentially CSBM with $m = 1$.
The forfeit filter-offset tree was designed for constrained CSMC-PF, and in particular can be used as the $\mbox{Learn}$ oracle below. The forfeit filter-offset tree has the property that it always achieves finite regret, i.e., it chooses a feasible class whenever possible. In this context, that means the subproblems will never create duplicates.

Algorithm: Partial Feedback Set Select Train
Input: Action labels $A$, (maximum) size of set to select $m \leq |A| / 2$.
Input: Constrained CSMC-PF classifier $\mbox{Learn}$.
Data: Training data set $S$.
Result: Trained classifiers $\{\Psi_n \mid n \in [1, m] \}$.
1. Define $\gamma_0 (\cdot, \cdot) = \emptyset$.
2. For each $n$ from 1 to $m$:
   1. $S_n = \emptyset$.
   2. For each example $\bigl(x, \omega, \mathcal{A}, \{ r (a) \mid a \in \mathcal{A} \}, p (\cdot | x, \omega) \bigr) \in S$ such that $|A \setminus \omega| \geq m$:
      1. Let $\gamma_{n-1} (x, \omega)$ be the predicted best set from the previous iteration.
      2. For each action $a$: if $a \in \gamma_{n-1} (x, \omega)$, $r (n, a) = -\infty$; else $r (n, a) = r (a)$.
      3. $S_n \leftarrow S_n \cup \left\{\bigl( x, \omega \cup \gamma_{n-1} (x, \omega), \mathcal{A}, \{ r (n, a) \mid a \in \mathcal{A} \}, p (\cdot | x, \omega) \bigr) \right\}$.
   3. Let $\Psi_n = \mbox{Learn} (S_n)$.
   4. Let $\gamma_n (x, \omega) = \Psi_n \bigl(x, \omega \cup \gamma_{n-1} (x, \omega)\bigr) \cup \gamma_{n-1} (x, \omega)$.
3. Return $\{ \Psi_n \mid n \in [1, m] \}$.
Comment: If $m > |A|/2$, negate all finite rewards and choose the complement of size $|A| - m$.

The Partial Feedback Set Select Train algorithm ignores training data where $|A \setminus \omega| < m$, but for such an input any strategy achieves negative infinite reward and zero regret, so learning is pointless. Similarly, the Set Select Test algorithm is not defined when $|A \setminus \omega| < l \leq m$, but for such an input any strategy achieves negative infinite reward and zero regret, so for the purposes of subsequent analysis I'll suppose that we pick an arbitrary element of $S_l$.

Algorithm: Set Select Test
Data: Class labels $A$, number of positions to populate $l \leq m \leq |A|/2$.
Data: Instance feature realization $(x, \omega)$.
Data: Trained classifiers $\{\Psi_n \mid n \in [1, m] \}$.
Result: Set-valued prediction $h^\Psi: X \times \mathcal{P} (A) \to S_l$.
1. $\gamma_0^\Psi (x, \omega) = \emptyset$.
2. For $n$ from 1 to $l$:
   1. $\gamma_n^\Psi (x, \omega) = \gamma_{n-1}^\Psi (x, \omega) \cup \Psi_n (x, \omega \cup \gamma_{n-1}^\Psi (x, \omega))$.
3. If $|\gamma_l^\Psi (x, \omega)| = l$, $h^\Psi (x, \omega) = \gamma_l^\Psi (x, \omega)$; else set $h^\Psi (x, \omega)$ to an arbitrary element of $S_l$.
Comment: If $m > |A|/2$, negate all finite rewards and choose the complement of size $|A| - l$.

My goal is to bound the average constrained CSBM regret \[ v (h) = E_{(x, \omega) \sim D_x \times D_{\omega|x}} \left[ \max_{s \in S_m}\; E_{r \sim D_{r|\omega,x}} \left[ \sum_{a \in s} r (a) \right] - E_{r \sim D_{r|\omega,x}} \left[ \sum_{a \in h (x, \omega)} r (a) \right] \right] \] in terms of the average constrained CSMC regret on the induced subproblems. Once again I'll leverage a trick from the filter tree derivation and collapse the multiple subproblems into a single subproblem by defining an induced distribution. Let $D$ be the distribution of average constrained CSBM instances $(x, \omega, r)$. Define the induced distribution $D^\prime (\Psi, l)$, where $l \leq m$, of constrained CSMC-PF instances $(x^\prime, \omega^\prime, \mathcal{A}, \{ r^\prime (a) \mid a \in \mathcal{A} \}, p^\prime (\cdot | x^\prime, \omega^\prime))$ as follows:
1. Draw $(x, \omega, r)$ from $D$.
2. Draw $n$ uniform on $[1, l]$.
3. Let $x^\prime = (x, n)$.
4. Let $\omega^\prime = \omega \cup \gamma_{n-1} (x, \omega)$.
5. For each action $a$: if $a \in \gamma_{n-1} (x, \omega)$, $r^\prime (a) = -\infty$; else $r^\prime (a) = r (a)$.
6. Let $p^\prime (\cdot | x^\prime, \omega^\prime) = p (\cdot | x, \omega)$.
7. Create the constrained CSMC-PF example $(x^\prime, \omega^\prime, \mathcal{A}, \{ r^\prime (a) \mid a \in \mathcal{A} \}, p^\prime (\cdot | x^\prime, \omega^\prime))$.
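The shape of the reduction is easy to sketch in a few lines of Python. The names, data layout, and the toy stand-in classifier below are mine (the post does not fix an API), so treat this as an illustration of the reduction's structure, not an implementation:

```python
import math

NEG_INF = -math.inf

def mask_rewards(rewards, chosen_so_far):
    """The Train loop's reward-masking step: actions already picked by earlier
    classifiers get reward -infinity, so a finite-regret learner never
    duplicates them."""
    return {a: (NEG_INF if a in chosen_so_far else r)
            for a, r in rewards.items()}

def greedy_select(x, omega, classifiers, l):
    """Set Select Test, sketched: run the first l per-position classifiers in
    order, each constrained away from everything chosen so far.  Each
    classifier maps (x, forbidden_set) to a single action."""
    chosen = set()
    for psi in classifiers[:l]:
        chosen = chosen | {psi(x, omega | chosen)}
    return chosen

# Toy stand-in for the trained classifiers: pick the feasible action with the
# highest (here, known in advance) expected reward.
rewards = {"a": 0.9, "b": 0.5, "c": 0.1}
psi = lambda x, forbidden: max(
    (a for a in rewards if a not in forbidden), key=rewards.get)

picked = greedy_select(None, set(), [psi, psi, psi], l=2)  # -> {"a", "b"}
```

Because each step forbids everything chosen earlier, a classifier that achieves finite regret on the masked subproblem can never emit a duplicate, which is exactly the property the regret bound below relies on.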
Theorem: Regret Bound
For all average constrained CSBM distributions $D$, and all average constrained CSMC classifiers $\Psi$, \[ v (h^\Psi) \leq l\, q (\Psi, l). \]
Proof: See Appendix.

The historical policy $p$ does not enter into this theorem, because it is passed as a "black box" into the CSMC-PF subproblems. Of course, when using the forfeit filter-offset tree for the subproblems, in order to bound the subproblem regret in terms of the induced importance-weighted binary regret on the sub-subproblem, the historical policy has to obey $E_{\mathcal{A} \sim p} [ 1_{a \in \mathcal{A}} | x, \omega ] > 0$ whenever $a \not\in \omega$.

The remarks from the previous version of this reduction still apply. The reduction still seems inefficient when comparing reduction to regression directly ($\sqrt{m} \sqrt{|A|} \sqrt{\epsilon_{L^2}}$) versus reduction to regression via CSMC ($m \sqrt{|A|} \sqrt{\epsilon_{L^2}}$). This suggests there is a way to reduce this problem which only leverages $\sqrt{m}$ CSMC subproblems. One possible source of inefficiency: the reduction retrieves the elements in order, whereas the objective function is indifferent to order. The regret bound indicates the following property: once I have trained to select sets of size $m$, I get a regret bound for selecting sets of size $l$ for any $l \leq m$. This suggests a variant with $m = |A|$ could be used to reduce minimax constrained CSMC-PF to average constrained CSMC-PF. I'll explore that in a future blog post.

Appendix

This is the proof of the regret bound. If $\Psi$ achieves infinite regret on the induced subproblem, the bound holds trivially. Thus consider a $\Psi$ that achieves finite regret. If $|A \setminus \omega| < l$, then $v = 0$ for any choice in $S_l$, and the bound conditionally holds trivially.
Thus consider $|A \setminus \omega| \geq l$: since $\Psi$ achieves finite regret, no duplicates are generated by any sub-classifier and $h^\Psi (x, \omega) = \gamma^\Psi_l (x, \omega)$. Consider a fixed $(x, \omega)$ with $|A \setminus \omega| \geq l$. It is convenient to talk about \[ v (h^\Psi | x, \omega, n) = \max_{s \in S_n}\; E_{r \sim D_{r|\omega,x}} \left[ \sum_{a \in s} r (a) \right] - E_{r \sim D_{r|\omega,x}} \left[ \sum_{a \in \gamma^\Psi_n (x, \omega)} r (a) \right], \] the conditional regret on this instance at the $n^\mathrm{th}$ step of Partial Feedback Set Select Test. Let \[ s^* (x, \omega, n) = \underset{s \in S_n}{\operatorname{arg\,max\,}} E_{r \sim D_{r|\omega,x}} \left[ \sum_{a \in s} r (a) \right] \] be any maximizer of the first term (which is unique up to ties); note that any $s^* (x, \omega, n)$ selects the $n$ classes with the largest conditional expected reward. The proof proceeds by demonstrating the property $v (h^\Psi | x, \omega, n) \leq \sum_{r=1}^n q_r (\Psi, l | x, \omega)$. The property holds with equality for $n = 1$.
For $n > 1$ note \[ \begin{aligned} v (h^\Psi | x, \omega, n) - v (h^\Psi | x, \omega, n - 1) &= \max_{a \in A \setminus s^* (x, \omega, n - 1)} E_{r \sim D_{r|\omega,x}} \left[ r (a) \right] \\ &\quad - E_{r \sim D_{r|\omega,x}} \left[ r \left(\Psi_n \left(x, \omega \cup \gamma^\Psi_{n-1} (x, \omega) \right) \right) \right], \\ &\leq \max_{a \in A \setminus \gamma^\Psi_{n-1} (x, \omega)} E_{r \sim D_{r|\omega,x}} \left[ r (a) \right] \\ &\quad - E_{r \sim D_{r|\omega,x}} \left[ r \left(\Psi_n \left(x, \omega \cup \gamma^\Psi_{n-1} (x, \omega) \right) \right) \right], \\ &\leq \max_{a \in A \setminus \gamma^\Psi_{n-1} (x, \omega)} E_{r \sim D_{r|\omega,x}} \left[ \tilde r_n (a) \right] \\ &\quad - E_{r \sim D_{r|\omega,x}} \left[ \tilde r_n \left(\Psi_n \left(x, \omega \cup \gamma^\Psi_{n-1} (x, \omega) \right) \right) \right], \\ &= q_n (\Psi, l | x, \omega), \end{aligned} \] where the first inequality is due to the optimality of $s^* (x, \omega, n - 1)$ and the second inequality is because $\tilde r_n (a) \leq r (a)$ with equality if $a \not\in \gamma^\Psi_{n-1} (x, \omega)$. Summing the telescoping series establishes \[ v (h^\Psi | x, \omega) = v (h^\Psi | x, \omega, l) \leq \sum_{r=1}^l q_r (\Psi, l | x, \omega) = l\, q (\Psi, l | x, \omega). \] Taking the expectation with respect to $D_x \times D_{\omega|x}$ completes the proof.
I am a first year student and a learner of hyperbolic geometry. I was wondering if you could suggest some exciting topics to research in this field (some people suggested fundamental polygons and areas of hyperbolic triangles). Any other exciting topics to suggest? I am a first year student, but I don't mind having to slog through some group theory and real/complex variables. What I'm looking for, as I have said above, is a topic that I can research. E.g. I could try to do something like the analogue of the Euler–Lagrange equations in the Euclidean plane: instead of minimising the functional defined by $\int_a^b \sqrt{1+ \Big(\frac{dy}{dx}\Big)^2} dx$, I could try to minimise the functional that defines arc length in the hyperbolic metric on $\mathbb{H}$ and see what kind of equations I get out of that. Ben
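P.S. As a sanity check on that last idea, here is a sketch of where it leads (this is standard textbook material, so it makes a good first project to reproduce on your own). The hyperbolic length functional on the upper half-plane $\mathbb{H}$ is

\[
\operatorname{Length}[y] = \int_a^b \frac{\sqrt{1 + \left(\frac{dy}{dx}\right)^2}}{y}\, dx,
\]

and since the integrand $L$ has no explicit $x$-dependence, the Beltrami identity $L - y'\,\frac{\partial L}{\partial y'} = C$ reduces the Euler–Lagrange equation to

\[
\frac{1}{y\sqrt{1 + (y')^2}} = C
\quad\Longrightarrow\quad
y^2\bigl(1 + (y')^2\bigr) = \frac{1}{C^2} =: r^2,
\]

whose solutions satisfy $(x - x_0)^2 + y^2 = r^2$: the geodesics of $\mathbb{H}$ are half-circles centered on the real axis, together with vertical lines (the degenerate case as $C \to 0$).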
The setting is basically Earth. Our planet is unfortunately on a collision course with a large asteroid. However, humans have discovered and decoded a message from an ancient, advanced alien race (our parents). These aliens have left behind technology and knowledge that could save the planet, but this technology is encased in the Earth's core (so that we may only access it when we are "ready"). If the whole world worked together for the next 5-10 years, would it be possible to detonate our way down to the outer core using our nuclear arsenal? I know that lateral pressure is a problem and if you dig a cylindrical hole it will collapse, but could we dig a cone-shaped hole? Instead of telling you it's impossible, I'll make a list of the problems you need to solve: Pressure: Pressure at Earth's center is $3.65 \times 10^{11} \ \mbox{Pa}$. Whatever enclosure you build is subject to that. If you made a solid block of diamond (one of the least compressible materials, with a bulk modulus of $4.43 \times 10^{11} \ \mbox{Pa}$), you'll find that it shrinks to $82.3\ \%$ of its size. If you make it out of "steel" (say, $\sim 1.50 \times 10^{11}\ \mbox{Pa}$), it becomes $33.9\ \%$ of its size. That's bad news, especially since your vehicle needs to be hollow. Most humans are not happy being compacted to $34\ \%$ of their volume. You can't solve this by using unobtainium, because whatever atoms unobtainium is made of need to actually exist. Bond dissociation energies are the physical limit of strength. Density and Viscosity: Earth's inner core has a density of $12.8\ \mbox{g/cm}^3$. Something like lead has $11.34\ \mbox{g/cm}^3$. Your ship is going to float, and will have to actively propel itself downward. When it reaches the inner core, it will need to move through something solid. To fix this, you need propulsion and drilling. But both are subject to the same crushing pressures mentioned above. Temperature: Temperature at the Earth's core is at least $5\,000\ ^\circ\mbox{C}$.
Because of thermodynamics, the Earth's core will try to make your vehicle the same temperature. Most humans are not so much "happy" at $5\,000\ ^\circ\mbox{C}$ as they are "charred-lumps-of-their-constituent-elements". It is worth noting that if humans have difficulty solving these challenges, your aliens will have difficulty solving them too. If your aliens can solve them, this raises some serious unintended consequences. Here's a possible solution. It is in the outer realms of possibility and undoubtedly has Problems, but perhaps another worldbuilding question could help fix them: Make a large, very long steel rod, and hollow out many small interior regions. Suspend your vehicle in a vacuum in the foremost region. Similarly, put nuclear warheads in the rear regions. The outer hull can compress, leaving inner components unharmed. After sinking through the mantle normally, the rear regions of the device successively detonate, pushing the device deeper (this is an inverted Orion nuclear pulse drive, with all attendant problems). Count on the sacrificial outer hull to absorb heat and pressure for long enough to get to the center. Speculative/imaginary/magic tech that would make for easier solutions (use with caution):
- Force fields
- Neutronium
- Teleportation of matter
- Teleportation of energy (heat especially)
- Reactionless drive
- Arbitrary adjustment of magnetism of nearby materials
- Universe editing
- Asking your bloody aliens to come up with a nicer plan and stop being ruddy showoffs already.

No. On this scale, the Earth is not solid and rigid. It's more like extremely hot jello, with a thin and weak crust; a layer of hot floppy jello, the "mantle"; a liquid outer core (actually molten iron) that's about 1,400 miles thick; and an inner core of iron about 750 miles in radius. Films and TV programmes that show journeys to the centre of the Earth are exceptionally scientifically inaccurate, even by Hollywood standards.
A "cone-shaped hole" isn't possible; the Earth will just flow to fill it in once you get down a hundred miles or so. No, there isn't anything strong enough to brace the hole with. The only way to retrieve something from the Earth's core is to dismantle the planet, which will do more damage than any asteroid hit. I would agree with the NO answer already given. For comparison: the deepest humans have ever dug is only a little over 12 kilometers, and these are drilling shafts much less than a meter in diameter. Also consider that blowing a hole in the earth with all of our nuclear weapons to reach the core would most likely make the earth just as lifeless as the possible asteroid impact. David J. Stevenson has proposed a method to reach the Earth's core. It requires a nuclear device of only a few megatons to crack open the crust. The planetary mission vehicle descends using a large mass of about one million tons of molten iron to sink down to the core. This journey should take roughly one week. The real technical problems your inner-earthonauts need to solve are how to survive the temperatures and pressures imposed on their vehicle during the descent for, at least, one week. We can safely assume there will be a human-habitable base where the ancient alien technology is stored, so once they get there it is plain sailing. But Stevenson has solved the technical problems of reaching the Earth's core, and has the numbers to prove it. So this problem has already been solved. His probe isn't manned; getting humans down there safely still remains to be solved. Possibly an extremely strong and highly refrigerated capsule needs to be built. Hopefully someone else on Worldbuilding SE has the answer. I'll approach this problem from a different angle than I've seen in the current answers. The radius of the Earth is about 3959 miles. A cone-shaped hole (assuming a 1/10 ratio of base to height) will have an opening almost 400 miles across at the surface.
Even if the composition of the earth were "only" dirt and rock, you would have to move 163,000,000 cubic miles of material during the excavation. Using nukes can break that material up for you, but you are still going to have to move that material out of the hole. The material excavated by this project could be put into 271,666 piles, each larger than Mount Everest. Changing the ratio of the cone to 1/100 would result in a much smaller number, but you would still be talking about moving many multiples of Mount Everest. Please note that pit mines generally use much more gradual slopes (actually wider than they are deep), which would result in a continent-wide hole at the surface. As another comparison, the amount of material moved is about half the volume of the entirety of the world's oceans. TL;DR: Yes, but not the way you thought: the aliens left their message in the form of a punch card that can be read with a neutrino beam. Neutrinos are elementary particles that interact only weakly with other particles of matter and can therefore travel through the earth. Because it is so advanced, the alien civilisation could prepare a material that stops neutrinos and embed a "punch card" made of this material at the centre of the earth. It is already possible to produce "neutrino beams" and to detect them, so we can imagine reading the "punch card" by emitting an intense neutrino beam towards the center from one side of the earth and reading it from the other side. Because we generate a very intense beam, it is easily distinguished from the universe's noise. Considering that about 1,000 above-ground nuclear tests were conducted between 1946 and 1964 by the superpowers, huge numbers of underground tests followed, and nations not subscribing to the treaty conducted many more, the fact that the earth is still here and not noticeably different should dispel any notion about the power of even fusion devices for excavation purposes of such magnitude.
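The excavation figures quoted above are easy to verify; here is a quick sketch of the arithmetic (all lengths in miles, assuming the 1/10 base-to-height cone and taking the answer's 271,666-pile figure as given):

```python
import math

depth = 3959                   # Earth's radius: digging surface-to-center
base_diameter = depth / 10     # the assumed 1/10 base-to-height ratio
volume = math.pi * (base_diameter / 2) ** 2 * depth / 3  # cone volume
piles = volume / 271_666       # cubic miles per Everest-sized pile
# volume comes out near 1.63e8 cubic miles; each pile is roughly 600 cubic miles
```

The cone-volume formula $V = \tfrac{1}{3}\pi r^2 h$ reproduces the answer's 163,000,000 cubic miles to within rounding.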
One large volcanic explosion subsumes the power of many fusion devices (see the article on Krakatoa in Wikipedia, e.g.). Add to this that once (or if) you get through the crust you hit magma. Underneath this impenetrable barrier, what you might find is purely theoretical. Perhaps more disheartening is the fact that nuclear weapons shot at an asteroid would have little to no chance of affecting it. The reaction is quite momentary, and in the vacuum of space it does nothing other than get very bright and very hot for an instant. There's no surrounding matter to create a blast effect. There's little chance of intercepting something at an aggregate velocity of perhaps 60,000 mph with any chance of timing the reaction properly. Perhaps if, as in the movies, you could bore a (very) deep hole in the thing and detonate the device there, the thermal shock would either fracture it or at least eject enough matter to alter its course a bit. But the odds of landing on a 40,000 mph object with almost no gravity and then conducting a drilling operation difficult even on earth are, to put it mildly, not encouraging. So, supposing we have alien directions that fix the location of the artifact (it's not moving relative to a location on the crust), perhaps the solution would be to drill with the intent of causing an eruption, using the pressure of the core to push out through a weakened mantle, saving us from drilling all the distance to the artifact (perhaps), thereby ejecting the nearly indestructible artifact, and recovering the item from the ejecta. Let's hope the artifact isn't buried under New York City.... Obviously, the environmental consequences would be catastrophic, if we can't engineer for them. I'm not proposing solutions to such engineering problems here - I don't have any. This is just a post to suggest a new line of reasoning if someone would like to follow up on it. Maybe the aliens put it under a mid-oceanic rift? If you knew where it was, maybe this would be better.
Disassemble all those nuclear weapons and build power plants. Make a giant focusing magnet so that the magnetic forces are aimed at the alien device. Using all the electricity these power plants could muster, maybe they could pull it up. Throw in some graphene for good measure. At least with my idea you wouldn't have to deal with crazy amounts of heat and pressure.
Group Homomorphisms Definition: Let $(G, \cdot)$ and $(H, *)$ be two groups. A Homomorphism between these groups is a function $f : G \to H$ such that for all $x, y \in G$ we have that $f(x \cdot y) = f(x) * f(y)$. Definition: Let $(G, \cdot)$ and $(H, *)$ be two groups. Then: 1) A Monomorphism from $G$ to $H$ is a homomorphism $f : G \to H$ that is injective. 2) An Epimorphism from $G$ to $H$ is a homomorphism $f : G \to H$ that is surjective. 3) An Isomorphism from $G$ to $H$ is a homomorphism $f : G \to H$ that is bijective. We will now look at some examples of group homomorphisms. Example 1 Let $(G, \cdot)$ and $(H, *)$ be groups. The direct product of $G$ and $H$ is the group consisting of the set $G \times H$ with the operation defined for all $(g_1, h_1), (g_2, h_2) \in G \times H$ by:

$$(g_1, h_1)(g_2, h_2) = (g_1 \cdot g_2, h_1 * h_2)$$

Let $\pi_1 : G \times H \to G$ and $\pi_2 : G \times H \to H$ be defined by:

$$\pi_1(g, h) = g \quad \text{and} \quad \pi_2(g, h) = h$$

Then $\pi_1$ is a homomorphism from $G \times H$ to $G$, and $\pi_2$ is a homomorphism from $G \times H$ to $H$. Example 2 For $n \geq 3$ consider the symmetric group $(S_n, \circ)$, whose operation is function composition, and the group $(\mathbb{Z}_2, +)$. Define a function $f : S_n \to \mathbb{Z}_2$ for all $\sigma \in S_n$ by:

$$f(\sigma) = \begin{cases} 0 & \text{if } \sigma \text{ is an even permutation} \\ 1 & \text{if } \sigma \text{ is an odd permutation} \end{cases}$$

We will show that $f$ is a group homomorphism between $(S_n, \circ)$ and $(\mathbb{Z}_2, +)$. Let $\sigma, \tau \in S_n$. Then either: both $\sigma$ and $\tau$ are even permutations; both $\sigma$ and $\tau$ are odd permutations; or, one of $\sigma$ or $\tau$ is an even permutation and the other is an odd permutation. Case 1: Suppose that $\sigma$ and $\tau$ are even permutations. Then $\sigma$ and $\tau$ can both be written as a product of an even number of transpositions, so $f(\sigma) + f(\tau) = 0 + 0 = 0$. But also $\sigma \circ \tau$ can be written as a product of an even number of transpositions. Therefore $f(\sigma \circ \tau) = 0$. Case 2: Suppose that $\sigma$ and $\tau$ are odd permutations.
Then $\sigma$ and $\tau$ can both be written as a product of an odd number of transpositions, so $f(\sigma) + f(\tau) = 1 + 1 = 0 \pmod 2$. But also $\sigma \circ \tau$ can be written as a product of an even number of transpositions. Therefore $f(\sigma \circ \tau) = 0$. Case 3: Without loss of generality assume that $\sigma$ is an even permutation and $\tau$ is an odd permutation. Then $f(\sigma) = 0$ and $f(\tau) = 1$. So $f(\sigma) + f(\tau) = 0 + 1 = 1$. But also $\sigma \circ \tau$ can be written as a product of an odd number of transpositions. Therefore $f(\sigma \circ \tau) = 1$. In all three cases we see that for all $\sigma, \tau \in S_n$:

$$f(\sigma \circ \tau) = f(\sigma) + f(\tau)$$

So $f$ is a homomorphism between $(S_n, \circ)$ and $(\mathbb{Z}_2, +)$. Of course $f$ is not an isomorphism: $f$ is not injective, since for $n \geq 3$ we have $\mid S_n \mid \geq 6$ while $\mid \mathbb{Z}_2 \mid = 2$, and a function from a finite set to a strictly smaller finite set cannot be injective.
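The three-case argument can be checked exhaustively on $S_3$ by brute force. In the sketch below (function names are mine), parity is computed by counting inversions, which is equivalent to the transposition count used above:

```python
from itertools import permutations

def parity(perm):
    """Parity of a permutation of (0, ..., k-1): 0 if even, 1 if odd,
    computed by counting inversions."""
    k = len(perm)
    inversions = sum(1 for i in range(k) for j in range(i + 1, k)
                     if perm[i] > perm[j])
    return inversions % 2

def compose(s, t):
    """(s o t)(i) = s(t(i)) for permutations written as tuples."""
    return tuple(s[t[i]] for i in range(len(t)))

# f(sigma o tau) = f(sigma) + f(tau) (mod 2) holds across all of S_3.
for s in permutations(range(3)):
    for t in permutations(range(3)):
        assert parity(compose(s, t)) == (parity(s) + parity(t)) % 2
```

The nested loop checks all $6 \times 6 = 36$ pairs, covering every one of the three cases in the proof.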
Linear Independence and Dependence Examples 1 Recall from the Linear Independence and Dependence page that a set of vectors $\{ v_1, v_2, ..., v_n \}$ is said to be Linearly Independent in $V$ if the vector equation $a_1v_1 + a_2v_2 + ... + a_nv_n = 0$ implies that $a_1 = a_2 = ... = a_n = 0$, that is, the zero vector is uniquely expressed as a linear combination of the vectors in $\{ v_1, v_2, ..., v_n \}$ with the coefficients all being zero. If a set of vectors $\{ v_1, v_2, ..., v_n \}$ is not linearly independent then we say the set is Linearly Dependent, and there exist scalars $a_1, a_2, ..., a_n \in \mathbb{F}$, not all zero, such that $a_1v_1 + a_2v_2 + ... + a_nv_n = 0$. We will now look at some more examples regarding the linear independence / dependence of a set of vectors. Example 1 Consider the set of vectors $\left \{ \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}, \begin{bmatrix} 0 & -1\\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 1 & 0\\ 0 & 0 \end{bmatrix} \right \}$ from the vector space $M_{22}$ of $2 \times 2$ matrices. Determine if this set is linearly independent or linearly dependent. We first consider the following vector equation:

$$a_1 \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} + a_2 \begin{bmatrix} 0 & -1\\ 0 & 0 \end{bmatrix} + a_3 \begin{bmatrix} 1 & 0\\ 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0\\ 0 & 0 \end{bmatrix}$$

This equation holds if and only if $a_1 + a_3 = 0$, $-a_2 = 0$, and $a_1 = 0$, from which we deduce that the only set of scalars for which the above vector equation is true is $a_1 = a_2 = a_3 = 0$. We can verify this by showing that the matrix representing this system, $A = \begin{bmatrix}1 & 0 & 1\\ 0 & -1 & 0\\ 1 & 0 & 0 \end{bmatrix}$, is invertible: by cofactor expansion along the third row, $\mathrm{det} A = 1 \cdot \begin{vmatrix}0 & 1\\ -1 & 0 \end{vmatrix} = 1$, and so the homogeneous system $\begin{bmatrix} 1 & 0 & 1\\ 0 & -1 & 0\\ 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} a_1\\ a_2\\ a_3 \end{bmatrix} = \begin{bmatrix} 0\\ 0\\ 0\end{bmatrix}$ has only the trivial solution $a_1 = a_2 = a_3 = 0$. Therefore the set is linearly independent.
Example 2 Consider the set of vectors $\{ 1 + x - x^2, 2 + x^2, x^3 - 2x^4 \}$ from the vector space $\wp_{4} (\mathbb{F})$ of polynomials of degree $4$ or less. Determine if this set is linearly independent or linearly dependent. Let's first consider the following vector equation:

$$a_1(1 + x - x^2) + a_2(2 + x^2) + a_3(x^3 - 2x^4) = 0$$

Comparing coefficients of each power of $x$, this equation holds if and only if:

$$a_1 + 2a_2 = 0, \quad a_1 = 0, \quad -a_1 + a_2 = 0, \quad a_3 = 0, \quad -2a_3 = 0$$

And the only solution to this system is $a_1 = a_2 = a_3 = 0$, and so this set of vectors is linearly independent. Example 3 Consider the set of vectors $\{ (1, 2, 0, 0), (0, 4, 4, 0), (2, 0, 3, 0) \}$ from the vector space $\mathbb{R}^4$ of 4-component standard vectors. Determine if this set is linearly independent or linearly dependent. Let's first look at the following vector equation:

$$a_1(1, 2, 0, 0) + a_2(0, 4, 4, 0) + a_3(2, 0, 3, 0) = (0, 0, 0, 0)$$

We must check to see if the corresponding homogeneous system has more than just the trivial solution:

$$a_1 + 2a_3 = 0, \quad 2a_1 + 4a_2 = 0, \quad 4a_2 + 3a_3 = 0$$

We will do this, once again, by determining if the matrix representing this system, $A = \begin{bmatrix}1 & 0 & 2\\ 2 & 4 & 0\\ 0 & 4 & 3 \end{bmatrix}$, is invertible. Using cofactor expansion along row 1 we have that $\det A = 1 \cdot \begin{vmatrix}4 & 0\\ 4 & 3 \end{vmatrix} + 2 \cdot \begin{vmatrix}2 & 4\\ 0 & 4 \end{vmatrix} = 12 + 16 = 28$, and so $A$ is invertible, which implies that the system $\begin{bmatrix}1 & 0 & 2\\ 2 & 4 & 0\\ 0 & 4 & 3 \end{bmatrix} \begin{bmatrix}a_1\\ a_2\\ a_3\end{bmatrix} = \begin{bmatrix} 0\\ 0\\ 0 \end{bmatrix}$ has only the trivial solution $a_1 = a_2 = a_3 = 0$. Therefore this set is linearly independent.
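Both invertibility checks can be reproduced numerically. Here is a short sketch (the helper function is mine; the coefficient matrices are copied from Examples 1 and 3 above):

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Example 1: the coordinates of the three 2x2 matrices as columns.
A1 = [[1, 0, 1],
      [0, -1, 0],
      [1, 0, 0]]

# Example 3: the three vectors of R^4 as columns, dropping the fourth
# coordinate (it is identically zero and contributes no constraint).
A3 = [[1, 0, 2],
      [2, 4, 0],
      [0, 4, 3]]

# det3(A1) == 1 and det3(A3) == 28: both nonzero, so in each case the
# homogeneous system has only the trivial solution.
```

A nonzero determinant confirms invertibility, hence linear independence, in both examples.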
Moscow-Beijing topology seminar Speaker Introduction 2019-10-30 Title: Stringc structures and modular invariants Speaker: 黄瑞芝 Huang Ruizhi (中科院) Abstract: Spin structures and their higher analogues play important roles in index theory and mathematical physics. In particular, Witten genera for String manifolds have nice geometric implications. As a generalization of the work of Chen-Han-Zhang (2011), we introduce general Stringc structures based on the algebraic topology of Spinc groups. It turns out that there are infinitely many distinct universal Stringc structures, indexed by the infinite cyclic group. Furthermore, we can also construct a family of so-called generalized Witten genera for Spinc manifolds, the geometric implications of which can be exploited in the presence of Stringc structures. As in the un-twisted case studied by Witten, Liu, etc., in our context there are also integrality, modularity, and vanishing theorems for effective non-abelian group actions. We will also give some applications. This is joint work with Haibao Duan and Fei Han. 2019-10-23 Title: Classification of Links with Khovanov Homology of Minimal Rank Speaker: 谢羿 Xie Yi (北京大学) Abstract: Khovanov homology is a link invariant which categorifies the Jones polynomial. It is related to different Floer theories (Heegaard Floer, monopole Floer and instanton Floer) by spectral sequences. It is also known that Khovanov homology detects the unknot, trefoils, unlinks and Hopf links. In this talk, I will give a brief introduction to Khovanov homology and use instanton Floer homology to prove that links with Khovanov homology of minimal rank must be iterated connected sums and disjoint unions of Hopf links and unknots. This is joint work with Boyu Zhang.
2019-10-16 Title: Spin generalization of Dijkgraaf-Witten TQFTs Speaker: Pavel Putrov (ICTP) Abstract: In my talk I will consider a family of spin topological quantum field theories (spin-TQFTs) that can be considered as spin versions of Dijkgraaf-Witten TQFTs. Although relatively simple, such spin-TQFTs provide non-trivial invariants of (higher-dimensional) links and manifolds, and provide examples of categorification of such quantum invariants. 2019-10-9 Title: Virtual Knots and Links and Perfect Matchings of Trivalent Graphs Speaker: Louis H. Kauffman (UIC and NSU) Abstract: In this talk we discuss a mapping from Graphenes (oriented perfect matching structures on trivalent graphs with cyclic orders at their vertices) to Virtual Knots and Links. We show how, with an appropriate set of moves on Graphenes, our mapping K: Graphenes —> Virtual Knots and Links is an equivalence of categories. This means that we can define new invariants of graphenes by using invariants of virtual knots and links, including Khovanov homology for graphenes. The equivalence K allows us to explore problems about graphs, such as coloring problems and flow problems, in terms of knot theory. The talk will introduce many examples of this correspondence and discuss how the classical coloring problems for graphs are illuminated by the topology of virtual knot theory. 2019-9-25 Title: Simplicial structures in braid/knot theory and data analytics Speaker: 吴杰 Wu Jie Abstract: In this talk, we will explain simplicial techniques on braids and links as well as their applications in data science. 2019-9-18 Title: Computation of Spin cobordism groups Speaker: 万喆彦 Wan Zheyan (YMSC) Abstract: The Adams spectral sequence is a powerful tool for computing homotopy groups of spectra. In particular, it was used for computing homotopy groups of the sphere spectrum, which are the stable homotopy groups of spheres.
By the generalized Pontryagin-Thom isomorphism, the Spin cobordism group $\Omega_d^{Spin}(X)$ is exactly the homotopy group $\pi_d(MSpin\wedge X_+)$, where $MSpin$ is the Thom spectrum and $X_+$ is the disjoint union of $X$ and a point. In my talk, I will introduce spectra and the Adams spectral sequence, and compute the Spin cobordism groups for a special topological space $X$. These are contained in my joint work with Juven Wang (arXiv: 1812.11967). 2019-9-11 Title: Word problem in certain $G_n^k$ groups Speaker: Denis Fedoseev (莫斯科国立大学) Abstract: In the present talk we discuss the word and, to a lesser extent, the conjugacy problems in certain groups $G_n^k$, which were introduced by V. Manturov. We prove that the word problem is algorithmically solvable in the groups $G_4^3$ and $G_5^4$. The talk is based on joint work with V. Manturov and A. Karpov. 2019-9-4 Title: Coxeter arrangements in three dimensions Speaker: 王军 Wang Jun (首都师范大学) Abstract: Let $\mathcal{A}$ be a finite real linear hyperplane arrangement in three dimensions. Suppose further that all the regions of $\mathcal{A}$ are isometric. The open question asked by C. Klivans and E. Swartz in 2001 is whether there exists a real central hyperplane arrangement with all regions isometric that is not a Coxeter arrangement. In 2016, R. Ehrenborg, C. Klivans and N. Reading proved that in three dimensions, $\mathcal{A}$ is necessarily a Coxeter arrangement. As it is well known that the regions of a Coxeter arrangement are isometric, this characterizes three-dimensional Coxeter arrangements precisely as those arrangements with isometric regions. It is an open question whether this suffices to characterize Coxeter arrangements in higher dimensions. In this talk, we will introduce Coxeter arrangements and the proof of R. Ehrenborg, C. Klivans and N. Reading in three dimensions.
2019-8-28 Title: Pictures instead of coefficients: the label bracket Speaker: Vassily Manturov (莫斯科国立技术大学) Abstract: We shall consider skein relations where instead of coefficients we draw small pictures. This way, starting from the Kauffman bracket formalism, we get a knot invariant (valued in pictures modulo relations) which dominates not only the Kauffman bracket but also the Kuperberg bracket, the HOMFLY polynomial, the arrow polynomial and the Kuperberg picture-valued invariant for virtual knots and knotoids. This is joint work with Alyona Akimova and Louis Kauffman. https://arxiv.org/abs/1907.06502 2019-8-21 Title: Knots with identical Khovanov homology Speaker: 柏升 Bai Sheng (北京化工大学) Abstract: This is Liam Watson's paper in Algebraic & Geometric Topology 7 (2007) 1389–1407. He gives a recipe for constructing families of distinct knots that have identical Khovanov homology and gives examples of pairs of prime knots, as well as infinite families, with this property. 2019-8-14 Title: A Discrete Morse Theory for Hypergraphs Speaker: 任世全 Ren Shiquan (清华大学) Abstract: A hypergraph can be obtained from a simplicial complex by deleting some non-maximal simplices. By [A.D. Parks and S.L. Lipscomb, Homology and hypergraph acyclicity: a combinatorial invariant for hypergraphs. Naval Surface Warfare Center, 1991], a hypergraph gives an associated simplicial complex. By [S. Bressan, J. Li, S. Ren and J. Wu, The embedded homology of hypergraphs and applications. Asian J. Math. 23 (3) (2019), 479-500], the embedded homology of a hypergraph is the homology of the infimum chain complex, or equivalently, the homology of the supremum chain complex. In this paper, we generalize the discrete Morse theory for simplicial complexes by R. Forman [R. Forman, Morse theory for cell complexes. Adv. Math. 134 (1) (1998), 90-145], [R. Forman, A user’s guide to discrete Morse theory. Séminaire Lotharingien de Combinatoire 48 Article B48c, 2002], [R.
Forman, Discrete Morse theory and the cohomology ring. Trans. Amer. Math. Soc. 354 (12) (2002), 5063-5085] and give a discrete Morse theory for hypergraphs. We use the critical simplices of the associated simplicial complex to construct a sub-chain complex of the infimum chain complex and a sub-chain complex of the supremum chain complex, then prove that the embedded homology of a hypergraph is isomorphic to the homology of the constructed chain complexes. Moreover, we define discrete Morse functions on hypergraphs and compute the embedded homology in terms of the critical hyperedges. As by-products, we derive some Morse inequalities and collapse results for hypergraphs. 2019-8-7 Title: Pre-image classes and an application in knot theory Speaker: 赵学志 Zhao Xuezhi (首都师范大学) Abstract: Let $f: X\to Y$ be a map and $B$ be a non-empty closed subset of $Y$. We consider the pre-image $f^{-1}(B)$ from the viewpoint of Nielsen fixed point theory. Can we give a lower bound for the number of components of $g^{-1}(B)$, where $g$ is an arbitrary map in the homotopy class of $f$? This is a natural generalization of root theory. We shall apply this theory to give invariants for knots and links. 2019-7-31 Title: Mysteries of approximation in $\mathbb{R}^4$ Speaker: N. G. Moshchevitin (莫斯科国立大学) Abstract: We will discuss some unsolved problems in the theory of Diophantine Approximation. It turns out that certain questions related to approximation of subspaces of $\mathbb{R}^4$ by rational subspaces remain unclear. We discuss some of them as well as related topics. 2019-7-24 Title: Quandle and its applications in knot theory Speaker: 程志云 Cheng Zhiyun (北京师范大学) Abstract: In this talk I will give a brief introduction to quandle theory. Several applications of quandles in knot theory will be discussed.
The degeneracy or non-degeneracy of these states depends on the problem's hamiltonian as well as on the system's Hilbert space, so this question doesn't really have an answer. However, the angular momentum states will typically be degenerate in the sense that multiple (linearly independent) states will have the same angular momentum characteristics. For a simple case, consider a single particle in 3D with a spherically symmetric potential. Then you can decompose the wavefunction into radial and angular parts and the latter can always be assumed to have well-defined total and $z$-component angular momenta: you can always write$$\Psi(\mathbf{r})=\psi(r)Y_{lm}(\theta,\phi).$$However, you still need to deal with the radial wavefunction, and that will typically have an infinity (either discrete or discrete + continuum) of energy eigenstates. In that sense the angular momentum states are "degenerate", though of course the energies can depend on $l$. In a more general, representation-theoretic sense, this is still true. If you have some system of particles in 3D then you can always decompose the total system Hilbert space into a direct sum of subspaces with well-defined $J^2$, within which the $J_3$ eigenstates are a good basis. That much is the theorem. However, this doesn't say anything about how many such subrepresentations there will be, what their total angular momentum can be, or even whether it's a good idea to make such a decomposition in the first place (which it won't be if the system has other, stronger symmetries!). What that means in practice is that you need to add a third quantum "number" to your states to get uniquely defined states. This is usually done by notations of the form$$|\alpha,j,m\rangle$$where $\alpha$ stands for "all the other quantum numbers of the problem" and therefore will generally be an ordered tuple of numbers. (In the hydrogen atom, for example, it suffices to take $\alpha=n$, the principal quantum number.)
This index $\alpha$ then tells you which of the many $J^2=\hbar^2j(j+1)$ representations the state belongs to. To see this notation in action see e.g. these notes on the Wigner-Eckart theorem. Edit: a word on ladder operators. Angular momentum ladder operators are linear combinations of angular momentum components ($J_\pm=J_1\pm i J_2$) and since representations are invariant under the action of $\mathbf{J}$, the action of $J_\pm$ on a state with well-defined $\alpha$ and $j$ will take it to a state with the same $\alpha$ and $j$ (i.e. in the same subrepresentation). What this means is that you can define the ladder operators without worrying about what subrepresentation they act on - since their action is the same on all - and then restrict your attention to a fixed subrepresentation with no consequence. When you consider superpositions of states from different representations (as you would if you have an arbitrary radial wavefunction, for instance), the ladder operators work like they should on the different $|\alpha,j,m\rangle$ states, and by linearity this is enough to see how they behave. The take-home message is that the angular momentum algebra works fine no matter how many representations you have. If you want to find out how many there are, though, then you do need to worry about exactly what your system looks like.
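As a concrete check of the block-diagonal picture described above (my own sketch, not from the original answer), one can build the standard matrices for a single subrepresentation, say $j=1$ in units where $\hbar=1$, and verify that the ladder operators close the algebra within it:

```python
import numpy as np

j = 1                                   # fixed subrepresentation, hbar = 1
ms = np.arange(j, -j - 1, -1)           # m = j, j-1, ..., -j (basis order)

Jz = np.diag(ms.astype(complex))

# <j, m+1| J+ |j, m> = sqrt(j(j+1) - m(m+1)), zero otherwise
Jp = np.zeros((len(ms), len(ms)), dtype=complex)
for a, m in enumerate(ms):
    if m < j:
        Jp[a - 1, a] = np.sqrt(j * (j + 1) - m * (m + 1))
Jm = Jp.conj().T                        # J- is the adjoint of J+

# The algebra closes inside the subrepresentation:
assert np.allclose(Jz @ Jp - Jp @ Jz, Jp)               # [Jz, J+] = J+
J2 = Jz @ Jz + 0.5 * (Jp @ Jm + Jm @ Jp)
assert np.allclose(J2, j * (j + 1) * np.eye(len(ms)))   # J^2 = j(j+1) I
```

Acting with $J_\pm$ can never change $j$ (the matrices are defined only on that subspace), which is exactly why one can fix a subrepresentation and forget about the index $\alpha$.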
This page on Wikipedia -- Quasars mentions that the "The largest known [quasar] is estimated to consume matter equivalent to 600 Earths per minute". However, there is no citation for this comment. How can I find out where this information came from? I've commented in the Talk section for the page. Tricky to say for sure, but I would imagine it comes about from measurements of the luminosity and inference of the black hole mass in such systems. The most extreme objects radiate at the Eddington luminosity, where gravitational forces on matter falling into the black hole are balanced by radiation pressure from the heated material closer in. If infalling mass is converted to luminosity at a rate of $$ L = \epsilon \dot{M} c^2,$$ where $\dot{M}$ is the mass accretion rate, $L$ is the luminosity and $\epsilon$ is an efficiency factor, which should be of order 0.1; then the mass accretion rate at the Eddington limit is given by $$ \dot{M} = \frac{4\pi G M m_p}{\epsilon c \sigma_T} \simeq 1.4\times 10^{15}\frac{M}{M_{\odot}}\ {\rm kg/s},$$ where $M$ is the black hole mass, $m_p$ the mass of a proton and $\sigma_T$ is the Thomson scattering cross-section for free electrons (the major source of opacity in the infalling hot gas). The biggest supermassive black holes in the universe have $M \simeq 10^{10}M_{\odot}$ and thus the Eddington accretion rate for such objects is about $1.4\times 10^{25}$ kg/s or about 2.3 Earths/second or 140 Earths per minute. The difference between this estimate and the one on the wikipedia page could be what is assumed for the biggest $M$ or that $\epsilon$ is a bit smaller than 0.1 or indeed that the luminosity could exceed the Eddington luminosity (because the accretion isn't spherical). Perhaps a simpler way to get the answer is to find the most luminous quasar and divide by $\epsilon c^2$. The most luminous quasar ever seen is probably something like 3C 454.3, which reaches $\sim 5\times 10^{40}$ Watts in its highest state. 
Using $\epsilon = 0.1$ yields about an Earth mass per second for the accretion rate. So perhaps the number on the wikipedia page is a little exaggerated. Here is a study from 2012 of the largest recorded quasar, which quotes an output of 400 solar masses per year, i.e. about 253 Earth masses per minute (133,178,400 M⊕ / 525,600 min), at 2.5 percent of the speed of light, located 1 billion light years away. It's the largest recorded quasar; I don't know the figure for the largest theoretical quasar, as there are apparently hundreds of people theorizing and debating the theoretical maximum.
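For reference, the Eddington-rate arithmetic in the first answer is easy to reproduce; a short Python sketch (rounded SI constants, and $\epsilon = 0.1$ as assumed in that answer):

```python
import math

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8           # speed of light, m/s
m_p = 1.673e-27       # proton mass, kg
sigma_T = 6.652e-29   # Thomson cross-section, m^2
M_sun = 1.989e30      # solar mass, kg
M_earth = 5.972e24    # Earth mass, kg

def eddington_mdot(M, eps=0.1):
    """Eddington-limited accretion rate (kg/s) for a black hole of mass M (kg)."""
    return 4 * math.pi * G * M * m_p / (eps * c * sigma_T)

mdot = eddington_mdot(1e10 * M_sun)       # ~1.4e25 kg/s, as quoted above
earths_per_minute = mdot * 60 / M_earth   # ~140 Earths per minute
```

This reproduces both the $1.4\times 10^{15}\,(M/M_\odot)$ kg/s scaling and the roughly 140 Earths per minute figure for a $10^{10}M_\odot$ black hole.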
4:28 AM @MartinSleziak Here I am! Thank you for opening this chat room and all your comments on my post, Martin. They are really good feedback for this project. @MartinSleziak Yeah, using a chat room to exchange ideas and feedback makes a lot of sense compared to leaving comments in my post. BTW, if anyone finds a \oint\frac{1}{1-z^2}dz expression in old posts, send it to me and I will investigate why this issue occurs. @MartinSleziak It is OK, don't feel bad. As long as there is a place that comes to people's mind if they want to report some issue on Approach0, I am willing to come to that place and discuss. I am really interested in pushing Approach0 forward. 4:57 AM Hi @WeiZhong, thanks for joining the room. I will write a bit more here when I have more time. For now, two minor things. I just want to make sure that you know that the answer on meta is community wiki. Which means that various users are invited to edit it; you can see from the revision history who added what to the question. You can see in the revision history that this bullet point was added by Workaholic: "I searched for \oint $\oint$, but I only got results related to \int $\int$. I tried for \oint \frac{dz}{1-z^2} $\oint \frac{dz}{1-z^2}$ which is an integral that appears quite often but it did not yield any correct results." So if you want to make sure that this user is notified about your comments, you can simply add @Workaholic. Any of the editors can be pinged. And I noticed also this about one of the quizzes (I did not check whether some of the other quizzes have a similar problem.) I suppose that the quizzes are supposed to be chosen in such a way that Approach0 indeed helps to find the question. I.e., each quiz was created with some specific question in mind, which should be among the search results. Is that correct? I guess the quiz saying "Please list all positive integers $i,j,k$ such that $i^5 + j^6 = k^7$."
was made with this question in mind: Find all positive integers satisfying: $x^5+y^6=z^7$. However, when I try the query from this quiz, I get completely different results. I vaguely recall that I tried some quizzes, including this one, and they worked. (By which I mean that the answer to the question from the quiz could be found among the search results.) So is this perhaps due to some changes that were made since then? Or is it simply because when I tried the quiz last time, fewer questions were indexed? (And now that question is still somewhere among the results, but further down.) I was wondering whether to add the word "bug" to my last message, but it is probably not a bug. It is simply that the search results are not exactly as I would expect. My impression from the search results is that not only $x$, $y$, $z$ are replaced by various variables, but also 5, 6, 7 are replaced by various numbers. 5:40 AM I think that this implicitly contains the question whether, when searching for $x^5+y^6=z^7$, the questions containing $x^2+y^2=z^2$ or $a^3+b^3=c^3$ should also be matches. For the sake of completeness I will copy here the part of the quiz list which is relevant to the quiz I mentioned above: "Q": "Please list all positive integers [imath]i,j,k[/imath] such that [imath]i^5 + j^6 = k^7[/imath]. ", Hmm, I should have posted this as a single multiline message. But now I see that it is already too late to delete the above messages. Sorry for the duplication: { /* 4 */ "Q": "Please list all positive integers [imath]i,j,k[/imath] such that [imath]i^5 + j^6 = k^7[/imath]. ", "hints": [ "This should be easy, the only thing I need to do is do some calculation...", "I can use my computer to enumerate...", "... (10 minutes after) ...", "OK, I give up. Why borther list them <b>all</b>?", "Is that possible to <a href=\"#\">search it</a> on Internet?"
], "search": "all positive integers, $i^5 + j^6 = k^7$" }, "Q": "Please list all positive integers [imath]i,j,k[/imath] such that [imath]i^5 + j^6 = k^7[/imath]. ", "hints": [ "This should be easy, the only thing I need to do is do some calculation...", "I can use my computer to enumerate...", "... (10 minutes after) ...", "OK, I give up. Why borther list them <b>all</b>?", "Is that possible to <a href=\"#\">search it</a> on Internet?" ], "search": "all positive integers, $i^5 + j^6 = k^7$" }, 8 hours later… 1:19 PM @MartinSleziak OK, I get it. So next time I will definitely reply to whoever actually made the revision. @MartinSleziak Yes, remember the first time we talked in a chat room? In that version of approach0, when only a very limited number of posts had been indexed, you could actually get relevant posts on $i^5+j^6=k^7$. However, after I enlarged the index (now almost the entire MSE), that quiz (in fact, some quizzes I selected earlier, like [this one]()) does not find relevant posts anymore. I have noticed that the "quiz" does not work, but I have been really lazy and have not investigated it. Instead of changing that "quiz", I agree we should investigate why that relevant result has gone. As far as I can guess, there can be two reasons: 1) the crawler missed that one (I did the crawling in China, the network condition is not always good, sometimes the crawler fails to fetch random posts and has to skip them) 2) there is a bug in approach0 that I am not aware of In order to investigate this problem, I am trying to find the original post that you and I have seen (as you vaguely remember) which is relevant to the $i^5+j^6=k^7$ quiz; if you find that post, please send me the URL.
@MartinSleziak It can be a bug, but I need to know if my index does contain a relevant post, so first let us find that post we think is relevant. And I will have a look at whether or not it is in my index; perhaps the crawler just missed that one. If it is in our index currently, then I should spend some time to find out the reason. @MartinSleziak As for your last question, I need to illustrate it a little more. Approach0 will first find expressions that are structurally relevant to the query. So $x^5+y^6=z^7$ will get you $x^2+y^2=z^2$ or $a^3+b^3=c^3$, because they (more specifically, their operator tree representations) are considered structurally identical. After filtering out these structurally relevant expressions, Approach0 will evaluate their symbolic relevance degree with regard to the query expression. Suppose $x^5+y^6=z^7$ gives you $x^2+y^2=z^2$, $a^3+b^3=c^3$ and also $x^5+y^6=z^7$; the expression $x^5+y^6=z^7$ will be ranked higher than $x^2+y^2=z^2$ and $a^3+b^3=c^3$, because $x^5+y^6=z^7$ has a higher symbolic score (in fact, since it has an identical symbol set to the query, it has the highest possible symbolic score). I am sorry, I should use "and" instead of "or". Let me repeat the message before the previous one below: As for your last question, I need to illustrate it a little more. Approach0 will first find expressions that are structurally relevant to the query. So $x^5+y^6=z^7$ will get you both $x^2+y^2=z^2$ and $a^3+b^3=c^3$, because they (more specifically, their operator tree representations) are considered structurally identical. Now the next thing for me to do is to investigate some "missing results" suggested by you. 1. Try to find a `\oint` expression in an old post (by old I mean at least 5 weeks old, so that it could possibly have been indexed)
2:23 PM Unfortunately, I fail to find any relevant old post in either case 1 or case 2 after a few tries (using MSE default search). So the only thing I can do now is an "integrated test" (see the new code I have just pushed to GitHub: github.com/approach0/search-engine/commit/…) An "integrated test" means I make a minimal index with a few specified math expressions and search a specified query, and see if the results are as expected. For example, the test case tests/cases/math-rank/oint.txt specifies the query $\oint \frac{dz}{1-z^2}$, and the entire index has just two expressions: $\oint \frac{dz}{1-z^2}$ and $\oint \frac{dx}{1-x^2}$; the expected search result is that both of these expressions are HITs (i.e. they should appear in the search results). 10 hours ago, by Martin Sleziak I guess the quiz saying "Please list all positive integers $i,j,k$ such that $i^5 + j^6 = k^7$." was made with this question in mind: Find all positive integers satisfying: $x^5+y^6=z^7$. 2:39 PM For anyone interested, I post a screenshot of the integrated test results here: imgur.com/a/xYBD5 3:04 PM For example like this: chat.stackexchange.com/transcript/message/32711761#32711761 You get the link by clicking on the little arrow next to the message and then clicking on "permalink". I am mentioning this because (hypothetically) if Workaholic only sees your comment a few days later and then comes here to see the message you refer to, they might have a problem finding it if there are plenty of newer messages. However, this room does not have that much traffic, so very likely this is not going to be a problem in this specific case. Another possible way to link to a specific set of messages is to go to the transcript and then choose a specific day, like this: chat.stackexchange.com/transcript/46148/2016/10/1 Or to bookmark a conversation.
This can be done from the room menu on the right. This question on meta.SE even has some pictures. This is also briefly mentioned in the chat help: chat.stackexchange.com/faq#permalink 3:25 PM @MartinSleziak Good to learn this. I just posted another comment with a permalink in that meta post for Workaholic to refer to. I just checked the index on the server, and yes, that post is indeed indexed. (for my own reference, docID = 249331) 2 hours later… 5:13 PM Update: I have fixed that quiz problem. See: approach0.xyz/search/… That is not strictly a bug; it is because I put a restriction on the number of documents to be searched in one posting list (not trying to be very technical). I have pushed my new code to GitHub (see commit github.com/approach0/search-engine/commit/…); this change gets rid of that restriction and now that relevant post is shown as the 2nd search result. 2 hours later… 6:57 PM
Let $X$ be a Banach space and $B(X)$ be its space of all (bounded) operators. A nuclear functional on $B(X)$ is a linear functional $u:B(X)\to{\mathbb C}$ that can be represented in the form$$u(A)=\sum_{n=1}^\infty \lambda_n\cdot f_n(Ax_n),\qquad A\in B(X),$$where $\lambda_n\in{\mathbb C}$, $x_n\in X$, $f_n\in X^*$ are such that$$\sum_{n=1}^\infty |\lambda_n|<\infty,\quad \sup_{n}||x_n||\le 1,\quad \sup_{n}||f_n||\le 1.$$Let us denote by $N(X)$ the space of all nuclear functionals on $B(X)$. If $X$ is a Hilbert space, then it is well known (see G. J. Murphy, C*-Algebras and Operator Theory, Theorem 4.2.1) that the dual space $K(X)^*$ of the space of all compact operators $K(X)$ coincides with the space of all nuclear functionals: $$ K(X)^*=N(X) $$ (this is an isomorphism of Banach spaces, but for me it is important that this is an equality of sets). Is the same true for all Banach spaces $X$? Or at least for all Banach spaces with the (classical) approximation property? I am mostly interested in the case when $X=C(T)$, the space of continuous functions on a compact topological space $T$.
I'm training a neural network (or any ML model with non-convex gradient-based optimization) to predict a continuous outcome variable. Currently, I use the mean squared error loss function, i.e., if $y$ is the true outcome and $\hat{y}$ is the model prediction, I minimize the expected loss $$\text{E}[(y-\hat{y})^2]$$ However, the expected metric I really care about maximizing is $$\frac{\text{E}[y\hat{y}]}{\sqrt{\text{Var}{(y\hat{y})}}}$$ Using this (or its negative) as a loss function presents two problems: the non-differentiability of the ratio at $0$, and the fact that $\text{Var}(y\hat{y})$ depends on the second moments of the model predictions and true outcome, so it cannot be computed for a single data point $(x_i, y_i)$ from the training or validation sets. Is there a way to approximate this loss function as a linear combination of moments of $y$ and $\hat{y}$ which I can optimize instead? In general, I am trying to get a better analytical understanding of this custom metric. How does it penalize the bias and variance of the model? Where is it irregular? Is there another way to write this function that is equivalent in optimization but simpler?
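One standard workaround (a sketch of mine, not something stated in the question) is to treat the metric as a function of a whole mini-batch: the sample mean and variance of the products $y_i\hat{y}_i$ are smooth functions of the predictions, and a small epsilon in the denominator removes the irregularity where the variance is near zero. A numpy illustration:

```python
import numpy as np

def neg_ratio_loss(y, y_hat, eps=1e-8):
    """Batch estimate of -E[y*yhat] / sqrt(Var(y*yhat)).

    The ratio is only defined over a *set* of samples, so it is computed
    per mini-batch rather than per data point; eps regularizes the
    denominator where Var(y*yhat) is close to 0.
    """
    p = y * y_hat                    # elementwise products y_i * yhat_i
    return -p.mean() / np.sqrt(p.var() + eps)

rng = np.random.default_rng(0)
y = rng.normal(size=256)
assert neg_ratio_loss(y, y) < neg_ratio_loss(y, -y)   # better fit => lower loss
```

In an autodiff framework the same batch-level expression is differentiable end-to-end, so it can be minimized directly; a per-point term such as MSE can be kept as an auxiliary regularizer if training proves unstable.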
Computer Science > Information Theory Title: Constructions of transitive latin hypercubes (Submitted on 28 Feb 2013 (v1), last revised 28 Aug 2019 (this version, v3)) Abstract: A function $f:\{0,...,q-1\}^n\to\{0,...,q-1\}$ invertible in each argument is called a latin hypercube. A collection $(\pi_0,\pi_1,...,\pi_n)$ of permutations of $\{0,...,q-1\}$ is called an autotopism of a latin hypercube $f$ if $\pi_0f(x_1,...,x_n)=f(\pi_1x_1,...,\pi_n x_n)$ for all $x_1$, ..., $x_n$. We call a latin hypercube isotopically transitive (topolinear) if its group of autotopisms acts transitively (regularly) on all $q^n$ collections of argument values. We prove that the number of nonequivalent topolinear latin hypercubes grows exponentially with respect to $\sqrt{n}$ if $q$ is even and exponentially with respect to $n^2$ if $q$ is divisible by a square. We show a connection of the class of isotopically transitive latin squares with the class of G-loops, known in noncommutative algebra, and establish the existence of a topolinear latin square that is not a group isotope. We characterize the class of isotopically transitive latin hypercubes of orders $q=4$ and $q=5$. Keywords: transitive code, propelinear code, latin square, latin hypercube, autotopism, G-loop. Submission history: From: Denis Krotov [view email] [v1] Thu, 28 Feb 2013 21:00:01 GMT (11kb) [v2] Mon, 4 May 2015 14:17:37 GMT (19kb) [v3] Wed, 28 Aug 2019 05:32:29 GMT (24kb)
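The defining property in the abstract (invertibility in each argument) is easy to check by brute force for small $q$ and $n$; a sketch of my own, not from the paper:

```python
from itertools import product

def is_latin_hypercube(f, q, n):
    """True iff f: {0,...,q-1}^n -> {0,...,q-1} is invertible in each
    argument, i.e. fixing all other coordinates gives a bijection."""
    for i in range(n):
        for rest in product(range(q), repeat=n - 1):
            hit = {f(*(rest[:i] + (x,) + rest[i:])) for x in range(q)}
            if hit != set(range(q)):
                return False
    return True

q = 5
assert is_latin_hypercube(lambda x, y: (x + y) % q, q, 2)      # a group isotope
assert not is_latin_hypercube(lambda x, y: (x * y) % q, q, 2)  # row x=0 is constant
```

Group tables such as addition mod $q$ always pass; the paper's point is that for certain $q$ there are many topolinear latin hypercubes that are not group isotopes.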
Another Comparison Theorem for Integrals of Step Functions on General Intervals Recall from the The Limit of the Integral of a Decreasing Sequence of Nonnegative Step Functions Approaching 0 a.e. on General Intervals page that if $(s_n(x))_{n=1}^{\infty}$ is a decreasing sequence of nonnegative step functions that converge to $0$ almost everywhere on the interval $I$ then: $$\lim_{n \to \infty} \int_I s_n(x) \: dx = 0 \quad (*)$$ We will now use that very important result to prove a nice comparison theorem for integrals of step functions on general intervals. Theorem 1: Let $(f_n(x))_{n=1}^{\infty}$ be an increasing sequence of step functions that converge to $f$ almost everywhere on $I$, and suppose that $\displaystyle{\lim_{n \to \infty} \int_I f_n(x) \: dx}$ exists. Then for any step function $g$ such that $g(x) \leq f(x)$ almost everywhere on $I$ we have that $\displaystyle{\int_I g(x) \: dx \leq \lim_{n \to \infty} \int_I f_n(x) \: dx}$. Proof: Define a new sequence of functions $(s_n(x))_{n=1}^{\infty}$ for each $n \in \mathbb{N}$ by: $$s_n(x) = \max \{ g(x) - f_n(x), 0 \}$$ Then $(s_n(x))_{n=1}^{\infty}$ is a decreasing sequence of step functions (since $g(x)$ is a fixed function and $(f_n(x))_{n=1}^{\infty}$ is an increasing sequence of functions, so $(-f_n(x))_{n=1}^{\infty}$ is a decreasing sequence of functions). Furthermore, $(s_n(x))_{n=1}^{\infty}$ is nonnegative, and $(s_n(x))_{n=1}^{\infty}$ converges to $0$ almost everywhere on $I$ (since $f_n \to f$ and $g \leq f$ almost everywhere on $I$). Therefore $(*)$ holds for this sequence. So for each $n \in \mathbb{N}$ we have that $g(x) - f_n(x) \leq s_n(x)$, and hence: $$\int_I g(x) \: dx - \int_I f_n(x) \: dx \leq \int_I s_n(x) \: dx$$ Taking the limit as $n \to \infty$ of both sides and using $(*)$ yields: $$\int_I g(x) \: dx - \lim_{n \to \infty} \int_I f_n(x) \: dx \leq 0$$ So $\displaystyle{\int_I g(x) \: dx \leq \lim_{n \to \infty} \int_I f_n(x) \: dx}$. $\blacksquare$
Introduction to Sound Waves Sound is a form of energy arising from mechanical vibrations. Hence sound waves require a medium for their propagation; sound cannot travel in vacuum. Sound waves propagate as longitudinal mechanical waves through solids, liquids and gases. Speed of Sound Waves in Solids, Liquids, Gases Newton's Formula for Speed of Sound Waves Newton showed that the speed of sound in a medium is \(v=\sqrt{\frac{E}{\rho }}\) where E is the modulus of elasticity of the medium and ρ is the density of the medium. Also Read: Wave Motion Speed of Sound Waves in Solids \(v=\sqrt{\frac{Y}{\rho }}\) Y = Young's modulus of the solid ρ = density of the solid Speed of Sound Waves in Liquids \(v=\sqrt{\frac{B}{\rho }}\) B = Bulk modulus of the liquid ρ = density of the liquid Speed of Sound Waves in Gases Newton considered the propagation of sound waves through gases as an isothermal process, PV = constant (assuming the medium does not get heated up when sound passes through it), and so he stated \(v=\sqrt{\frac{P}{\rho }}\) where P is the pressure of the gas (the isothermal bulk modulus of the gas). There was a huge discrepancy between the speed of sound determined by this formula and the experimentally determined values. Hence a correction to this formula was given by Laplace; it is known as the Laplace correction. Laplace Correction According to Laplace, the propagation of sound waves in a gas takes place adiabatically. So the adiabatic bulk modulus of the gas (γP) has to be used, hence the speed of sound waves in the gas is \(v=\sqrt{\frac{\gamma P}{\rho }}\) where γP is the adiabatic bulk modulus of the gas and ρ is the density of the medium. The values obtained by the Newton-Laplace formula are in excellent agreement with experimental results.
Factors Affecting the Speed of Sound in Gases Effect of pressure Effect of temperature Effect of density of the gas Effect of humidity Effect of wind Effect of change in frequency (or) wavelength of the sound wave Effect of amplitude Effect of Pressure If the pressure is increased at a constant temperature, then by Boyle's law PV = constant [for a fixed mass of gas]. Since ρ = M/V, the ratio \(\frac{P}{\rho }\) remains constant, so a change in pressure does not affect the speed of sound waves through a gas. Effect of Temperature Velocity of sound in a gas: \(v=\sqrt{\frac{\gamma P}{\rho }}\) With \(\rho =\frac{M}{V}\) (M = molar mass, V = volume), \(v=\sqrt{\frac{\gamma PV}{M}}\) For a perfect gas PV = RT [for 1 mole of gas], so \(v=\sqrt{\frac{\gamma RT}{M}}\) and \(v\propto \sqrt{T}\) So, \(\frac{{{v}_{1}}}{{{v}_{2}}}=\sqrt{\frac{{{T}_{1}}}{{{T}_{2}}}}\) How Density Affects the Speed of Sound? From the velocity of sound in the gas \(v=\sqrt{\frac{\gamma P}{\rho }}\) we get \(v\propto \frac{1}{\sqrt{\rho }}\) Effect of Humidity Under the same conditions of temperature and pressure, the density of water vapour is less than that of dry air, so the presence of moisture decreases the effective density of air; hence sound waves travel faster in moist air than in dry air. Effect of Wind Wind simply adds its velocity vectorially to that of the sound wave. If the component V w of the wind speed is in the direction of the sound wave, the resultant speed of sound is V resultant = V + V w, where V w is the wind speed. Effect of Change in Frequency (or) Wavelength of the Sound Wave A change of frequency (or) wavelength does not affect the speed of sound in a (homogeneous isotropic) medium; sound travels with the same speed in all directions. V = λf = constant. When a sound wave passes from one medium to another medium, the frequency remains constant but the wavelength and velocity change.
Effect of Amplitude From the velocity relation \(v=\sqrt{\frac{\gamma P}{\rho }}\), the speed does not depend on amplitude. Generally, a small amplitude does not affect the speed of sound in the gas; however, a very large amplitude may affect the speed of the sound wave. Relation between Speed of Sound in Gas and RMS Speed of Gas Molecules From the velocity of the sound wave \(v=\sqrt{\frac{\gamma P}{\rho }}=\sqrt{\frac{\gamma PV}{M}}\) and PV = nRT with n = 1, so PV = RT, we get \(v=\sqrt{\frac{\gamma RT}{M}}\) Then the rms speed of the gas molecules is \({{V}_{rms}}=\sqrt{\frac{3RT}{M}}=\sqrt{\frac{3}{\gamma }}\sqrt{\frac{\gamma RT}{M}}=\sqrt{\frac{3}{\gamma }}\,\,v\) \({{V}_{rms}}=\sqrt{\frac{3}{\gamma }}\,\,v\) where v is the speed of sound waves through the gas.
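The Newton-Laplace result \(v=\sqrt{\gamma RT/M}\) is easy to check numerically; a small Python sketch for dry air (γ = 1.4 and M ≈ 0.029 kg/mol are assumed standard values, not taken from the notes):

```python
import math

R = 8.314  # universal gas constant, J mol^-1 K^-1

def speed_of_sound(T, gamma=1.4, M=0.028964):
    """Newton-Laplace formula v = sqrt(gamma * R * T / M); M in kg/mol."""
    return math.sqrt(gamma * R * T / M)

v20 = speed_of_sound(293.15)   # roughly 343 m/s at 20 degrees C

# v is proportional to sqrt(T), as stated in the temperature section:
ratio = speed_of_sound(373.15) / speed_of_sound(293.15)
assert math.isclose(ratio, math.sqrt(373.15 / 293.15))
```

The computed value near 343 m/s at 20 °C agrees with the experimentally measured speed of sound in air, which is the point of the Laplace correction.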
Segment of a Circle: A region bounded by a chord and a corresponding arc lying between the chord's endpoints is known as a segment of a circle. It is to be noted that the segments do not contain the center point. Definition: A chord of a circle divides it into two regions, namely the major segment and the minor segment. The segment having the larger area is known as the major segment and the segment having the smaller area is known as the minor segment. In fig. 1, ADB is the major segment and ABC is the minor segment. If nothing is stated, segment means the minor segment. In order to calculate the area of a segment of a circle, one should know how to calculate the area of a sector of a circle. Area of Segment of Circle: In fig. 2, if ∠AOB=θ (in degrees), then the area of the sector AOBC (\(A_{sector ~AOBC}\)) is given by the formula \(A_{sector ~AOBC}\) = \(\frac{θ}{360°}~\times~ πr^2\) Let the area of \(∆AOB\) be \(A_{∆AOB}\). So, the area of the segment \(ABC(A_{segment~ ABC})\) is given by \(A_{segment~ ABC}\) = \(A_{sector~ AOBC} – A_{∆AOB}\) \(A_{segment ~ABC}\) = \(\frac{θ}{360°}~\times~πr^2 – A_{∆AOB}\) The area of \(∆AOB\) can be calculated in two steps, as shown in fig. 3: Calculate the height of \(∆AOB\), i.e. the distance \(OP\) from the center to the chord, using the Pythagorean theorem as: \(OP\) = \(\sqrt{r^{2} – \left ( \frac{AB}{2} \right )^{2}}\), if the length of \(AB\) is given, or \(OP\) = \(r~cos~\frac{θ}{2}\), if θ is given (in degrees). Calculate the area of ∆AOB using the formula \(A_{∆AOB}\) = \(\frac{1}{2}×base×height\) = \(\frac{1}{2}×AB×OP\) Substituting the values in the area of segment formula, the area can be calculated. Theorems on Segment of a Circle: Theorem 1: Alternate Segment Theorem: The alternate angle is the angle made in the other segment from a chord. We show that the alternate angle ACD (call it x) is equal to the angle ABC shown on the other side of the chord, where DC is the tangent to the circle. Triangle ABC has points A, B and C on the circumference of a circle with centre O.
Join points OA and OC to form triangle AOC. Let \(\angle ACD = x\) and \(\angle OCA = y\). We know that a tangent to a circle is at a right angle to the radius at the point of contact, therefore

\(x + y = 90^{\circ}\) ——————————(i)

Bisect the triangle AOC from the point O; the triangle formed is right-angled at E. Let the bisected angle be z, so \(\angle AOE = \angle COE = z\). The sum of the angles of a triangle is \(180^{\circ}\). In \(\bigtriangleup COE\),

\(y + z + 90 = 180\), or \(y + z = 90^{\circ}\) —————————(ii)

Comparing equations (i) and (ii), we have x = z. Therefore \(\angle AOC = 2z = 2x\). By the inscribed angle theorem, \(\angle AOC = 2 \angle ABC\), so \(\angle ABC = x = \angle ACD\), as required.

Theorem 2 (Angles in the same segment are equal): Consider triangles ABC and ADC having \(\angle ABC\) and \(\angle ADC\) in the major segment of a circle.

To prove: \(\angle ABC = \angle ADC\)

Construction: Join O to A and C.

Proof: Let \(\angle AOC = x\). From Theorem 1, we have \(x = 2\angle ABC\) —————–(i) and also \(x = 2\angle ADC\) ———————-(ii). From (i) and (ii), we have \(\angle ABC = \angle ADC\).

Let's Work Out: For a circle of radius 6 cm with \(\angle AOB = 60°\), find the area of segment AB.

Area of the sector AOB (blue region + green region) = \(\frac{\theta }{360} \times \pi r^{2} = \frac{60}{360} \times \pi \times 6^{2}\) = \(6π~cm^2\)

\(Area \; of ∆AOB\) = \(\frac{1}{2}~×~AB~×~OC\), where \(OC\) = \(6~cos~30°\) = \(6 \times \frac{\sqrt{3}}{2} = 3\sqrt{3}\) cm and \(AB\) = \(2BC\) = \(2~×~6~sin~30°\) = \(2~×~6~×~\frac{1}{2}\) = \(6~cm\)

Substituting the values, Area of ∆AOB = \(\frac{1}{2} \times 3\sqrt{3} \times 6 = 9 \sqrt{3} \; cm^{2}\)

So, area of segment AB = \(6π~cm^2 – 9√3~cm^2\) = \(3(2π – 3√3)cm^2\)
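The segment-area computation described above can be sketched in a few lines of Python (the function name `segment_area` is ours, not from the text; it uses the equivalent identity that the triangle's area is ½r²sin θ, which equals ½ × AB × height). It reproduces the worked example for r = 6 cm and θ = 60°:

```python
import math

def segment_area(r, theta_deg):
    """Area of the minor segment cut off by a chord subtending
    theta_deg (in degrees) at the centre of a circle of radius r."""
    theta = math.radians(theta_deg)
    sector = (theta_deg / 360.0) * math.pi * r**2   # sector area
    triangle = 0.5 * r**2 * math.sin(theta)         # area of isosceles triangle AOB
    return sector - triangle

# Worked example from the text: r = 6 cm, theta = 60 degrees.
area = segment_area(6, 60)
print(area, 3 * (2 * math.pi - 3 * math.sqrt(3)))  # the two values agree
```

The closed form 3(2π − 3√3) ≈ 3.26 cm² matches the numerical result.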
In this MathStackExchange post the question in the title was asked without much outcome, I feel. Edit: As Douglas Zare kindly observes, there is one more answer in MathStackExchange now. I am not used to basic Probability, and I am trying to prepare a class that I need to teach this year. I feel I am unable to motivate the introduction of random variables. After spending some time speaking about Kolmogoroff's axioms I can explain that they allow to make the following sentence true and meaningful: The probability that, tossing a coin $N$ times, I get $n\leq N$ tails equals $$\tag{$\ast$}{N \choose n}\cdot\Big(\frac{1}{2}\Big)^N.$$ But now people (i.e. books I can find) introduce the "random variable $X\colon \Omega\to\mathbb{R}$ which takes values $X(\text{tails})=1$ and $X(\text{heads})=0$" and say that it follows the binomial rule. To do this, they need a probability space $\Omega$: but once one has it, one can prove statement $(\ast)$ above. So, what is the usefulness of this $X$ (and of random variables, in general)? Added: So far my question was admittedly too vague and I try to emend. Given a discrete random variable $X\colon\Omega\to\mathbb{R}$ taking values $\{x_1,\dots,x_n\}$ I can define $A_k=X^{-1}(\{x_k\})$ for all $1\leq k\leq n$. The study of the random variable becomes then the study of the values $p(A_k)$, $p$ being the probability on $\Omega$. Therefore, it seems to me that we have not gone one step further in the understanding of $\Omega$ (or of the problem modelled by $\Omega$) thanks to the introduction of $X$. Often I read that there is the possibility of having a family $X_1,\dots,X_n$ of random variables on the same space $\Omega$ and some results (like the CLT) say something about them. 
But then I know no example—and would be happy to discover one—of a problem truly modelled by this, whereas in most examples that I read there is either a single random variable, or the understanding of $n$ of them requires the understanding of the power $\Omega^n$ of some previously-introduced measure space $\Omega$. It seems to me (though I admit to having no rigorous proof) that given the above $n$ random variables on $\Omega$ there should exist an $\Omega'$, probably much bigger, with a single $X\colon\Omega'\to\mathbb{R}$ "encoding" the same information as $\{X_1,\dots,X_n\}$. In this case, we are back to using "only" indicator functions. I understand that this process breaks down if we want to let $n\to \infty$, but I also suspect that there might be a deeper reason for studying random variables. All in all, my doubts come from the fact that random variables still look to me like a poorer object than a measure (or, probably, a $\sigma$-algebra $\mathcal{F}$ and a measure whose generated $\sigma$-algebra is finer than $\mathcal{F}$, or something like this); yet they are introduced, studied, and look central in the theory. I wonder where I am wrong. Caveat: For some reason, many people in comments below objected that "throwing random variables away is ridiculous" or that I "should try to come up with something more clever, then, if I think they are not good". That was not my point. I am sure they must be useful; otherwise textbooks would not introduce them. But I was unable to understand why: many useful and kind answers below helped much.
Table of Contents

The Closure of a Set and the Distance from Points to a Set

Recall from the The Distance Between Points and Subsets in a Metric Space page that if $(S, d)$ is a metric space, $A \subseteq S$, and $x \in S$ then we can define a function $f_A : S \to \mathbb{R}$ for all $x \in S$ by:

(1) \begin{align} f_A(x) = \inf \{ d(x, y) : y \in A \} \end{align}

We said that the distance from $x$ to $A$ is defined to be the number $f_A(x)$. We will now look at a nice theorem which gives us an alternative definition for the closure of a set in terms of the collection of all points in $S$ that are of a distance of $0$ from $A$.

Theorem 1: Let $(S, d)$ be a metric space and let $A \subseteq S$. Then the closure of $A$ is the set of all points $x \in S$ whose distance to $A$ is equal to $0$, that is, $\bar{A} = \{ x \in S : f_A(x) = 0 \}$.

Proof: Let $x \in \bar{A}$. Then $x$ is either an isolated point of $A$ or an accumulation point of $A$. If $x$ is an isolated point of $A$ then $x \in A$, and so:

\begin{align} f_A(x) = \inf \{ d(x, y) : y \in A \} \leq d(x, x) = 0 \end{align}

Therefore $x \in \{ x \in S : f_A(x) = 0 \}$. If $x$ is not an isolated point of $A$ then $x$ is an accumulation point of $A$ and so for all $r > 0$ the open ball centered at $x$ with radius $r$ contains some points of $A$ different from $x$, that is, for all $r > 0$:

\begin{align} B(x, r) \cap A \setminus \{ x \} \neq \emptyset \end{align}

So for every $r > 0$ there exists a $y \in A$ such that $d(x, y) < r$. So $f_A(x) = 0$ and therefore $x \in \{ x \in S : f_A(x) = 0 \}$. So:

\begin{align} \bar{A} \subseteq \{ x \in S : f_A(x) = 0 \} \end{align}

Now suppose that $x \in \{ x \in S : f_A(x) = 0 \}$. Then $f_A(x) = 0$. So either $x \in A$ or $x \not \in A$. If $x \in A$ then since $A \subseteq \bar{A}$ we see that $x \in \bar{A}$. If $x \not \in A$ then since $x \in S$ and $f_A(x) = 0$ we see that for every open ball centered at $x$ with radius $r > 0$ there exists a $y \in A$ such that $d(x, y) < r$. So for all $r > 0$ we have that $B(x, r) \cap A \setminus \{ x \} \neq \emptyset$, which implies that $x$ is an accumulation point of $A$, so $x \in A’ \subseteq A \cup A’ = \bar{A}$. Therefore:

\begin{align} \{ x \in S : f_A(x) = 0 \} \subseteq \bar{A} \end{align}

So we conclude that $\bar{A} = \{ x \in S : f_A(x) = 0 \}$. $\blacksquare$
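Theorem 1 is easy to illustrate numerically (our own example, not part of the original page). Take A = {1/n : n ≥ 1} in ℝ with the usual metric, truncated to finitely many terms: the accumulation point 0 lies in the closure even though 0 ∉ A, and its distance to the truncated set is as small as the truncation allows, while a point far from A has distance bounded away from 0:

```python
# f_A(x) = inf{ d(x, y) : y in A } for the sample set A = {1/n : 1 <= n <= 10000}.
A = [1.0 / n for n in range(1, 10001)]

def f_A(x):
    # For a finite set the infimum is just the minimum.
    return min(abs(x - y) for y in A)

# 0 is an accumulation point of A, so it lies in the closure of A:
# its distance to the truncated set is 1/10000, and it tends to 0
# as more terms of the sequence are included.
print(f_A(0.0))
# A point like -1 is not in the closure: its distance to A stays
# bounded away from 0 (here it is 1 + 1/10000).
print(f_A(-1.0))
```

This matches the theorem: points of the closure are exactly those whose distance to A can be made arbitrarily small.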
Given an abelian variety $A$ defined over $\mathbb{Q}$ and a positive integer (we can suppose prime) $\ell$, let $A[\ell]$ denote the group of points of $A$ that are annihilated by $\ell$; the division field $\mathbb{Q}(A[\ell])$ is obtained by adjoining to $\mathbb{Q}$ the coordinates of the points of $A[\ell]$. Why is the $\ell$-th cyclotomic field contained in $\mathbb{Q}(A[\ell])$? I saw somewhere that this is because of the existence of the Weil pairing (for simplicity, we can suppose that $A$ has a principal polarization) $$ e_\ell: A[\ell]\times A[\ell] \rightarrow \mu_\ell $$ From here I know that there exist points $P,Q\in A[\ell]$ such that $e_\ell(P,Q)$ is a primitive $\ell$-th root of unity, but I don't understand how this is related to the coordinates of the points in $A[\ell]$. Thanks in advance.
Considering that the top/truth quark is the only quark with higher mass than the massive bosons, is the W boson in its decay different than the off-shell bosons that mediate the weak decay of other particles? My usual disclaimer: I use these questions as a way of learning for myself about different aspects of physics, so please view this as an attempt, rather than as an answer in itself. The top quark is the only quark heavy enough to decay into an on-shell W boson, usually accompanied by a bottom quark. We observe a jet containing the bottom quark, together with the W decay products: $e\nu_e$, $\mu\nu_\mu$, $\tau\nu_\tau$, or jets. Image source for all images displayed: T.Tait Talk.pdf Decays into other possible candidates such as strange or down quarks are suppressed by the small Cabibbo–Kobayashi–Maskawa matrix elements $V_{ts}$ and $V_{td}$. As the decay into $Wb$ is found to occur on nearly every occasion, we can then chart the decay of the W boson into its channels, with branching ratios of 1:1:1:6 for $e\nu_e$ : $\mu\nu_\mu$ : $\tau\nu_\tau$ : jets. This ratio is 11% for each of the lepton channels listed above and 67% for jets. Three notable aspects of the top quark decay process: The CKM elements are almost diagonal. The W coupling constants are identical. The light fermion masses are almost all small compared to $M_W$ (the mass of the W boson). Top decay is a left-handed interaction, described using Dirac/gamma matrices: $$\gamma^\mu (I -\gamma_5) $$ In a charged lepton decay, the lepton exhibits a tendency to move in the direction of top polarisation.
So I'm asked to calculate the generating function $Z(J)$ for a Lagrangian density $$L = -\left( \partial \phi \right) ^2 + m^2\phi^2 + f\left(x\right) \phi$$ for a fixed function $ f(x) $, and express it in terms of the usual propagator and $f$ in the case $f=0$. My problem is the role of the function $f(x)$. We are told to do this by integrating $$Z(J) = \int D\phi \exp \left( i \left( \int d^4x \, L + J\phi \right) \right) \, ,$$ so I get $$ Z(J) = \int D\phi \exp\left( i \int d^4x \, L_0 + f(x)\phi + J\phi\right) $$ where $L_0$ is the typical Lagrangian density for a Klein–Gordon field. My confusion is what to do with $f(x)$: what stops it from just being absorbed into $J$ by setting $J' = J + f$, which would give effectively the same result as $Z$ for the Klein–Gordon field? Below is what I have so far. $$Z(J) = \int D\phi \exp\left(i\int d^4x \, L + J\phi \right) = \int D\phi \exp(iS) \, ,$$ but Fourier transforming everything gives $$S = \int \frac{d^4k}{(2\pi)^4} \ \frac{d^4k'}{(2\pi)^4} \ d^4x \left[ \frac{1}{2}\eta^{ab} k_a k'_b \, e^{i(k+k')x}\tilde{\phi}(k)\tilde{\phi}(k') + \frac{1}{2} (m^2-i\epsilon)\, e^{i(k+k')x}\tilde{\phi}(k)\tilde{\phi}(k') + e^{i(k+k')x} \tilde{f}(k')\tilde{\phi}(k) + e^{i(k+k')x}\tilde{J}(k')\tilde{\phi}(k) \right] \, .$$ As usual, integrating over $x$ gives the delta function $$(2\pi)^4\delta^{4}(k+k')$$ which allows us to do the $d^4k'$ integral, sending $k'$ to $-k$. We are left with $$S= \int \frac{d^4k}{(2\pi)^4} \left[ -\frac{1}{2} \tilde{\phi}(k) \left( k^2 + (m^2-i\epsilon)\right) \tilde{\phi}(-k) + \tilde{f}(-k)\tilde{\phi}(k) + \tilde{J}(-k)\tilde{\phi}(k) \right] \, .$$ In the free field theory, we would then change variables to (dropping all the tildes for brevity) $ \chi(k) = \phi(k) - \frac{J(k)}{k^2 +m^2 - i\epsilon}$. I'm not sure whether this is the right step in this problem.
Proceeding with it anyway, I arrive at $$ S = \int \frac{d^4k}{(2\pi)^4} \left[ -\frac{1}{2} \frac{J(k)J(-k)}{k^2 + m^2 - i\epsilon} + \frac{f(-k)J(k)}{k^2 +m^2 - i\epsilon} + \frac{J(-k)J(k)}{k^2 + m^2 -i\epsilon} \right] \, .$$ I'm not sure what to do from here to obtain $Z(J)$, or even whether the previous step was the right thing to do. Any help would be greatly appreciated. Thanks
Mathematics is rapidly growing as old subfields are deepened and new subfields are created. It is simultaneously integrating, as direct connections are found between subfields previously regarded as distant. It is dramatically increasing its role in other disciplines, both in science and beyond. These are exciting times for mathematics, and it is more important than ever that we mathematicians have a clear global view of our discipline. However we are still rather stuck in our old ways. Our habits do not help us cultivate a broad view of mathematics, in either our students or ourselves. The demands of the traditional curriculum focus our teaching on narrow topics, such as how to integrate rational functions or how to diagonalize matrices. Our research efforts are much more likely to result in publications if we stay focused within our narrow areas of expertise. The Princeton Companion to Mathematics aims to improve this situation. It is a monumental work aimed at readers ranging from undergraduate math majors to established researchers. Its goal is to assist these readers in cultivating a global view of mathematics. It thus strives to help individuals grow in a way that parallels the enormous growth of mathematics. The editor and driving force behind PCM is Fields medalist Timothy Gowers. He is assisted by almost 200 experts. Together they succeed completely.

Organization. PCM is divided into eight parts:

Part | Pages | Articles | Average Length | Organization
I: Introduction | 76 | 4 | 19.0 | Logical
II: The Origins of Modern Mathematics | 80 | 7 | 11.4 | Chronological
III: Mathematical Concepts | 158 | 99 | 1.6 | Alphabetical
IV: Branches of Mathematics | 366 | 26 | 14.1 | Thematic
V: Theorems and Problems | 52 | 35 | 1.5 | Alphabetical
VI: Mathematicians | 94 | 96 | 1.0 | Chronological
VII: The Influence of Mathematics | 128 | 14 | 9.1 | Thematic
VIII: Final Perspectives | 60 | 7 | 8.6 | Random

As indicated by the table, the parts are quite different from one another. The nature of the book is described in detail in the preface.
The central focus is on modern pure mathematics. Indeed the focus of Parts I, III, IV, and V is very modern and pure. However Parts II and VI form a large historical component and Part VII represents applied mathematics. It is convincingly argued that these parts are necessary for balance: besides being subjects in their own right, mathematical history and applied mathematics provide important perspectives on modern pure mathematics. A fundamental priority of the book is accessibility. The goal is to discuss a given mathematical idea at the "lowest level that is practical." To obtain this lowest level, examples and intuition are emphasized, and exposition is kept informal. The priority of maximum accessibility required "interventionist editing" throughout the six-year process of creating the book. Maximum accessibility is indeed evident in the final product. One of its consequences is that different sections are written at different levels. Another priority is that the book should be much more than a collection of separate articles. One way this is achieved is by judicious use of cross-references, around five to a page. Another way is by the careful overall organization. Part I, for example, consists of material that is "part of the necessary background of all mathematicians rather than belonging to one specific area." Similarly, "the reflections of Part VIII are a sort of epilogue, and therefore an appropriate way for the book to sign off." Parts I and VIII. Parts I and VIII are the most accessible parts. Part I is written entirely by Timothy Gowers and can best be described as an expert's overview of a solid undergraduate curriculum. 
The seriousness of the undertaking and the comprehensiveness of the coverage is clear from some of the subsection titles: Sets; Functions; Relations; Binary Operations; Logical Connectives; Quantifiers; Negation; Free and Bound Variables; The Natural Numbers; The Integers; The Rational Numbers; The Real Numbers; The Complex Numbers; Groups; Fields; Vector Spaces; Rings; Substructures; Products; Quotients; Homomorphisms, Isomorphisms, and Automorphisms; Linear Maps and Matrices; Eigenvalues and Eigenvectors; Limits; Continuity; Differentiation; Partial Differential Equations; Integration; Holomorphic Functions; Geometry and Symmetry Groups; Euclidean Geometry; Affine Geometry; Topology; Spherical Geometry; Hyperbolic Geometry; Projective Geometry; Lorentz Geometry; Manifolds and Differential Geometry; Riemannian Metrics. The tone throughout is remarkably gentle, given the rapidly changing material. The last article, The General Goals of Mathematical Research, would be of particular interest to aspiring mathematicians. It, like all of Part I, is especially well-balanced. For example, it includes a long discussion about the relative place of rigorous and nonrigorous reasoning in mathematics. This discussion is not in the least a call to devalue rigor, but it is highly respectful of nonrigorous reasoning. It concludes, "The best way to describe the situation is perhaps to say that the two styles of argument have profoundly benefited each other and will undoubtedly continue to do so." Part VIII's articles are very different from one another. Michael Harris's "Why Mathematics?" You Might Ask is very philosophical. Herbert S. Wilf's Mathematics: An Experimental Science is a succinct and elegant argument for the value of computers in pure mathematics. VIII.6 is Advice to a Young Mathematician, with separate sections written by Atiyah, Bollobás, Connes, McDuff, and Sarnak.
Adrian Rice's A Chronology of Mathematical Events concludes the book with a five-page summary of the history of mathematics. Part VIII, in its liveliness and subjectivity, illustrates one of the points made strongly in the preface: PCM is a companion, not an encyclopedia. Parts II and VI. Parts II and VI consist of historical material. The first six articles of Part II are historical surveys on broad topics: numbers, geometry, algebra, algorithms, rigor in analysis, and proof. The last article focuses on a shorter period, the "crisis in the foundations of mathematics" in the first third of the twentieth century. The ninety-six short articles of Part VI are each about a single mathematician, except for VI.18 on the Bernoullis and VI.96 on Bourbaki. The focus here is on contributions prior to 1950, and the delicate choice of which mathematicians to include seems impeccable. For example, the sixteen mathematicians included who were born before 1650 are Pythagoras, Euclid, Archimedes, Apollonius, Al-Khwarizmi, Fibonacci, Cardano, Bombelli, Viète, Stevin, Descartes, Fermat, Pascal, Newton, and Leibniz. Similarly, eight mathematicians are given special prominence by the inclusion of a portrait: Descartes, Newton, Leibniz, Euler, Gauss, Riemann, Poincaré, and Hilbert. Parts III and V. Part III consists of short articles on concepts. The emphasis on intuition and examples is clear everywhere. Terence Tao's article on compactness and compactification is illustrative of this style. It begins by carefully discussing how finite sets and infinite sets are different. It goes on to discuss, with reference to the unit interval [0,1], how some topological spaces behave very much like finite sets. It is these spaces that one would like to call compact. Only after all this preparation does the formal definition appear, that a space is compact exactly when all its open covers have finite subcovers.
In just a few paragraphs, the reader is given a rather refined appreciation for this definition and some of its various near-equivalents. Similarly, the part of the article on compactification goes rather far, but is gently guided by considering the real line and its two most familiar compactifications: the extended line \([-\infty,\infty]\) and the projective line \(\mathbb{R}\cup\{\infty\}\). Part V is similar to Part III except that the focus is shifted to theorems and problems. Most of the articles are again on topics that play an extremely important role in mathematics: the central limit theorem is central to our understanding of data; the uniformization theorem is central to our understanding of Riemann surfaces; the resolvability of singularities is central to our understanding of algebraic varieties. Given the all-star nature of the list, readers will be particularly enticed by the articles on unfamiliar topics. For professional mathematicians, the level is generally non-technical and welcoming. Parts IV and VII. Part IV, on branches of mathematics, is described in the preface as "the heart of the book." The branches are a well-chosen sampling, including a healthy dose of mathematical physics of various sorts. Probabilistic Models of Critical Phenomena by Gordon Slade is representative of Part IV. It is as gentle as possible on the reader but goes deeply into its topic. An early example in this article, easier than the main examples later, involves branching processes. Suppose individuals in a certain population have zero, one, or two children with respective probabilities \((1-p)^2\), \(2p(1 – p)\), and \(p^2\). The average number of children is \(2p\). Accordingly, one can expect that for \(p < .5\) a given individual's descendants will eventually die out, whereas for \(p > .5\) there is positive probability that the descendants will never die out.
In fact, for \(p\) slightly smaller than \(.5\), the expected number of total descendants is approximately \(.5(.5 - p)^{-\gamma}\) with \(\gamma=1\). For \(p\) slightly larger than \(.5\), the chance that an individual will have infinitely many descendants is approximately \(8(p - .5)^\beta\) with \(\beta=1\). The "critical exponents" \(\gamma=1\) and \(\beta=1\) are remarkably stable: one can modify the branching process in a great many ways and the final formulas still have the same exponents. The idea of critical exponents has an amazing universality. For example, one has analogous quantities for percolation of fluids through porous materials and for ferromagnetism. In dimension two, a recent result is that \((\gamma,\beta)=(43/18,5/36)\) for percolation and \((\gamma,\beta)=(7/4, 1/8)\) for ferromagnetism. In dimensions greater than two, there are predictions from experiment but rarely rigorous confirmation.
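The branching-process example above is easy to check by simulation. The sketch below is our own illustration (the value p = 0.4 is an assumption for the demo, not from the review): it estimates the expected total number of individuals in the subcritical regime and compares it with the closed form 1/(1 − m) for mean offspring m = 2p, i.e. .5/(.5 − p):

```python
import random

random.seed(0)

def total_progeny(p, cap=10**6):
    """Total number of individuals (including the ancestor) in one run of
    the branching process where each individual has 0, 1 or 2 children
    with probabilities (1-p)^2, 2p(1-p), p^2."""
    alive, total = 1, 1
    while alive and total < cap:
        children = 0
        for _ in range(alive):
            u = random.random()
            # P(2 children) = p^2, P(1 child) = 2p(1-p), else 0 children.
            if u < p * p:
                children += 2
            elif u < p * p + 2 * p * (1 - p):
                children += 1
        alive = children
        total += children
    return total

p = 0.4   # subcritical: mean offspring 2p = 0.8 < 1
runs = 50000
mean_total = sum(total_progeny(p) for _ in range(runs)) / runs
# Theory: expected total progeny = 1/(1 - 2p) = .5/(.5 - p) = 5 here.
print(mean_total)
```

The empirical mean lands close to the theoretical value 5, illustrating the γ = 1 divergence of the expected family size as p approaches .5 from below.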
PCM gives us very valuable support in trying to come up with our own answers. It is unprecedented in its signature combination of depth and accessibility. It takes us beyond our own experience in teaching and research, and lets us share in the experience of many experts. It gives us a balanced and broad overview of mathematics in one single volume. For readers of MAA Online, there is no better way to invest $75 than to buy the Princeton Companion to Mathematics. David Roberts is a professor of mathematics at the University of Minnesota, Morris.
Ancillary files (details):
python3_src/KW_scaldims.py
python3_src/LICENSE
python3_src/README
python3_src/TNR.py
python3_src/cdf_ed_scaldimer.py
python3_src/cdf_scaldimer.py
python3_src/custom_parser.py
python3_src/ed_scaldimer.py
python3_src/initialtensors.py
python3_src/modeldata.py
python3_src/pathfinder.py
python3_src/scaldim_plot.py
python3_src/scaldimer.py
python3_src/scon.py
python3_src/scon_sparseeig.py
python3_src/tensordispenser.py
python3_src/tensors/abeliantensor.py
python3_src/tensors/ndarray_svd.py
python3_src/tensors/symmetrytensors.py
python3_src/tensors/tensor.py
python3_src/tensors/tensor_test.py
python3_src/tensors/tensorcommon.py
python3_src/tensorstorer.py
python3_src/timer.py
python3_src/toolbox.py

Condensed Matter > Strongly Correlated Electrons

Title: Topological conformal defects with tensor networks

(Submitted on 11 Dec 2015 (v1), last revised 23 Sep 2016 (this version, v3))

Abstract: The critical 2d classical Ising model on the square lattice has two topological conformal defects: the $\mathbb{Z}_2$ symmetry defect $D_{\epsilon}$ and the Kramers-Wannier duality defect $D_{\sigma}$. These two defects implement antiperiodic boundary conditions and a more exotic form of twisted boundary conditions, respectively. On the torus, the partition function $Z_{D}$ of the critical Ising model in the presence of a topological conformal defect $D$ is expressed in terms of the scaling dimensions $\Delta_{\alpha}$ and conformal spins $s_{\alpha}$ of a distinct set of primary fields (and their descendants, or conformal towers) of the Ising CFT. This characteristic conformal data $\{\Delta_{\alpha}, s_{\alpha}\}_{D}$ can be extracted from the eigenvalue spectrum of a transfer matrix $M_{D}$ for the partition function $Z_D$.
In this paper we investigate the use of tensor network techniques to both represent and coarse-grain the partition functions $Z_{D_\epsilon}$ and $Z_{D_\sigma}$ of the critical Ising model with either a symmetry defect $D_{\epsilon}$ or a duality defect $D_{\sigma}$. We also explain how to coarse-grain the corresponding transfer matrices $M_{D_\epsilon}$ and $M_{D_\sigma}$, from which we can extract accurate numerical estimates of $\{\Delta_{\alpha}, s_{\alpha}\}_{D_{\epsilon}}$ and $\{\Delta_{\alpha}, s_{\alpha}\}_{D_{\sigma}}$. Two key new ingredients of our approach are (i) coarse-graining of the defect $D$, which applies to any (i.e. not just topological) conformal defect and yields a set of associated scaling dimensions $\Delta_{\alpha}$, and (ii) construction and coarse-graining of a generalized translation operator using a local unitary transformation that moves the defect, which only exists for topological conformal defects and yields the corresponding conformal spins $s_{\alpha}$.

Submission history
From: Markus Hauru
[v1] Fri, 11 Dec 2015 23:01:19 GMT (1352kb,D)
[v2] Mon, 16 May 2016 22:16:02 GMT (1920kb,AD)
[v3] Fri, 23 Sep 2016 19:16:05 GMT (1919kb,AD)
The Mean Value Theorem

We are now going to look at a very important theorem in Calculus known as the Mean Value Theorem.

Theorem 1 (The Mean Value Theorem): If $f$ is a function that satisfies the following conditions: a) $f$ is a continuous function on the closed interval $[a, b]$; b) $f$ is differentiable on the open interval $(a, b)$; then there must be a value $c \in (a, b)$ where $f'(c)(b - a) = f(b) - f(a)$, that is, there must be a point $(c, f(c))$ whose tangent line has the same slope as the line connecting $(a, f(a))$ and $(b, f(b))$.

From the diagram it should be clear that the slope of the green line is equal to the slope of the pink line. Thus it follows that:

(1) \begin{align} f'(c) = \frac{f(b) - f(a)}{b - a} \\ f'(c)(b - a) = f(b) - f(a) \end{align}

Lemma 1: If $f'(x) = 0$ for all $x$ in an interval $[a, b]$, then $f$ is constant on $[a, b]$.

Proof: First let $x_1$ and $x_2$ be any two values in the interval $[a, b]$ such that $x_1 < x_2$. Since $f$ is continuous on $[a, b]$ we have that $f$ is continuous on $[x_1, x_2]$. Similarly, since $f$ is differentiable on $(a, b)$ we have that $f$ is differentiable on $(x_1, x_2)$. So by the Mean Value Theorem there is a number $c$ in $(x_1, x_2)$ such that: \begin{equation} f(x_2) - f(x_1) = f'(c)(x_2 - x_1) \end{equation} But by assumption $f'(x) = 0$ on $(a, b)$. So $f'(c) = 0$. Therefore: \begin{align} f(x_2) - f(x_1) & = 0(x_2 - x_1) \\ f(x_2) - f(x_1) & = 0 \\ f(x_2) & = f(x_1) \end{align} Thus $f$ is constant on the interval $[a, b]$. $\blacksquare$

Lemma 2: If $f'(x) > 0$ for all $x$ in an interval $[a, b]$, then $f$ is increasing on $(a, b)$.

Proof: Let $x_1$, $x_2$ be any two values in the interval $[a, b]$ where $x_1 < x_2$. Since $f$ is continuous on $[a, b]$ we have that $f$ is continuous on $[x_1, x_2]$. Similarly, since $f$ is differentiable on $(a, b)$ we have that $f$ is differentiable on $(x_1, x_2)$.
So by the Mean Value Theorem there is a number $c$ in $(x_1, x_2)$ such that: \begin{equation} f(x_2) - f(x_1) = f'(c)(x_2 - x_1) \end{equation} But by assumption $f'(x) > 0$ on $(a, b)$. So $f'(c) > 0$. Also, since $x_1 < x_2$ we have that $x_2 - x_1 > 0$. Therefore: \begin{align} f(x_2) - f(x_1) & > 0 \\ f(x_1) &< f(x_2) \end{align} Thus the function $f$ must be increasing on the interval $(a, b)$. $\blacksquare$

Lemma 3: If $f'(x) < 0$ for all $x$ in an interval $[a, b]$, then $f$ is decreasing on $(a, b)$.

Proof: Let $x_1$, $x_2$ be values in the closed interval $[a, b]$ such that $x_1 < x_2$. Since $f$ is continuous on $[a, b]$ we have that $f$ is continuous on $[x_1, x_2]$. Similarly, since $f$ is differentiable on $(a, b)$ we have that $f$ is differentiable on $(x_1, x_2)$. So by the Mean Value Theorem there is a number $c$ in $(x_1, x_2)$ such that: \begin{equation} f(x_2) - f(x_1) = f'(c)(x_2 - x_1) \end{equation} But by assumption $f'(x) < 0$ on $(a, b)$. So $f'(c) < 0$. Also, since $x_1 < x_2$ we have that $x_2 - x_1 > 0$. Therefore: \begin{align} f(x_2) - f(x_1) & < 0 \\ f(x_1) & > f(x_2) \end{align} Thus the function $f$ is decreasing on the interval $(a, b)$. $\blacksquare$
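The Mean Value Theorem can also be illustrated numerically (our own example, not from the page): for f(x) = x³ on [0, 2], the secant slope is (f(2) − f(0))/2 = 4, and since f′(c) = 3c² is increasing on [0, 2] we can locate the guaranteed point c by bisection on f′(c) − 4:

```python
# Numerically locate the point c guaranteed by the Mean Value Theorem
# for f(x) = x^3 on [a, b] = [0, 2].

def f(x):
    return x ** 3

def fprime(x):
    return 3 * x ** 2

a, b = 0.0, 2.0
slope = (f(b) - f(a)) / (b - a)   # secant slope, here 4

# Bisection works because fprime is increasing on [0, 2].
lo, hi = a, b
for _ in range(60):
    mid = (lo + hi) / 2
    if fprime(mid) < slope:
        lo = mid
    else:
        hi = mid

c = (lo + hi) / 2
print(c)   # ~1.1547, i.e. 2/sqrt(3), where f'(c) equals the secant slope
```

Note that c = 2/√3 lies strictly inside (0, 2), exactly as Theorem 1 guarantees.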
I know asking for proof-verification on MO is a tricky thing. On one hand, interesting research-level proofs are usually the subject of articles and cannot be discussed here in detail. On the other hand, most simple proofs which can be written on a forum are not "high-level" enough for MO, and other places are more appropriate. After all, MO does not have a "proof-verification" tag, like math.stackexchange. Anyway, for my personal taste at least, the following is research level. So let's see where this goes: I want to prove the following statement: Consider everything over the field $\mathbb{Q}$. For a fixed, given $n\geq 2$, let $\mathcal{E}_{n}$ be the $E_{n}$-suboperad of the Barratt-Eccles operad $\mathcal{E}$, let $\mathcal{E}_{n}^{i}$ be its Koszul dual cooperad in the sense described in the paper "Koszul duality of En-operads" by Benoit Fresse, and let $e_{n}$ be the operad of $(n-1)$-Gerstenhaber algebras. Then there exists a solution to the Maurer-Cartan equation in the convolution dg Lie algebra $\Pi_{k\in\mathbb{N}}Hom_{\Sigma_{k}}(\mathcal{E}_{n}^{i}(k),\Omega e_{n}^{i}(k))$ where $\Omega e_{n}^{i}$ is the minimal model of $e_{n}$. Proof: Since $\mathcal{E}_{n}$ is an $E_{n}$-operad, by the definition of $E_{n}$-operads there is a zig-zag of quasi-isomorphisms of dg-operads $ \mathcal{E}_{n}\overset{\simeq}{\longleftarrow}\bullet\overset{\simeq}{\longrightarrow}\cdots\overset{\simeq}{\longleftarrow}\bullet\overset{\simeq}{\longrightarrow}e_{n} $ where we consider $e_{n}$ as a differential graded operad with trivial differential in each arity.
Now since in both cases ($\mathcal{E}_{n}$ as well as $e_{n}$) the appropriate Koszul dual cooperads $\mathcal{E}_{n}^{i}$ and $e_{n}^{i}$ are the linear duals "up to tensoring with appropriate shifting cooperads", this implies the existence of the following diagram of dg-cooperad quasi-isomorphisms: $ \mathcal{E}_{n}^{i}\overset{\simeq}{\longrightarrow}\bullet\overset{\simeq}{\longleftarrow}\cdots\overset{\simeq}{\longrightarrow}\bullet\overset{\simeq}{\longleftarrow}e_{n}^{i} $ since the linear dual of a quasi-isomorphism is a quasi-isomorphism. Now if we pass from the category of differential graded cooperads with morphisms of differential graded cooperads to the category of dg cooperads with infinity morphisms of dg cooperads (such an infinity morphism $F_{\infty}:\mathcal{C}_{1}\rightsquigarrow\mathcal{C}_{2}$ is defined as, or is equivalent to, a morphism of dg operads $\Omega F_{\infty}:\Omega\mathcal{C}_{1}\to\Omega\mathcal{C}_{2}$), then any quasi-isomorphism has an actual inverse in terms of these infinity morphisms (to emphasize this different kind of map, I write $\rightsquigarrow$ for them). Therefore in this other category there exists the following diagram of dg-cooperad infinity-isomorphisms $ \mathcal{E}_{n}^{i}\overset{\simeq}{\rightsquigarrow}\bullet\overset{\simeq}{\rightsquigarrow}\cdots\overset{\simeq}{\rightsquigarrow}\bullet\overset{\simeq}{\rightsquigarrow}e_{n}^{i} $ and by composition we get a single infinity isomorphism of dg-cooperads $\mathcal{E}_{n}^{i}\rightsquigarrow e_{n}^{i}$. By definition of these infinity morphisms, this is equivalent to the existence of an ordinary isomorphism of dg-operads $ \Omega\mathcal{E}_{n}^{i}\to\Omega e_{n}^{i} $ which in turn is equivalent to the existence of a solution to the Maurer-Cartan equation in $\Pi_{k\in\mathbb{N}}Hom_{\Sigma_{k}}(\mathcal{E}_{n}^{i}(k),\Omega e_{n}^{i}(k))$. q.e.d.
Second question: The proof relies on the transition from ordinary morphisms of dg-cooperads to the $\infty$-morphisms of dg-cooperads. Is this the transition to the derived category of dg-cooperads?
On the monotonicity of the period function of a quadratic system
1. Department of Mathematics, Sun Yat-sen University, Guangzhou, 510275
$ \dot x=- y + x y,\quad \dot y=x + 2 y^2-c x^2, \quad -\infty < c < +\infty.$
We show that this system has two isochronous centers for $c=1/2$, and its period function has only one critical point for $c\in(7/5, 2)$. For all other cases, the period function is monotone. This improves the results in [1].
Mathematics Subject Classification: 34C07, 34C08, 37G1.
Citation: Yulin Zhao. On the monotonicity of the period function of a quadratic system. Discrete & Continuous Dynamical Systems - A, 2005, 13 (3) : 795-810. doi: 10.3934/dcds.2005.13.795
HESS upper limits for Kepler's supernova remnant
Date: 2008
Authors: Büsching, I.; De Jager, O.C.; Holleran, M.; Raubenheimer, B.C.; Venter, C.; H.E.S.S. Collaboration
Abstract
Aims. Observations of Kepler's supernova remnant (G4.5+6.8) with the HESS telescope array in 2004 and 2005, with a total live time of 13 h, are presented. Methods. Stereoscopic imaging of Cherenkov radiation from extensive air showers is used to reconstruct the energy and direction of the incident gamma rays. Results. No evidence for a very high energy (VHE: >100 GeV) gamma-ray signal from the direction of the remnant is found. An upper limit (99% confidence level) on the energy flux in the range $230~{\rm GeV}{-}12.8~{\rm TeV}$ of $8.6 \times 10^{-13}~{\rm erg}~{\rm cm}^{-2}~{\rm s}^{-1}$ is obtained. Conclusions. In the context of an existing theoretical model for the remnant, the lack of a detectable gamma-ray flux implies a distance of at least $6.4~{\rm kpc}$. A corresponding upper limit for the density of the ambient matter of $0.7~{\rm cm}^{-3}$ is derived. With this distance limit, and assuming a spectral index $\Gamma = 2$, the total energy in accelerated protons is limited to $E_{\rm p} < 8.6 \times 10^{49}~{\rm erg}$. In the synchrotron/inverse Compton framework, extrapolating the power law measured by RXTE between 10 and $20~{\rm keV}$ down in energy, the predicted gamma-ray flux from inverse Compton scattering is below the measured upper limit for magnetic field values greater than $52~\mu {\rm G}$.
URI: http://hdl.handle.net/10394/2747
https://doi.org/10.1051/0004-6361:200809401
http://www.aanda.org/articles/aa/pdf/2008/34/aa09401-08.pdf
Performing Topology Optimization with the Density Method Engineers are given significant freedom in their pursuit of lightweight structural components in airplanes and space applications, so it makes sense to use methods that can exploit this freedom, making topology optimization a popular choice in the early design phase. This method often requires regularization and special interpolation functions to get meaningful designs, which can be a nuisance to both new and experienced simulation users. To simplify the solution of topology optimization problems, the COMSOL® software contains a density topology feature. About the Density Method for Topology Optimization As the name suggests, topology optimization is a method that has the ability to come up with new and better topologies for an engineering structure given an objective function and set of constraints. The method comes up with these new topologies by introducing a set of design variables that describe the presence, or absence, of material within the design space. These variables are defined either within every element of the mesh or on every node point of the mesh. Changing these design variables thus becomes analogous to changing the topology. This means that holes in the structure can appear, disappear, and merge as well as that boundaries can take on arbitrary shapes. In addition, the control parameters are somewhat automatically defined and tied to the discretization. As of COMSOL Multiphysics® software version 5.4, the add-on Optimization Module includes a density topology feature to improve the usability of topology optimization. The feature is designed to be used as a density method (Ref. 3), meaning that the control parameters change a material parameter through an interpolation function. Interpolation functions for solid and fluid mechanics are built into the feature and used in example models throughout the Application Library in COMSOL Multiphysics. 
A bracket geometry is topology optimized, leaving only 50% of the material, which contributes the most to the stiffness. The printed bracket geometry.

The density method involves the definition of a control variable field, \theta_c, which is bounded between 0 and 1. In solid mechanics, \theta_c=1 corresponds to the material from which the structure is to be built, while \theta_c=0 corresponds to a very soft material. By default, the void Young’s modulus is 0.1% of the solid Young’s modulus. In fluid mechanics, convention dictates that \theta_c=1 corresponds to fluid, while \theta_c=0 is a (slightly) permeable material with an inverse permeability factor, \alpha; i.e., a damping term is added to the Navier-Stokes equation. The damping term is 0 in fluid domains, while a large value is used in solid domains. These different values give a good approximation of the no-slip boundary condition on the interface between the domains.

An Introduction to the Density Model Feature

The Density Model feature supports regularization via a Helmholtz equation (Ref. 1). This introduces a minimum length scale using the filter radius, R_\mathrm{min}:

\theta_f = R_\mathrm{min}^2\mathbf{\nabla}^2\theta_f + \theta_c

Here, \theta_c is the raw control variable, which is modified by the optimizer, and \theta_f is the filtered variable. The mesh edge size is the default value for the filter radius. While this works well in terms of regularizing the optimization problem, it is important to set a fixed length (larger than the mesh edge size) to get mesh-independent results.

Top: The equation for the Helmholtz filter can be solved analytically for a 1D Heaviside function. Bottom: This plot is taken from the MBB beam optimization model. It shows the raw control variables to the left and the filtered version to the right.

The Helmholtz filter gives rise to significant grayscale, which does not have a clear physical interpretation. The grayscale can be reduced by applying a smooth step function in what is referred to as projection in topology optimization.
Projection reduces grayscale, but it also makes it more difficult for the optimizer to converge. The density topology feature supports projection based on the hyperbolic tangent function, and the amount of projection can be controlled with the projection steepness, \beta:

\theta = \frac{\tanh(\beta(\theta_f-\theta_{\beta}))+\tanh(\beta\theta_{\beta})}{\tanh(\beta(1-\theta_{\beta}))+\tanh(\beta\theta_{\beta})}

Here, \theta_{\beta} is the projection point. Plot showing the filtered field to the left and the projected field to the right.

Projection makes it possible to avoid grayscale, but grayscale can still appear if the optimization problem favors it. If the same interpolation function is used for the mass and the stiffness, grayscale is optimal in volume-constrained minimum compliance problems. It is thus common to use interpolation functions that cause intermediate values to be associated with little stiffness relative to their cost (compared to the fully solid value). You can think of this as a penalization of intermediate values for the material volume factor, and the Density Model interface (shown below) supports two such interpolation schemes for solid mechanics: solid isotropic material with penalization (SIMP) and rational approximation of material properties (RAMP) interpolation. Darcy interpolation is provided for fluid mechanics. The interpolated variable is called the penalized material volume factor, \theta_p, and is used for interpolating the material parameters. For SIMP interpolation, the p_\textsc{simp} exponent can be increased to reduce the stiffness of intermediate values, so that grayscale becomes less favorable:

\begin{align}
\theta_p &= \theta_\mathrm{min}+(1-\theta_\mathrm{min})\theta^{p_\textsc{simp}}\\
E_p &= E\theta_p
\end{align}

Here, E is the Young’s modulus of the solid material and E_p is the penalized Young’s modulus to be used throughout all optimized domains. The Density Model feature is available under Topology Optimization in Component > Definitions.
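The projection and penalization formulas discussed above are small enough to sketch as plain functions (illustrative code, not the COMSOL implementation; the q_darcy default below is an arbitrary choice of ours):

```python
import numpy as np

def tanh_projection(theta_f, beta=8.0, theta_beta=0.5):
    """Hyperbolic-tangent projection of the filtered field (Ref. 2).
    Maps 0 -> 0, theta_beta -> 0.5, and 1 -> 1; steeper for larger beta."""
    num = np.tanh(beta * (theta_f - theta_beta)) + np.tanh(beta * theta_beta)
    den = np.tanh(beta * (1.0 - theta_beta)) + np.tanh(beta * theta_beta)
    return num / den

def simp(theta, p_simp=3.0, theta_min=1e-3):
    """SIMP: penalized material volume factor for solid mechanics."""
    return theta_min + (1.0 - theta_min) * theta ** p_simp

def darcy(theta, q_darcy=1e-2):
    """Darcy interpolation for fluid mechanics: 1 in solid (theta=0),
    0 in fluid (theta=1), per the summary table."""
    return q_darcy * (1.0 - theta) / (q_darcy + theta)
```

Note how simp(0.5) is far below 0.5: intermediate densities carry little stiffness relative to their material cost, which is precisely what discourages grayscale.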
The mesh edge length is taken as the default filter radius and it works well, but it has to be replaced with a fixed value in order to produce mesh-independent results. The penalized Young’s modulus can be defined as a domain variable, or (as in the case of the bracket model) it can be defined directly in the materials. Topology optimization with the density method involves varying the Young’s modulus spatially. In this case, it is achieved by going to the material properties and multiplying the solid Young’s modulus with the penalized material volume factor, dtopo1.theta_p.

In summary, the density topology feature adds four variables. The filtered material volume factor is defined implicitly using a dependent variable.

Symbol | Description | Equation
\theta_c | Control material volume factor | 0\leq\theta_c\leq1
\theta_f | Filtered material volume factor | \theta_f = R_\mathrm{min}^2\mathbf{\nabla}^2\theta_f + \theta_c
\theta | Material volume factor | \theta = \frac{\tanh(\beta(\theta_f-\theta_{\beta}))+\tanh(\beta\theta_{\beta})}{\tanh(\beta(1-\theta_{\beta}))+\tanh(\beta\theta_{\beta})}
\theta_p | Penalized material volume factor | \theta_p = \theta_\mathrm{min}+(1-\theta_\mathrm{min})\theta^{p_\textsc{simp}} or \theta_p = \frac{q_\mathrm{Darcy}(1-\theta)}{q_\mathrm{Darcy}+\theta}

When the filtering is disabled, the filtered variable becomes undefined and the projection instead uses the control material volume factor directly. If the projection is disabled, the material volume factor still exists, but it becomes identical to the projection input.

Applying Continuation to Avoid Local Minima

When the topology is not too complicated, the default values of the density topology feature work well. This is the case for the MBB beam optimization and topology optimized hook models. If the optimal design is more complicated (such as for the bracket example shown at the top of this post), there might be many local minima.
To avoid these minima, you can use continuation in the SIMP exponent and the projection slope. This can be achieved by modifying the initial value expression in the density topology feature and adding a Parametric Sweep feature, as shown below. As a result, the solver ramps over the specified parameters, using the optimum from the previous case as the initial value for the next optimization step. That is, it starts with a small SIMP exponent and projection slope and then continues to higher values. It is possible to apply continuation by combining a parametric sweep with a study reference. See the Bracket — Topology Optimization tutorial model for details. Objectives and Constraints in Topology Optimization If the geometry is optimized for a single load case (as shown below to the left), the resulting design will be optimal with respect to that load case. This can seem obvious, but often designers make assumptions about symmetries and the design topology. Unless these assumptions are formalized as constraints, they will not be respected. Therefore, the design shown to the right below uses eight load cases (two load groups times four constraint groups). Left: The bracket geometry is optimized for a single load case, resulting in an asymmetric design with two loosely connected halves. Right: The bracket geometry with eight load cases. Designers often have several objectives that need to be weighted. To make an informed decision about these objectives, a designer can trace the Pareto optimal front using several optimizations with different weights. The Pareto optimal front for the bracket geometry can be traced by varying the weight in a parametric sweep. Animation of the topology optimized bracket. (Download the glTF™ file from the Application Gallery in GLB-file format to rotate the geometry yourself.) 
Exporting and Importing Topology Optimization Results It is possible to analyze the result of a topology optimized design with respect to stress concentration and buckling without remeshing. However, if you want to be completely sure that the void phase does not play a role, you can eliminate it by exporting and importing the resulting design, as shown below. The details of this procedure are discussed in a previous blog post. The contour (left) for the topology optimized MBB beam design is exported and imported as an interpolation curve (right). Next Steps To learn more about the built-in tools and features for solving optimization problems, check out the Optimization Module product page by clicking the button below. Further Resources Try using the density feature for topology optimization with these example models: Read more about topology optimization on the COMSOL Blog: References B.S. Lazarov and O. Sigmund, “Filters in topology optimization based on Helmholtz‐type differential equations,” International Journal for Numerical Methods in Engineering, vol. 86, no. 6, pp. 765–781, 2011. F. Wang, B.S. Lazarov, and O. Sigmund, “On projection methods, convergence and robust formulations in topology optimization,” Structural and Multidisciplinary Optimization, vol. 43, pp. 767–784, 2011. M.P. Bendsøe, “Optimal shape design as a material distribution problem,” Structural Optimization, vol. 1, pp. 193–202, 1989. glTF and the glTF logo are trademarks of the Khronos Group Inc.
We claim that the statement is false. As a counterexample, consider the matrices\[A=\begin{bmatrix}1 & 0\\0& 0\end{bmatrix} \text{ and } B=\begin{bmatrix}0 & 0\\0& 1\end{bmatrix}.\]Then\[A+B=\begin{bmatrix}1 & 0\\0& 1\end{bmatrix}\]and we have\[\det(A+B)=\begin{vmatrix}1 & 0\\0& 1\end{vmatrix}=1.\] On the other hand, the determinants of $A$ and $B$ are\[\det(A)=0 \text{ and } \det(B)=0,\]and hence\[\det(A)+\det(B)=0\neq 1=\det(A+B).\] Therefore, the statement is false and in general we have\[\det(A+B)\neq \det(A)+\det(B).\]

Remark. When we computed the determinants of the $2\times 2$ matrices, we used the formula\[\begin{vmatrix}a & b\\c& d\end{vmatrix}=ad-bc.\] This problem showed that the determinant does not preserve addition. However, the determinant is multiplicative. In general, the following is true:\[\det(AB)=\det(A)\det(B).\]
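The counterexample, and the multiplicativity remark, can be checked numerically with NumPy:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 0.0]])
B = np.array([[0.0, 0.0],
              [0.0, 1.0]])

# det(A + B) = det(I) = 1, while det(A) + det(B) = 0 + 0 = 0
lhs = np.linalg.det(A + B)
rhs = np.linalg.det(A) + np.linalg.det(B)

# The determinant IS multiplicative: det(MN) = det(M) det(N)
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
N = rng.standard_normal((3, 3))
```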
Lower central quotients can be extracted from group homology via a spectral sequence built up from a free simplicial resolution of a group. So, if your complex variety is aspherical, you probably know those Hodge structures, because everything is as natural and functorial as it can be. By a classical result of Magnus, for free groups we have an isomorphism between the free Lie ring on the abelianisation, $\mathcal LF_{ab} = \mathcal LH_1(F, \mathbb Z)$, and the Magnus Lie ring $LG := \bigoplus L_nF = \bigoplus \gamma_i(F)/\gamma_{i+1}(F)$. In general, this morphism is just epi.

Now take a free simplicial resolution $F_{\bullet} \twoheadrightarrow G$. We can look at the exact couple defined by the exact sequences $L_nF \to F/\gamma_{n+1}F \to F/\gamma_{n}F$ and associate with it a graded Lie ring spectral sequence converging to $L_n(\pi_0 (F)) = LG$. The first sheet is given by $E^1_{p, q} = \pi_q (L_p F)$, and the $s$-th differentials have degree $(s, -1)$. Actually, $E^1$ differs from the free graded Lie ring on group homology only by torsion (we can check this by introducing an analogous spectral sequence for the augmentation powers filtration on $\mathbb ZF$ and looking at the morphism between them induced by $G \hookrightarrow \mathbb Z G$; rationally it is a split injection by PBW, and it is known that $LG \otimes \mathbb{Q} \cong \Delta_{\mathbb Q}G$). The zeroth row is always the free Lie ring on $G_{ab}$, the first column is the integral homology of $G$ shifted by 1, and the stripes below $k$ depend only on $H_{\leq k + 1}$. Also, we instantly prove Stallings' result about maps inducing isomorphisms on the quotients by $\gamma_i$ (for $G \xrightarrow{f} G'$, $G$ is para-$G'$ iff $H_1(f)$ is an isomorphism and $H_2(f)$ is an epimorphism) just by checking the degrees of the differentials. The expression for $\gamma_1/\gamma_2$ comes from the boundary morphism in this spectral sequence. (I don't remember a reference for that stuff, but it's not hard to convert those speculations into an actual proof. See also Cochran & Harvey, http://arxiv.org/pdf/math/0407203.pdf, and Ellis, "Magnus-Witt type isomorphism for non-free groups".)
I have a basic confusion concerning the mean-field theory of quantum phase transitions in Fermi systems. Consider as an example the BCS theory of superconductivity in a Dirac fermion system, considered by Sachdev in Sec. 17.1 of his QPT book. The variational ground state energy is given by \begin{align} E_{BCS}=\frac{J_1}{2}\left(|\Delta_x|^2+|\Delta_y|^2\right)-\int\frac{d^2k}{(2\pi)^2}[E_k-\varepsilon_k], \end{align} obtained by integrating out the fermions, where $\Delta_x$ and $\Delta_y$ are $d$-wave order parameters. The single-particle energy $E_k$ is \begin{align} E_k=\sqrt{\varepsilon_k^2+|J_1(\Delta_x\cos k_x+\Delta_y\cos k_y)|^2} \end{align} which is positive and always greater than $\varepsilon_k$. Suppose I would like to derive a Ginzburg-Landau energy by expanding the last term (the fermion determinant) in powers of the order parameters. If I expand the integrand and perform the momentum integrals, I will obtain \begin{align} E_{BCS}=r(|\Delta_x|^2+|\Delta_y|^2)+u(|\Delta_x|^4+|\Delta_y|^4)+\text{mixed terms}. \end{align} Because the term coming from the fermion determinant comes with a minus sign, it is clear that there will be a negative contribution to $r$ that offsets the positive "bare mass" $J_1$, thus giving the possibility of a transition as $J_1$ is varied. Now, imagine we would like the transition to be continuous. This requires $u>0$, and $u$ comes entirely from the fermion determinant. It is not clear at all to me how this is possible, because $-\int\frac{d^2k}{(2\pi)^2}[E_k-\varepsilon_k]$ is always negative. In particular, if the order parameter is very large the quartic terms will dominate over the quadratic terms, and it seems that we generically get $u<0$.
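As a sanity check on the coefficients (not a resolution of the question), one can expand the integrand symbolically for a constant gap $\Delta$ at fixed $\varepsilon > 0$. The quadratic coefficient of $-[E-\varepsilon]$ is negative while the quartic coefficient is positive, so a positive $u$ is not immediately excluded by the overall sign of the integrand; the sign of $u$ is a statement about one Taylor coefficient, not about the sum.

```python
import sympy as sp

eps, Delta = sp.symbols('epsilon Delta', positive=True)
E = sp.sqrt(eps**2 + Delta**2)

# Expand -(E_k - eps_k) in powers of the gap Delta at fixed eps > 0
expansion = sp.series(-(E - eps), Delta, 0, 6).removeO()
# The quadratic term -Delta**2/(2*eps) is negative (it pulls r down),
# but the quartic term +Delta**4/(8*eps**3) is positive.
```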
On existence, uniform decay rates and blow up for solutions of systems of nonlinear wave equations with damping and source terms
1. Department of Mathematics and Statistics, Federal University of Campina Grande, 58109-970, Campina Grande, PB, Brazil
2. Department of Mathematics, State University of Maringá, 87020-900, Maringá, PR, Brazil
3. Department of Mathematics, State University of Maringá, Maringá, PR, 87020-900, Brazil
4. Department of Mathematics, University of Nebraska-Lincoln, Lincoln, NE, 68588-0130, United States
5. Department of Mathematics, University of Nebraska-Lincoln, Lincoln, NE 68588
$u_{tt} - \Delta u + |u_t|^{m-1}u_t = F_u(u,v) \text{ in } \Omega\times(0,\infty)$,
$v_{tt} - \Delta v + |v_t|^{r-1}v_t = F_v(u,v) \text{ in } \Omega\times(0,\infty)$,
where $\Omega$ is a bounded domain in $\mathbb{R}^n$, $n=1,2,3$, with a smooth boundary $\partial\Omega=\Gamma$ and $F$ is a $C^1$ function given by $ F(u,v)=\alpha|u+v|^{p+1}+ 2\beta |uv|^{\frac{p+1}{2}}. $ Under some conditions on the parameters in the system and with careful analysis involving the Nehari manifold, we obtain several results on the global existence, uniform decay rates, and blow up of solutions in finite time when the initial energy is nonnegative.
Mathematics Subject Classification: Primary: 35L55, 35L05; Secondary: 35B40, 74H3.
Citation: Claudianor O. Alves, M. M. Cavalcanti, Valeria N. Domingos Cavalcanti, Mohammad A. Rammaha, Daniel Toundykov. On existence, uniform decay rates and blow up for solutions of systems of nonlinear wave equations with damping and source terms. Discrete & Continuous Dynamical Systems - S, 2009, 2 (3) : 583-608. doi: 10.3934/dcdss.2009.2.583
Step Functions

We will soon look at Riemann-Stieltjes integrals where the integrator $\alpha$ is a step function; however, we will first need to formally define what exactly a step function is.

Definition: A Step Function $\alpha$ on the interval $[a, b]$ is a piecewise constant function containing finitely many pieces, i.e., there exists a partition $P = \{a = x_0, x_1, ..., x_n = b \} \in \mathscr{P}[a, b]$ such that $\alpha (x)$ is constant for all $x \in (x_{k-1}, x_k)$ for each $k \in \{1, 2, ..., n \}$. The Jump at $x_k$ for $k \in \{1, 2, ..., n-1 \}$ is defined to be $\alpha(x_k^+) - \alpha(x_k^-)$. For $k = 0$ the jump at $x_0$ is defined to be $\alpha(x_0^+) - \alpha(x_0)$, and for $k = n$ the jump at $x_n$ is defined to be $\alpha (x_n) - \alpha(x_n^-)$.

For example, consider the function $\alpha$ defined on the interval $[0, 3]$ by:(1) Then $\alpha$ is indeed a step function because $\alpha (x)$ is constant on the intervals $(0, 1)$, $\left ( 1, \frac{3}{2} \right )$, $\left ( \frac{3}{2}, 2 \right )$, and $(2, 3)$ corresponding to the partition $P = \left \{ 0, 1, \frac{3}{2}, 2, 3 \right \} \in \mathscr{P}[0, 3]$. The graph of $\alpha$ is given below:

Notice that the points of discontinuity of step functions are the "joining" points of these subintervals. In the example above, the locations of possible discontinuities are $x_0 = 0$, $x_1 = 1$, $x_2 = \frac{3}{2}$, $x_3 = 2$, and $x_4 = 3$. It is important to note that, given an arbitrary partition $P = \{ a = x_0, x_1, ..., x_n = b \} \in \mathscr{P} [a, b]$, if $\alpha$ is a step function that is constant on each open subinterval $(x_{k-1}, x_k)$ for each $k \in \{1, 2, ..., n \}$, then by the definition of a step function $\alpha$ need not be left or right continuous at each of the points $x_0, x_1, x_2, ..., x_n$.
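A step function in the sense of this definition is easy to model computationally. Here is a small sketch using the partition from the example (the particular constant values below are hypothetical, since the definition (1) of $\alpha$ is not reproduced here):

```python
import bisect

def make_step(partition, values):
    """Step function alpha on [a, b]: values[k] on the open subinterval
    (partition[k], partition[k+1]).  At the partition points themselves
    we (arbitrarily) return the value of the subinterval to the right."""
    def alpha(x):
        k = bisect.bisect_right(partition, x) - 1
        return values[min(max(k, 0), len(values) - 1)]
    return alpha

# Partition of [0, 3] from the example; constant values are made up
partition = [0.0, 1.0, 1.5, 2.0, 3.0]
values = [1.0, 2.0, 0.0, 3.0]
alpha = make_step(partition, values)

# Interior jumps alpha(x_k^+) - alpha(x_k^-) are just differences of
# the neighboring constants
interior_jumps = [values[k] - values[k - 1] for k in range(1, len(values))]
```

Note that the choice of alpha's value at the partition points is exactly the freedom the definition leaves open: only the values on the open subintervals are constrained.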
Chamley-Judd revisited Matthew Martin 9/17/2014 04:52:00 PM

If 1+1=3, then 2=1. This is a valid theorem. We can prove it: we know that 2=3-1, and it is postulated in the theorem that 3=1+1, therefore 2=1+1-1, which implies 2=1. Neither the postulate nor the conclusion is factually correct, but the theorem is nevertheless correct. I mention this example because it turns out that one of the most important theorems in tax theory makes exactly such an error. I'm referring of course to the famous Chamley-Judd result, which is usually described as saying that we can't redistribute capital income to workers--that the optimal tax rate on capital is zero. But that's not what the theorem actually says. Take for example Judd (1985): in the rather extreme setting where capitalists are unable to work and workers are unable to save, Judd's theorem says

Theorem 2. If the redistributive capital taxation program maximizing a Paretian social welfare function converges [to a steady state]...then the optimal capital income tax vanishes asymptotically. Specifically, there should be no redistribution in the limit and any government consumption should be financed by lump-sum taxation of workers.

Ok, there's a lot of verbiage in there, but the point is that Judd's theorem postulates that at the optimal tax rate all quantities and multipliers converge to fixed steady-state values, and from this concludes that the optimal capital tax rate is also constant at zero. As a theorem, "if a steady state exists, then the optimal rate is zero" is correct. But a recent paper by Ludwig Straub and Ivan Werning showed that Judd's postulate "if a steady state exists" turns out to be about as wrong as 1+1=3. As a result, so is Judd's conclusion. In fact, at the optimal tax rate the steady state which Judd assumed does not necessarily exist.
To illustrate the conditions when it doesn't exist, suppose that capitalists have utility functions of the form [$$]U=\sum_{t=0}^\infty\beta^t\frac{C_t^{1-\sigma}}{1-\sigma}[$$] (which is the same as in my DSGE calculator) where [$]C_t[$] is the capitalist's consumption in period [$]t[$], and [$]\beta,\sigma[$] are just constants. As Straub and Werning showed, if [$]\sigma\gt 1[$] and we want to make workers as well off as absolutely possible, then the optimal tax rate doesn't converge to zero as Judd claimed, but actually diverges all the way to 100 percent in the long run!1 It turns out that at the actual optimal tax rate, the equilibrium doesn't converge to a steady state but actually diverges, so that capital stocks and consumption plummet towards zero over time. This result illustrates exactly how extreme a setting the Judd model where workers can't save really is--so extreme, that workers are actually better off suffering immiseration than the pittance they'd earn under a zero-tax regime. At least with immiseration, workers will get to consume the capital stock first.

Judd is still wrong in the special case where [$]\sigma=1,[$] which corresponds to logarithmic preferences.2 Even though in this case the quantities do converge to a steady state, the optimal capital tax is still positive in the steady state because it turns out that the multipliers diverge. If you're unfamiliar with the math, 'multipliers' are a weird relic of mathematical optimization techniques that do not represent real-world things--they can be interpreted as the theoretical "marginal utility of wealth" but they are really just abstract mathematical constructs. Yet Judd's theorem requires that these too converge to steady-state values at the optimum tax rate, which isn't even true in the simplest case of his model.

For anti-taxers, there is a small silver lining. If [$]\sigma\lt 1,[$] then the equilibrium does converge to steady state and the optimal tax rate converges eventually to zero.
But for practical purposes, even if you think [$]\sigma\lt 1,[$] this probably doesn't matter much, because convergence to that steady state is quite slow. So, no, we can't actually say that the optimal tax rate on capital is zero. Chamley-Judd didn't even say that! I've only focused on Judd, but Straub and Werning also look at Chamley as well. There's a lot more in their paper than I was able to squeeze in here, so go check it out!

1 I'm assuming that the government's only function is to redistribute. In the case where the government also consumes resources (often termed "wasteful government spending"), such as when redistribution imposes administrative costs or where government provides services other than social insurance, then the long-run capital stock must remain just large enough for the government to be able to finance its own consumption, which requires a maximum long-run capital tax less than 100 percent.

2 To see why, take the derivative: if [$]\sigma=1[$] then [$$]\frac{\partial}{\partial C_t}\left(\frac{C_t^{1-\sigma}}{1-\sigma}\right)=\frac{1}{C_t}=\frac{\partial}{\partial C_t} \ln\left(C_t\right)[$$] for all [$]C_t.[$]
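Footnote 2's identity is easy to check numerically: the marginal utility of CRRA consumption approaches 1/C as sigma approaches 1, i.e., the log case is the continuous limit of the power case (a small sketch; the function name and values are ours):

```python
def crra_marginal_utility(c, sigma, h=1e-5):
    """Central-difference derivative of C^(1-sigma)/(1-sigma) w.r.t. C."""
    u = lambda x: x ** (1.0 - sigma) / (1.0 - sigma)
    return (u(c + h) - u(c - h)) / (2.0 * h)

# Near sigma = 1, the marginal utility is close to 1/C, which is the
# marginal utility of log(C); sigma = 1 itself makes the formula 0/0,
# which is why the logarithmic case gets separate treatment.
checks = {c: crra_marginal_utility(c, sigma=1.0001) for c in (0.5, 1.0, 2.0)}
```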
A porous medium can be defined to specify the porosity characteristics of the computational domain. To model a porous medium, navigate to Advanced Concepts and under Porous Media add a new porous medium. The following models are supported: The first porosity model takes non-linear effects into account by adding inertial terms to the pressure-flux equation. The model requires both the Darcy coefficient d and the Forchheimer coefficient f to be supplied by the user. The model leads to the following source term: $$\vec{S} = -\left(\mu d + \frac{\rho |\vec{U}|}{2} f\right) \vec{U}$$ where μ represents dynamic viscosity, ρ density, and \(\vec{U}\) velocity. The Darcy coefficient d is the reciprocal of the permeability κ: $$d = \frac{1}{\kappa}$$ If the coefficient f is set to zero, the equation degenerates into the Darcy equation. The second model requires the coefficients α and β to be supplied by the user. The corresponding source term is: $$\vec{S} = -\rho_{ref} (\alpha + \beta |\vec{U}|) \vec{U}$$ Additionally, a coordinate system specifies the principal directions of the porous zone's resistance. The vectors \(\vec{e_1}\) and \(\vec{e_3}\) are unit vectors. The vector \(\vec{e_2}\) is implicitly defined such that \((\vec{e_1}\ \vec{e_2}\ \vec{e_3})\) is a right-handed coordinate system like (x y z). The x, y and z components of d and f correspond to the vectors \(\vec{e_1}\), \(\vec{e_2}\) and \(\vec{e_3}\) respectively. This can be used to define non-isotropic porosity. For isotropic media, all 3 values should be identical. Once the setup is complete, a porous region must be assigned. Such a region can be defined using Geometry Primitives.
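The Darcy-Forchheimer source term above is easy to sketch in code. The helper below is hypothetical (it is not part of the software's actual interface) and only illustrates the formula, including the degeneration to the pure Darcy law when f = 0:

```python
import numpy as np

def porous_source(U, mu, rho, d, f):
    """Momentum sink S = -(mu*d + rho*|U|/2 * f) * U.

    d and f may be scalars (isotropic medium) or per-axis arrays giving the
    resistances along the principal directions e1, e2, e3."""
    U = np.asarray(U, dtype=float)
    speed = np.linalg.norm(U)
    return -(mu * np.asarray(d, dtype=float)
             + 0.5 * rho * speed * np.asarray(f, dtype=float)) * U

# With f = 0 the term reduces to the Darcy equation S = -mu*d*U:
S = porous_source([1.0, 0.0, 0.0], mu=1e-3, rho=1000.0, d=1e6, f=0.0)
```

Passing three-component arrays for d and f models the anisotropic case described above; identical components recover the isotropic medium.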
Compact Sets in Metric Spaces are Complete Recall from the Complete Metric Spaces page that a metric space $(M, d)$ is said to be complete if every Cauchy sequence in $M$ converges in $M$. Furthermore, if $S \subseteq M$ then we said that $S$ is complete if every Cauchy sequence in $S$ converges in $S$. We will now look at an important theorem which says that if $S \subseteq M$ and $S$ is a compact set then $S$ is also complete. Theorem 1: Let $(M, d)$ be a metric space and let $S \subseteq M$. If $S$ is compact then $S$ is complete. Proof: Suppose that $S \subseteq M$ is a compact set. Let $(x_n)_{n=1}^{\infty}$ be a Cauchy sequence in $S$, and let $X \subseteq S$ be the set of terms of this sequence. Suppose that $X$ is finite. Then $(x_n)_{n=1}^{\infty}$ must converge to some $p \in X$: take $\epsilon > 0$ smaller than the minimum distance between any two distinct points of $X$; by the definition of a Cauchy sequence there exists an $N \in \mathbb{N}$ such that if $m, n \geq N$ then $d(x_m, x_n) < \epsilon$, which forces the tail of the sequence to be constant, equal to some $p \in X$, and so the sequence converges to $p$. Instead suppose that $X$ is an infinite set. Since $S$ is a compact set, we have by the Every Infinite Subset of a Compact Set in a Metric Space Contains an Accumulation Point page that $X$ contains an accumulation point $p \in X$. We will show that $(x_n)_{n=1}^{\infty}$ converges to $p$. Let $\epsilon > 0$. Since $(x_n)_{n=1}^{\infty}$ is a Cauchy sequence, for $\epsilon_1 = \frac{\epsilon}{2} > 0$ there exists an $N \in \mathbb{N}$ such that if $m, n \geq N$ then $d(x_m, x_n) < \frac{\epsilon}{2}$. Furthermore, since $p$ is an accumulation point of $X$ we have for all $r > 0$ that $(B(p, r) \cap X) \setminus \{ p \} \neq \emptyset$, so for $r = \epsilon_1 = \frac{\epsilon}{2} > 0$ there exists an $x_m \in X$ with $m \geq N$ such that $x_m \in B(p, \frac{\epsilon}{2})$. Hence $d(x_m, p) < \epsilon_1 = \frac{\epsilon}{2}$.
By the triangle inequality, if $n \geq N$ then $d(x_n, p) \leq d(x_n, x_m) + d(x_m, p) < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon$. Therefore $(x_n)_{n=1}^{\infty}$ converges to $p \in X \subseteq S$, and so every Cauchy sequence in $S$ converges in $S$, i.e., $S$ is complete. $\blacksquare$
Before looking at the perimeter and area of a circle, a basic understanding of perimeter and area is needed. Perimeter is associated with any closed figure such as a triangle, quadrilateral, polygon or circle. It is the distance covered while going once around the boundary of the closed figure. For example, the perimeter of a square of side 2 cm is 8 cm: the square comprises 4 sides of equal length, so the total distance covered is \(4 \times 2\) cm, which is the total length (i.e., the perimeter). Area means the actual space enclosed by a closed figure (i.e., within the perimeter). It comprises all the points within the closed figure, not the boundary, and is measured in square units of length. Now, coming to the perimeter of a circle: as explained above, it is the distance covered going around the boundary of a circle. This distance is difficult to calculate exactly. Looking at various circles of different radii, it is easy to visualize that the distance around a circle of larger radius is greater than that around a smaller one. Hence, the perimeter is a function of the radius of the circle. In the case of a circle we generally use the term CIRCUMFERENCE instead of perimeter. It is given by \(\large p = 2 \pi r\) (here r is the radius and π is a constant, defined as the ratio of the circumference to the diameter of a circle). The value of \(\large \pi\) is approximately 3.1416. The area formula of a circle is given by \(\large A = \pi r^2\). Let us understand the concepts related to circles with the following questions. 1. To cover a distance of 10 km a wheel rotates 5000 times. Find the radius of the wheel. Solution: Number of rotations = 5000. Total distance covered = 10 km, and we have to find the radius of the wheel. Let 'r' be the radius of the wheel. Circumference of the wheel = distance covered in 1 rotation = 2πr. In 5000 rotations, the distance covered = 10 km = \(10 \times 10^{5}\) cm = \(10^{6}\) cm.
Hence, in 1 rotation, the distance covered = \(\frac{1000000}{5000}\:cm=200\: cm\). But this is equal to the circumference. Hence, 2πr = 200, so r = 200/2π = 100/π. Taking the approximate value of π as 22/7, we get r = 100 x 7/22, i.e., r ≈ 31.82 cm. 2. The diameter of a given semi-circular slice of watermelon is 14 cm. What will be the perimeter of the slice of watermelon? Solution: Given diameter = 14 cm, so radius = d/2 = 7 cm. Circumference of the full circle (p) = 2πr = 2 x 22/7 x 7 = 44 cm. An interesting fact to note here is that the shape of the slice is a semi-circular arc together with the diameter. Thus the perimeter of the semi-circular slice is \( P = \frac{p}{2} + 2r\), \( P = \frac{44}{2} + 14\), \(P = 36\) cm. 3. The difference between the circumference and the diameter of a circular bangle is 5 cm. Find the radius of the bangle. (Take \(\pi = \frac{22}{7}\)) Solution: Let the radius of the bangle be 'r'. According to the question: Circumference – Diameter = 5 cm. We know the circumference of a circle = 2πr and the diameter of a circle = 2r. Therefore, 2πr – 2r = 5 cm, i.e., 2r(π-1) = 5 cm. \(2r(\frac{22}{7}-1)=5\:cm\\ \\ 2r\times \frac{15}{7}=5\\ \\ r=\frac{5\times 7}{15\times 2}\\ \\ r\approx 1.167\:cm\) The radius of the bangle is approximately 1.167 cm. 4. A girl wants to make a square-shaped figure from a circular wire of radius 49 cm. Determine the side of the square. Solution: Let the radius of the circle be 'r'. Length of the wire = circumference of the circle = \(2\pi r\): \(2\times \frac{22}{7}\times 49=2\times 22\times 7=308\: cm\). Let the side of the square be 's'. Perimeter of the square = length of the wire = 4s, so \(s=\frac{308}{4}=77\:cm\). Therefore, the side of the square is 77 cm.
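The wheel and bangle computations can be double-checked in a few lines. This is my own quick verification, not part of the original lesson:

```python
import math

# Wheel: 10 km covered in 5000 rotations
distance_cm = 10 * 1000 * 100           # 10 km expressed in cm
circumference = distance_cm / 5000      # distance per rotation, in cm
r_wheel = circumference * 7 / (22 * 2)  # r = C/(2*pi), taking pi ~ 22/7

# Bangle: circumference - diameter = 5 cm  =>  2r(pi - 1) = 5
r_bangle = 5 / (2 * (22 / 7 - 1))
```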
Let $X$ and $Y$ be topological spaces. A function $f: X \rightarrow Y$ is defined as continuous if for each open set $U \subset Y$, $f^{-1}(U)$ is open in $X$. This definition makes sense to me when $X$ and $Y$ are metric spaces: it is equivalent to the usual $\epsilon-\delta$ definition. But why is this a good definition when $X$ and $Y$ are not metric spaces? How should we think about this definition intuitively? One abstract way to think about continuity (in the sense that it generalizes to non-metric spaces) is that it is about error. A function $f : X \to Y$ is continuous at $x$ precisely when $f(x)$ can be "effectively measured" in the sense that, by measuring $x$ closely enough, we can measure $f(x)$ to any desired precision. (In other words, the error in our measurement of $f(x)$ can be controlled. "Precision" here means "to within an arbitrary neighborhood of $f(x)$," so it does not depend on any metric notions.) This is an abstract formulation of one of the most basic assumptions of science: that (most of) the quantities we try to measure ($f(x)$) depend continuously on the parameters of our experiments ($x$). If they didn't, science would be effectively impossible. If you like thinking about limits, a function is continuous if and only if it preserves limits of filters or, equivalently, nets. These are two ways to generalize convergence of sequences to spaces which are not first-countable. Maybe it's just me, but I've never thought that the usual $\epsilon$-$\delta$ definition of continuity is intuitive at all. Why should a function be continuous at $x$ if every ball of radius $\epsilon$ around $f(x)$ contains the image under $f$ of a ball of radius $\delta$ around $x$? Instead, in metric spaces, I think of a function as continuous if it preserves limits, which can be intuitively (and generalizably) phrased by saying that $f$ is continuous if and only if whenever $x$ is in the closure of a set $A$, then $f(x)$ is in the closure of the set $f(A)$.
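For a concrete non-metric example, continuity can be checked directly from the preimage definition on a finite topological space. This sketch (the helper names are mine) tests the identity map and the swap map on the Sierpinski space, where only one of the two is continuous:

```python
# A finite topology is just a set of frozensets (the open sets).
def preimage(f, B, X):
    return frozenset(x for x in X if f[x] in B)

def is_continuous(f, X, tau_X, tau_Y):
    """f is continuous iff the preimage of every open set is open."""
    return all(preimage(f, U, X) in tau_X for U in tau_Y)

# Sierpinski space: points {0, 1}; open sets are {}, {1}, {0, 1}.
X = {0, 1}
tau = {frozenset(), frozenset({1}), frozenset({0, 1})}

ident = {0: 0, 1: 1}   # continuous
swap = {0: 1, 1: 0}    # not continuous: preimage of {1} is {0}, not open
```

Note that no metric on $\{0, 1\}$ could produce this topology (the two points cannot be separated by open sets), yet the preimage criterion applies without change.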
(Take a piece of paper and draw out the arguments of the next two paragraphs) To see that $\epsilon$-$\delta$ continuity implies 'closure' continuity, suppose that $f$ is not closure continuous at $x$, that is, $x$ is in the closure of some set $A$ but $f(x)$ is not in the closure of $f(A)$. Then there exists an $\epsilon$-ball around $f(x)$ that does not intersect $f(A)$ even though every $\delta$-ball intersects $A$. Hence, some $\epsilon$-ball around $f(x)$ contains no image of a $\delta$-ball around $x$, and so $f$ is also not $\epsilon$-$\delta$ continuous. To see that 'closure' continuity implies $\epsilon$-$\delta$ continuity, suppose that $f$ is not $\epsilon$-$\delta$ continuous. Then there exists an $\epsilon$-ball around $f(x)$ that contains no image of a $\delta$-ball around $x$. In other words, the preimage of the $\epsilon$-ball around $f(x)$ contains no $\delta$-ball around $x$, so let $A$ be the collection of points that are not in the preimage of the $\epsilon$-ball. Then $x$ is in the closure of $A$ since any $\delta$-ball around $x$ has a point outside the preimage of the $\epsilon$-ball and hence in $A$, but $f(x)$ is not in the closure of $f(A)$ since the $\epsilon$-ball around $f(x)$ is disjoint from $f(A)$. Now, the cool thing to notice is that the above equivalence of definitions works perfectly fine if you replace $\delta$-balls and $\epsilon$-balls with open sets in the appropriate topological spaces, so really what you should care about is how to make sense of 'closure' continuity in a space that is not a metric space, and the answer is given by the Kuratowski Closure Axioms. You might also find useful the answers to this mathoverflow question, specifically the one by sigfpe and the one by Vectornaught.
The first one talks about how open sets can be thought of as rulers that measure things in the space imprecisely (but which doesn't explain why continuity is defined the way it is), while the second phrases the Kuratowski Closure Axioms in terms of the intuitive notion of 'nearness' of points (which does account for continuity). You have it exactly right. It works well when $X$ and $Y$ are metric spaces, and has proved useful in more general contexts. When you think of open sets as collections of points "near" one another, this is the proper translation of the usual $\epsilon-\delta$ definition. It's true that this definition generalizes the one for metric spaces, but there are other generalizable definitions (e.g. takes convergent sequences to convergent sequences), and perhaps implicit in the OP's question is: why this particular definition? Of course part of the answer is that it turns out to work well, but this is not too satisfying. The following are just a couple of things that occurred to me. More generally, given a class of mathematical objects like topological spaces (vector spaces, groups, rings, etc.) it is natural to ask: what are the "structure-preserving maps" between such objects? Vector spaces are sets equipped with a scalar multiplication map; groups are sets equipped with a group operation, etc., and the notions of linear map and homomorphism are precisely defined to preserve this structure. Now with a topological space, of course, the structure comes as a set of "open sets." Here the generalization from the concepts of metric spaces is especially clear. Therefore a continuous map, a structure-preserving map of topological spaces, should be one that "preserves open sets." At first you might think that such a map should take open sets to open sets (i.e. an open map), but examining the conditions on open sets shows that this is bad.
The point is that if $f$ is a set map, then $f^{-1}$ is actually much "nicer" than $f$ in terms of how it commutes with unions, intersections, etc. In fact, I might make the following observation. A structure-preserving map from $X$ to a set $Y$ should prescribe a natural structure for $Y$. Suppose $x$ is a point in some topological space; then we can define $\mathcal{N}_x$ to be the set of (not necessarily open) neighbourhoods of $x$. Then "$f$ is continuous at $x$" is defined as $$\forall V \in \mathcal{N}_{f(x)}\ \exists U \in \mathcal{N}_x : f(U) \subseteq V.$$ Or more colourfully: whenever "the enemy" comes with a cleverly chosen and "small" neighbourhood of $f(x)$, we must be able to find a neighbourhood of $x$ that maps into said neighbourhood. What is nice about this is that it is (obviously?) the topological version of the $\varepsilon$-$\delta$ definition from $\mathbb{R}$ and metric spaces that we know (and love(?)), and it's relatively easy to prove that $f: X \to Y$ is continuous at $x$ for all $x \in X$ if and only if $f^{-1}(U)$ is open in $X$ for all open $U \subseteq Y$. What I'm trying to say is that I don't have much intuition for the "preimage of open sets is open" definition either, but it's not clear to me that you really need that. We take it as the definition because it's simple, it's entirely written in terms of the topologies of $X$ and $Y$ (i.e. the collections of their respective open sets), and it's easily shown to be equivalent to something which we do have an intuition about (assuming that one finds $\varepsilon$-$\delta$ intuitive, obviously). Let me also add one little bit (perhaps this may even seem backwards). A topology is defined using sets satisfying some set of axioms, which we call open sets. However, the collection of subsets we choose (respecting those axioms) to call a topology may vary over the same underlying set. But we can go from one such collection to another using a special class of maps.
But how do we define such maps? Properties that are to be 'intrinsic' to the topology should not depend on this choice, so they should be invariant across these maps. But these properties (whatever they may be) are defined via set operations. Perhaps we want the assignment to be direct: i.e. $U$ is open and thus $f(U)$ should be open. But notice that since the map is general, $f(A \cap B)$ is contained in, and not necessarily equal to, $f(A) \cap f(B)$, etc. However, with the assignment $f^{-1}$ (which is always well-defined on sets, even when an inverse function does not exist) all is fine with unions and intersections, i.e., we have equalities, so the properties follow through almost trivially (see for example the proofs that continuous maps preserve compactness, connectedness, etc.). The most intuitive way of thinking of continuity is actually through its characterization in terms of closed sets. Let me introduce some non-standard (i.e. my own made up) definitions: Given two subsets $R$ and $S$ of a topological space, say that $R$ is close to $S$ if $R$ is contained in the closure of $S$ (i.e. $R \subseteq \overline{S}$). Say that a point $y$ in a topological space is close to a subset $S$ if $\{ y \}$ is close to $S$. Note that with these definitions, a subset is closed if and only if it contains every point/subset that is close to it. Now a map $f : X \to Y$ is continuous if and only if for all subsets $A \subseteq X$, $f\left( \overline{A} \right) \subseteq \overline{f(A)}$. This can be restated as: Continuity: A map $f : X \to Y$ is continuous if and only if for all subsets $A \subseteq X$, $f$ maps points that are close to $A$ to points that are close to $f(A)$. You can replace the word "points" above with the word "sets" and it will still be true. Thus continuous maps are exactly those that preserve (in one direction) the notion of "closeness" in $X$. Part of the problem is that courses in analysis are not presented geometrically enough.
If you present a few pictures of continuous and non-continuous (partial) functions $f: \mathbb R \to \mathbb R$ and see how the condition $f(M) \subseteq N$ works out for neighbourhoods $M$ of $x$ and $N$ of $f(x)$, then you begin to see how the definition works. What can be confusing is that $\epsilon$ and $\delta$ are measurements of the sizes of neighbourhoods rather than the actual neighbourhoods themselves, and so one step further from intuition. Of course for many calculations you do need the sizes, as well as the understanding. I think all the confusion about (the non-metric-space definition of) continuity stems from the lack of an intuitive picture of what an open set really means. Upon defining a topology for some set $X$, the elements of the topology are open by definition (assuming you defined the topological space in terms of open sets). The notion may be thought of as a way to distinguish points in the set carrying the topology. If you look at two points in your set $X$ and find that they don't belong to the same open sets, then those two points are topologically distinguishable; conversely, two points are indistinguishable if they have precisely the same neighborhoods. Further, the union of all open subsets contained in a set $X$ is called the set's interior, usually denoted $X^o$; you can think of this as (very roughly) taking an orange and only considering the inside bits, i.e. no skin. Consider, for instance, why the boundary of a set necessarily involves no open sets. If we have a point $p\in X$ and we can't find any open subset $U\subset X$ such that $p\in U$, then we are in trouble if we only want to eat the inside of the orange, because $p$ sits on the skin (boundary) rather than in the interior.
The analogy to a ball of radius $\varepsilon>0$ is pretty straightforward; you can keep that picture in mind at first, and the concept then applies to all kinds of crazy shapes and things you can't visualize. On to continuity. Suppose you have a mapping from one topological space to another, $$ \varphi:(S,\sigma)\rightarrow (T,\tau)$$ and you choose the open set definition of a topology. Immediately, any element of $\sigma$ or $\tau$ is open in its respective topology. Then, if you have a good feeling for the definition of an open set and a neighborhood, $\varphi$ is continuous if, for any open subset of $T$ (i.e. an element of $\tau$), its pre-image $U\subset S$ is also open (i.e. an element of $\sigma$). Back to fruits. Please accept the following horrible lack of rigor in exchange for an intended sense of intuition. Consider a mapping from an orange with skin to an apple with its skin. As we said before, if some subset of the apple is open, then it certainly isn't part of the skin; so we instead look at the fleshy, non-skin part of the apple (called the mesocarp; suppose for each fruit there is only skin or mesocarp, nothing else). Then, if the mapping is continuous, we should be able to look at any open subset of the apple's mesocarp and find a corresponding subset of orange mesocarp, with not even a little skin in it, mapping into it. While I'm being silly, the point of the abstraction remains in the definition of continuity, and the fact that it can't be perfectly reduced to a metric-space analogy is important.
I will try to answer as many questions as I can. I won't presume to give you complete exhaustive answers, but maybe they will be nonetheless useful to you. What variables determine the range of temperatures over which matter is liquid? My understanding of thermodynamics is that matter changes from a solid to a gas when thermal vibrations create an effective repulsion of sufficient magnitude to overcome the forces of attraction between particles. But a balance of these two forces only gives rise to two phases - the "attraction < repulsion" phase, and the "attraction > repulsion" phase. A second inequality seems necessary to get three phases; what is it? I guess pressure must be involved, but how? I think that it is better to reason in the following way: if $K$ is the total kinetic energy and $U$ is the absolute value of the total potential energy, you will have $K/U \ll 1$ for the solid, $K/U \gg 1$ for the gas, and $K/U \simeq 1$ for the liquid. Do classical "billiard ball" computer simulations give rise to solid-liquid-gas phenomena? I'm talking about simulations with identical hard spheres bouncing off of each other without friction. If not, what is the smallest modification to such simulations which is necessary to observe liquids? For instance, is it sufficient to modify the potential barrier between particles to be somewhat smooth (rather than an infinite step function as for hard spheres)? Systems of hard spheres have been extensively studied theoretically and with simulations, and nowadays we know quite a lot about them. In the following picture, you can see the phase diagram of a hard-sphere system: The first thing you will notice is that temperature is irrelevant for the phase behavior of such a system.
This is because the only interaction is the "infinite step function" you mentioned, so changing the temperature will just make the dynamics of the system faster or slower but won't change the average intensity of the interaction (because the potential energy is simply $0$). The phase behavior is controlled only by the packing fraction $$\eta = \frac \pi 6 \rho \sigma^3$$ where $\rho$ is the number density and $\sigma$ is the diameter of a sphere. The packing fraction is just the fraction of the total volume which is occupied by the spheres. You can see that the system has only two phases: fluid and solid. The fluid freezes at $\eta_f=0.494$ and the solid melts at $\eta_m=0.545$. Between those two values fluid and solid are at equilibrium. The maximum value of $\eta$ is the close-packing fraction $\eta_{CP}=\pi \sqrt{2}/6\simeq0.74$, realized by crystalline hcp or fcc arrangements. So there is only one "fluid" state for hard spheres: there are no separate "gas" and "liquid" phases, and hence no liquid phase in a hard-sphere system. To have a liquid state, it turns out, we must introduce some sort of attraction: even a square-well potential is sufficient. To quote Hansen-McDonald (Theory of Simple Liquids): The most important feature of the pair potential between atoms or molecules is the harsh repulsion that appears at short range and has its origin in the overlap of the outer electron shells. The effect of these strongly repulsive forces is to create the short-range order that is characteristic of the liquid state. The attractive forces, which act at long range, vary much more smoothly with the distance between particles and play only a minor role in determining the structure of the liquid. They provide, instead, an essentially uniform, attractive background and give rise to the cohesive energy that is required to stabilise the liquid. So to have a liquid you need repulsion (and that's in a certain sense the most important thing), but also attraction.
It is not sufficient to modify the repulsive barrier to make it smooth: you have to make the potential (partly) attractive! Alternatively, is quantum mechanics necessary for a correct description of the phenomenon of liquidity? Is it necessary to invoke superconductivity (e.g. of phonons)? No, superconductivity and phonons are not really necessary, and quantum mechanics isn't either. I mean, if you want to find the exact function describing the inter-molecular forces in a liquid you will need to take QM into account. But simple models are more than sufficient to understand the physics of liquids. You could take a square-well, a Yukawa, or a Lennard-Jones potential and the system will always show a liquid phase with qualitatively similar behavior. In general, QM becomes important when the De Broglie thermal wavelength of the particles $$\lambda = \sqrt{\frac{2 \pi \beta \hbar^2}{m}}$$ is of the order of the mean nearest-neighbour separation, $$a\simeq \rho^{-1/3}$$ For most liquids (an exception being for example $^4$He, which becomes a superfluid in a certain pressure-density region), quantum mechanical effects can be completely neglected. Anyway, to conclude, I would suggest that you try to think about the free energy $F=U-TS$. A system will always try to minimize its free energy. At low temperatures the $TS$ term won't be so important, so the system will minimize $U$, forming as many bonds as it can (the solid state). At high temperatures the $TS$ term will be more important, so the system will try to attain the most disordered, highest-entropy state it can (the gaseous state). But at intermediate temperatures, when both terms are important, it will find a "balance" between energy and entropy, and from this balance the liquid state results. Remember: the liquid state is no easy business to study! The classical fluid is perhaps not everyone's cup of tea. -N. W. Ashcroft
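The two ingredients discussed here, the hard-sphere packing fraction and a pair potential combining repulsion with attraction, can be written down in a few lines. This is a minimal sketch in reduced units (ε = σ = 1), not taken from any particular simulation code:

```python
import math

def packing_fraction(rho, sigma=1.0):
    # eta = (pi/6) * rho * sigma^3: fraction of volume occupied by the spheres
    return math.pi / 6 * rho * sigma**3

def lennard_jones(r, eps=1.0, sigma=1.0):
    # Harsh short-range repulsion plus a long-range attractive well
    sr6 = (sigma / r) ** 6
    return 4 * eps * (sr6**2 - sr6)

# Hard spheres freeze at eta_f = 0.494, i.e. at number density rho ~ 0.943:
rho_freeze = 0.494 * 6 / math.pi

# The LJ well has its minimum at r = 2^(1/6) sigma, with depth -eps;
# it is this attractive well, absent for hard spheres, that permits a liquid.
r_well = 2 ** (1 / 6)
```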
A quantum time crystal does not, from my reading of Wilczek and others, appear to require entanglement, but the idea is interesting, and it does seem plausible that a quantum time crystal model could be developed with entanglement. A quantum time crystal is just a periodicity in time of a system that has a lattice structure; an elementary quantum time crystal is then just a chain that is periodic in time. This chain would then be a measure of some periodicity in the system. Wilczek's time crystal is then a curious system that exhibits dynamics in the ground state. Normally the ground state is where the Hamiltonian acts trivially. However, if time translation symmetry is violated, then some type of motion in the ground state is possible. This comes very close to being a form of perpetual motion machine. Breaking time symmetry may, though, be involved with the arrow of time. The Wilczek time crystal assumes a charge $q$ confined to a ring of unit radius threaded by a magnetic flux $2\pi\alpha/q$, with the gauge-covariant momentum $\pi_\phi = \dot\phi + \alpha$, for $\phi$ an angle around the ring and $-i\partial/\partial\phi$ the generator of angular momentum. The Lagrangian is then $$L = \frac{1}{2}\dot\phi^2 + \alpha\dot\phi$$ and the Hamiltonian $$H = \frac{1}{2}(\pi_\phi - \alpha)^2.$$ For states $|\ell\rangle$ with wavefunctions $e^{i\ell\phi}$ it is not hard to see that $\langle\dot\phi\rangle = \ell - \alpha$, and even for the ground state with $\ell = 0$ there is the expectation $\langle\ell_0|\dot\phi|\ell_0\rangle = -\alpha$. The Page-Wootters model has two Hilbert spaces $H_1$ and $H_2$ with total Hamiltonian $H_1\otimes I_2 + I_1\otimes H_2$. A state of the form $$|\Psi\rangle = \sum_{ij}c_{ij}(|\psi_i\rangle|\phi_j\rangle + |\psi_j\rangle|\phi_i\rangle),$$ with $|\psi_i\rangle \in H_1$ and $|\phi_j\rangle \in H_2$, is an entanglement of states of these two Hamiltonians.
Now we take an arbitrary state of the form $|\chi\rangle = \sum_i a_i|\psi_i\rangle \in H_1$ and project onto $|\Psi\rangle$: $$\langle\chi|\Psi\rangle = \sum_{ijk}a^*_k c_{ij}(\langle\psi_k|\psi_i\rangle|\phi_j\rangle + \langle\psi_k|\psi_j\rangle|\phi_i\rangle)$$ $$= \sum_{ijk}a^*_k c_{ij}(\delta_{ik}|\phi_j\rangle + \delta_{jk}|\phi_i\rangle)$$ $$= \sum_{ij}(a^*_ic_{ij}|\phi_j\rangle + a^*_jc_{ij}|\phi_i\rangle).$$ Since $c_{ij} = a^*_ia_j$ this is then written as $$\langle\chi|\Psi\rangle = \sum_{ij}(|a_i|^2a_j|\phi_j\rangle + |a_j|^2a^*_i|\phi_i\rangle).$$ The matrix element $c_{ij} = a^*_ia_j$ is a relative phase term, $c_{ij} = e^{i\theta_i}e^{-i\theta'_j}$, and the difference in this relative phase is $\theta_i - \theta'_j = \omega t$. This projection is then a way of measuring the phase of one system relative to another. This relative-phase definition of time holds for a system with different ground states for $H_1$ and $H_2$. We then have something analogous to a time crystal. The main interest in the Page-Wootters model is to define time within the Wheeler-DeWitt equation $H\Psi[g] = 0$. The occurrence of time may then be a relative phase with entangled states, which has analogues to a time crystal. It has, though, been shown that time crystals are defined on an approximate vacuum and hold for a Floquet oscillator, which means they are quasi-stable. This is certainly an interesting area to study, and I offer here only cursory observations. F. Wilczek, "Quantum Time Crystals," Phys. Rev. Lett. $\bf 109$, 160401 (2012). https://arxiv.org/abs/1202.2539v2 D. V. Else, B. Bauer, C. Nayak, "Floquet Time Crystals," Phys. Rev. Lett. $\bf 117$, 090402 (2016). https://arxiv.org/abs/1603.08001
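The claim that Wilczek's ground state moves can be illustrated directly from the spectrum $E_\ell = \frac{1}{2}(\ell-\alpha)^2$ of the Hamiltonian above. A small sketch of my own, with a fractional flux parameter α chosen arbitrarily:

```python
def energy(l, alpha):
    # E_l = (1/2)(l - alpha)^2 for integer angular momentum l
    return 0.5 * (l - alpha) ** 2

alpha = 0.3
levels = {l: energy(l, alpha) for l in range(-5, 6)}
ground = min(levels, key=levels.get)   # the integer nearest to alpha
velocity = ground - alpha              # <phi_dot> = l - alpha
```

For non-integer α the ground-state angular momentum is the nearest integer to α, so the expectation value $\langle\dot\phi\rangle = \ell - \alpha$ is nonzero: the lowest-energy state circulates.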
Inside a uniform Earth, the force you experience is proportional to the distance from the center, $\vec{F} = - \frac{GMm}{R^3} r\,\vec{u_r}$, and we also know that at the surface, $r=R$, it is $\vec{F}=- gm\vec{u_r}$, so $$\vec{F} = -gm\frac{r}{R}\vec{u_r}$$ This is a conservative force that can be derived from a potential $$U = \frac{1}{2}gm\frac{r^2}{R}$$ Because this is a central force, angular momentum is conserved, so $r^2 \dot{\theta} = L$, and if $\Omega$ is the rotational velocity of the Earth, $$r^2 \dot{\theta} = R^2 \Omega$$ And of course we have conservation of energy, $$\frac{1}{2}m(\dot{r}^2+r^2\dot{\theta}^2)+\frac{1}{2}gm\frac{r^2}{R} = E$$ but we also know the initial conditions $r=R$, $\dot{\theta}=\Omega$, $\dot{r}=0$, so $$E = \frac{1}{2}m(R^2\Omega^2)+\frac{1}{2}gmR$$ and conservation of energy can be rewritten as $$\dot{r}^2+r^2\dot{\theta}^2+g\frac{r^2}{R} = R^2\Omega^2+gR$$ and, including conservation of angular momentum, as $$\dot{r}^2+ \frac{R^4 \Omega^2}{r^2}+g\frac{r^2}{R} = R^2\Omega^2+gR$$ If you set $\dot{r} = 0$ and solve for $r$, there are two solutions, marking the annular region in which the motion happens. One is the obvious $r=R$; the other comes out to $$r = \Omega R \sqrt{\frac{R}{g}}$$ which, with the Earth's parameters at the equator, comes out to $r\approx374\ \mathrm{km}$. You can rearrange the energy equation as $$\frac{dr}{\sqrt{R^2\Omega^2+gR - \frac{R^4 \Omega^2}{r^2}-g\frac{r^2}{R}}} = dt$$ which you could integrate to get a (probably implicit) relation between $r$ and $t$, which you could then use in the conservation of angular momentum to get $\theta$ as a function of $r$ and/or $t$. I have done that numerically, and again for a point on the Equator: it would take about 21 minutes to reach the point closest to the Earth's center, and 21 more to get back up to the surface.
One neat result I didn't fully understand at first is that, at the minimum point, the angle $\theta$ has changed by $\pi / 2$, independently of the rotation speed, so that you always emerge at a point diametrically opposite where you went down. (In fact, the orbit is an ellipse centered on the Earth's center, as for any isotropic 2D harmonic oscillator, and the angle swept from apoapsis to periapsis of such a centered ellipse is always $\pi/2$.) Since the Earth is rotating, you wouldn't actually come out at the antipodal point, but some $1175\ \mathrm{km}$ from it. Away from the equator you would have a reduced $\Omega$, and the motion would happen in a plane perpendicular to the meridian going through that point.
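The inner turning radius and the roughly 21-minute descent can be reproduced with a short numerical integration of the radial equation $\ddot r = R^4\Omega^2/r^3 - g\,r/R$ (obtained from the energy and angular-momentum equations above). This is my own sketch, not the author's code:

```python
import math

g = 9.81                      # surface gravity, m/s^2
R = 6.371e6                   # Earth radius, m
Omega = 2 * math.pi / 86164   # sidereal rotation rate, rad/s

# Inner turning point, from setting rdot = 0 in the energy equation:
r_turn = Omega * R * math.sqrt(R / g)

def accel(r):
    # Radial equation: centrifugal term minus the linear restoring force
    return R**4 * Omega**2 / r**3 - g * r / R

# RK4 integration from r = R, rdot = 0 until rdot turns positive
r, v, t, dt = R, 0.0, 0.0, 0.05
while True:
    k1r, k1v = v, accel(r)
    k2r, k2v = v + 0.5 * dt * k1v, accel(r + 0.5 * dt * k1r)
    k3r, k3v = v + 0.5 * dt * k2v, accel(r + 0.5 * dt * k2r)
    k4r, k4v = v + 0.5 * dt * k3v, accel(r + 0.5 * dt * k3r)
    r += dt * (k1r + 2 * k2r + 2 * k3r + k4r) / 6
    v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    t += dt
    if v > 0:  # passed the point of closest approach
        break
```

With these parameter values the turning radius comes out near 374 km and the descent time near 21 minutes, consistent with the quarter period $\frac{\pi}{2}\sqrt{R/g}$ of the underlying 2D harmonic oscillator.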