A very simple but elegant way to demodulate FSK is delay-and-multiply, followed by a simple low-pass filter. This is a non-coherent FM detector using the principle that the (low-pass-filtered) output of a multiplier is proportional to the phase difference between its two inputs. An XOR gate can be used as such a multiplier. On its own, a multiplier or XOR gate is therefore a phase detector; but if both inputs come from the same source, with one passed through a fixed delay relative to the other, the resulting phase difference is proportional to the input frequency, and the combination becomes a frequency discriminator! This may be easier to see / understand with my pictures below. First, a multiplier followed by a low-pass filter is a phase detector. If you multiply two signals, the output contains the sum and difference of the phases and frequencies of the two inputs; after the low-pass filter, only the difference term remains. If the two inputs are at the same frequency, then the output depends only on the phase difference of the inputs. For a multiplier (mixer) the function is \$V_{out} = \cos(\phi)\$, so it forms a linear phase detector around the 90° crossing. With an XOR gate this function is a ramp, and so is linear over the range from 0 to \$\pi\$. The use of an XOR gate this way is really "analog", so for a completely digital implementation I would stick with the concept of simply multiplying the two signals. Next, to use this phase detector as a frequency discriminator, we introduce a fixed delay. A fixed delay has a linearly increasing negative phase versus frequency, and thus, as shown, it converts frequency to phase prior to applying the same signal to the multiplier. For FSK, the optimum delay value is the one that positions the two FSK symbol frequencies F1 and F2 as I show in the diagram above: at the values that correspond to a 0° and 180° phase shift for the carrier frequency in use at the input to the demodulator.
Note this is simple (and often done precisely because of its simplicity) but will have worse SNR than coherent demodulation approaches such as the PLL approach in a previous answer (by no more than 3 dB). It is clear why this is the case: the noise at the two inputs to the multiplier is uncorrelated because of the delay (above a corner frequency equal to 1/T, where T is the delay) and will therefore add relative to the signal. I have also seen analog equivalents of this demodulator where the delay element is formed by a parallel LC tank to ground fed through a small capacitor (which introduces the 90° shift needed to center the discriminator). The tank has 0° phase at resonance but a steep phase-versus-frequency slope corresponding to the desired delay. This delayed signal is multiplied with the original signal, and the result is low-pass filtered to give the FM-demodulated output.
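The delay-and-multiply discriminator described above can be sketched in a few lines of NumPy. This is an illustrative simulation, not from the original answer: the sample rate, symbol frequencies, and bit rate are my own assumptions, the low-pass filter is a crude moving average, and the delay is chosen so that F1 and F2 land 180° apart at the multiplier, exactly as the answer prescribes (T = 1 / (2·(F2 − F1))).

```python
import numpy as np

fs = 100_000            # sample rate, Hz (illustrative assumption)
f1, f2 = 4_000, 6_000   # FSK symbol frequencies (assumption)
bit_rate = 500
sps = fs // bit_rate    # samples per symbol

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 40)

# Build a continuous-phase FSK waveform
freqs = np.repeat(np.where(bits == 1, f2, f1), sps)
phase = 2 * np.pi * np.cumsum(freqs) / fs
sig = np.cos(phase)

# Delay chosen so f1 and f2 differ by 180 degrees at the multiplier:
# (f2 - f1) * T = 1/2  ->  T = 1 / (2 * (f2 - f1))
delay = int(round(fs / (2 * (f2 - f1))))

# Delay and multiply, then low-pass (moving average over one f1 period)
prod = sig[delay:] * sig[:-delay]
kernel = np.ones(fs // f1) / (fs // f1)
demod = np.convolve(prod, kernel, mode='same')

# Sample at symbol centers; f1 averages to +0.5, f2 to -0.5
centers = np.arange(len(bits)) * sps + sps // 2
centers = centers[centers < len(demod)]
decisions = (demod[centers] < 0).astype(int)
```

With these parameters the filtered product sits near +0.5 for F1 symbols and −0.5 for F2 symbols, so a simple sign decision recovers the bit stream.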
Molar may refer to:

Molar (tooth), the fourth kind of tooth in mammals: large, flat teeth at the back of the mouth used for grinding. In humans the molar teeth have four or five cusps; adult humans have twelve molars, in four groups of three (maxillary and mandibular first, second and third molars). Zalambdodont molars are found in, for example, golden moles and solenodons, and dilambdodont molars are a related form.
Molar (grape), another name for the Spanish wine grape Listán Negro.
Molar (unit), a unit of concentration equal to 1 mole per litre.
Molar mass, the mass of one mole of a substance, given for an element by its standard atomic weight multiplied by the molar mass constant; the average molar mass of dry air is 28.97 g/mol.
Molar volume, symbol Vm, the volume occupied by one mole of a substance (chemical element or compound), found by measuring its molar mass and density and applying the relation Vm = M/ρ; the molar volume of an ideal gas at 100 kPa (1 bar) is 0.022 710 980(38) m³/mol at 0 °C and 0.024 789 598(42) m³/mol at 25 °C.
Molar concentration, the number of moles per litre of solution.
Molar conductivity, the conductivity of an electrolyte solution divided by the molar concentration of the electrolyte; the more dilute a solution, the greater its molar conductivity.
Molar solubility, the number of moles of a solute that can be dissolved per litre of solution before saturation.
Molar refractivity, a measure of the total polarizability of a mole of a substance.
Standard molar entropy, in chemistry, the entropy content of one mole of a substance under a standard state (not STP).
Mulberry molar, a physically defective permanent molar caused by congenital syphilis, with a corrupted grinding surface and a stumpy, diminished form.
Molar distalization, a process in orthodontics used to move molar teeth, especially permanent first molars, backwards.
Muroid molar, the molar teeth of muroid rodents, whose crown features are often used in taxonomy; some have suggested the first muroid molar is a retained deciduous premolar.
El Molar, Tarragona (El Molar, Priorat), a village in the comarca of Priorat, province of Tarragona, Catalonia, Spain; it was bombed by Condor Legion planes during the Spanish Civil War, and a 1156 document of Ramon Berenguer IV links it to the Poblet Monastery.
El Molar, Madrid, in the Community of Madrid, Spain.
Molar Peak (64°41′S 63°19′W), Antarctica; listed in the USGS Geographic Names Information System.
Molar Massif (71°38′S 163°45′E), Antarctica, so named because the outline of the massif resembles a molar tooth; its features include Canine Hills, Dentine Peak, Evison Glacier, Husky Pass, Incisor Ridge, Leap Year Glacier, Tobogganers Icefall and Wisdom Hills.
Molar Band, a settlement in which males constitute 56% of the population, with an average literacy rate of 68%, higher than the national average of 59.5%.
Ben Molar, Argentine talent scout, businessman and songwriter who proposed a National Day of the Tango, suggesting the date of 11 December.
Holy Molar, a noise rock band from San Diego formed in 2001, fronted by Mark McCoy, on the label Three One G.
Background

Generally, all phase transitions require some input of energy in order for the transition to occur. For instance, the transition from solid to liquid, or vice versa, requires what is called the enthalpy of fusion or latent heat of fusion. This is the amount of energy needed to change the total internal energy (i.e., enthalpy) of a substance in order to produce the phase transition. The amount of energy necessary to produce sublimation is, not too surprisingly, called the enthalpy of sublimation. For example, imagine we had water ice at $-10^{\circ}$ C and we wanted to convert it to a gas. We would need to add energy, and it would occur in the following steps: add enough energy to raise the temperature of the ice from $-10^{\circ}$ C to $0^{\circ}$ C; add enough energy to overcome the latent heat of fusion; add enough energy to raise the temperature of the water from $0^{\circ}$ C to just below $100^{\circ}$ C; add enough energy to overcome the latent heat of vaporization; and finally add more energy to increase the temperature of the steam (if one so desires, and so long as the steam is contained). The process would occur in these steps under STP if we added the energy continuously at a slow rate (i.e., supplied slightly more energy than is lost through radiation and/or conduction). It is possible to sublimate ice into a vapour, but this generally occurs through ablation by UV light (and/or higher energy photons). Why do some substances undergo sublimation while others do not? The answer lies in thermodynamics, specifically in the triple point of the substance. If the triple point occurs at a large pressure compared to STP, then it is possible for the material to sublimate. For instance, if you look at the phase diagram image below, you can see that the sublimation curve for CO$_{2}$ lies at lower temperatures and pressures than its triple point.
Water, in contrast, has a triple point at a much lower pressure than standard atmospheric pressure, so sublimation is less likely to occur under normal conditions (in the absence of high energy photons or particles that can induce ablation). Thus, if you continuously add energy to solid CO$_{2}$ under STP, its first transition will be to a gaseous form, not a liquid form. If you increased the pressure on the system and then added energy, then so long as the pressure is high enough you can produce liquid CO$_{2}$. The substance's chemical properties determine its critical and triple points, and these have been well documented for most substances.

"as there must be a state between solid-gas in which particles have greater velocity than solids but less than gaseous particle..."

No, there must not. If you add enough energy to induce a phase transition and it is easier for the substance to change into a gas than a liquid, it will go straight to the gaseous form, not through the liquid.

"so why do we use this term for some substances at some temperatures and for some we do not... if we know that the transition is always happening continuously?"

A phase transition is, by definition and occurrence, a discontinuous change. The problem is that, while you can change the temperature of a substance in a semi-continuous manner when it is far away from any phase transition lines, once at a phase transition line you can add energy but see no change in temperature until the substance undergoes the phase transition. Then, once in the new state, the substance can change temperature again (assuming the transition was not bounded by another phase transition line) in a semi-continuous manner by adding more energy. You can try this at home with a standard cooking thermometer. Start with near-ice-cold water and raise it to a boiling temperature. Make sure the thermometer is not touching the metal pot/pan in which the water lies.
If your stove's heating elements provide roughly constant power to the pan, then you should see a semi-continuous change in temperature up until just under $100^{\circ}$ C. If you keep the heat low enough, you can sit at this point, just below boiling, for a relatively long time. This is because the rate at which you add energy may only just overcome radiative and conductive heat losses to the surrounding room. If you crank up the heat, the time spent just below the $100^{\circ}$ C mark will be much shorter, as your energy input can vastly exceed radiative and conductive losses.

Caveats and Notes

Note that phase transitions are defined within the context of thermodynamics, which is, by construction, a fluid model (i.e., meant for macroscopic parameter space), not a kinetic model. Kinetic models can describe macroscopic parameters as well, but they do so by finding the ensemble average in velocity space of the distribution function, treating a large number of particles as if they exhibit a bulk behavior similar to a fluid. The differences between fluid and kinetic models can be subtle, but you can think of them in the following ways. We treat fluids as a continuous blob occupying a unit volume that can deform under stress, but the initial fluid element, if incompressible, will occupy the same volume. For kinetic models, we assume a model distribution function that most closely describes the discrete particles in the system. In many cases this function is continuous in the mathematical sense, and the use of ensemble averages can yield bulk properties similar to those in thermodynamics (e.g., temperature). However, the interpretations are often different, and the limitations and assumptions required in kinetic models are generally fewer than in fluid models.

Continuous vs. Discontinuous

When I use the term discontinuous above, I am referring to a change that occurs on a smaller scale than the resolution of the specific observation.
For instance, we assume that shock waves contain a discontinuous jump in density, pressure, etc. in a region called the ramp. This region is generally assumed to be "infinitely" thin when looking at the asymptotic state of the fluid/kinetic gas on either side of the ramp. However, it is well known that the ramp has a finite thickness, of the order of $\sim 1 \ \mu m$ in Earth's atmosphere at STP. For most fluid models this spatial scale is so small that we can approximate it as infinitesimal and neglect it. This approximation greatly simplifies many of the equations we would use to model such phenomena, even though the transition from upstream to downstream is not truly discontinuous. We define the transition as discontinuous because it is comparable to or smaller than the smallest relevant scale length considered for the problem at hand (i.e., in this case, the inter-particle mean free path). In nature there are few things that could truly be called discontinuous (I actually know of none, but some of the quantum whisperers on this site might know of some, so I am trying to be careful in this statement). However, the fact that some phenomenon changes continuously on the smallest scales may not matter for the macroscopic dynamics where we assumed a discontinuous change. As in the shock wave example above, the fact that the ramp region has a finite thickness does not render the conservation relations used to model most shocks (i.e., the Rankine-Hugoniot relations) useless. The assumption that the ramp is discontinuous works because the transition is faster than the scales (i.e., fluid scales) considered in this specific problem. Thus, the definition of continuous vs. discontinuous depends upon the problem being addressed! So in the purest sense, yes, a phase transition is close to (not exactly, because particles are discrete) a continuous transition, if we could measure things "infinitely" fast and on an "infinitely" small scale.
Interesting Side Note: The use of a model distribution function generally inserts irreversibility into any model one would evolve dynamically from this point forward. Reference The phase diagram image was taken from Wikipedia, courtesy of Cmglee - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=29176053
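The staged heating of ice described earlier can be turned into a quick back-of-envelope calculation. This is a sketch with rounded textbook constants (my own assumptions, not values from the answer), for 1 kg of ice taken from $-10^{\circ}$ C to steam at $100^{\circ}$ C:

```python
# Energy to take 1 kg of ice at -10 C to steam at 100 C, following the
# staged heating described in the text. Constants are rounded textbook
# values (assumptions for illustration).
m = 1.0                 # kg
c_ice   = 2090.0        # J/(kg K), specific heat of ice
L_fus   = 334e3         # J/kg, latent heat of fusion
c_water = 4186.0        # J/(kg K), specific heat of liquid water
L_vap   = 2256e3        # J/kg, latent heat of vaporization

q_warm_ice = m * c_ice * 10.0      # step 1: -10 C -> 0 C
q_melt     = m * L_fus             # step 2: melt at 0 C
q_warm_liq = m * c_water * 100.0   # step 3: 0 C -> 100 C
q_boil     = m * L_vap             # step 4: vaporize at 100 C

q_total = q_warm_ice + q_melt + q_warm_liq + q_boil
```

Note that the latent heat of vaporization dominates the budget, which is why a pot of water spends so long "just below boiling" at low heat.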
For most physical properties of substances, it's useful to know how those properties depend on other parameters. For example, the various coefficients of thermal expansion (CTEs) indicate how rapidly the dimension(s) of a substance change when the temperature is changed by a given amount. A common CTE used when working with materials is the linear CTE: $$\alpha_\mathrm L = {1\over L} {dL \over dT}$$ When thermal expansion occurs, if the temperature change is relatively small the amount of thermal expansion is roughly proportional to the original length $L$. So, the definition of $\alpha_\mathrm L$ divides the length dependence $dL/dT$ by $L$ to yield a parameter that is approximately constant within a particular temperature range. A similar rationale underlies the definition of the molar conductance (also called the 'molar conductivity') as: $$\Lambda_\mathrm m = {\kappa \over c}$$ The conductivity $\kappa$ of an electrolyte is a strong function of the concentration $c$ of the electrolyte of interest. However, for electrolytes containing only strong acids and/or bases (or salts of these), $\kappa$ is nearly a linear function of $c$ over a very wide concentration range, and thus $\Lambda_\mathrm m$ for these electrolytes is nearly constant. Thus, for species that show this strong linearity, defining $\mathbf{\Lambda_m}\equiv\kappa / c$ is useful because a single $\mathbf{\Lambda_m}$ value can be employed to calculate the absolute conductivity $\mathbf\kappa$ across a wide range of electrolyte concentrations $c$, as: $$\kappa = \Lambda_\mathrm m c$$
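The usefulness of $\Lambda_\mathrm m$ being nearly constant can be shown in a couple of lines. The numerical value below (the limiting molar conductivity of KCl at 25 °C, about 149.8 S cm²/mol) is a standard tabulated figure I am supplying for illustration; treating it as constant across concentrations is exactly the approximation discussed above, and it degrades somewhat at higher concentrations:

```python
# kappa = Lambda_m * c for a strong electrolyte, assuming Lambda_m is
# approximately constant. Value for KCl at 25 C (assumption, in SI units):
lambda_m = 149.8e-4   # S m^2 / mol  (= 149.8 S cm^2 / mol)

def conductivity(c_mol_per_m3):
    """Approximate conductivity kappa = Lambda_m * c."""
    return lambda_m * c_mol_per_m3

# 0.01 mol/L = 10 mol/m^3
kappa = conductivity(10.0)   # S/m
```

For dilute solutions this estimate lands close to measured values; at higher concentrations $\Lambda_\mathrm m$ itself decreases, so the linear estimate overshoots.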
I am able to prove the iff in the forward direction. But I am having trouble proving the statement in the other direction. I am trying to use the definition of dense, but I am not getting anywhere with it.

Let $U \subseteq X$ be a non-empty open set and let $S \subseteq X$ be the set in question. Denseness of $\bar{S}^c$ is equivalent to saying that it intersects every non-empty open set non-trivially, hence $U \cap \bar{S}^c \neq \emptyset$. Considering properties of the complement, this is the same as saying that $U$ is not a subset of $\bar{S}$. Since the closure of $S$ therefore contains no non-empty open set, it has empty interior, and so $S$ is nowhere dense.

Let $S^{c}$ denote the complement of $S$ in $X$ and $cl(S)$ the closure of $S$ in $X$. To prove that $S$ is nowhere dense, we must show that any open set $U \subseteq cl(S)$ is empty. Let $U$ be such a set. The complements are included in the opposite order, that is, $cl(S)^{c} \subseteq U^{c}$. Taking closures of both sides and noting that $U^{c}$ is closed, this gives $cl(cl(S)^{c}) \subseteq U^{c}$. But $cl(S)^{c}$ is by assumption dense, which (by definition) means that $cl(cl(S)^{c}) = X$. We thus obtain the relation $X \subseteq U^{c}$, which is possible only if $U = \emptyset$. Hence any open subset of $cl(S)$ must be empty, and thus $S$ is nowhere dense.

Let $A^c$ be the complement of $A$, $\overline{A}$ the closure of $A$ and $A^o$ the interior of $A$. We define: 1) $A$ is dense iff $\overline{A}=X$. 2) $A$ is nowhere dense iff $(\overline{A})^o=\varnothing$. Edit: We will use the fact that $(A^c)^c=A$ (involution) in the following proof. First we prove the following lemma: $$ \overline{A}=((A^c)^o)^c\tag1 $$ By definition, $A^o$ (the interior of $A$) is the largest open set contained in $A$. So $(A^c)^o$ is the largest open set contained in $A^c$, i.e. $(A^c)^o\subset A^c$. Thus $A=(A^c)^c\subset ((A^c)^o)^c$. Since $(A^c)^o$ is open, $((A^c)^o)^c$ is closed. Moreover it is the smallest closed set containing $A$: if $C$ is closed and $A \subset C$, then $C^c$ is open and contained in $A^c$, so $C^c \subset (A^c)^o$, and hence $((A^c)^o)^c \subset C$.
By the definition of closure, $(1)$ follows. Now if $A$ is nowhere dense, then $(\overline{A})^o=\varnothing$, so by $(1)$ and involution $$ \overline{(\overline{A})^c}=((((\overline{A})^{c})^c)^o)^c=((\overline{A})^o)^c=\varnothing^c=X, $$ i.e. $(\overline{A})^c$ is dense. Conversely, if $(\overline{A})^c$ is dense, then $\overline{(\overline{A})^c}=X$ and $$ ((\overline{A})^o)^c=((((\overline{A})^{c})^c)^o)^c=\overline{(\overline{A})^c}=X. $$ So $(\overline{A})^o=\varnothing$, i.e. $A$ is nowhere dense.

Assume that $(\operatorname{cl} A)^c$ is dense. Then $\operatorname{cl}[(\operatorname{cl} A)^c] = X$. So $(\operatorname{int} \operatorname{cl} A)^c = X$. Thus $\operatorname{int} \operatorname{cl} A$ is empty. Hence $A$ is nowhere dense.
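The equivalence being proved here ($A$ is nowhere dense iff the complement of $\overline{A}$ is dense) can also be checked exhaustively on a small finite topological space. The following sketch, with a topology I chose just for illustration, computes closures and interiors directly from the open sets and verifies the equivalence over all subsets:

```python
from itertools import combinations

X = frozenset({0, 1, 2})
# A small non-discrete topology on X (chosen for illustration)
opens = [frozenset(s) for s in [(), (0,), (0, 1), (0, 1, 2)]]
closed = [X - U for U in opens]

def closure(A):
    # smallest closed set containing A
    return frozenset.intersection(*[C for C in closed if A <= C])

def interior(A):
    # largest open set contained in A
    return frozenset.union(*[U for U in opens if U <= A])

def is_dense(A):
    return closure(A) == X

def is_nowhere_dense(A):
    return interior(closure(A)) == frozenset()

# A nowhere dense  <=>  complement of closure(A) is dense
subsets = [frozenset(c) for r in range(4) for c in combinations(X, r)]
results = [is_nowhere_dense(A) == is_dense(X - closure(A)) for A in subsets]
```

This is of course no substitute for the proofs above; it just makes the definitions concrete.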
A proof can be made with the following thought experiment. (In relativity, thought experiments are ubiquitous.) I'm sorry I can't make drawings, but if you search for "light clock experiment" you will find what I'm talking about, with drawings and possibly a more detailed explanation. Take a box with two reflective walls at distance $d/2$, imagine a beam of light bouncing back and forth between them, and imagine that a detector at one of the two walls counts the light bounces and uses them to track time. If you look at the box in its rest frame, you'll see the light beam complete a bounce every $\Delta t = d/c$ seconds. Let's say there is a built-in mechanism that makes the box, I don't know, ring an alarm after $n$ bounces. That is, the box alarm will go off after a time $n \Delta t$. Now let's go into a reference frame moving at constant velocity $v$ with respect to the original frame, along a direction orthogonal to the light beam. The time interval for a full bounce is $\Delta t'$ and the distance travelled is$$\mathrm{dist} = 2\sqrt{(\Delta x')^2 + (\Delta y')^2}\,,\qquad\Delta x' = \frac{v\Delta t'}{2}\,,$$where $\Delta y'$ is the distance travelled in the vertical direction in the new frame. It must equal $d/2$ because intervals orthogonal to $v$ don't get transformed under a boost (see below for an explanation of that). From the fact that $c$ is the same in both frames we conclude$$c^2 (\Delta t')^2 = v^2 (\Delta t')^2 + d^2\;\;\Rightarrow\;\;(\Delta t')^2 = \frac{c^2}{c^2-v^2}\,\Delta t^2\,.$$We still need to show that $\Delta y' = \Delta y = d/2$. There's another funny little thought experiment to prove that. Say I have two halves of a pipe, painted differently and kept separated, running towards each other along the main axis of the pipe. If the transverse directions shrank under a Lorentz transformation, then in one frame I would see one half pipe enclosing the other, and in another frame the opposite.
Clearly the color seen by an observer after the pipes meet must be frame invariant, so the resolution is that the pipes don't shrink or enlarge along the orthogonal directions. OK, now let's come to the punch line. The events $E_1$ and $E_2$ are, respectively, "the light clock starts" and "the light clock alarm rings". Let us compute the interval$$S^2=c^2(t_2 - t_1)^2 - (\vec{x}_2 - \vec{x}_1)^2\,,$$in the two frames. For the original frame, $S^2 = (nc\Delta t)^2$. For the other frame,$$S^2 = (n c \Delta t')^2 - (n v \Delta t')^2 = n^2(c^2 - v^2)(\Delta t')^2 = (nc\Delta t)^2\,.$$So the factors of $c^2 - v^2$ cancel and the two frames agree! But we are not done yet. This proves that the interval is invariant under arbitrary boosts for positive intervals (usually called time-like). Another answer given previously proved it for null intervals (called light-like) in full generality. We are left to prove it for negative intervals (space-like), and also for pure rotations. The pure rotation case is trivial: since $\Delta t = \Delta t'$, invariance is just the statement that the Euclidean norm is invariant under rotations. The proof for space-like intervals can be done with another thought experiment. Take two detectors along the $x$ direction and shine a laser from the middle point. In the rest frame the two detection events have $\Delta t = 0$, while in a frame boosted along $x$ the events won't be simultaneous. The math works out similarly, so you can do it as an exercise.
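The invariance argued above is easy to check numerically (a sanity check, not a proof). In units with $c = 1$, applying a standard boost along $x$ leaves $s^2 = t^2 - x^2$ unchanged for time-like, light-like and space-like event pairs alike; the sample events and boost speed below are my own choices:

```python
import math

def boost(t, x, v):
    """Lorentz boost along x with speed v, in units where c = 1."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (t - v * x), g * (x - v * t)

def interval2(t, x):
    """Squared spacetime interval s^2 = t^2 - x^2 (c = 1)."""
    return t * t - x * x

events = [(5.0, 3.0),   # time-like  (s^2 > 0)
          (2.0, 2.0),   # light-like (s^2 = 0)
          (1.0, 4.0)]   # space-like (s^2 < 0)

v = 0.6
diffs = [interval2(*boost(t, x, v)) - interval2(t, x) for t, x in events]
```

Each entry of `diffs` is zero to floating-point precision, matching the three cases treated in the answer.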
Stationary Kernels

Stationary kernels can be expressed as a function of the difference between their inputs. Note that any norm may be used to quantify the distance between the two vectors $\mathbf{x}$ and $\mathbf{y}$; the values $p = 1$ and $p = 2$ give the Manhattan distance and Euclidean distance respectively.

Instantiating Stationary Kernels

Stationary kernels are implemented as subclasses of the StationaryKernel[T, V, M] class, which requires an implicit Field[T] object (an algebraic field with definitions for addition, subtraction, multiplication and division of its elements, much like the ordinary number system). You may also import spire.implicits._ in order to load the default field implementations for basic data types like Int, Double and so on. Before instantiating any child class of StationaryKernel, one needs the following code:

```scala
import spire.algebra.Field
import io.github.mandar2812.dynaml.analysis.VectorField

// Calculate the number of input features
// and create a vector field of that dimension
val num_features: Int = ...
implicit val f = VectorField(num_features)
```

Radial Basis Function Kernel

The RBF kernel is the most popular kernel function applied in machine learning. It represents an inner product space spanned by the Hermite polynomials, and as such is suitable for modelling smooth functions. The RBF kernel is also called a universal kernel, because any smooth function can be represented with a high degree of accuracy given a suitable value of the bandwidth.

```scala
val rbf = new RBFKernel(4.0)
```

Squared Exponential Kernel

A generalization of the RBF kernel is the Squared Exponential (SE) kernel:

```scala
val se = new SEKernel(4.0, 2.0)
```

Mahalanobis Kernel

This kernel is a further generalization of the SE kernel. It uses the Mahalanobis distance instead of the Euclidean distance between the inputs. The Mahalanobis distance $(\mathbf{x}-\mathbf{y})^\intercal \Sigma^{-1} (\mathbf{x}-\mathbf{y})$ is characterized by a symmetric positive definite matrix $\Sigma$, and reduces to the (squared) Euclidean distance when $\Sigma$ is the identity matrix. Further, if $\Sigma$ is diagonal, the Mahalanobis kernel becomes the Automatic Relevance Determination version of the SE kernel (SE-ARD). In DynaML, the MahalanobisKernel class implements the SE-ARD kernel with diagonal $\Sigma$.

```scala
val bandwidths: DenseVector[Double] = _
val amp = 1.5
val maha_kernel = new MahalanobisKernel(bandwidths, amp)
```

Student T Kernel

```scala
val tstud = new TStudentKernel(2.0)
```

Rational Quadratic Kernel

```scala
val rat = new RationalQuadraticKernel(shape = 1.5, l = 1.5)
```

Cauchy Kernel

```scala
val cau = new CauchyKernel(2.5)
```

Gaussian Spectral Kernel

```scala
// Define how the hyper-parameter Map gets transformed to the kernel parameters
val encoder = Encoder(
  (conf: Map[String, Double]) => (conf("c"), conf("s")),
  (cs: (Double, Double)) => Map("c" -> cs._1, "s" -> cs._2))

val gsmKernel = GaussianSpectralKernel[Double](3.5, 2.0, encoder)
```

Matern Half Integer

The Matern kernels are an important family of covariance functions, parameterized by two quantities: the order $\nu$ and the characteristic length scale $\rho$. The general Matern covariance is defined in terms of modified Bessel functions, where $d = ||\mathbf{x} - \mathbf{y}||$ is the Euclidean ($L_2$) distance between points. For the case $\nu = p + \frac{1}{2}$, $p \in \mathbb{N}$, the expression simplifies. Currently there is only support for Matern half-integer kernels.

```scala
implicit val ev = VectorField(2)
val matKern = new GenericMaternKernel(1.5, p = 1)
```

Wavelet Kernel

The Wavelet kernel (Zhang et al., 2004) comes from wavelet theory. The function $h$ is known as the mother wavelet function; Zhang et al. suggest the mother wavelet $h(x) = \cos(1.75x)\,e^{-x^2/2}$.

```scala
val wv = new WaveletKernel(x => math.cos(1.75*x)*math.exp(-1.0*x*x/2.0))(1.5)
```

Periodic Kernel

The periodic kernel has Fourier series as its orthogonal eigenfunctions. It is used when constructing predictive models of quantities known to have some periodic behavior.

```scala
val periodic_kernel = new PeriodicKernel(lengthscale = 1.5, freq = 2.5)
```

Wave Kernel

```scala
val wv_kernel = WaveKernel(th = 1.0)
```

Laplacian Kernel

The Laplacian kernel is the covariance function of the well-known Ornstein-Uhlenbeck process; samples drawn from this process are continuous but nowhere differentiable.

```scala
val lap = new LaplacianKernel(4.0)
```
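Independently of DynaML, the stationary kernels above are easy to evaluate directly. The sketch below implements the usual RBF/SE form $\exp(-d^2/2\sigma^2)$ and an SE-ARD (diagonal-$\Sigma$ Mahalanobis) variant in plain NumPy; the parameter names and the exact parameterization are my own assumptions and may differ from DynaML's internals:

```python
import numpy as np

def rbf(x, y, bandwidth):
    """RBF kernel exp(-||x - y||^2 / (2 * bandwidth^2))."""
    d2 = np.sum((x - y) ** 2)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def se_ard(x, y, length_scales, amplitude):
    """SE-ARD: one length scale per input dimension, i.e. a diagonal
    Mahalanobis matrix Sigma = diag(length_scales^2), plus an amplitude."""
    d2 = np.sum(((x - y) / length_scales) ** 2)
    return amplitude ** 2 * np.exp(-0.5 * d2)

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])

k_same = rbf(x, x, bandwidth=4.0)        # identical inputs give the maximum
k_diff = rbf(x, y, bandwidth=4.0)        # decays with distance
k_ard  = se_ard(x, x, np.array([1.0, 2.0]), amplitude=1.5)
```

Note how the ARD version weights each input dimension separately, which is exactly what makes the diagonal Mahalanobis kernel useful for judging feature relevance.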
28.35 Relatively ample sheaves

Let X be a scheme and \mathcal{L} an invertible sheaf on X. Then \mathcal{L} is ample on X if X is quasi-compact and every point of X is contained in an affine open of the form X_ s, where s \in \Gamma (X, \mathcal{L}^{\otimes n}) and n \geq 1, see Properties, Definition 27.26.1. We turn this into a relative notion as follows.

Definition 28.35.1. Let f : X \to S be a morphism of schemes. Let \mathcal{L} be an invertible \mathcal{O}_ X-module. We say \mathcal{L} is relatively ample, or f-relatively ample, or ample on X/S, or f-ample if f : X \to S is quasi-compact, and if for every affine open V \subset S the restriction of \mathcal{L} to the open subscheme f^{-1}(V) of X is ample.

We note that the existence of a relatively ample sheaf on X does not force the morphism X \to S to be of finite type.

Lemma 28.35.2. Let f : X \to S be a morphism of schemes. Let \mathcal{L} be an invertible \mathcal{O}_ X-module. Let n \geq 1. Then \mathcal{L} is f-ample if and only if \mathcal{L}^{\otimes n} is f-ample.

Proof. This follows from the corresponding lemma in Properties. \square

Lemma 28.35.3. Let f : X \to S be a morphism of schemes. If there exists an f-ample invertible sheaf, then f is separated.

Proof. Being separated is local on the base (see Schemes, Lemma 25.21.7 for example; it also follows easily from the definition). Hence we may assume S is affine and X has an ample invertible sheaf. In this case the result follows from Properties, Lemma 27.26.8.
\square

There are many ways to characterize relatively ample invertible sheaves, analogous to the equivalent conditions in Properties, Proposition 27.26.13. We will add these here as needed.

Lemma 28.35.4. Let f : X \to S be a morphism of schemes. Let \mathcal{L} be an invertible \mathcal{O}_X-module. The following are equivalent:

(1) The invertible sheaf \mathcal{L} is f-ample.
(2) There exists an open covering S = \bigcup V_i such that each \mathcal{L}|_{f^{-1}(V_i)} is ample relative to f^{-1}(V_i) \to V_i.
(3) There exists an affine open covering S = \bigcup V_i such that each \mathcal{L}|_{f^{-1}(V_i)} is ample.
(4) There exists a quasi-coherent graded \mathcal{O}_S-algebra \mathcal{A} and a map of graded \mathcal{O}_X-algebras \psi : f^*\mathcal{A} \to \bigoplus_{d \geq 0} \mathcal{L}^{\otimes d} such that U(\psi) = X and

r_{\mathcal{L}, \psi} : X \longrightarrow \underline{\text{Proj}}_S(\mathcal{A})

is an open immersion (see Constructions, Lemma 26.19.1 for notation).
(5) The morphism f is quasi-separated and part (4) above holds with \mathcal{A} = f_*(\bigoplus_{d \geq 0} \mathcal{L}^{\otimes d}) and \psi the adjunction mapping.
(6) Same as (4) but just requiring r_{\mathcal{L}, \psi} to be an immersion.

Proof. It is immediate from the definition that (1) implies (2) and (2) implies (3). It is clear that (5) implies (4).

Assume (3) holds for the affine open covering S = \bigcup V_i. We are going to show (5) holds. Since each f^{-1}(V_i) has an ample invertible sheaf we see that f^{-1}(V_i) is separated (Properties, Lemma 27.26.8). Hence f is separated. By Schemes, Lemma 25.24.1 we see that \mathcal{A} = f_*(\bigoplus_{d \geq 0} \mathcal{L}^{\otimes d}) is a quasi-coherent graded \mathcal{O}_S-algebra. Denote \psi : f^*\mathcal{A} \to \bigoplus_{d \geq 0} \mathcal{L}^{\otimes d} the adjunction mapping. The description of the open U(\psi) in Constructions, Section 26.19 and the definition of ampleness of \mathcal{L}|_{f^{-1}(V_i)} show that U(\psi) = X.
Moreover, Constructions, Lemma 26.19.1 part (3) shows that the restriction of r_{\mathcal{L}, \psi} to f^{-1}(V_i) is the same as the morphism from Properties, Lemma 27.26.9, which is an open immersion according to Properties, Lemma 27.26.11. Hence (5) holds.

Let us show that (4) implies (1). Assume (4). Denote \pi : \underline{\text{Proj}}_S(\mathcal{A}) \to S the structure morphism. Choose V \subset S affine open. By Constructions, Definition 26.16.7 we see that \pi^{-1}(V) \subset \underline{\text{Proj}}_S(\mathcal{A}) is equal to \text{Proj}(A) where A = \mathcal{A}(V) as a graded ring. Hence r_{\mathcal{L}, \psi} maps f^{-1}(V) isomorphically onto a quasi-compact open of \text{Proj}(A). Moreover, \mathcal{L}^{\otimes d} is isomorphic to the pullback of \mathcal{O}_{\text{Proj}(A)}(d) for some d \geq 1. (See part (3) of Constructions, Lemma 26.19.1 and the final statement of Constructions, Lemma 26.14.1.) This implies that \mathcal{L}|_{f^{-1}(V)} is ample by Properties, Lemma 27.26.12 and the fact that an invertible sheaf is ample if and only if some positive power of it is ample.

Assume (6). By the equivalence of (1) - (5) above we see that the property of being relatively ample on X/S is local on S. Hence we may assume that S is affine, and we have to show that \mathcal{L} is ample on X. In this case the morphism r_{\mathcal{L}, \psi} is identified with the morphism, also denoted r_{\mathcal{L}, \psi} : X \to \text{Proj}(A), associated to the map \psi : A = \mathcal{A}(V) \to \Gamma_*(X, \mathcal{L}). (See references above.) As above we also see that \mathcal{L}^{\otimes d} is the pullback of the sheaf \mathcal{O}_{\text{Proj}(A)}(d) for some d \geq 1. Moreover, since X is quasi-compact we see that X gets identified with a closed subscheme of a quasi-compact open subscheme Y \subset \text{Proj}(A).
By Constructions, Lemma 26.10.6 (see also Properties, Lemma 27.26.12) we see that \mathcal{O}_Y(d') is an ample invertible sheaf on Y for some d' \geq 1. Since the restriction of an ample sheaf to a closed subscheme is ample, we conclude that the pullback of \mathcal{O}_Y(d') to X is ample. Combining these results with the fact that an invertible sheaf is ample if and only if some positive power of it is ample, we conclude that \mathcal{L} is ample as desired. \square

Lemma 28.35.5.reference Let f : X \to S be a morphism of schemes. Let \mathcal{L} be an invertible \mathcal{O}_X-module. Assume S is affine. Then \mathcal{L} is f-relatively ample if and only if \mathcal{L} is ample on X.

Proof. Immediate from Lemma 28.35.4 and the definitions. \square

Lemma 28.35.6.reference Let f : X \to S be a morphism of schemes. Then f is quasi-affine if and only if \mathcal{O}_X is f-relatively ample.

Proof. Follows from Properties, Lemma 27.27.1 and the definitions. \square

Lemma 28.35.7. Let f : X \to Y be a morphism of schemes, \mathcal{M} an invertible \mathcal{O}_Y-module, and \mathcal{L} an invertible \mathcal{O}_X-module.

(1) If \mathcal{L} is f-ample and \mathcal{M} is ample, then \mathcal{L} \otimes f^*\mathcal{M}^{\otimes a} is ample for a \gg 0.
(2) If \mathcal{M} is ample and f quasi-affine, then f^*\mathcal{M} is ample.

Proof. Assume \mathcal{L} is f-ample and \mathcal{M} ample. By assumption Y and f are quasi-compact (see Definition 28.35.1 and Properties, Definition 27.26.1). Hence X is quasi-compact. Pick x \in X. We can choose m \geq 1 and t \in \Gamma(Y, \mathcal{M}^{\otimes m}) such that Y_t is affine and f(x) \in Y_t.
Since \mathcal{L} restricts to an ample invertible sheaf on f^{-1}(Y_t) = X_{f^*t} we can choose n \geq 1 and s \in \Gamma(X_{f^*t}, \mathcal{L}^{\otimes n}) with x \in (X_{f^*t})_s and (X_{f^*t})_s affine. By Properties, Lemma 27.17.2 there exists an integer e \geq 1 and a section s' \in \Gamma(X, \mathcal{L}^{\otimes n} \otimes f^*\mathcal{M}^{\otimes em}) which restricts to s(f^*t)^e on X_{f^*t}. For any b > 0 consider the section s'' = s'(f^*t)^b of \mathcal{L}^{\otimes n} \otimes f^*\mathcal{M}^{\otimes (e + b)m}. Then X_{s''} = (X_{f^*t})_s is an affine open of X containing x. Picking b such that n divides e + b we see \mathcal{L}^{\otimes n} \otimes f^*\mathcal{M}^{\otimes (e + b)m} is the nth power of \mathcal{L} \otimes f^*\mathcal{M}^{\otimes a} for some a, and we can get any a divisible by m and big enough. Since X is quasi-compact a finite number of these affine opens cover X. We conclude that for some a sufficiently divisible and large enough the invertible sheaf \mathcal{L} \otimes f^*\mathcal{M}^{\otimes a} is ample on X. On the other hand, we know that \mathcal{M}^{\otimes c} (and hence its pullback to X) is globally generated for all c \gg 0 by Properties, Proposition 27.26.13. Thus \mathcal{L} \otimes f^*\mathcal{M}^{\otimes a + c} is ample (Properties, Lemma 27.26.5) for c \gg 0 and (1) is proved.

Lemma 28.35.8. Let g : Y \to S and f : X \to Y be morphisms of schemes. Let \mathcal{M} be an invertible \mathcal{O}_Y-module. Let \mathcal{L} be an invertible \mathcal{O}_X-module. If S is quasi-compact, \mathcal{M} is g-ample, and \mathcal{L} is f-ample, then \mathcal{L} \otimes f^*\mathcal{M}^{\otimes a} is g \circ f-ample for a \gg 0.

Lemma 28.35.9. Let f : X \to S be a morphism of schemes. Let \mathcal{L} be an invertible \mathcal{O}_X-module. Let S' \to S be a morphism of schemes.
Let f' : X' \to S' be the base change of f and denote \mathcal{L}' the pullback of \mathcal{L} to X'. If \mathcal{L} is f-ample, then \mathcal{L}' is f'-ample.

Proof. By Lemma 28.35.4 it suffices to find an affine open covering S' = \bigcup U'_i such that \mathcal{L}' restricts to an ample invertible sheaf on (f')^{-1}(U'_i) for all i. We may choose U'_i mapping into an affine open U_i \subset S. In this case the morphism (f')^{-1}(U'_i) \to f^{-1}(U_i) is affine as a base change of the affine morphism U'_i \to U_i. Thus \mathcal{L}'|_{(f')^{-1}(U'_i)} is ample by Lemma 28.35.7. \square

Lemma 28.35.10. Let g : Y \to S and f : X \to Y be morphisms of schemes. Let \mathcal{L} be an invertible \mathcal{O}_X-module. If \mathcal{L} is g \circ f-ample and f is quasi-compact, then \mathcal{L} is f-ample.

Proof. Assume f is quasi-compact and \mathcal{L} is g \circ f-ample. Let U \subset S be an affine open and let V \subset Y be an affine open with g(V) \subset U. Then \mathcal{L}|_{(g \circ f)^{-1}(U)} is ample on (g \circ f)^{-1}(U) by assumption. Since f^{-1}(V) \subset (g \circ f)^{-1}(U) we see that \mathcal{L}|_{f^{-1}(V)} is ample on f^{-1}(V) by Properties, Lemma 27.26.14. Namely, f^{-1}(V) \to (g \circ f)^{-1}(U) is a quasi-compact open immersion by Schemes, Lemma 25.21.14 as (g \circ f)^{-1}(U) is separated (Properties, Lemma 27.26.8) and f^{-1}(V) is quasi-compact (as f is quasi-compact). Thus we conclude that \mathcal{L} is f-ample by Lemma 28.35.4. \square
Different post-stack inversion methods, such as model-based, sparse-spike, colored and recursive inversion, use high-cut-filtered impedance logs (low frequency) and the mid-frequency seismic trace in order to generate an inverted impedance model. So why are high-frequency components not taken into account during inversion?

Because those frequencies are not present in the seismic data. Here's an example with some typical numbers:

Seismic sample interval: 4 ms
Therefore Nyquist is 0.5 $\times$ 1/0.004 = 125 Hz
Usually there is a filter on the recorder at 0.8 $\times$ Nyquist = 100 Hz
So the seismic does not contain frequencies beyond 100 Hz

The maximum frequency $f_\text{max}$ of the data determines its resolving power. There are various ways to think about and calculate this; here's one formula (Kallweit & Wood, 1982):

$$\tau_\text{min} = \frac{1}{1.4\,f_\text{max}}$$

So in our example, the thinnest resolvable bed is about 1 / (1.4 $\times$ 100) = 7.1 ms, and it will be thicker than that if there is a lot of noise or the bandwidth is not optimal (the earth attenuates high frequencies more than low ones, so this is an absolute minimum). If we assume a P-wave velocity of 2500 m/s, then 7.1 ms two-way time is about 2500 $\times$ (0.0071 / 2) $\approx$ 8.9 m (29 ft). Ordinary wireline logs have a resolution of 0.15 m or 6 in, so there's a substantial discrepancy. I call this 'the integration gap'.

Because the seismic contains no information about geology thinner than this 8.9 m (in this example), not only is it pointless trying to invert for it, but it would be dangerous: anything we thought we'd resolved would be erroneous. This is why it's necessary to filter the logs back to the resolution of the seismic.

Reference

Kallweit, R & L Wood (1982). The limits of resolution of zero-phase wavelets. Geophysics 47 (7), p 1035–1046, DOI: 10.1190/1.1441367.
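The arithmetic above can be sketched in a few lines (a minimal sketch using the example numbers from the text — 4 ms sampling, a 0.8 × Nyquist recorder filter, and 2500 m/s):

```python
# Resolution estimate following Kallweit & Wood (1982).
dt = 0.004             # seismic sample interval, s
f_nyq = 0.5 / dt       # Nyquist frequency, Hz (125)
f_max = 0.8 * f_nyq    # anti-alias filter on the recorder, Hz (100)

tau_min = 1.0 / (1.4 * f_max)   # thinnest resolvable bed, s of two-way time

v_p = 2500.0                    # assumed P-wave velocity, m/s
b_min = v_p * tau_min / 2.0     # convert two-way time to thickness, m (~8.9)
```

Everything below `b_min` (about 8.9 m here) is invisible to the seismic, which is exactly why the logs are filtered back before inversion.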
We have the following nice relations for Stirling numbers of the first kind:

$$\left[n\atop 2\right] = \Gamma(n) H_{n-1}$$

$$\left[n\atop 3\right] = \frac{\Gamma(n)}{2} \left((H_{n-1})^2-H_{n-1}^{(2)}\right)$$

where

$$H^{(p)}_n = \sum^n_{k=1} \frac{1}{k^p}, \qquad H^{(1)}_n \equiv H_n$$

Questions

Can someone give an algebraic (not combinatorial) proof of the previous relations?

Is there a "simple" general formula in terms of the harmonic numbers for

$$\left[n\atop k\right] = \, ?$$
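Both identities are easy to check numerically before hunting for a proof. The sketch below assumes the unsigned Stirling numbers of the first kind, built from the standard recurrence \(\left[{n \atop k}\right] = \left[{n-1 \atop k-1}\right] + (n-1)\left[{n-1 \atop k}\right]\), and verifies the relations in exact rational arithmetic:

```python
from fractions import Fraction
from math import factorial

def stirling1(n, k):
    # Unsigned Stirling numbers of the first kind via the standard recurrence.
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return stirling1(n - 1, k - 1) + (n - 1) * stirling1(n - 1, k)

def H(n, p=1):
    # Generalized harmonic number H_n^{(p)} as an exact fraction.
    return sum(Fraction(1, k ** p) for k in range(1, n + 1))

for n in range(2, 9):
    g = factorial(n - 1)                 # Gamma(n) = (n-1)!
    assert stirling1(n, 2) == g * H(n - 1)
    assert stirling1(n, 3) == Fraction(g, 2) * (H(n - 1) ** 2 - H(n - 1, 2))
```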
Is the x_e equation in CAMB correct or not?

Post Reply 3 posts • Page 1 of 1

I am looking at Antony Lewis' paper https://arxiv.org/pdf/0804.3865.pdf, Eq. (B3) on page 11, which gives the number of free electrons per hydrogen atom. For this equation, if [tex]z[/tex] is large, then y=(1+z)^{3/2} also becomes large, while y(z_re) and [tex]\Delta_y[/tex] are fixed. Since [tex]\tanh(x \rightarrow \infty) \rightarrow 1[/tex], for large redshift [tex]x_{\rm e} \rightarrow 1[/tex]. If [tex]z \rightarrow 0[/tex], the [tex]\tanh[/tex] function becomes negative (but greater than -1), so [tex]x_{\rm e} \rightarrow 0[/tex]. This is completely opposite to the trend of Fig. 6 on the same page. Can anyone explain what is going on here? Perhaps I made some stupid mistake; please point it out. Thank you.

Oh, then this typo affects the Planck reionization paper: the published version of Planck intermediate results XLVII, Planck constraints on reionization history, page 5, equation 2 also inherits this typo.
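The confusion disappears if the numerator of the tanh argument is written with the opposite sign to the printed equation, i.e. y(z_re) − y(z), which is the convention that reproduces Fig. 6. A quick numeric sketch of that tanh parametrization (the values z_re = 10, Δz = 0.5 and f = 1 are illustrative choices, not from the paper):

```python
import math

def x_e(z, z_re=10.0, delta_z=0.5, f=1.0):
    # Tanh reionization history with y = (1+z)^{3/2}:
    # x_e -> f (ionized) at low z, x_e -> 0 (neutral) at high z.
    y = (1.0 + z) ** 1.5
    y_re = (1.0 + z_re) ** 1.5
    delta_y = 1.5 * math.sqrt(1.0 + z_re) * delta_z
    return 0.5 * f * (1.0 + math.tanh((y_re - y) / delta_y))
```

With this sign, x_e(0) is close to f, x_e drops through f/2 at z = z_re, and x_e is essentially zero well before recombination, matching the figure.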
For people like me who study algorithms for a living, the 21st-century standard model of computation is the integer RAM. The model is intended to reflect the behavior of real computers more accurately than the Turing machine model. Real-world computers process multiple-bit integers in constant time using parallel hardware; not arbitrary integers, but (because word sizes grow steadily over time) not fixed size integers, either. The model depends on a single parameter $w$, called the word size. Each memory address holds a single $w$-bit integer, or word. In this model, the input size $n$ is the number of words in the input, and the running time of an algorithm is the number of operations on words. Standard arithmetic operations (addition, subtraction, multiplication, integer division, remainder, comparison) and boolean operations (bitwise and, or, xor, shift, rotate) on words require $O(1)$ time by definition. Formally, the word size $w$ is NOT a constant for purposes of analyzing algorithms in this model. To make the model consistent with intuition, we require $w \ge \log_2 n$, since otherwise we cannot even store the integer $n$ in a single word. Nevertheless, for most non-numerical algorithms, the running time is actually independent of $w$, because those algorithms don't care about the underlying binary representation of their input. Mergesort and heapsort both run in $O(n\log n)$ time; median-of-3-quicksort runs in $O(n^2)$ time in the worst case. One notable exception is binary radix sort, which runs in $O(nw)$ time. Setting $w = \Theta(\log n)$ gives us the traditional logarithmic-cost RAM model. But some integer RAM algorithms are designed for larger word sizes, like the linear-time integer sorting algorithm of Andersson et al., which requires $w = \Omega(\log^{2+\varepsilon} n)$. For many algorithms that arise in practice, the word size $w$ is simply not an issue, and we can (and do) fall back on the far simpler uniform-cost RAM model. 
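The radix-sort exception mentioned above is easy to sketch: binary (LSD) radix sort makes one stable O(n) pass per bit, so w passes in total and O(nw) time. A minimal sketch (the word size w = 8 is just for the demo):

```python
def radix_sort(words, w=8):
    # LSD binary radix sort on non-negative w-bit integers:
    # one stable partition per bit, w passes total, O(nw) time.
    for bit in range(w):
        zeros = [x for x in words if not (x >> bit) & 1]
        ones = [x for x in words if (x >> bit) & 1]
        words = zeros + ones   # stable: preserves order within each class
    return words
```

Stability of each pass is what makes the low-to-high bit order correct: ties on the current bit are broken by the bits already processed.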
The only serious difficulty comes from nested multiplication, which can be used to build very large integers very quickly. If we could perform arithmetic on arbitrary integers in constant time, we could solve any problem in PSPACE in polynomial time. Update: I should also mention that there are exceptions to the "standard model", like Fürer's integer multiplication algorithm, which uses multitape Turing machines (or equivalently, the "bit RAM"), and most geometric algorithms, which are analyzed in a theoretically clean but idealized "real RAM" model. Yes, this is a can of worms.
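The blow-up from nested multiplication is easy to demonstrate: squaring doubles the bit length, so k "unit-cost" multiplications manufacture a number of about 2^k bits:

```python
x = 2
for _ in range(20):
    x = x * x   # one "constant-time" multiplication per step in the naive model

# After 20 squarings x = 2^(2^20): over a million bits of output
# from only 20 nominally unit-cost operations.
print(x.bit_length())  # -> 1048577
```

This is exactly why the standard model charges for word-sized operands only, rather than allowing arbitrary integers at unit cost.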
> Input
> Input
>> 1²
>> (3]
>> 1%L
>> L=2
>> Each 5 4
>> Each 6 7
>> L⋅R
>> Each 9 4 8
> {0}
>> {10}
>> 12∖11
>> Output 13

Try it online!

Returns a set of all possible solutions, and the empty set (i.e. \$\emptyset\$) when no solution exists.

How it works

Unsurprisingly, it works almost identically to most other answers: it generates a list of numbers and checks each one for inverse modulus with the argument. If you're familiar with how Whispers' program structure works, feel free to skip ahead to the horizontal line. If not: essentially, Whispers works on a line-by-line reference system, starting on the final line. Each line is classed as one of two options: either it is a nilad line, or it is an operator line.

Nilad lines start with >, such as > Input or > {0}, and return the exact value represented on that line, i.e. > {0} returns the set \$\{0\}\$. > Input returns the next line of STDIN, evaluated if possible.

Operator lines start with >>, such as >> 1² or >> (3], and denote running an operator on one or more values. Here, the numbers used do not reference those explicit numbers; instead they reference the value on that line. For example, ² is the square command (\$n \to n^2\$), so >> 1² does not return the value \$1^2\$; instead it returns the square of line 1, which, in this case, is the first input.

Usually, operator lines only work using numbers as references, yet you may have noticed the lines >> L=2 and >> L⋅R. These two values, L and R, are used in conjunction with Each statements. Each statements work by taking two or three arguments, again as numerical references. The first argument (e.g. 5) is a reference to an operator line used as a function, and the rest of the arguments are arrays. We then iterate the function over the array, where the L and R in the function represent the current element(s) in the arrays being iterated over. As an example: let \$A = [1, 2, 3, 4]\$, \$B = [4, 3, 2, 1]\$ and \$f(x, y) = x + y\$.
Assuming we are running the following code:

> [1, 2, 3, 4]
> [4, 3, 2, 1]
>> L+R
>> Each 3 1 2

We then get a demonstration of how Each statements work. First, when working with two arrays, we zip them to form \$C = [(1, 4), (2, 3), (3, 2), (4, 1)]\$, then map \$f(x, y)\$ over each pair, forming our final array \$D = [f(1, 4), f(2, 3), f(3, 2), f(4, 1)] = [5, 5, 5, 5]\$.

Try it online!

How this code works

Working counter-intuitively to how Whispers usually works, we start from the first two lines:

> Input
> Input

This collects our two inputs, let's say \$x\$ and \$y\$, and stores them in lines 1 and 2 respectively. We then store \$x^2\$ on line 3 and create a range \$A := [1 ... x^2]\$ on line 4. Next, we jump to the section

>> 1%L
>> L=2
>> Each 5 4
>> Each 6 7

The first thing executed here is line 7, >> Each 5 4, which iterates line 5 over line 4. This yields the array \$B := [x \: \% \: i \: | \: i \in A]\$, where \$a \: \% \: b\$ is defined as the modulus of \$a\$ and \$b\$. We then execute line 8, >> Each 6 7, which iterates line 6 over \$B\$, yielding an array \$C := [(x \: \% \: i) = y \: | \: i \in A]\$. For the inputs \$x = 5, y = 2\$, we have \$A = [1, 2, 3, ..., 23, 24, 25]\$, \$B = [0, 1, 2, 1, 0, 5, 5, ..., 5, 5]\$ and \$C = [0, 0, 1, 0, 0, ..., 0, 0]\$.

We then jump down to

>> L⋅R
>> Each 9 4 8

which is our example of a dyadic Each statement. Here, our function is line 9, i.e. >> L⋅R, and our two arrays are \$A\$ and \$C\$. We multiply each element in \$A\$ with its corresponding element in \$C\$, which yields an array, \$E\$, where each element works from the following relationship:

$$E_i =\begin{cases}0 & C_i = 0 \\A_i & C_i = 1\end{cases}$$

We then end up with an array consisting of \$0\$s and the inverse moduli of \$x\$ and \$y\$. In order to remove the \$0\$s, we convert this array to a set (>> {10}), then take the set difference between this set and \$\{0\}\$, yielding, then outputting, our final result.
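Stripped of the line-reference machinery, the whole search the Whispers program performs can be sketched in one Python comprehension (a sketch of the same idea, not a transpilation; the function name is mine):

```python
def inverse_modulus(x, y):
    # All i in [1, x^2] with x % i == y; empty set when no solution exists.
    return {i for i in range(1, x * x + 1) if x % i == y}
```

For the worked inputs x = 5, y = 2 this returns {3}, matching the single 1 in the \$C\$ array above.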
NA62 Louvain-La-Neuve

Our group is contributing to the design and construction of the pixel detector of this experiment.

Projects

Click the title to show project description.

Gigatracker is at the core of one of the spectrometers used in NA62. It is composed of three planes of silicon pixel detectors assembled in a traditional way: readout electronics bump-bonded on silicon sensors. Each plane is composed of 18000 pixels of 300 um x 300 um arranged in 45 columns and read out by 10 chips. The particularity of this sensor is that its timing resolution should be better than 200 ps in order to cope with the high expected rate (800 MHz). Another particularity is its operation in vacuum. CP3 is involved in several aspects of the production and operation of this detector:

1) Production of 25 GTK stations that will be used during the NA62 <latex>$K^+\to\pi^+\nu\bar{\nu}$</latex> run.
2) Operation of the GTK during data taking: time and spatial calibration, efficiency studies, effects of radiation, etc.
3) Track candidate reconstruction and simulation.
4) Study of signal development in the sensor. We use both commercial programs (i.e. TCAD by Synopsys) and software developed by us to study the expected signal in this sensor.

The NA62 experiment in the North Area of the CERN SPS is now fully operational and taking data. The plan is to collect the highest statistics ever reached for <latex>$K^+$</latex> decays, of the order of <latex>$10^{13}$</latex> events in the fiducial decay region of the detector, until the end of 2018. This high-intensity and high-precision setup makes it possible to probe a number of ultra-rare or forbidden decay channels. Of particular interest to the CP3 group are the LFV/LNV <latex>$K^+ \rightarrow \pi^{+}\mu^{\pm}e^{\mp}$</latex> and <latex>$K^+ \rightarrow \pi^{-}l^+l'^+ (l,l' = e,\mu)$</latex> modes.
Many BSM theories, including supersymmetry and models with massive neutrinos, predict some degree of LFV. Furthermore, there are indirect hints of New Physics in the flavour sector, e.g. in the semileptonic decays of B mesons. Explanations for the observed discrepancies predict effects of LFV in kaon decays. These particular LFV/LNV <latex>$K \rightarrow \pi ll$</latex> processes, which at present are not covered by another experiment, provide an attractive opportunity to test the SM. Any observable rate for one of these modes would constitute unambiguous evidence for New Physics. Considering the statistics that will be available at NA62, the current limits on their branching ratios could be improved by at least one order of magnitude.

NA62 will look for rare kaon decays at the SPS accelerator at CERN. A total of about $10^{12}$ kaon decays will be produced in two to three years of data taking. Even though the topology of the events is relatively simple, and the amount of information per event small, the volume of data to be stored per year will be of the order of ~1000 TB. An additional 500 TB/year is expected from simulation. Profiting from the synergy inside CP3 in sharing computer resources, our group is participating in the definition of the NA62 computing scheme. CP3 will also be one of the grid virtual organizations of the experiment.
Functions

An online exercise on function notation, inverse functions and composite functions. This is level 1: describe function machines using function notation. You can earn a trophy if you get at least 9 correct and you do this activity online. The first question has been done for you.

Instructions

Try your best to answer the questions above. Type your answers into the boxes provided leaving no spaces. As you work through the exercise regularly click the "check" button. If you have any wrong answers, do your best to do corrections but if there is anything you don't understand, please ask your teacher for help. When you have got all of the questions correct you may want to print out this page and paste it into your exercise book. If you keep your work in an ePortfolio you could take a screen shot of your answers and paste that into your Maths file.

Transum.org

This web site contains over a thousand free mathematical activities for teachers and pupils. Click here to go to the main page which links to all of the resources available. Please contact me if you have any suggestions or questions.

Mathematicians are not the people who find Maths easy; they are the people who enjoy how mystifying, puzzling and hard it is. Are you a mathematician?

Comment recorded on the 24 May 'Starter of the Day' page by Ruth Seward, Hagley Park Sports College: "Find the starters wonderful; students enjoy them and often want to use the idea generated by the starter in other parts of the lesson. Keep up the good work"

Comment recorded on the 9 October 'Starter of the Day' page by Mr Jones, Wales: "I think that having a starter of the day helps improve maths in general. My pupils say they love them!!!"

Answers

There are answers to this exercise but they are available in this space to teachers, tutors and parents who have logged in to their Transum subscription on this computer. A Transum subscription unlocks the answers to the online exercises, quizzes and puzzles.
It also provides the teacher with access to quality external links on each of the Transum Topic pages and the facility to add to the collection themselves. Subscribers can manage class lists, lesson plans and assessment data in the Class Admin application and have access to reports of the Transum Trophies earned by class members. If you would like to enjoy ad-free access to the thousands of Transum resources, receive our monthly newsletter, unlock the printable worksheets and see our Maths Lesson Finishers then sign up for a subscription now: Subscribe

Go Maths

Learning and understanding Mathematics, at every level, requires learner engagement. Mathematics is not a spectator sport. Sometimes traditional teaching fails to actively involve students. One way to address the problem is through the use of interactive activities and this web site provides many of those. The Go Maths page is an alphabetical list of free activities designed for students in Secondary/High school.

Maths Map

Are you looking for something specific? An exercise to supplement the topic you are studying at school at the moment perhaps. Navigate using our Maths Map to find exercises, puzzles and Maths lesson starters grouped by topic.

Teachers

If you found this activity useful don't forget to record it in your scheme of work or learning management system. The short URL, ready to be copied and pasted, is as follows:

Do you have any comments? It is always useful to receive feedback and helps make this free resource even more useful for those learning Mathematics anywhere in the world. Click here to enter your comments.

© Transum Mathematics :: This activity can be found online at: www.transum.org/Maths/Exercise/Functions.asp?Level=1

Level 1 - Describe function machines using function notation.
Level 2 - Evaluate the given functions.
Level 3 - Solve the equations given in function notation.
Level 4 - Find the inverse of the given functions.
Level 5 - Simplify the composite functions.
Level 6 - Mixed questions.

Exam Style questions are in the style of GCSE or IB/A-level exam paper questions and worked solutions are available for Transum subscribers.

The following notes are intended to be a reminder or revision of the concepts and are not intended to be a substitute for a teacher or good textbook.

Function notation is quite different to the algebraic notation you have learnt involving brackets. \(f(x)\) does not mean the value of f multiplied by the value of x. In this case f is the name of the function and you would read \(f(x) = x^2\) as "f of x equals x squared". In terms of function machines, if the input is \(x\) then the output is \(f(x)\).

Example

\(x \to \)\( + 3 \)\( \to \)\( \times 4 \)\( \to f(x)\)

In this case 3 is added to \(x\) and then the result is multiplied by 4 to give \(f(x)\):

\( (x+3) \times 4 = f(x) \)

\( f(x) = 4(x+3) \)

Example: if \(f(x)=x^2 + 3\) calculate the value of \(f(6)\).

This means replace the \(x\) with a 6 in the given function to obtain the result.

\(f(6) = 6^2+3\)

\(f(6) = 39\)

Example: \(f(x)=3(x+7) \); find \(x\) if \(f(x) = 30\).

\(3(x+7)=30\)

\(x+7 = 10\)

\(x = 3\)

The inverse of a function, written as \(f^{-1}(x) \), can be thought of as a way to 'undo' the function. If the function is written as a function machine, the inverse can be thought of as working backwards with the output becoming the input and the input becoming the output.

Example

\( f(x) = 4(x+3) \)

\(x \to \)\( + 3 \)\( \to \)\( \times 4 \)\( \to f(x)\)

\( f^{-1}(x) \leftarrow \)\( - 3 \)\( \leftarrow \)\( \div 4 \)\( \leftarrow x \)

\( f^{-1}(x) = \frac{x}{4} - 3 \)

A quicker way of finding the inverse of \(f(x)\) is to replace the \(f(x)\) with \(x\) on the left side of the equals sign and replace the \(x\) with \( f^{-1}(x) \) on the right side of the equals sign. Then rearrange the equation to make \( f^{-1}(x) \) the subject.

A composite function contains two functions combined into a single function.
One function is applied to the result of the other function. You should evaluate the function closest to \(x\) first.

Example: if \(f(x)=2x+7\) and \(g(x)=5x^2\) find \(fg(3)\).

\(g(3) = 5 \times 3^2\)
\(g(3) = 5 \times 9\)
\(g(3) = 45\)
\(f(45) = 2 \times 45 + 7\)
\(f(45) = 97\)

so \( fg(3) = 97\)

Example: if \(f(x)=x+2\) and \(g(x)=3x^2\) find \(gf(x)\).

\( gf(x) = 3(x+2)^2\)
\( gf(x) = 3(x^2+4x+4) \)
\( gf(x) = 3x^2+12x+12 \)

Example: find \(f(x-2)\) if \(f(x)=5x^2+3\).

\(f(x-2) =5(x-2)^2+3\)
\(f(x-2) =5(x^2-4x+4)+3\)
\(f(x-2) =5x^2-20x+20+3\)
\(f(x-2) =5x^2-20x+23\)

Don't wait until you have finished the exercise before you click on the 'Check' button. Click it often as you work through the questions to see if you are answering them correctly. You can double-click the 'Check' button to make it float at the bottom of your screen.
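The composite and inverse examples above can be checked with a few lines of Python (a quick sketch, not part of the exercise; the names mirror the notation in the notes):

```python
def f(x):          # f(x) = 2x + 7
    return 2 * x + 7

def g(x):          # g(x) = 5x^2
    return 5 * x ** 2

# Composite fg(3): apply the function closest to x first, so g, then f.
print(f(g(3)))     # -> 97

# Inverse of f: run the machine backwards, x -> (x - 7) / 2.
def f_inv(x):
    return (x - 7) / 2

print(f_inv(f(10)))  # -> 10.0
```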
Search

Now showing items 1-9 of 9

Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ = 7 TeV (Springer, 2012-10)
The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ = 7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...

Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer, 2012-09)
Measurements of the sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ...

Pion, Kaon, and Proton Production in Central Pb-Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12)
In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ...

Measurement of prompt J/psi and beauty hadron production cross sections at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV (Springer-Verlag, 2012-11)
The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity Lint = 5.6 nb−1. The fraction ...

Suppression of high transverse momentum D mesons in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Springer, 2012-09)
The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}$ = 2.76 TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at √sNN = 2.76 TeV down to pt = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ... Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ... Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-03) The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ... Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at √sNN = 2.76 TeV (American Physical Society, 2012-12) The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
I believe your suggested method is likely to be the best way to do it in general.

Step 1: sample an interval to generate from, using the discrete probability distribution of the trapezoid areas.
Step 2: sample from the conditional distribution given you're within that trapezoid (i.e. as if it were scaled to be its own density).

There are a number of efficient algorithms for step 1. See: How to sample from a discrete distribution? or How to generate numbers based on an arbitrary discrete distribution?

Now step 2 can be done in a variety of ways. It's probably easiest to just write a generic trapezoid sampler (though if you're writing within particular platforms you may already have one), e.g. assume you have a trapezoid over $(0,1)$ (with only one parameter, $h$, the height at $0$, where $0\leq h\leq 2$), then scale and shift the result as needed for each segment. Trapezoids can be generated in a variety of ways; for example you might:

- use the inverse cdf -- the cdf is quadratic, so not hard to invert: $X = \frac{\sqrt{h^2\, +\, 4\,U\,(1-h)}\,-\,h}{2\,(1-h)}$, but you need to check for $h=1$ and return $U$ in that case (and if $h$ is likely to be very close to 1 you may be better off rewriting the function so that it doesn't suffer from catastrophic cancellation);
- treat it as a mixture of a uniform plus a triangular; the triangular is straightforward in any of several ways;
- treat it as a mixture of two triangular distributions;
- use simple rejection sampling on each segment (in many practical cases of this problem it will be fairly efficient, and it can never be worse than 50% rejection).

[Figure: illustration of the last three trapezoid methods.]

If you needed many draws from this distribution, you'd precompute the information needed for the discrete sampler and precalculate all the $h$ and x-scaling values for each segment -- then generation from the distribution as a whole should proceed quickly.
There are numerous other ways to generate from a trapezoid not mentioned here. [Note that this piecewise trapezoid density needn't be everywhere continuous; if you keep track of both ends for every segment the same approach would work just fine.]
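For concreteness, here is a minimal Python sketch of the inverse-cdf option above for the single-parameter trapezoid on $(0,1)$ ($h$ is the density height at 0, $0\leq h\leq 2$). The function name `sample_trapezoid` is my own; it handles the $h=1$ uniform special case as noted in the answer.

```python
import math
import random

def sample_trapezoid(h, u=None):
    """Draw from the trapezoid density on (0,1) with height h at 0
    and 2-h at 1, i.e. f(x) = h + 2*(1-h)*x, via the inverse cdf
    of F(x) = h*x + (1-h)*x**2."""
    if u is None:
        u = random.random()
    if abs(h - 1.0) < 1e-12:          # h = 1 is just the uniform case
        return u
    return (math.sqrt(h * h + 4.0 * u * (1.0 - h)) - h) / (2.0 * (1.0 - h))

def F(h, x):
    """The cdf of the trapezoid density above, used as a check."""
    return h * x + (1.0 - h) * x * x

# Sanity check: pushing the sampled point back through F recovers U.
for h in (0.2, 1.0, 1.8):
    for u in (0.1, 0.5, 0.9):
        x = sample_trapezoid(h, u)
        assert 0.0 <= x <= 1.0
        assert abs(F(h, x) - u) < 1e-9
```

To use this for the piecewise problem, you would first pick a segment with a discrete sampler, then scale and shift `sample_trapezoid(h)` onto that segment's interval, as described above.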
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
I was reading CLRS' book on how to use the substitution method to solve recurrences, where they have the following example: $T(n) = 2T(\lfloor{\frac{n}{2}}\rfloor) + n$ where $T(1) = 1$. They assume that $T(n) = O(n \log n)$ and go on to prove it by using induction. I understand the inductive step, but I do not understand how they find the base case. For $n=1$ we have $T(1) \leq c \cdot 1 \log 1 = 0$, which is wrong because $T(1) = 1$. So we cannot use $T(1)$ as a base case. For $n=2$ we have $T(2) = 2T(1) + 2 = 4$, so we need $4 \leq c \cdot 2 \log 2$. The last inequality holds for every $c \geq 2$. In the book they write that we also need $n=3$ to be a base case. However, they do not really explain why. Why isn't $n=2$ sufficient?
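For concreteness, the first few values of this recurrence can be tabulated with a short script (a sketch that only evaluates the recurrence; it does not carry out the inductive proof):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """T(n) = 2*T(floor(n/2)) + n with T(1) = 1, as in the question."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

assert T(2) == 4   # the n = 2 case discussed above
assert T(3) == 5   # note floor(3/2) = 1, so n = 3 recurses straight to T(1)

# With c = 2, the proposed bound c*n*lg(n) holds at both n = 2 and n = 3:
assert T(2) <= 2 * 2 * math.log2(2)
assert T(3) <= 2 * 3 * math.log2(3)
```

Note in the output that $T(3)$ depends directly on $T(1)$, not on $T(2)$, which is why the base-case question for $n=3$ arises separately from $n=2$.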
Functions: An online exercise on function notation, inverse functions and composite functions. This is level 4, find the inverse of the given functions. You can earn a trophy if you get at least 9 correct and you do this activity online.
Level 1 - Describe function machines using function notation. Level 2 - Evaluate the given functions. Level 3 - Solve the equations given in function notation. Level 4 - Find the inverse of the given functions. Level 5 - Simplify the composite functions.
Level 6 - Mixed questions. Exam Style questions are in the style of GCSE or IB/A-level exam paper questions. The following notes are intended to be a reminder or revision of the concepts and are not intended to be a substitute for a teacher or good textbook. Function notation is quite different to the algebraic notation you have learnt involving brackets. \(f(x)\) does not mean the value of f multiplied by the value of x. In this case f is the name of the function and you would read \(f(x) = x^2\) as "f of x equals x squared". In terms of function machines, if the input is \(x\) then the output is \(f(x)\). Example: \(x \to \)\( + 3 \)\( \to \)\( \times 4 \)\( \to f(x)\). In this case 3 is added to \(x\) and then the result is multiplied by 4 to give \(f(x)\): \( (x+3) \times 4 = f(x) \), so \( f(x) = 4(x+3) \). Example: if \(f(x)=x^2 + 3\), calculate the value of \(f(6)\). This means replace the \(x\) with a 6 in the given function to obtain the result: \(f(6) = 6^2+3 = 39\). Example: given \(f(x)=3(x+7) \), find \(x\) if \(f(x) = 30\). \(3(x+7)=30\), so \(x+7 = 10\) and \(x = 3\). The inverse of a function, written as \(f^{-1}(x) \), can be thought of as a way to 'undo' the function. If the function is written as a function machine, the inverse can be thought of as working backwards, with the output becoming the input and the input becoming the output. Example: \( f(x) = 4(x+3) \); \(x \to \)\( + 3 \)\( \to \)\( \times 4 \)\( \to f(x)\); \( f^{-1}(x) \leftarrow \)\( - 3 \)\( \leftarrow \)\( \div 4 \)\( \leftarrow x \); \( f^{-1}(x) = \frac{x}{4} - 3 \). A quicker way of finding the inverse of \(f(x)\) is to replace the \(f(x)\) with \(x\) on the left side of the equals sign and replace the \(x\) with \( f^{-1}(x) \) on the right side of the equals sign. Then rearrange the equation to make \( f^{-1}(x) \) the subject. A composite function contains two functions combined into a single function.
One function is applied to the result of the other function. You should evaluate the function closest to \(x\) first. Example: if \(f(x)=2x+7\) and \(g(x)=5x^2\), find \(fg(3)\). \(g(3) = 5 \times 3^2 = 5 \times 9 = 45\); \(f(45) = 2 \times 45 + 7 = 97\), so \( fg(3) = 97\). Example: if \(f(x)=x+2\) and \(g(x)=3x^2\), find \(gf(x)\). \( gf(x) = 3(x+2)^2 = 3(x^2+4x+4) = 3x^2+12x+12 \). Example: find \(f(x-2)\) if \(f(x)=5x^2+3\). \(f(x-2) = 5(x-2)^2+3 = 5(x^2-4x+4)+3 = 5x^2-20x+20+3 = 5x^2-20x+23\).
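The worked examples on inverse and composite functions can be checked with a few lines of Python (a sketch; the function names mirror the examples above):

```python
# Composite functions: fg(3) means f(g(3)) -- apply g first, then f.
def f(x):
    return 2 * x + 7        # f(x) = 2x + 7

def g(x):
    return 5 * x ** 2       # g(x) = 5x^2

assert g(3) == 45
assert f(g(3)) == 97        # fg(3) = 97, as in the worked example

# gf(x) = 3(x+2)^2 expands to 3x^2 + 12x + 12
# (second example, with f(x) = x + 2 and g(x) = 3x^2):
for x in range(-10, 11):
    assert 3 * (x + 2) ** 2 == 3 * x ** 2 + 12 * x + 12

# Inverse: f(x) = 4(x+3) has inverse f^{-1}(x) = x/4 - 3,
# so applying one after the other returns the original input.
def h(x):
    return 4 * (x + 3)

def h_inv(x):
    return x / 4 - 3

for x in range(-10, 11):
    assert h_inv(h(x)) == x
```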
Bounds of the Reciprocal of a Quadratic. Suppose we want to find the bounds of \( \frac{2}{x^2+4x+9} \). Numerator and denominator are both positive, and as \( x \rightarrow \pm\infty \) the denominator tends to infinity, so the fraction tends to zero. Hence \( 0 \lt \frac{2}{x^2+4x+9} \). Completing the square for the denominator gives \( \frac{2}{x^2+4x+9}=\frac{2}{(x+2)^2+5} \). To maximise this fraction we must minimise the denominator, which is a sum of non-negative terms. The denominator is minimised when \( (x+2)^2=0 \), i.e. \( x=-2 \). For this value of \(x\) the fraction equals \( \frac{2}{(-2+2)^2+5}=\frac{2}{0+5}=\frac{2}{5} \). Hence \( 0 \lt \frac{2}{x^2+4x+9} \leq \frac{2}{5} \). The graph of the function is shown below.
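A quick numerical check of this bound over a coarse grid (a sketch; the grid range is an arbitrary choice for illustration):

```python
# Confirm numerically that 0 < 2/(x^2 + 4x + 9) <= 2/5,
# with the maximum attained at x = -2.
def f(x):
    return 2.0 / (x * x + 4.0 * x + 9.0)

xs = [i / 100.0 for i in range(-1000, 1001)]   # grid over [-10, 10]
vals = [f(x) for x in xs]

assert all(0.0 < v <= 2.0 / 5.0 + 1e-12 for v in vals)
assert abs(max(vals) - 2.0 / 5.0) < 1e-12
assert abs(xs[vals.index(max(vals))] - (-2.0)) < 1e-9
```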
The state of a spin-$\frac12$ particle that is spin up along the axis whose direction is specified by the unit vector $n=\sin\theta\cos\phi i+\sin\theta\sin\phi j+\cos\theta k,$ is given by $$|+n \ \rangle=\cos\frac{\theta}{2}|+z \ \rangle + e^{i\phi}\sin\frac{\theta}{2}|-z \ \rangle.$$ a. Suppose that a measurement of $S_z$ is carried out on a particle in the state $|+n \ \rangle.$ What is the probability that the measurement yields $\hbar/2$? $-\hbar/2$? How about measurements of $S_x$? b. Determine the uncertainties $\Delta S_z$ and $\Delta S_x$ of your measurements. I know how to do the measurement of $S_z$ for parts a and b, but I am not sure how to do the measurements for $S_x$. For reference, $$|+x \ \rangle = \frac{1}{\sqrt{2}}|+ z \ \rangle+\frac{1}{\sqrt2}|-z \ \rangle$$ $$|+y \ \rangle = \frac{1}{\sqrt{2}}|+ z \ \rangle+\frac{i}{\sqrt2}|-z \ \rangle.$$ a.) Probability for $\hbar/2$: $$|\langle \ +z |+n \rangle|^2 = \cos^2\frac{\theta}{2}$$ Probability for $-\hbar/2$: $$|\langle \ -z |+n \rangle|^2 = \left|e^{i\phi}\sin\frac{\theta}{2}\right|^2=\sin^2\frac{\theta}{2}.$$ But I am not sure how to do it for $S_x$?
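As a numerical sanity check of the amplitudes quoted above (a sketch; the closed form $(1+\sin\theta\cos\phi)/2$ for the $|+x\rangle$ projection is my own evaluation, not from the textbook):

```python
import cmath
import math

def plus_n(theta, phi):
    """|+n> in the {|+z>, |-z>} basis, as given in the question."""
    return (math.cos(theta / 2), cmath.exp(1j * phi) * math.sin(theta / 2))

theta, phi = 1.1, 0.7   # arbitrary test angles
a, b = plus_n(theta, phi)

# P(Sz = +hbar/2) = |<+z|+n>|^2 and P(Sz = -hbar/2) = |<-z|+n>|^2
p_up = abs(a) ** 2
p_down = abs(b) ** 2
assert abs(p_up - math.cos(theta / 2) ** 2) < 1e-12
assert abs(p_down - math.sin(theta / 2) ** 2) < 1e-12
assert abs(p_up + p_down - 1.0) < 1e-12   # probabilities sum to 1

# For Sx, project onto |+x> = (|+z> + |-z>)/sqrt(2):
amp_x = (a + b) / math.sqrt(2)
p_x_up = abs(amp_x) ** 2
# This matches the closed form (1 + sin(theta)*cos(phi)) / 2:
assert abs(p_x_up - (1 + math.sin(theta) * math.cos(phi)) / 2) < 1e-12
```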
From Becker, Becker and Schwarz String Theory and M-Theory: For the infinitesimal conformal transformation $$\tag{3.25}\delta z=\varepsilon(z)\quad\text{and}\quad \delta\bar z=\tilde\varepsilon(\bar z),$$ the associated conserved charge that generates this transformation is $$\tag{3.26}Q=Q_\varepsilon+Q_{\tilde\varepsilon}=\frac{1}{2\pi i}\oint \left[T(z)\varepsilon(z)dz+\tilde T(\bar z)\tilde\varepsilon(\bar z)d\bar z\right].$$ The integral is performed over a circle of fixed radius. The variation of a field $\Phi(z,\bar z)$ under a conformal transformation is then given by $$\tag{3.27}\delta_\varepsilon\Phi(z,\bar z)=[Q_\varepsilon,\Phi(z,\bar z)]\quad\text{and}\quad \delta_{\bar\varepsilon}\Phi(z,\bar z)=[Q_{\bar\varepsilon},\Phi(z,\bar z)].$$ I have a few questions about this passage.
1. In what sense is $Q$ conserved? I can see $$\partial\bar\partial Q=0$$ Is this what they mean? (In QFT, I would call a charge conserved if $\dot Q=0$.)
2. How does (3.26) even come about? In standard QFT I would write $$Q=\int d^3x\,J^0=\int d^3x\,\frac{\partial\mathcal{L}}{\partial\dot\Psi_a}\delta\Psi_a$$ which is conserved due to the Euler-Lagrange equations and where $J^\mu$ is the Noether current. I don't see how this applies to Wick rotated space.
3. I imagine the derivation of (3.27) will become clear once 2. has been answered. In QFT I would write $$Q=\int d^3x\,\frac{\partial\mathcal{L}}{\partial\dot\Psi_a}\delta\Psi_a=\int d^3x\,\pi^a\delta\Psi_a$$ By means of the canonical commutation relation we have $$\delta\Psi_a=i[Q,\Psi_a]$$ Is this procedure correct for deriving (3.27)?
Any help greatly appreciated.
I just finished reading Feynman's Lectures on Physics vol.I, §34-9: "The momentum of light". The author explains that there is a relation between the wave 4-vector $k^{\mu}$ and the energy-momentum 4-vector $p^{\mu}$ of an EM wave, namely $$p^{\mu}=\hbar k^{\mu}, $$ or equivalently $$\tag{deB}W=\hbar \omega, \quad \mathbf{p}=\hbar \mathbf{k},$$ and those equations are called de Broglie relations. However, as I learned in my classical electromagnetism course, flux of energy in such a wave is quantified by Poynting's vector, yielding formulas such as the following: $$\tag{1} I=\frac{1}{2 \mu_0 c} E_0^2, $$ where $I$ stands for "average intensity" of the wave and $E_0$ for "maximum amplitude of electric field". Question: Where is $\omega$? It does not appear in formula (1) nor in any other formula based on Poynting's vector. But according to equations (deB) it should. Am I wrong? Thank you.
There's a lot of important information contained in the subscripts of the summation signs. It's almost certainly a good idea to break up the subscript material into two or, better still, three lines to simplify the task of actually reading this material. Breaking up these lines may be achieved with, e.g., the \substack macro provided by the amsmath package. Separately, I'd recommend using \biggl\langle and \biggr\rangle instead of \langle and \rangle in the second equation. (Using \left and \right would generate "fences" that are clearly too large.) Furthermore, you should probably encase the text phrases s.t. (short for "such that", right?) and and in \text constructs, in order to typeset them in upright text mode. Finally, in the code below, I make use of \! (negative thinspace) directives to tighten up the layout in a couple of places.

\documentclass{article}
\usepackage{amsmath} % provides the \substack macro
\begin{document}
\[
N_i (\rho) = \sum _{\substack {B \in \rho \text{ s.t.} \\ n_{\delta_{\!j}}(B) = 0 \\ \forall j \neq i}} n_{\delta_i} (B)
\]
\[
\sum _{\substack{ \rho \text{ s.t.}\\ N_i(\rho) = a_i \\ \text{and }M_i(\rho) = b_i} } \prod _{B \in \rho} \biggl\langle \prod_{i=1}^{n} \delta_i ^{n_{\delta_i}(B)} y_i ^{n_{y_i}(B)} \biggr\rangle_{\!c}
\]
\end{document}

If you wanted to tighten up the appearance of the first equation still further, you could load the mathtools package and encase the subscript of the summation symbol in a \mathclap macro, as is shown in Herbert's answer. Your second equation, in contrast, does not appear to lend itself to such tightening.
For more information on the differences between $$ and \[...\], please see Why is \[ ... \] preferable to $$? For more information on the various ways display-style equations can be set up in Plain-TeX and LaTeX see, e.g., this answer to the question, What are the differences between $$, \[, align, equation and displaymath? I don't see why using the revtex4 document class rather than the article document class should pose any issues. The control sequence \! introduces a "negative thinspace", shifting the subsequent material ever so slightly to the left (backwards). I wanted to move the "c" subscript of the right angle-brace very slightly to the left, so that it wouldn't risk looking forlorn and out of place. The effect is subtle but -- I'd argue -- worth undertaking. Compare the look of the following three forms of the final part of your second equation. The expression on the left uses \biggr\rangle_c, the one in the middle uses \biggr\rangle_{\!c}, and the one on the right uses \biggr\rangle_{\!\!c}, i.e., it shifts the "c" subscript twice to the left. In my (naturally subjective!) opinion, I'd say that the expression in the middle looks best. :-)
Functions: An online exercise on function notation, inverse functions and composite functions. This is level 6, mixed questions. You can earn a trophy if you get at least 9 correct and you do this activity online.
I have a problem transforming from one system to another when the direction of motion is changed. To demonstrate the problem I'll set up an easy example with intuitive numbers: [Image: left: external observer (correct), right: moving observer (obviously wrong); v=c/2] I have a square with side length $S = 1 \text{ Ls}$ (lightsecond), and therefore the circumference $U = 4 \text{ Ls}$. In its upper right I place a runner, which runs around the square counterclockwise with the velocity $v=\frac{c}{2}$. He emits one red photon to the front, and a blue one in the clockwise direction. If I ask for the time and the location where the first photon outpaces the runner, this is easy from the view of an external observer who is at rest relative to the route: The blue photon passes the runner at $t = \frac{U}{c+v} = \frac{8}{3} \text{ sec}$. The place where the first impact happens is $t\cdot v = \frac{4}{3} \text{ Ls}$, the upper third on the left side of the square. We note that the blue photon meets the red photon in the lower right corner; after $2 \text{ sec}$ both have travelled $2 \text{ Ls}$ and therefore exactly 2 side lengths. So far so good. But now I'm starting to struggle: If I try to transform the scene into the system of the runner with $v=\frac{c}{2}$, I first transform the $\{1\}\times \{1\}$ square into a rectangle with the side lengths $\{1\} \times \left\{ \sqrt{1-\frac{v^2}{c^2}} \right\}$ = $\{1\}\times\{0.866\}$ - because of the Lorentz transformation the lengths in direction of movement shall contract. The red photon has $c$ relative to the runner, while the route is moving towards him with $v$. So from the view of the runner the photon is moving with $c+v = 1.5 c$ relative to the route. After the point where the photon is turning to its left and therefore changes its direction from horizontal to vertical, its vertical velocity must be $\sqrt{c^2-v^2}$, so the total velocity relative to the runner can be $c$ (Pythagoras).
Now I calculate: The Lorentz-contracted side length $S' = \frac{S}{\gamma} = \frac{\sqrt{3}}{2} = 0.866 \text{ Ls}$. I divide this by $c+v$ to get the time until the red photon makes its first turn: $$\tau_1 = \frac{S/ \gamma}{c+v} = \frac{1\cdot \sqrt{1-(\frac{1}{2})^2}}{1+\frac{1}{2}}$$ $$\tau_1 = \frac{1}{\sqrt{3}} \text{ sec}$$ Because at this time the runner is still moving horizontally, but the photon vertically, the length of the vertical side is uncontracted relative to the runner, thus $S = 1 \text{ Ls}$. The time until the red photon reaches the lower left corner is therefore $S/\sqrt{c^2-v^2}$: $$\tau_2=\frac{S}{\sqrt{c^2-v^2}} = \frac{1}{\sqrt{1^2-(\frac{1}{2})^2}}$$ $$\tau_2= \frac{2}{\sqrt{3}} \text{ sec}$$ The total time until the red photon reaches the lower left corner is then $$\tau_1+\tau_2 = \sqrt{3} \text{ sec}$$ This is also exactly the time the runner needs to travel his upper contracted side: $$\frac{S}{\gamma \cdot v} = \sqrt{3} \text{ sec}$$ Here the problem becomes obvious: From the view of the external observer (left image) we know that the red photon meets the blue photon in the lower left corner. Runner and photon start moving towards each other while both start from opposite directions on the same path; the runner moving straight down, and the photon straight up. Because the runner's velocity relative to the route is $v=\frac{c}{2}$, the blue photon's velocity relative to the route is $c-v$, which is also $\frac{c}{2}$. Runner and photon now have the same speed in opposite directions (so the photon has $c$ relative to the runner). Because of this they must meet halfway; but from the perspective of the external observer we know that they meet not halfway, but at the upper third. What did I do wrong? Can anyone find my mistake?
Clueless, Yukterez. Post Script: The only explanation I might think of is that the photons make a jump when the runner changes direction (or at least what seems like a jump for the runner when his deceleration and acceleration times are infinitesimally short). I'm not sure if this is physically correct, nor do I have any idea how to calculate the jumped distances correctly without tricking around... What I did here was a kind of cheat; I solved for the distance where the photon would have to start so it can meet the runner at the right spot (but I have calculated the right spot in the outer observer's system, see left image).
The Annals of Statistics Ann. Statist. Volume 16, Number 1 (1988), 356-366. Asymptotic Behavior of Likelihood Methods for Exponential Families when the Number of Parameters Tends to Infinity Abstract Consider a sample of size $n$ from a regular exponential family in $p_n$ dimensions. Let $\hat\theta_n$ denote the maximum likelihood estimator, and consider the case where $p_n$ tends to infinity with $n$ and where $\{\theta_n\}$ is a sequence of parameter values in $R^{p_n}$. Moment conditions are provided under which $\|\hat\theta_n - \theta_n\| = O_p(\sqrt{p_n/n})$ and $\|\hat\theta_n - \theta_n - \overline{X}_n\| = O_p (p_n/n)$, where $\overline{X}_n$ is the sample mean. The latter result provides normal approximation results when $p^2_n/n \rightarrow 0$. It is shown by example that even for a single coordinate of $(\hat\theta_n - \theta_n), p^2_n/n \rightarrow 0$ may be needed for normal approximation. However, if $p^{3/2}_n/n \rightarrow 0$, the likelihood ratio test statistic $\Lambda$ for a simple hypothesis has a chi-square approximation in the sense that $(-2 \log \Lambda - p_n)/\sqrt{2p_n} \rightarrow_D \mathscr{N}(0, 1)$. Article information Source Ann. Statist., Volume 16, Number 1 (1988), 356-366. Dates First available in Project Euclid: 12 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aos/1176350710 Digital Object Identifier doi:10.1214/aos/1176350710 Mathematical Reviews number (MathSciNet) MR924876 Zentralblatt MATH identifier 0637.62026 JSTOR links.jstor.org Citation Portnoy, Stephen. Asymptotic Behavior of Likelihood Methods for Exponential Families when the Number of Parameters Tends to Infinity. Ann. Statist. 16 (1988), no. 1, 356--366. doi:10.1214/aos/1176350710. https://projecteuclid.org/euclid.aos/1176350710
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with ALICE at the LHC (Elsevier, 2017-11): ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The increased ...

ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV (Elsevier, 2017-11): ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{\rm T}$). At central rapidity ($|y| < 0.8$), ALICE can reconstruct J/$\psi$ via their decay into two electrons down to zero ...
Suppose we have 2 inputs $a$ and $b$, and output is $y=f(a,b)$. In the long run, suppose profits are maximized at $a^*$ and $b^*$. Profit is $py-wa-kb$ [$p$ is price and $w$ and $k$ are constants]. At the maximum, profit equals $py-wa^*-kb^*$. Now for constant/increasing returns to scale firms, doubling inputs clearly doubles (or more than doubles) the profit, which means there can be no finite $a^*$ and $b^*$ for such firms, which does not make logical sense to me. Where am I wrong?

Are you assuming a perfectly competitive market? If so, then a profit-maximizing firm with constant returns to scale (CRS) production can only feasibly earn zero economic profits, as $P = MC$ in the long run. If not, then either $P > MC$ and the firm continues to produce indefinitely until it hits capacity constraints, or $P < MC$ and the firm ceases to operate, since producing any units would be unprofitable (in which case doubling inputs would be even more unprofitable). To see this, it's easiest to consider the firm's cost function $C(w, r, y)$. Since the firm's production is CRS, the firm's average cost of producing $y$ is constant and thus $C(w, r, y) = C(w, r, 1)y$. The firm's unconstrained profit maximization problem is thus \begin{align*} \max_{y} \Pi = Py - C(w,r,1)y. \end{align*} Assuming an interior solution, the first-order condition is \begin{gather*} \frac{\partial \Pi}{\partial y} = P - C(w,r,1) = 0 \\ \implies P = C(w,r,1). \end{gather*} So at $P = C(w, r, 1)$, profits are zero and output $y$ is indeterminate. If instead $P > C(w,r,1)$, for instance, then we are no longer at an interior solution and it is clear that the firm will continue to produce and use more inputs up until capacity constraints bind.
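The three cases can be seen in a few lines: under CRS the cost function is linear in output, so profit is linear in $y$ and either vanishes identically, grows without bound, or is maximized at zero output. A minimal numeric sketch (the price and unit-cost values are made up for illustration):

```python
# Under CRS the cost function is linear in output: C(w, r, y) = C(w, r, 1) * y.
# Profit is therefore linear in y, so the sign of (P - c1) decides everything.

def profit(P, c1, y):
    """Profit at output y, where c1 = C(w, r, 1) is the constant average cost."""
    return (P - c1) * y

# P == c1: profit is zero at every output level, so y is indeterminate.
assert all(profit(10.0, 10.0, y) == 0.0 for y in (0.0, 1.0, 100.0))

# P > c1: profit increases without bound in y; no finite optimum exists.
assert profit(12.0, 10.0, 200.0) > profit(12.0, 10.0, 100.0)

# P < c1: any positive output loses money, so the firm produces nothing.
assert profit(8.0, 10.0, 50.0) < 0.0
```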
Production of Σ(1385)± and Ξ(1530)⁰ in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10): The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)⁰) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...

Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20): The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...

Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09): The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...

Rapidity and transverse-momentum dependence of the inclusive J/$\psi$ nuclear modification factor in p-Pb collisions at $\sqrt{s_{NN}}=5.02$ TeV (Springer, 2015-06): We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...

Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27): The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ... Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ... K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02) The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
9.2.1 - Confidence Intervals If the populations are known to be normally distributed, or if both sample sizes are at least 30, then the sampling distribution can be approximated using the \(t\) distribution, and the formulas below may be used. Here you will be introduced to the formulas to construct a confidence interval using the \(t\) distribution. Minitab Express will do all of these calculations for you; however, it uses a more sophisticated method to compute the degrees of freedom, so answers may vary slightly, particularly with smaller sample sizes.

General Form of a Confidence Interval: \(point \;estimate \pm (multiplier) (standard \;error)\)

Here, the point estimate is the difference between the two means, \(\overline X _1 - \overline X_2\).

Standard Error: \(\sqrt{\frac{s_1^2}{n_1}+\frac{s_2^2}{n_2}}\)

Confidence Interval for Two Independent Means: \((\bar{x}_1-\bar{x}_2) \pm t^\ast{ \sqrt{\frac{s_1^2}{n_1}+\frac{s_2^2}{n_2}}}\)

The degrees of freedom can be approximated as the smallest sample size minus one. Estimated Degrees of Freedom: \(df=smallest\;n-1\)

Example: Exam Scores by Learner Type. A STAT 200 instructor wants to know how traditional students and adult learners differ in terms of their final exam scores. She collected the following data from a sample of students:

Traditional Students: \(\overline x = 41.48\), \(s = 6.03\), \(n = 239\)
Adult Learners: \(\overline x = 40.79\), \(s = 6.79\), \(n = 138\)

She wants to construct a 95% confidence interval to estimate the mean difference.
The point estimate, or "best estimate," is the difference in sample means: \(\overline x _1 - \overline x_2 = 41.48-40.79=0.69\)

The standard error can be computed next: \(\sqrt{\frac{s_1^2}{n_1}+\frac{s_2^2}{n_2}}=\sqrt{\frac{6.03^2}{239}+\frac{6.79^2}{138}}=0.697\)

To find the multiplier, we construct a t distribution with \(df=smallest\;n-1=138-1=137\) and find the t scores that separate the middle 95% of the distribution from the outer 5%: \(t^*=1.97743\)

Now we can combine all of these values to construct the confidence interval:

\(point \;estimate \pm (multiplier) (standard \;error)\)
\(0.69 \pm 1.97743 (0.697)\)
\(0.69 \pm 1.379\)

The margin of error is 1.379, giving the interval \([-0.689, 2.069]\). We are 95% confident that the mean difference in traditional students' and adult learners' final exam scores is between -0.689 points and +2.069 points.
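The hand calculation above can be reproduced in a few lines of Python. This is a sketch of the arithmetic only; the multiplier \(t^* = 1.97743\) is taken from the text (a \(t\) table with \(df = 137\)) rather than recomputed:

```python
import math

# Sample statistics from the example (traditional students vs. adult learners)
xbar1, s1, n1 = 41.48, 6.03, 239
xbar2, s2, n2 = 40.79, 6.79, 138

point_estimate = xbar1 - xbar2                    # difference in sample means, 0.69
se = math.sqrt(s1**2 / n1 + s2**2 / n2)           # standard error, ~0.697
t_star = 1.97743                                  # 95% multiplier for df = 137 (from the text)
margin = t_star * se                              # margin of error, ~1.379

ci = (point_estimate - margin, point_estimate + margin)   # ~(-0.689, 2.069)
```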
A particle moves along the x-axis so that at time t its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? So I know that we need the velocity for that, and we can get that by taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$; how could I find the intervals?

Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. I believe I have numerically discovered that $$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$ but cannot ...

So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$. Anyway, part (a) is about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$. Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow. Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$. Well, we do know what the eigenvalues are... The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$.

Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-)

@Riker "a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
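For the kinematics question at the top of this exchange: the particle moves left exactly where $v(t) < 0$. Since $v(t) = 3t^2 - 12t + 9 = 3(t-1)(t-3)$ is an upward-opening parabola, it is negative strictly between its roots, so the particle moves left on $(1, 3)$. A quick numeric check of that factoring:

```python
import math

# v(t) = 3t^2 - 12t + 9; find its roots with the quadratic formula
a, b, c = 3.0, -12.0, 9.0
disc = math.sqrt(b * b - 4 * a * c)
r1, r2 = (-b - disc) / (2 * a), (-b + disc) / (2 * a)  # roots t = 1 and t = 3

# An upward-opening parabola is negative strictly between its roots,
# so v(t) < 0 (particle moving left) exactly for 1 < t < 3.
v = lambda t: 3 * t**2 - 12 * t + 9
```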
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD...

Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work.

@TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer; writing it now). Isn't there a theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the Euclidean norm) I thought that this would somehow result from the isomorphism.

@AlessandroCodenotti Actually, such an $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, the weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$. Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal, I guess. Hmm, I'm stuck again.

O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1). For any transitive G-action on a set X with stabilizer H, G/H $\cong$ X set-theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism.
Previously I asked some questions on quantum coherence on Physics Stack Exchange (https://physics.stackexchange.com/questions/424578/what-is-quantum-coherence-what-does-it-really-signify-and-what-does-it-tell-us) and the answer helped me to clear up the concept. I did some more literature review to understand the basic facts in relation to the density matrix. But I have some more questions, which I posted on PSE but no one seems to answer. I am asking these questions in the context of Nuclear Magnetic Resonance (NMR), Electron Paramagnetic Resonance (EPR) and Quantum Optics.

1. Taking basis states as $|S_z^+\rangle = \begin{bmatrix}1\\ 0 \end{bmatrix}$ and $|S_z^-\rangle = \begin{bmatrix}0\\ 1 \end{bmatrix}$, I construct a superposition state $|S_x^+\rangle=\frac{|S_z^+\rangle+|S_z^-\rangle}{\sqrt{2}}$ for which the density matrix is given as $\rho_{S_x^+}=\frac{1}{2}\begin{bmatrix}1&1\\ 1&1\end{bmatrix}$. So the pure superposition state $|S_x^+\rangle$ is clearly coherent, and it can be inferred that the system coherently oscillates between the states $|S_z^+\rangle$ and $|S_z^-\rangle$ if measurement is done repeatedly, with the system each time prepared in the state $|S_x^+\rangle$. The question here is one of terminology: should I say that the states $|S_z^+\rangle$ and $|S_z^-\rangle$ are coherent, or should I say that the state $|S_x^+\rangle$ is coherent?

2. For an ensemble of identical atoms in a superposition state $|\phi\rangle=\alpha|S_z^+\rangle+\beta|S_z^-\rangle$, where $|\alpha|^2$ and $|\beta|^2$ are not equal (although $|\alpha|^2 + |\beta|^2 = 1$) and give the populations of $|S_z^+\rangle$ and $|S_z^-\rangle$ respectively, we can drive a transition between the two basis states, and we call this phenomenon population transfer (please correct me if I am wrong). Mathematically, the field-induced transitions change the values of $|\alpha|^2$ and $|\beta|^2$.
However, there can be one more phenomenon going on here, called polarization transfer, which is distinct from population transfer because polarization really counts the total spin magnetization of a state in the context of NMR and EPR, and maybe quantum optics too.

Now the third phenomenon, coherence transfer (no classical analogue), can't occur between two states but needs a three-level system, say $|1\rangle$, $|2\rangle$ and $|3\rangle$: the field-driven transitions between states $|1\rangle$ & $|2\rangle$ and states $|1\rangle$ & $|3\rangle$ somehow create a transition between states $|2\rangle$ & $|3\rangle$ even though there is no driving field of the frequency corresponding to the $|2\rangle$ & $|3\rangle$ transition.

The last two phenomena written in bold are what I do not understand. Any insight into them will be very helpful, and mathematics on them will be highly appreciated. http://140.117.34.2/faculty/phy/sw_ding/teaching/nmri01_underg/nmri01_underg_lecture04.ppt also tries to explain the difference between population transfer, polarization transfer and coherence transfer through density matrices, but I cannot see much physical explanation of the phenomena.
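On the terminology in question 1: coherence is usually stated relative to a chosen basis, as the off-diagonal elements of the density matrix in that basis, so one would normally say that $|S_x^+\rangle$ exhibits coherence between $|S_z^+\rangle$ and $|S_z^-\rangle$. A small sketch reproducing the density matrix from the question (pure Python; the variable names are mine):

```python
import math

# |S_z+> = (1, 0), |S_z-> = (0, 1); the superposition |S_x+> = (|S_z+> + |S_z->)/sqrt(2)
amp = 1.0 / math.sqrt(2.0)
psi = [amp, amp]

# Density matrix rho = |psi><psi|; the amplitudes are real here, so no conjugation is needed.
rho = [[psi[i] * psi[j] for j in range(2)] for i in range(2)]

# Diagonal entries: populations of |S_z+> and |S_z->  (both 1/2).
# Off-diagonal entries: the coherences between them   (both 1/2).
```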
0. Caveat Lector: This was done before I drank my morning coffee, so there may be some errors in the reasoning (well, the physical reasoning; the mathematics should be kosher).

1. Perfect Fluid. So we have two stress-energy tensors here. One is the stress-energy tensor for a perfect fluid $$\tag{1}T^{\alpha\beta}_{\text{fluid}} = \rho \, u^\alpha \, u^\beta + p \, h^{\alpha\beta}$$ where:

- the worldlines of the fluid's particles have velocity $u^\alpha$;
- the projection tensor $h_{\alpha\beta} = g_{\alpha\beta} + u_\alpha \, u_\beta$ projects other tensors onto hyperplane elements orthogonal to $u^\alpha$;
- the matter density is given by the scalar function $\rho$;
- the pressure is given by the scalar function $p$.

We'd need extra terms if there were heat flow or shear involved.

2. Scalar Field. Now, we have another, distinct stress-energy tensor for a massless scalar field: $$\tag{2}T^{\mu \nu}_{\text{scalar}} =\partial^{\mu}\phi\, \partial^{\nu}\phi-\frac{1}{2}g^{\mu \nu}\partial_{\rho}\phi\,\partial^{\rho}\phi$$ We would use this equation when modeling, e.g., massless pions (or some other massless spin-0 field).

3. Problem: Are these two related? Now if we take our matter density to be, in the appropriate units, $$\tag{3a} \rho = 1 + \frac{1}{2}\partial_{\rho}\phi\,\partial^{\rho}\phi $$ and the pressure $$\tag{3b} p = \frac{-1}{2}\partial_{\rho}\phi\,\partial^{\rho}\phi $$ then (2) resembles (1). This is after pretending $\partial^{\mu}\phi=u^{\mu}$, which terrifies the original poster (but that's what condensed matter physicists do, so I suppose I could end here content). Is this kosher? We should first note that if we wanted to take the derivative of some function along the worldline $x^{\mu}(s)$ with respect to the "proper time" (length) $s$, we would have $$\tag{4} \frac{\mathrm{d}f}{\mathrm{d}s}=\frac{\mathrm{d}x^{\mu}}{\mathrm{d}s}\frac{\partial f}{\partial x^{\mu}}$$ by the chain rule.
For general relativity, we use the "comma-goes-to-semicolon" rule, but for a scalar quantity $f$ we have$$ \nabla_{\mu}f = \partial_{\mu}f.$$(If this is not obvious, the reader should consider it an exercise to prove it to him or herself.) The punchline: identifying $\partial^{\mu}\phi=u^{\mu}$ is kosher. How? Observe in Equation (4) the guy in front, the $\mathrm{d}x^{\mu}/\mathrm{d}s$ is just some vector. So in the very, very special case that equations (3a) and (3b) hold, and $\mathrm{d}x^{\mu}/\mathrm{d}s=(1,0,0,0)$, we see that we can indeed recover the first stress-energy tensor as a special case of the scalar field's stress-energy tensor.
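A quick consistency check (my own algebra, not from the original answer): substituting (3a) and (3b) into (1), the densities were chosen so that $\rho + p = 1$, which makes the fluid tensor collapse to the scalar-field form once $u^\alpha = \partial^\alpha\phi$ is inserted:

```latex
\begin{align*}
T^{\alpha\beta}_{\text{fluid}}
  &= \rho\, u^\alpha u^\beta + p\left(g^{\alpha\beta} + u^\alpha u^\beta\right)
   = (\rho + p)\, u^\alpha u^\beta + p\, g^{\alpha\beta} \\
  &= u^\alpha u^\beta
     - \tfrac{1}{2} g^{\alpha\beta}\, \partial_\rho\phi\, \partial^\rho\phi
  && \text{using } \rho + p = 1 \text{ from (3a), (3b)} \\
  &= \partial^\alpha\phi\, \partial^\beta\phi
     - \tfrac{1}{2} g^{\alpha\beta}\, \partial_\rho\phi\, \partial^\rho\phi
   = T^{\alpha\beta}_{\text{scalar}}
  && \text{with } u^\alpha = \partial^\alpha\phi.
\end{align*}
```

So the identification rests entirely on the choice $\rho + p = 1$ together with $u^\alpha = \partial^\alpha\phi$; no property of the worldline beyond Equation (4) is needed.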
Use this forum to ask questions and talk about GPT theory.

Brendan Bogan wrote: Steven, I believe it's in your paper (otherwise, in one of your other posts) where you state that electric potential can be linked to an atom's velocity. As I understand it, temperature is also an atom's velocity. As it pertains to your fusion device, you ionize deuterium at a voltage below "ground" in the hope of getting very slow ions so that they will fuse. If you had ions in an ion trap, and made them cold, wouldn't that also slow them down? Or am I missing something? Cheers, Brendan

Steven Sesselmann (Site Admin, Sydney - Australia) replied: Brendan, you are quite right. The conclusion I have come to is that the relative velocity between two bodies is proportional to their relative surface potentials, and I am completely stumped as to why I am having such a hard time convincing others of this. My simplistic view is that the displacement function can be modelled as a sine wave in the potential-time plane; this being the case, velocity is simply the cosine, or derivative. The relation is really quite trivial:

\[\Delta v = c \cdot \left(\frac{\Delta V}{\Phi}\right)\]

where ∆v is relative velocity and ∆V is relative potential.
Brendan Bogan wrote: If you had ions in an ion trap, and made them cold, wouldn't that also slow them down? Or am I missing something?

Your conclusion is essentially correct, but the act of cooling down a deuterium ion is no trivial matter. Atomic nuclei can be considered as super bouncy balls, so I'm not sure how you would go about maintaining a plasma and at the same time cooling the ions down. Far simpler in my opinion (although yet unsuccessful) is to ionise the deuterium atom at grid potential: somehow let the deuterium atom settle down at grid potential before removing the electron. I link to an Excel spreadsheet which calculates the ionisation voltage required to make any atom stand still: https://www.dropbox.com/s/anjk309npsfts ... .xlsx?dl=0 To retrieve the data for any atom, just enter it in the form 1-H, 2-H, 3-H, etc., where the number indicates the atomic mass number. So that's the theory, but doing this practically has its challenges. My various approaches have had mixed success, but I still have other ideas yet to be tested when time and money permit. I am also very happy for others to try my approach, so best of luck... Steven

Steven Sesselmann: Only a person mad enough to think he can change the world, can actually do it...

Brendan replied: Seems like in your experiments, the biggest difficulty has been either running equipment at -55 kV, or separating the electronics from the -55 kV plasma. How is your latest vacuum chamber configuration coming along? Seems like the last update on it was quite a while ago. Have you ever heard of Doppler cooling? Essentially, they use lasers to supercool ions in an ion trap.
https://en.wikipedia.org/wiki/Doppler_cooling

My thought would be that it would be pretty easy for an amateur to come up with a Penning trap and a rotating magnetic wall to keep the plasma stable, since there are no electrons in the way. I've not done the math on what wavelength of light you'd need for deuterium, so maybe it's not feasible, but if it's a reasonable-power wavelength then you could Doppler-cool the ions in the trap to get them to fuse. Just a random thought that may be easier than super high (er... low?) voltages. What do you think? With this new approach to how to fuse deuterium (which must work to some extent, since you did have neutron production in your last test, right?), do you have a formula for the maximum velocity difference that still allows the ions to fuse?

Steven Sesselmann replied point by point:

Brendan Bogan wrote: Seems like in your experiments, the biggest difficulty has been either running equipment at -55 kV, or separating the electronics from the -55 kV plasma.
Yes.

Brendan Bogan wrote: How is your latest vacuum chamber configuration coming along?
Having some leak issues, but the latest configuration isn't really trying to ionise the deuterium at the cathode. It's just working as a gridless fusor in this form.

Brendan Bogan wrote: Have you ever heard of Doppler cooling?
Yes, but it's way beyond what I can do in my lab.

Brendan Bogan wrote: My thought would be that it would be pretty easy for an amateur to come up with a Penning trap and a rotating magnetic wall to keep the plasma stable.
Worth a try, but I think there may be simpler ways. How about using one of Andrew Seltzman's ion sources and floating it at -62 kV? This would require a special isolation transformer, but otherwise not difficult. This configuration would look more like a one-ended accelerator. Can sketch it for you if you don't see what I mean.

Brendan Bogan wrote: With this new approach to how to fuse deuterium (which must work to some extent since you did have neutron production in your last test, right?), do you have a formula for the maximum velocity difference that still allows the ions to fuse?
No, I don't have this, and even if I did, it wouldn't be very accurate. There will always be a Maxwellian distribution of particle velocities, but I guess the aim is to move the bell curve towards the left (lower velocity). I believe this is what happens in a regular fusor, just not that effectively.

Steven
The distance $d_K^H(x,y)$ between two points on the hyperboloid $H_K$ with curvature $K<0$ can be expressed through the distance $d_{-1}^H(x,y)$ on the hyperboloid $H_{-1}$ of curvature $K=-1$ as follows: $$ d_K^H(x,y)=R\cdot d_{-1}^H(x/R,y/R) $$ where $R$ is the radius and is related to the curvature by $R=\frac{1}{\sqrt{-K}}$. Do you know of a similarly simple formula for the distance $d_K^P(x,y)$ on the Poincaré disk $D_K$ of curvature $K$? For $K=-1$ the distance on the Poincaré disk $D_{-1}$ is: $$ d_{-1}^P(x,y) = \operatorname{arccosh}\left( 1+\frac{2||x-y||_2^2}{(1-||x||^2_2)(1-||y||_2^2)} \right) $$ So I'm looking for an expression of the form: $$ d_K^P(x,y)=\cdots d_{-1}^P(\cdots x\cdots, \cdots y\cdots) $$ where the $(\cdots)$ parts are just replaced with some function or expression in terms of $K$ (or $R$). So far I've tried to project the points from the hyperboloid to the Poincaré disk, but it didn't turn out to be a nice expression.
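One candidate, offered as a hedged sketch rather than a derivation: the same radial rescaling as on the hyperboloid appears to work on the disk, i.e. $d_K^P(x,y) = R\, d_{-1}^P(x/R, y/R)$, which expands to $R \operatorname{arccosh}\!\left(1 + \frac{2R^2\|x-y\|^2}{(R^2-\|x\|^2)(R^2-\|y\|^2)}\right)$. The code below checks this numerically against the hyperboloid distance, using the standard disk-to-hyperboloid map; the formula and all function names are mine:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def disk_to_hyperboloid(u, R=1.0):
    """Map a point of the Poincaré disk of radius R to the hyperboloid of radius R."""
    w = [x / R for x in u]                 # rescale into the unit disk
    s = 1.0 - dot(w, w)
    x0 = (1.0 + dot(w, w)) / s             # timelike coordinate on the unit hyperboloid
    return [R * x0] + [R * 2.0 * x / s for x in w]

def hyperboloid_dist(p, q, R=1.0):
    """Distance on the hyperboloid of radius R via the Minkowski inner product."""
    m = p[0] * q[0] - dot(p[1:], q[1:])
    return R * math.acosh(m / R**2)

def disk_dist(u, v, R=1.0):
    """Conjectured closed form: R * d_{-1}^P(u/R, v/R), expanded in terms of R."""
    diff = [a - b for a, b in zip(u, v)]
    num = 2.0 * R**2 * dot(diff, diff)
    den = (R**2 - dot(u, u)) * (R**2 - dot(v, v))
    return R * math.acosh(1.0 + num / den)
```

For $R = 1$ the check reduces to the standard agreement between the disk formula and the hyperboloid model; for other $R$ it confirms that the rescaling is consistent with scaling the hyperboloid itself.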
Given an infinite sequence $\{a_n\}$ with $0 < a_n < 1$, how does one prove that $$\prod_{n=1}^{\infty} (1-a_n) = 0 \text{ if and only if } \sum_{n=1}^{\infty} a_n = \infty?$$ Thanks for your help.

Use $1-a_n \leq e^{-a_n}$ for $\sum_{n=1}^{\infty} a_n = \infty \implies \prod_{n=1}^{\infty} (1-a_n) = 0$. For the other direction, define independent uniform random variables $(U_n)_{n\geq 1}$ and events $A_n =\{U_n < a_n\}$; then we have $\prod_{n=1}^{+\infty}P(A_n^c) = 0$, and using the Borel-Cantelli lemma like here enables us to conclude. For a non-probabilistic proof, see the proof and the first comment here.
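A numeric sanity check of both directions (my own sketch; the bound $1-a_n \le e^{-a_n}$ does the real work in the proof): with $a_n = 1/(n+1)$ the sum diverges and the partial products telescope to $1/(N+1) \to 0$, while with $a_n = 1/(n+1)^2$ the sum converges and the products telescope to $(N+2)/(2(N+1)) \to 1/2$, staying bounded away from $0$.

```python
def partial_product(a, N):
    """Compute prod_{n=1}^{N} (1 - a(n)) for a term function a with 0 < a(n) < 1."""
    p = 1.0
    for n in range(1, N + 1):
        p *= 1.0 - a(n)
    return p

# Divergent sum (harmonic-like): the product telescopes to 1/(N+1) -> 0.
p_div = partial_product(lambda n: 1.0 / (n + 1), 10_000)

# Convergent sum: the product telescopes to (N+2)/(2(N+1)) -> 1/2.
p_conv = partial_product(lambda n: 1.0 / (n + 1) ** 2, 10_000)
```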
This is a cute problem. I would look at it like this: if we are going to show $U$ is a ball, we have to identify a good candidate for its radius $R$ and its center $c$. For $R$, we should think it should be the largest possible radius of a ball contained in $U$. So let $A \subset \mathbb{R}$ be the set of all numbers $r$ such that $U$ contains a ball of radius $r$. Since $U$ is bounded, $A$ is bounded above, so it has a finite supremum. Take $R = \sup A$.

Now we want to identify the center. Choose a sequence $\{r_n\} \subset A$ with $r_n \uparrow R$. Then for each $n$ there is a ball of radius $r_n$ contained in $U$; let $x_n$ be its center, so that $B(x_n, r_n) \subset U$. Intuitively, as $r_n$ gets bigger, the ball $B(x_n, r_n)$ should fill most of $U$, leaving little room to wiggle it around. So it's reasonable to guess that the points $x_n$ are not too far from each other. Indeed, you should try to show that $\{x_n\}$ is Cauchy. By completeness, it converges to some point which we will take as our $c$. In showing it is Cauchy, you can take advantage of the "two point" property of $U$ in the following way:

Lemma. Suppose $x,y \in E$ and $s,t > 0$ are real numbers. For any $\epsilon > 0$, there exist points $x' \in B(x,s)$ and $y' \in B(y,t)$ such that $\|x' - y'\| \ge s+t+\|x-y\|-\epsilon$.

To see how to choose $x', y'$, draw a picture of $x,y$ and the balls $B(x,s), B(y,t)$. Draw a line through $x$ and $y$. When trying to show $\{x_n\}$ is Cauchy, you can consider the balls $B(x_n, r_n)$ and $B(x_m, r_m)$. Choosing $x_n', x_m'$ as in the lemma, there is thus a ball containing $x_n', x_m'$ and contained in $U$. Hence its radius is at most $R$. This lets you bound $\|x_n' - x_m'\|$ and therefore also $\|x_n - x_m\|$.

Now that $c,R$ have been defined, you can try to show that $U = B(c,R)$. The $\supseteq$ direction is just a triangle inequality argument.
For the $\subseteq$ direction, you'll want to use the fact that $U$ is open, and the lemma and the "two point" property again.
Let $Z$ be a topological space. Given a subspace $A$ of $Z$, define an equivalence relation $R_A$ whose equivalence classes are $\{x\}$ for $x\in Z\setminus A$, together with $A$ itself. Let $Z/A$ be the corresponding quotient space. Let $Z=\mathbb{R}$, and let $A_1 = [0,1]$, $A_2=(0,1)$, $A_3=[0,1)$. I need to determine for which $i=1,2,3$ the quotient space $\mathbb{R}/A_i$ is homeomorphic to $\mathbb{R}$. I found that $\mathbb{R}/A_1$ is homeomorphic to $\mathbb{R}$, since the open ray $(-\infty,0)$ is homeomorphic to itself, the open ray $(1,\infty)$ is homeomorphic to $(0,\infty)$, and the class $\{A_1\}$ corresponds to $\{0\}$. $\mathbb{R}/A_2$ is not homeomorphic to $\mathbb{R}$, since the singleton $\{A_2\}$ is open in the quotient (its preimage $(0,1)$ is open in $\mathbb{R}$), while $\mathbb{R}$ has no open singletons. I am trying to come up with a solution for $A_3$, and also with a better explanation for $A_1$.
If $X$ is an irreducible algebraic variety (over $\mathbb C$), an algebraic vector bundle of rank $r$ over $X$ is a pair $(E,\pi)$ where $E$ is an algebraic variety and $\pi: E\longrightarrow X$ is a surjective morphism, with the following properties: 1) $\pi^{-1}(x)$ is a vector space isomorphic to $\mathbb C^r$ for every $x\in X$; 2) for every $x\in X$ there exist an open neighborhood $U$ of $x$ and an isomorphism of varieties $\phi_U:\pi^{-1}(U)\longrightarrow U\times\mathbb C^r$ such that: 2a) $\pi_1\circ\phi_U=\pi$, where $\pi_1:U\times\mathbb C^r\rightarrow U$ is the canonical projection onto $U$; 2b) $\phi_U|_{\pi^{-1}(x)}:\pi^{-1}(x)\longrightarrow \{x\}\times\mathbb C^r$ is an isomorphism of vector spaces. One can also construct an algebraic vector bundle from an open cover $\{U_\alpha\}$ of $X$ and functions $g_{\alpha\beta}:U_\alpha\cap U_\beta\longrightarrow GL_r(\mathbb C)$ which satisfy the cocycle conditions. Now my question is the following: are the functions $g_{\alpha\beta}$ morphisms of varieties? (Remember that $GL_r(\mathbb C)$ is an open affine subvariety of $\mathbb C^{r^2}$.) For example, Cox in his book on toric varieties says only that $g_{\alpha\beta}$ is a function. But for smooth manifolds $g_{\alpha\beta}$ must be a smooth function, so by analogy I think that in the algebraic case $g_{\alpha\beta}$ should be a morphism.
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV with ALICE at the LHC (Elsevier, 2017-11) ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The increased ... ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV (Elsevier, 2017-11) ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{T}$). At central rapidity ($|y|<0.8$), ALICE can reconstruct $J/\psi$ via their decay into two electrons down to zero ...
A solid hemisphere of uniform density $\rho$ occupies the region $$x^2+y^2+z^2\le a^2,\qquad z\le0.$$ Find the gravitational potential due to the hemisphere at the point $(0,0,s)$ where $s\gt0$. A uniform rod of density $m$ per unit length lies on the $z$-axis between $(0,0,c)$ and $(0,0,d)$ where $d\gt c\gt0$. Show that the force exerted on the rod by the hemisphere is $$\psi(c)-\psi(d)$$ where $$\psi(\lambda)=\frac{2\pi Gm\rho}{3}\left(\frac{a^3+\lambda^3-\left(a^2+\lambda^2\right)^{3/2}}{\lambda}\right).$$ So to determine the potential at $(0,0,s)$, I evaluated the following integral over the region $R$: $$\iiint_R\frac{\rho\, G}{\sqrt{x^2+y^2+(z-s)^2}}\,dx\,dy\,dz$$ Using a spherical substitution, the integral becomes: $$\rho\, G\int_{r=0}^{a}\int_{\theta=\frac{\pi}{2}}^{\pi}\int_{\phi = 0}^{2 \pi}\frac{r^2\sin(\theta)}{\sqrt{r^2+s^2-2rs\cos(\theta)}}\,d\phi\, d\theta\, dr$$ which evaluates to $$\varphi(s)=\frac{\pi\rho G}{3}\left(\frac{2a^3+3a^2s+s^3-\sqrt{(s^2+a^2)^3}}{s}\right)$$ As for the second part, my understanding is that all we have to do is evaluate $$m\int_{c}^{d}\varphi'(s)\,ds=m\varphi(d)-m\varphi(c)$$ My question is: is there anything wrong with my $\varphi$, or is my method for the second half wrong? Thanks!
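One sanity check on the spherical-coordinate setup is to evaluate the triple integral numerically and compare with the monopole value $GM/s$, $M = \frac{2}{3}\pi a^3\rho$, which the potential must approach for $s \gg a$ (the leading correction is of order $a/s$). A rough midpoint-rule sketch in Python; the grid size and the test point $s = 10a$ are arbitrary choices:

```python
import math

def phi_numeric(s, a=1.0, rho=1.0, G=1.0, n=300):
    """Midpoint-rule evaluation of
    phi(s) = rho*G * int r^2 sin(theta)/sqrt(r^2 + s^2 - 2 r s cos(theta))
    over r in [0, a], theta in [pi/2, pi]; the phi integral contributes 2*pi."""
    dr = a / n
    dth = (math.pi / 2) / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        for k in range(n):
            th = math.pi / 2 + (k + 0.5) * dth
            total += r * r * math.sin(th) / math.sqrt(
                r * r + s * s - 2.0 * r * s * math.cos(th))
    return 2.0 * math.pi * rho * G * total * dr * dth

# Far from the hemisphere the potential should look like G*M/s.
M = 2.0 * math.pi / 3.0      # hemisphere mass for a = rho = 1
approx = M / 10.0            # monopole value at s = 10
```

Agreement at the few-percent level at $s = 10a$ (the residual is the dipole correction, roughly $3a/8s$) indicates the integral itself is set up correctly, independently of how the closed form was simplified.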
Least Squares SVM Least Squares Support Vector Machines are a modification of the classical Support Vector Machine; see Suykens et al. for a complete background. LSSVM Regression¶ In the case of LSSVM regression one solves (by applying the KKT conditions) the constrained optimization problem \min_{w,b,e} \frac{1}{2}w^\intercal w + \frac{\gamma}{2}\sum_{k=1}^{N}e_k^2 subject to y_k = w^\intercal\varphi(x_k) + b + e_k, \ k = 1, \cdots, N, leading to a predictive model of the form y(x) = \sum_{k=1}^{N}\alpha_k K(x, x_k) + b, where the values \alpha \ \& \ b are the solution of the linear system \begin{bmatrix} 0 & 1^\intercal \\ 1 & K + \gamma^{-1}I \end{bmatrix}\begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix}. Here K is the N \times N kernel matrix whose entries are given by K_{kl} = \varphi(x_k)^\intercal\varphi(x_l), \ \ k,l = 1, \cdots, N and I is the identity matrix of order N. LSSVM Classification¶ In the case of LSSVM for binary classification one solves (by applying the KKT conditions) the constrained optimization problem \min_{w,b,e} \frac{1}{2}w^\intercal w + \frac{\gamma}{2}\sum_{k=1}^{N}e_k^2 subject to y_k\left[w^\intercal\varphi(x_k) + b\right] = 1 - e_k, \ k = 1, \cdots, N, leading to a classifier of the form y(x) = \mathrm{sign}\left[\sum_{k=1}^{N}\alpha_k y_k K(x, x_k) + b\right], where the values \alpha \ \& \ b are the solution of \begin{bmatrix} 0 & y^\intercal \\ y & \Omega + \gamma^{-1}I \end{bmatrix}\begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}. Here \Omega is the N \times N matrix whose entries are given by \Omega_{kl} = y_k y_l \varphi(x_k)^\intercal\varphi(x_l) and I is the identity matrix of order N.

// Create the training data set
val data: Stream[(DenseVector[Double], Double)] = ...
val numPoints = data.length
val num_features = data.head._1.length

// Create an implicit vector field for the creation of the stationary
// radial basis function kernel
implicit val field = VectorField(num_features)
val kern = new RBFKernel(2.0)

// Create the model
val lssvmModel = new DLSSVM(data, numPoints, kern, modelTask = "regression")

// Set the regularization parameter and learn the model
lssvmModel.setRegParam(1.5).learn()
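Independently of the Scala API above, LSSVM regression is easy to prototype: the KKT conditions reduce to a single (N+1) x (N+1) linear system in the bias b and the multipliers \alpha (Suykens et al.). A minimal pure-Python sketch, illustrative only; this is not the DynaML implementation, and all function names and toy data are invented:

```python
import math

def rbf(x, y, bandwidth=1.0):
    """Gaussian RBF kernel for scalar inputs."""
    return math.exp(-((x - y) ** 2) / (2.0 * bandwidth ** 2))

def solve(A, rhs):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(rhs)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lssvm_fit(xs, ys, gamma=100.0, bandwidth=1.0):
    """Solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(xs)
    A = [[0.0] + [1.0] * n]
    for i in range(n):
        row = [1.0] + [rbf(xs[i], xs[j], bandwidth) for j in range(n)]
        row[i + 1] += 1.0 / gamma          # add I/gamma on the diagonal of K
        A.append(row)
    sol = solve(A, [0.0] + list(ys))
    return sol[0], sol[1:]                 # b, alpha

def lssvm_predict(x, xs, b, alpha, bandwidth=1.0):
    """y(x) = sum_k alpha_k K(x, x_k) + b."""
    return b + sum(a * rbf(x, xi, bandwidth) for a, xi in zip(alpha, xs))
```

Two identities follow exactly from the KKT system and make good unit tests: the multipliers sum to zero, and the training residual at each point equals alpha_k / gamma.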
To begin with, the Amplituhedron formalism only works for a specific theory, N=4 SYM in the planar limit (only planar Feynman diagrams are considered). Because of supersymmetry, you can classify scattering processes with two parameters: $n$ and $k$. $n$ is the number of particles involved, and $k$ is, roughly speaking, the number of spin flips in the process. In addition, there is also the number of loops $L$ at which you want to perform your calculation. Be aware that at loop level you are not computing the (super)amplitude, but rather the integrand of the amplitude. That is, thanks to the planar limit, you can uniquely define a function dependent on external data + virtual momenta that has to be integrated in order to obtain the amplitude. To clarify, it is what you obtain if you swap $\sum_{diagrams} \int_{loops} = \int_{loops} \sum_{diagrams}$; you can do that with no ambiguity because planarity allows you to choose a scheme to fix the loop variables. This integrand is what is produced by the Amplituhedron at loop level. Now, for any $n,k$ and $L$ and for fixed external data $Z$ (a $(k+4) \times n$ matrix which ultimately encodes the external momenta and supermomenta) you build an amplituhedron $A_{n,k,L}$ in a standard way. At tree level ($L=0$) it is the subset of $G_{+}(k,k+4)$ of points $Y$ written as $Y = C \cdot Z$, for any $C$ in $G_{+}(k,n)$. At loop level it is a little bit more complicated, but nothing terrible. An important feature of the Amplituhedron is that you can "triangulate" it in a very interesting way. (By triangulate we mean divide it into zones that have in common only their boundary.) In fact, in a previous work it was shown that the amplitude/integrand for $n,k,L$ is written as a sum over certain on-shell diagrams, which in turn label certain cells in the Grassmannian $G_{+}(k,n)$. (The BCFW rules are used to obtain this expression of the amplitudes.) If you consider the image of these cells under the map $Y = C \cdot Z$ you obtain a triangulation for the Amplituhedron! Now these Amplituhedra have codimension one "boundaries". These are zero-loci of non-negative functions defined on the Amplituhedra. A crucial point will be that these boundaries may be written as products of other Amplituhedra. You can define a (unique?) volume form on the Amplituhedron by the condition that it should have logarithmic singularities (i.e. like $1/x$) at these codimension one boundaries. A concrete way to do so is to consider a triangulation of the Amplituhedra, for example the one deriving from the BCFW recursion relations as above. On every "triangle" you are already given a form that has log singularities at the faces of the "triangle". This is so because the cells of $G_{+}(k,n)$ used in this triangulation are equipped with simple positive charts $(\alpha_1, ... , \alpha_d)$ that allow you to reach a boundary of the cell by setting certain $\alpha$ to zero. Therefore the forms $\Pi\, d\alpha/\alpha$ have the desired property cell-wise. You just sum these forms over the cells: the "spurious" poles, associated to cell boundaries that are not amplituhedron boundaries, cancel (they are shared by two cells) and you are left with the right log singularities. But you can also obtain this volume form in other ways, following directly from the definition, as explained in the paper. Finally, you evaluate the volume form obtained above at a particular point of the amplituhedron, and what you obtain is the amplitude/integrand. The procedure seems a little bit complicated and abstract, but actually the idea is quite simple. The geometry of the amplituhedron captures the intricate factorization properties of amplitudes/integrands required by Locality and Unitarity: the boundaries of an amplituhedron factorize into the corresponding smaller amplituhedra.
Therefore a form with logarithmic singularities at these boundaries will have the right poles and factorizations as well. This post imported from StackExchange Physics at 2015-11-25 11:20 (UTC), posted by SE-user giulio bullsaver
"Abstract: We prove, once and for all, that people who don't use superspace are really out of it. This includes QCDers, who always either wave their hands or gamble with lettuce (Monte Zuma calculations). Besides, all nonsupersymmetric theories have divergences which lead to problems with things like renormalons, instantons, anomalons, and other phenomenons. Also, they can't hide from gravity forever." Can a gravitational field possess momentum? A gravitational wave can certainly possess momentum just like a light wave has momentum, but we generally think of a gravitational field as a static object, like an electrostatic field. "You have to understand that compared to other professions such as programming or engineering, ethical standards in academia are in the gutter. I have worked with many different kinds of people in my life, in the U.S. and in Japan. I have only encountered one group more corrupt than academic scientists: the mafia members who ran Las Vegas hotels where I used to install computer equipment." So I've got a small bottle that I filled up with salt. I put it on the scale and its mass is 83 g. I've also got a jug that has 500 g of water. I put the bottle in the jug and it sank to the bottom. I have to figure out how much salt to take out of the bottle such that the weight force of the bottle equals the buoyancy force. For the buoyancy do I: density of water * volume of water displaced * gravity acceleration? So: mass of bottle * gravity = volume of water displaced * density of water * gravity? @EmilioPisanty The measurement operators that I suggested in the comments of the post are fine, but I additionally would like to control the width of the Poisson distribution (much like we can do for the normal distribution using the variance). Do you know whether this can be achieved while still maintaining the completeness condition $$\int A^{\dagger}_{C}A_CdC = 1$$?
As a workaround while this request is pending, there exist several client-side workarounds that can be used to enable LaTeX rendering in chat, including: ChatJax, a set of bookmarklets by robjohn to enable dynamic MathJax support in chat. Commonly used in the Mathematics chat room. An altern... You're always welcome to ask. One of the reasons I hang around in the chat room is because I'm happy to answer this sort of question. Obviously I'm sometimes busy doing other stuff, but if I have the spare time I'm always happy to answer. Though as it happens I have to go now - lunch time! :-) @JohnRennie It's possible to do it using the energy method. Just we need to carefully write down the potential function, which is $U(r)=\frac{1}{2}\frac{mg}{R}r^2$ with zero point at the center of the earth. Anonymous Also I don't particularly like this SHM problem because it causes a lot of misconceptions. The motion is SHM only under particular conditions :P I see with concern that the close queue has not shrunk considerably in the last week and is still at 73 items. This may be an effect of increased traffic but not increased reviewing, or something else, I'm not sure Not sure about that, but the converse is certainly false :P Derrida has received a lot of criticism from the experts on the fields he tried to comment on I personally do not know much about postmodernist philosophy, so I shall not comment on it myself I do have strong affirmative opinions on textual interpretation, made disjoint from authorial intent, however, which is a central part of Deconstruction theory. But I think that dates back to Heidegger. I can see why a man of that generation would be leaned towards that idea. I do too.
Given the Lie algebra, what is the systematic way to construct a matrix representation of the generators of the desired dimension? I ask this question here because it is physicists, more than mathematicians, for whom explicit representations of groups are important. Let us, for example, take $SU(2)$ for concreteness. Starting from the generic parametrization of a $3\times 3$ unitary matrix $U$ with $\det U=1$, and using the formula for the generators $$J^i=-i\Big(\frac{\partial U}{\partial \theta_i}\Big)_{\{\theta_i=0\}}$$ one can find the $3\times 3$ matrix representation of the generators. However, I'm looking for something else. Given the Lie algebra $[J^i, J^j]=i\epsilon^{ijk}J^k$, is there a way that one can explicitly construct (not by guess or trial) the $3\times 3$ representations of $\{J^i\}$? Will the same procedure apply to other Lie algebras appearing in physics, such as that of $SO(3,1)$ (or $SL(2,\mathbb{C})$)?
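For $\mathfrak{su}(2)$ there is a systematic answer: the highest-weight (ladder-operator) construction. Take $J^3$ diagonal with eigenvalues $m = j, j-1, \ldots, -j$, define $J^\pm = J^1 \pm iJ^2$ with matrix elements $\langle j,m\pm1|J^\pm|j,m\rangle = \sqrt{j(j+1)-m(m\pm1)}$, and recover $J^1, J^2$. A small pure-Python sketch that builds the spin-1 ($3\times3$) matrices this way and verifies $[J^1,J^2]=iJ^3$:

```python
def su2_generators(j):
    """Spin-j generator matrices from the ladder-operator construction."""
    dim = int(round(2 * j)) + 1
    m = [j - k for k in range(dim)]                      # weights j, j-1, ..., -j
    J3 = [[m[r] if r == c else 0.0 for c in range(dim)] for r in range(dim)]
    # matrix element <j, m+1 | J+ | j, m> = sqrt(j(j+1) - m(m+1))
    Jp = [[0.0] * dim for _ in range(dim)]
    for r in range(dim - 1):
        mm = m[r + 1]
        Jp[r][r + 1] = (j * (j + 1) - mm * (mm + 1)) ** 0.5
    Jm = [[Jp[c][r] for c in range(dim)] for r in range(dim)]   # J- = (J+)^T
    J1 = [[(Jp[r][c] + Jm[r][c]) / 2.0 for c in range(dim)] for r in range(dim)]
    J2 = [[(Jp[r][c] - Jm[r][c]) / (2 * 1j) for c in range(dim)] for r in range(dim)]
    return J1, J2, J3

def matmul(A, B):
    n = len(A)
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)] for r in range(n)]

def commutator(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[r][c] - BA[r][c] for c in range(len(A))] for r in range(len(A))]
```

The same construction gives every finite-dimensional irreducible representation of $\mathfrak{su}(2)$ by varying $j$; for $SO(3,1)$ one can use its complexification $\mathfrak{su}(2)\oplus\mathfrak{su}(2)$ and apply the construction to each factor.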
Sometimes we are interested in more than one linear combination of the variables. In this case we may be interested in the association between those two linear combinations. More specifically, we can consider the covariance between two linear combinations of the data. Consider the pair of linear combinations: \(Y_1 = \sum_{j=1}^{p}c_jX_j \;\;\; \text{and} \;\;\; Y_2 = \sum_{k=1}^{p}d_kX_k\) Here \(Y_{1}\) and \(Y_{2}\) are two distinct linear combinations, defined by the distinct coefficient vectors c and d. Both \(Y_{1}\) and \(Y_{2}\) are random and so will be potentially correlated, and we can assess the association between them using the covariance. The population covariance between \(Y_{1}\) and \(Y_{2}\) is obtained by summing over all pairs of variables, multiplying the respective coefficients \(c_{j}\) and \(d_{k}\) from the two linear combinations times the covariance \(\sigma_{jk}\) between \(X_j\) and \(X_k\). Population Covariance between two linear combinations \(cov(Y_1, Y_2) = \sum_{j=1}^{p}\sum_{k=1}^{p}c_jd_k\sigma_{jk}\) We can then estimate the population covariance by using the sample covariance. This is obtained by simply substituting the sample covariances between the pairs of variables for the population covariances between the pairs of variables. Sample Covariance between two linear combinations \(s_{Y_1,Y_2}= \sum_{j=1}^{p}\sum_{k=1}^{p}c_jd_ks_{jk}\) Correlation The population correlation between \(Y_{1}\) and \(Y_{2}\) can be obtained by the usual formula: the covariance between \(Y_{1}\) and \(Y_{2}\) divided by the product of the standard deviations of the two variables, as shown below.
Population Correlation between two linear combinations \(\rho_{Y_1,Y_2} = \dfrac{\sigma_{Y_1,Y_2}}{\sigma_{Y_1}\sigma_{Y_2}}\) This population correlation is estimated by the sample correlation, where we simply substitute the sample quantities for the population quantities, as below. Sample Correlation between two linear combinations \(r_{Y_1,Y_2} = \dfrac{s_{Y_1, Y_2}}{s_{Y_1}s_{Y_2}}\) Example 2-5: Women's Health Survey (Pop. Covariance and Correlation) Section Here is the sample variance-covariance matrix \(S\) as was shown previously. \(S = \left(\begin{array}{rrrrr}157829.4 & 940.1 & 6075.8 & 102411.1 & 6701.6 \\ 940.1 & 35.8 & 114.1 & 2383.2 & 137.7 \\ 6075.8 & 114.1 & 934.9 & 7330.1 & 477.2 \\ 102411.1 & 2383.2 & 7330.1 & 2668452.4 & 22063.3 \\ 6701.6 & 137.7 & 477.2 & 22063.3 & 5416.3 \end{array}\right)\) We may wish to define the total intake of vitamins A and C in mg as before: \(Y _ { 1 } = 0.001 X _ { 4 } + X _ { 5 }\) and we may also want to take a look at the total intake of calcium and iron: \(Y _ { 2 } = X _ { 1 } + X _ { 2 }\) The sample covariance between \(Y_{1}\) and \(Y_{2}\) can then be obtained by looking at the covariances between each pair of the component variables times the respective coefficients. In this case we are pairing \(X_{1}\) and \(X_{4}\), \(X_{1}\) and \(X_{5}\), \(X_{2}\) and \(X_{4}\), and \(X_{2}\) and \(X_{5}\). You will notice that in the expression below \(s_{41}\), \(s_{42}\), \(s_{51}\) and \(s_{52}\) all appear. The values are taken from the matrix above, substituted into the expression, and the math is carried out below.
\begin{align} s_{Y_1, Y_2} & = 0.001s_{41} + 0.001s_{42} + s_{51}+s_{52}\\& = 0.001 \times 102411.1 + 0.001 \times 2383.2 + 6701.6 +137.7\\ & = 102.4 + 2.4 + 6701.6 + 137.7\\ & = 6944.1 \end{align} You should at this point be able to confirm that the sample variance of \(Y_{2}\) is 159,745.4 as shown below: \begin{align} s^2_{Y_2} & = s_{11}+s_{22}+2s_{12}\\ & = 157829.4 + 35.8 + 2 \times 940.1\\ & = 157829.4 + 35.8 + 1880.2 \\ & = 159745.4 \end{align} And, if we care to obtain the sample correlation between \(Y_{1}\) and \(Y_{2}\), we take the sample covariance that we just obtained and divide by the square root of the product of the two component variances: 5463.1 for \(Y_{1}\), which we obtained earlier, and 159745.4, which we just obtained above. Following this math through, we end up with a correlation of about 0.235 as shown below. \begin{align} r_{Y_1,Y_2} &= \dfrac{s_{Y_1, Y_2}}{s_{Y_1}s_{Y_2}}\\ &= \dfrac{6944.1}{\sqrt{5463.1 \times 159745.4}}\\&=0.235 \end{align}
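The double-sum formula and the worked numbers above are easy to verify programmatically; a short Python check implementing \(s_{Y_1,Y_2}=\sum_j\sum_k c_j d_k s_{jk}\) directly:

```python
import math

# Sample covariance matrix S from the Women's Health Survey example.
S = [
    [157829.4,   940.1,  6075.8,  102411.1,  6701.6],
    [   940.1,    35.8,   114.1,    2383.2,   137.7],
    [  6075.8,   114.1,   934.9,    7330.1,   477.2],
    [102411.1,  2383.2,  7330.1, 2668452.4, 22063.3],
    [  6701.6,   137.7,   477.2,   22063.3,  5416.3],
]

c = [0.0, 0.0, 0.0, 0.001, 1.0]   # Y1 = 0.001*X4 + X5
d = [1.0, 1.0, 0.0, 0.0, 0.0]     # Y2 = X1 + X2

def lincomb_cov(c, d, S):
    """s_{Y1,Y2} = sum_j sum_k c_j d_k s_{jk}"""
    return sum(c[j] * d[k] * S[j][k]
               for j in range(len(c)) for k in range(len(d)))

cov_12 = lincomb_cov(c, d, S)                 # sample covariance, ~6944.1
var_1 = lincomb_cov(c, c, S)                  # sample variance of Y1, ~5463.1
var_2 = lincomb_cov(d, d, S)                  # sample variance of Y2, ~159745.4
r_12 = cov_12 / math.sqrt(var_1 * var_2)      # sample correlation, ~0.235
```

Note that the variance of a single linear combination is just the same double sum with the two coefficient vectors equal, which is why `lincomb_cov(c, c, S)` reproduces the variance obtained earlier.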
2018-08-25 06:58 Recent developments of the CERN RD50 collaboration / Menichelli, David (U. Florence (main) ; INFN, Florence)/CERN RD50 The objective of the RD50 collaboration is to develop radiation hard semiconductor detectors for very high luminosity colliders, particularly to face the requirements of the possible upgrade of the large hadron collider (LHC) at CERN. Some of the RD50 most recent results about silicon detectors are reported in this paper, with special reference to: (i) the progresses in the characterization of lattice defects responsible for carrier trapping; (ii) charge collection efficiency of n-in-p microstrip detectors, irradiated with neutrons, as measured with different readout electronics; (iii) charge collection efficiency of single-type column 3D detectors, after proton and neutron irradiations, including position-sensitive measurement; (iv) simulations of irradiated double-sided and full-3D detectors, as well as the state of their production process. 2008 - 5 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 596 (2008) 48-52 In : 8th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 27 - 29 Jun 2007, pp.48-52 Performance of irradiated bulk SiC detectors / Cunningham, W (Glasgow U.) ; Melone, J (Glasgow U.) ; Horn, M (Glasgow U.) ; Kazukauskas, V (Vilnius U.) ; Roy, P (Glasgow U.) ; Doherty, F (Glasgow U.) ; Glaser, M (CERN) ; Vaitkus, J (Vilnius U.) ; Rahman, M (Glasgow U.)/CERN RD50 Silicon carbide (SiC) is a wide bandgap material with many excellent properties for future use as a detector medium. We present here the performance of irradiated planar detector diodes made from 100-$\mu \rm{m}$-thick semi-insulating SiC from Cree. [...] 2003 - 5 p. - Published in : Nucl. Instrum. Methods Phys.
Res., A 509 (2003) 127-131 In : 4th International Workshop on Radiation Imaging Detectors, Amsterdam, The Netherlands, 8 - 12 Sep 2002, pp.127-131 Measurements and simulations of charge collection efficiency of p$^+$/n junction SiC detectors / Moscatelli, F (IMM, Bologna ; U. Perugia (main) ; INFN, Perugia) ; Scorzoni, A (U. Perugia (main) ; INFN, Perugia ; IMM, Bologna) ; Poggi, A (Perugia U.) ; Bruzzi, M (Florence U.) ; Lagomarsino, S (Florence U.) ; Mersi, S (Florence U.) ; Sciortino, Silvio (Florence U.) ; Nipoti, R (IMM, Bologna) Due to its excellent electrical and physical properties, silicon carbide can represent a good alternative to Si in applications like the inner tracking detectors of particle physics experiments (RD50, LHCC 2002–2003, 15 February 2002, CERN, Ginevra). In this work p$^+$/n SiC diodes realised on a medium-doped ($1 \times 10^{15} \rm{cm}^{-3}$), 40 $\mu \rm{m}$ thick epitaxial layer are exploited as detectors and measurements of their charge collection properties under $\beta$ particle radiation from a $^{90}$Sr source are presented. [...] 2005 - 4 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 546 (2005) 218-221 In : 6th International Workshop on Radiation Imaging Detectors, Glasgow, UK, 25-29 Jul 2004, pp.218-221 Measurement of trapping time constants in proton-irradiated silicon pad detectors / Krasel, O (Dortmund U.) ; Gossling, C (Dortmund U.) ; Klingenberg, R (Dortmund U.) ; Rajek, S (Dortmund U.) ; Wunstorf, R (Dortmund U.) Silicon pad-detectors fabricated from oxygenated silicon were irradiated with 24-GeV/c protons with fluences between $2 \cdot 10^{13} \ n_{\rm{eq}}/\rm{cm}^2$ and $9 \cdot 10^{14} \ n_{\rm{eq}}/\rm{cm}^2$. The transient current technique was used to measure the trapping probability for holes and electrons. [...] 2004 - 8 p. - Published in : IEEE Trans. Nucl. Sci.
51 (2004) 3055-3062 In : 50th IEEE 2003 Nuclear Science Symposium, Medical Imaging Conference, 13th International Workshop on Room Temperature Semiconductor Detectors and Symposium on Nuclear Power Systems, Portland, OR, USA, 19 - 25 Oct 2003, pp.3055-3062 Lithium ion irradiation effects on epitaxial silicon detectors / Candelori, A (INFN, Padua ; Padua U.) ; Bisello, D (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Schramm, A (Hamburg U., Inst. Exp. Phys. II) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys. II) ; Wyss, J (Cassino U. ; INFN, Pisa) Diodes manufactured on a thin and highly doped epitaxial silicon layer grown on a Czochralski silicon substrate have been irradiated by high energy lithium ions in order to investigate the effects of high bulk damage levels. This information is useful for possible developments of pixel detectors in future very high luminosity colliders because these new devices present superior radiation hardness than nowadays silicon detectors. [...] 2004 - 7 p. - Published in : IEEE Trans. Nucl. Sci. 51 (2004) 1766-1772 In : 13th IEEE-NPSS Real Time Conference 2003, Montreal, Canada, 18 - 23 May 2003, pp.1766-1772 Radiation hardness of different silicon materials after high-energy electron irradiation / Dittongo, S (Trieste U. ; INFN, Trieste) ; Bosisio, L (Trieste U. ; INFN, Trieste) ; Ciacchi, M (Trieste U.) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; D'Auria, G (Sincrotrone Trieste) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys. II) The radiation hardness of diodes fabricated on standard and diffusion-oxygenated float-zone, Czochralski and epitaxial silicon substrates has been compared after irradiation with 900 MeV electrons up to a fluence of $2.1 \times 10^{15} \ \rm{e} / cm^2$.
The variation of the effective dopant concentration, the current related damage constant $\alpha$ and their annealing behavior, as well as the charge collection efficiency of the irradiated devices have been investigated. 2004 - 7 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 530 (2004) 110-116 In : 6th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 29 Sep - 1 Oct 2003, pp.110-116 Recovery of charge collection in heavily irradiated silicon diodes with continuous hole injection / Cindro, V (Stefan Inst., Ljubljana) ; Mandić, I (Stefan Inst., Ljubljana) ; Kramberger, G (Stefan Inst., Ljubljana) ; Mikuž, M (Stefan Inst., Ljubljana ; Ljubljana U.) ; Zavrtanik, M (Ljubljana U.) Holes were continuously injected into irradiated diodes by light illumination of the n$^+$-side. The charge of holes trapped in the radiation-induced levels modified the effective space charge. [...] 2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 518 (2004) 343-345 In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.343-345 First results on charge collection efficiency of heavily irradiated microstrip sensors fabricated on oxygenated p-type silicon / Casse, G (Liverpool U.) ; Allport, P P (Liverpool U.) ; Martí i Garcia, S (CSIC, Catalunya) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Turner, P R (Liverpool U.) Heavy hadron irradiation leads to type inversion of n-type silicon detectors. After type inversion, the charge collected at low bias voltages by silicon microstrip detectors is higher when read out from the n-side compared to p-side read out. [...] 2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys.
Res., A 518 (2004) 340-342 In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.340-342 Formation and annealing of boron-oxygen defects in irradiated silicon and silicon-germanium n$^+$–p structures / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Korshunov, F P (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) ; Abrosimov, N V (Unlisted, DE) New findings on the formation and annealing of interstitial boron-interstitial oxygen complex ($\rm{B_iO_i}$) in p-type silicon are presented. Different types of n$^+$–p structures irradiated with electrons and alpha-particles have been used for DLTS and MCTS studies. [...] 2015 - 4 p. - Published in : AIP Conf. Proc. 1583 (2015) 123-126
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional... no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to working with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only) Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon, but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$, right? For four proper fractions $a, b, c, d$ X writes $a + b + c > 3(abc)^{1/3}$. Y also added that $a + b + c > 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b, c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right. Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this in a classification of groups of order $p^2qr$. The order of $H$ there should be $qr$, and it is present as $G = C_{p}^2 \rtimes H$. I want to know whether we can determine the structure of $H$. Like, can we think $H=C_q \times C_r$ or something like that from the given data? When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities? When considering finite groups $G$ of order $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$.
Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$. Now consider the case $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$. In this case how can I write $G$ using notations/symbols? Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$? First question: Then it is $G= F \rtimes (C_p \times C_q)$. But how do we write $F$? Do we have to think of all the possibilities for $F$ of order $pr$ and write $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.? As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations? There it is also mentioned that we can distinguish between 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q \mid (p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$ If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword There is good motivation for such a definition here So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$ It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$. Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straight-line homotopy, with the straight lines being hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so it projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative I don't know how to interpret this coarsely in $\pi_1(S)$ @anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and an offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, it's all economy of scale. @ParasKhosla Yes, I am Indian, and trying to get into some good masters program in math.
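The rewriting loop described above is only a few lines of code. As a toy illustration (a sketch; the convention that capital letters denote inverse generators is my own here), the free group on $\{a,b\}$ admits a degenerate Dehn system in which each $u_i$ is a cancelling pair and each $v_i$ is the empty word:

```python
def dehn_reduce(w, rules):
    """Repeatedly replace the first u_i found as a subword of w by v_i."""
    changed = True
    while changed:
        changed = False
        for u, v in rules:
            i = w.find(u)
            if i != -1:
                w = w[:i] + v + w[i + len(u):]
                changed = True
                break
    return w

def is_trivial(w, rules):
    """Dehn's algorithm: w represents the identity iff it reduces to the empty word."""
    return dehn_reduce(w, rules) == ""

# Free group on {a, b}; inverses written as A, B. The cancellation rules form
# a (degenerate) Dehn system: |u_i| = 2 > 0 = |v_i|, and every word
# representing the identity contains a cancelling pair.
FREE = [("aA", ""), ("Aa", ""), ("bB", ""), ("Bb", "")]
```

For a genuine hyperbolic group one would instead take the $u_i$ to be long subwords of cyclic permutations of the defining relators, but the loop itself is unchanged.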
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap... I can probably guess that they are using symmetries and permutation groups on graphs in this course. For example, orbits and studying the automorphism groups of graphs. @anakhro I have heard really good things about Palka. Also, if you do not worry about a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix! Got a simple question: I gotta find the kernel of the linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give the zero polynomial in this case @chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + c \implies G = C(1+x)e^{-x}$ which is obviously not a polynomial, so $G = 0$ and thus $P = ax + b$ could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
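The kernel claim is also easy to sanity-check by direct computation. The sketch below (helper names are mine) represents a polynomial in $\mathbb{R}_3[x]$ by its coefficient list $[a_0, a_1, a_2, a_3]$ and applies $F(P) = xP'' + (x+1)P'''$ coefficient-wise:

```python
def deriv(coeffs):
    """Derivative of a polynomial given as [a0, a1, ...] (a_k is the x**k coefficient)."""
    return [k * coeffs[k] for k in range(1, len(coeffs))]

def mul_x(coeffs):
    """Multiply a polynomial by x (shift every coefficient up one degree)."""
    return [0] + coeffs

def add(p, q):
    """Add two coefficient lists, padding the shorter one with zeros."""
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def F(coeffs):
    """F(P) = x*P'' + (x+1)*P'''."""
    p2 = deriv(deriv(coeffs))
    p3 = deriv(p2)
    return add(mul_x(p2), add(mul_x(p3), p3))
```

For $P = a + bx$ this returns all zeros, while for a general cubic one gets $F(P) = 6d\,x^2 + (2c + 6d)x + 6d$, which vanishes only when $c = d = 0$ — so $\ker F = \{ax + b\}$ as claimed.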
I know the MRS is independent of prices and compares the ratio at which the consumer will exchange one good for the other. But is the marginal rate of substitution affected by the consumption of those goods or not? It seems trivial but I can't get my head around it. Different consumer preferences will lead to different properties of the consumer's willingness to trade one good for another. For instance, suppose that the consumer's preferences are linear in the consumption of either good: $U=\beta_x x + \beta_y y$ where $\beta_x$, $\beta_y$ are positive constants. The marginal utility of $x$ would be $\beta_x$ and the marginal utility of $y$ would be $\beta_y$, and the MRS would be $\frac{\beta_x}{\beta_y}$, also a constant. This tells us the consumer would be willing to trade between $x$ and $y$ at a constant rate. An alternative would be consumption of each good entering utility in a multiplicative fashion (Cobb-Douglas), for instance $U=xy$. The marginal utility of $x$ would be $y$ and the marginal utility of $y$ would be $x$, and so the MRS would be $\frac{y}{x}$, which indeed depends on consumption. In this example, the MRS decreases as consumption of $x$ rises. As the consumer enjoys more $x$ he is willing to give up less and less $y$ for another unit of $x$. So it's not entirely trivial: whether the MRS depends on consumption depends on how you model the consumer's preferences. It depends. Hessian gives a good answer, but I think we can say something more. A utility function $u(x)$ assigns to each bundle of goods $x=(x_1,\ldots,x_n)$ a real number. Along an indifference curve, we have \begin{equation}u(x)=u_0\tag{1}\end{equation} for some real constant $u_0$.
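The two examples can be checked with a short numerical sketch; the finite-difference helper and the particular constants ($\beta_x = 2$, $\beta_y = 3$) are my own illustration, not part of the answer.

```python
def mrs(u, x, y, eps=1e-6):
    """Numerical MRS of x for y: the ratio of marginal utilities MUx/MUy,
    with each marginal utility taken as a central finite difference."""
    mu_x = (u(x + eps, y) - u(x - eps, y)) / (2 * eps)
    mu_y = (u(x, y + eps) - u(x, y - eps)) / (2 * eps)
    return mu_x / mu_y

linear = lambda x, y: 2 * x + 3 * y   # beta_x = 2, beta_y = 3
cobb_douglas = lambda x, y: x * y     # U = xy
```

`mrs(linear, ...)` comes out as $2/3$ at every bundle, while `mrs(cobb_douglas, x, y)` reproduces $y/x$ and so falls as consumption of $x$ rises.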
Now, the marginal rate of substitution $MRS_{ij}$ of good $i$ for good $j$ exists at a given point $x$ whenever $u_i(x)\neq 0$, for then the implicit function theorem implies that $(1)$ implicitly defines a function $f^i$ such that $x_i=f^i(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)$, and that we have $$u(x_1,\ldots,x_{i-1},f^i(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n),x_{i+1},\ldots,x_n)=u_0.\tag{2}$$ Differentiating $(2)$ with respect to $x_j$ where $j\neq i$ gives us $$-\frac{\partial f^i}{\partial x_j}\Big|_{u\text{ constant at }u_0}=\frac{u_i(x)}{u_j(x)}=MRS_{ij}.\tag{3}$$ Note that the partial derivatives $u_i(x)$ and $u_j(x)$ are dependent on the bundle of goods $x$, so it is logically possible that $MRS_{ij}$ varies with $x$. If $MRS_{ij}$ does not vary with $x_j$ and is thus constant with respect to $x_j$ at some level $k$, it follows from $(3)$ by integration with respect to $x_j$ on both sides that $$f^i(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)=-kx_j+m$$ for some real constant $m$ independent of $x_j$. Thus, in this case, the indifference curve in $(x_i,x_j)$-space is a straight line.
I'm solving this BVP problem: $u''(x) + u^2(x) = \frac{15}{4} |x|^{1/2} + |x|^5$, $u(-1)=u(1)=1$, $x \in A=(-1,1)$. The solution is $u(x)=|x|^{5/2}$. What can we say about the order of convergence to the analytical solution? I have to solve this problem with centred finite differences, of order two. My discretization is $$u''(x_i)=\frac{u(x_{i+1}) - 2u(x_i) + u(x_{i-1})}{h^2} + \tau^{2}_{i},\text{ with }\tau^{2}_{i} = \frac{h^2 u^{(4)}(\bar{x}_i)}{12},$$ for some $\bar{x}_i \in (x_{i-1},x_{i+1})$. Since $x \in A$, the analytical solution $u(x)$ is not globally differentiable in $A$. That should be the problem. Since the local error $\tau^{2}_{i}$ depends on the value of $u^{(4)}(\bar{x}_i)$, I computed it and got $u^{(4)}(x) = -\frac{15}{16}\,x^{-3/2}$ for $x > 0$. In particular, for $x$ really close to $0$ it takes really large values, so the local error is no longer a clean power of $h^2$ but is scaled by a non-negligible constant, and therefore the method is no longer of second order. Here is what I get numerically. The first image is the output of the BVP problem, while the second one is the error graph ($m$ is the number of nodes, so $h=\frac{2}{m-1}$). Now I would like to say that the order is $\frac{3}{2}$, but I don't know how to go on. Maybe I should expand the denominator with Taylor(?). What would you say? P.S. I'm Italian, hope to have written in a comprehensible way.
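For reference, a minimal sketch of the experiment (my own implementation; it assumes Newton's method for the resulting nonlinear discrete system, which the question does not specify): solve the centred-difference equations on successively finer grids and estimate the observed order from the max-norm errors against $u(x) = |x|^{5/2}$.

```python
import numpy as np

def solve_bvp(m):
    """Centred second-order FD for u'' + u^2 = f on [-1,1], u(+-1)=1,
    with Newton iteration on the nonlinear discrete system."""
    x = np.linspace(-1.0, 1.0, m)
    h = x[1] - x[0]
    f = 15.0 / 4.0 * np.abs(x) ** 0.5 + np.abs(x) ** 5
    u = np.ones(m)                      # initial guess satisfying the BCs
    for _ in range(100):
        F = np.empty(m)
        J = np.zeros((m, m))
        F[0], F[-1] = u[0] - 1.0, u[-1] - 1.0
        J[0, 0] = J[-1, -1] = 1.0
        for i in range(1, m - 1):
            F[i] = (u[i-1] - 2*u[i] + u[i+1]) / h**2 + u[i]**2 - f[i]
            J[i, i-1] = J[i, i+1] = 1.0 / h**2
            J[i, i] = -2.0 / h**2 + 2.0 * u[i]
        du = np.linalg.solve(J, -F)
        u += du
        if np.max(np.abs(du)) < 1e-12:
            break
    return x, u

errs = []
for m in (41, 81, 161):
    x, u = solve_bvp(m)
    errs.append(np.max(np.abs(u - np.abs(x) ** 2.5)))
orders = [np.log2(e0 / e1) for e0, e1 in zip(errs, errs[1:])]
```

The estimated orders should come out below 2, reflecting the blow-up of $u^{(4)}$ at $x = 0$.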
Formal asymptotic expansions for symmetric ancient ovals in mean curvature flow 1. Department of Mathematics, UW Madison, Madison, WI 53705, United States For any $p, q$ with $p+q=n$, $p\geq1$, $q\geq2$ we find a formal ancient solution which is a small perturbation of an ellipsoid. For $t\to-\infty$ the solution becomes increasingly astigmatic: $q$ of its major axes have length $\approx\sqrt{2(q-1)(-t)}$, while the other $p$ axes have length $\approx \sqrt{-2t\log(-t)}$. We conjecture that an analysis similar to that in [2] will lead to a rigorous construction of ancient solutions to MCF with the asymptotics described in this paper. Mathematics Subject Classification: Primary: 53C44, 35K59; Secondary: 35C2. Citation: Sigurd Angenent. Formal asymptotic expansions for symmetric ancient ovals in mean curvature flow. Networks & Heterogeneous Media, 2013, 8 (1) : 1-8. doi: 10.3934/nhm.2013.8.1 References: [1] S. B. Angenent and J. J. L. Velázquez, [2] Sigurd Angenent, Cristina Caputo and Dan Knopf, [3] Sigurd Angenent, [4] Panagiota Daskalopoulos, Richard Hamilton and Natasa Sesum, [5] M. Gage and R. S. Hamilton
For starters, it is incorrect to think of the ground state being substantially more populated in NMR spectroscopy. Because the energy difference between aligned and misaligned nuclear spins in the external magnetic field is very low, we can say that both states are almost equally populated due to thermal excitation alone. For a $500~\mathrm{MHz}$ experiment, the energy difference can be calculated to be $0.21~\mathrm{J \cdot mol^{-1}}$ which corresponds to a population ratio as follows: $$\frac{n_\mathrm{ex}}{n_\mathrm{g}} = \exp \left(-\frac{\Delta E}{RT}\right) = 0.999916$$ Next, consider the extremely high symmetry of the $\ce{H2}$ molecule: It has the point group $D_{\infty\mathrm h}$, one of the most symmetric ones around. Due to this high symmetry, we cannot distinguish between either proton at all — both can be transformed into each other by a rotation as defined by the point group. Thus, the two nuclei are magnetically equivalent — not only must their chemical shift be identical but also the coupling to the other one. Think of it this way: Assume that the spin transition of one proton happened at a slightly different frequency than the other’s. That would mean that one proton had to be different from the other in some way (a different transition energy is equivalent to a different chemical shift is equivalent to a different environment). But how would you explain a different environment on one side of the $\ce{H2}$ molecule compared to the other? Correct: You cannot. Now we could still assume that the energy required to excite a nuclear spin whose neighbour is parallel be different from one whose neighbour is antiparallel. However, no matter how we excite, the transition is always $\ce{parallel <=> antiparallel}$. And since we cannot distinguish between the hydrogens, we also cannot distinguish which one pointed in which direction, so those two energies must be identical. Therefore, we only see one peak and no $^1J_\ce{HH}$ coupling.
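For the record, the quoted numbers can be reproduced in a few lines (I assume $T = 298~\mathrm{K}$ and rounded constants, so the result matches the answer's $0.21~\mathrm{J\,mol^{-1}}$ and $0.999916$ only approximately):

```python
import math

N_A = 6.02214e23   # Avogadro constant, 1/mol
h   = 6.62607e-34  # Planck constant, J*s
R   = 8.314        # gas constant, J/(mol*K)

nu = 500e6         # Hz: proton Larmor frequency in a "500 MHz" spectrometer
T  = 298.0         # K, assumed room temperature

dE = N_A * h * nu                  # molar energy gap between the spin states
ratio = math.exp(-dE / (R * T))    # Boltzmann population ratio n_ex / n_g
```

This gives $\Delta E \approx 0.20~\mathrm{J\,mol^{-1}}$ and a ratio of about $0.99992$ — the two spin states are populated almost equally, as the answer says.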
Previously, we discussed briefly the simple linear regression. Here we will discuss multiple regression, or multivariable regression, and how to get the solution of the multivariable regression. At the end of the post, we will provide the Python code from scratch for multivariable regression. Motivation A single variable linear regression model can learn to predict an output variable \(y\) when there is only one input variable, \(x\), and there is a linear relationship between \(y\) and \(x\), that is, \(y \approx w_0 + w_1 x\). Well, that might not be a great predictive model for most cases. For example, let’s assume we are going to begin a real estate business and we are going to use machine learning to predict house prices. In particular, we have some houses that we want to list for sale, but we don’t know the value of these houses. So, we’re going to look at other houses that sold in the recent past. Looking at how much they sold for and the different characteristics of those houses, we will use that data to inform the listing price for the house we’d like to sell. Now, the house price depends on many aspects. At first we might think about the relationship between the square footage and the price of the house and fit a simple linear regression between them. But we might go into the data set and notice these other houses that have very similar square footage but are just fundamentally different houses. For example, one house has only one bathroom but the other house has three bathrooms. So the other house, of course, should have a higher value than the one with just one bathroom. So, we need to add more inputs to our regression model.
| Price | Bed_rooms | Bath_rooms | … | Sqft_living |
| --- | --- | --- | --- | --- |
| 221900 | 3 | 1 | … | 1180 |
| 538000 | 3 | 2.25 | … | 2570 |
| 180000 | 2 | 1 | … | 770 |
| 604000 | 4 | 3 | … | 1960 |
| 510000 | 3 | 2 | … | 1680 |
| 1.23E+06 | 4 | 4.5 | … | 5420 |
| 257500 | 3 | 2.25 | … | 1715 |

So instead of just looking at square feet and using that to predict the house value, we’re going to look at other inputs as well. For example, we’re going to record the number of bathrooms in the house and we’re going to use both of these two inputs to predict the house price. In particular, in this higher dimensional space, we’re going to fit some function that models the relationship between the number of square feet, the number of bathrooms, and the output, the value of the house. And so, in particular, one simple function that we can think about is just modeling this function as $$f(x) = w_0 + w_1 x_1 + w_2 x_2$$ where \(x_1\) is the number of square feet and \(x_2\) is the number of bathrooms. We have just talked about square feet and the number of bathrooms as the inputs that we’re looking at for our regression model. But, associated with any house, there are lots of different attributes and lots of things that we can use as inputs to our regression model, and here multivariable regression comes into play. Model When we have these multiple inputs, the simplest model we can think of is just a function directly of the inputs themselves. The input \(\textbf{x}\) is a d-dim vector and the output y is a scalar: $$\textbf{x} = (\textbf{x}[1], \textbf{x}[2], \dots , \textbf{x}[d])$$ where \(\textbf{x}[1]\), \( \textbf{x}[2] \), \(\dots\), \(\textbf{x}[d]\) are the arrays containing different features, e.g. number of square feet, number of bathrooms, number of bedrooms, etc.
Taking these inputs and plugging them directly into our linear model with the noise term \(\epsilon_i\), we get the output \(y_i\) for the \(i^{\text{th}}\) data point: $$y_i = w_0 + w_1 \textbf{x}_i[1] + w_2 \textbf{x}_i[2] + \dots + w_d \textbf{x}_i[d] + \epsilon_i$$ where the first feature in our model is just one, the constant feature. The second feature is the first input, for example, the number of square feet, and the third feature is our second input, for example, the number of bathrooms. And this goes on and on until we get to our last input, the little \(d+1\) feature, for example, maybe lot size. More generically, instead of just a simple hyperplane, e.g. a single line, we can fit a polynomial or we can fit some \(D\)-dimensional curve. $$\begin{aligned}y_i &= w_0 h_0( \textbf{x}_i)+ w_1 h_1( \textbf{x}_i) + \dots + w_D h_D( \textbf{x}_i) + \epsilon_i \\ &= \sum_{j=0}^D w_j h_j( \textbf{x}_i) + \epsilon_i\end{aligned}$$ Here we assume that there are some capital \(D\) different features of these multiple inputs. So just as an example, maybe our zeroth feature is just that one constant term, and that's pretty typical. That just shifts the curve up and down in the space. Our first feature might be just our first input, like in the hyperplane example. And the second feature could be the second input, like in our hyperplane example, or could be some other function of any of the inputs. Maybe we want to take the log of the seventh input, which happens to be the number of bedrooms, times the number of bathrooms. So, in this case, the second feature of the model relates the log of the number of bedrooms times the number of bathrooms to the output, and then we get all the way up to our capital \(D\) feature, which is some function of any of our inputs to our regression model. $$\begin{aligned} feature \; 1 &= h_0(\textbf{x}) \dots e.g., 1 \\ feature \; 2 &= h_1(\textbf{x}) \dots e.g.
, \textbf{x}[1] =sq. \;ft. \\ feature \; 3 &= h_2(\textbf{x}) \dots \textbf{x}[2] = \#bath \; or, \log(\textbf{x}[7]) \textbf{x}[2] = \log(\#bed) * \#bath \\ \vdots \\ feature \; D+1 &= h_D( \textbf{x}) \dots \text{some other function of} \; \textbf{x}[1], \dots, \textbf{x}[d]\end{aligned}$$So this is our generic multiple regression model with multiple features. Matrix Like the simple linear regression, we’re going to talk about two different algorithms. One is just a closed-form solution and the other is gradient descent and there are gonna be multiple steps that we have to take to build up to deriving these algorithms and the first is simply to rewrite our regression model in the matrix notation. So, we will begin with rewriting our multiple regression model in matrix notation for just a single observation $$ y_i= \sum_{j=0}^D w_j h_j({x}_i) + \epsilon_i $$ and we are gonna write this in matrix notation: $$\begin{aligned} y_i &= \begin{bmatrix}w_0 & w_1 & w_2 & … & w_D\end{bmatrix}\begin{bmatrix} h_0(x_i) \\ h_1(x_i) \\ h_2(x_i) \\ … \\ h_D(x_i)\end{bmatrix} + \epsilon_i \\ &= w_0 h_0({x}_i)+ w_1 h_1({x}_i) + … + w_D h_D({x}_i) + \epsilon_i \\ &= \textbf{w}^T\textbf{h}(\textbf{x}_i) + \epsilon_i \end{aligned}$$ In particular we’re going to think of vectors always as being defined as columns and if it defines a row, then we’re going to call that the transpose. Now, we are going to rewrite our model for all the observations together. 
$$\begin{bmatrix}y_1 \\ y_2 \\ y_3 \\ \vdots \\ y_N \end{bmatrix} = \begin{bmatrix} h_0(x_1) & h_1(x_1) & \dots & h_D(x_1) \\ h_0(x_2) & h_1(x_2) & \dots & h_D(x_2) \\ \vdots & \vdots & \ddots & \vdots \\ h_0(x_N) & h_1(x_N) & \dots & h_D(x_N) \end{bmatrix} \begin{bmatrix}w_0 \\ w_1 \\ w_2 \\ \vdots \\ w_D \end{bmatrix} + \begin{bmatrix}\epsilon_1 \\ \epsilon _2 \\ \epsilon _3 \\ \vdots \\ \epsilon _N \end{bmatrix} $$ So, we get $$\textbf{y} = \textbf{Hw} + \mathbf{\epsilon}$$ Here we can write our entire regression model for \(N\) observations: the \(\textbf{y}\) vector is equal to the \(\textbf{H}\) matrix times the \(\textbf{w}\) vector plus the \(\mathbf{\epsilon}\) vector that represents all the errors in our model. So this is the matrix notation for our model of \(N\) observations. Quality Metric In the simple linear regression model, we used the Residual Sum of Squares (RSS) as the cost function. For any given fit, we define the residual sum of squares of our parameters: $$\begin{aligned}RSS(w_0, w_1) &= \sum_{i=1}^N(y_i - [w_0 + w_1 x_i ])^2 \\ &= \sum_{i=1}^N(y_i - \hat{y}_i(w_0, w_1))^2 \end{aligned}$$ where \( \hat{y}_i\) is the predicted value for \(y_i\), and \(w_0\) and \(w_1\) are the intercept and slope respectively. Now we will explain the residual sum of squares in the case of multiple regression. The residual is the difference between the actual observation and the predicted value. So what is our predicted value for the \(i^{\text{th}}\) observation? Well, in our vector notation, we take each one of the weights in our model and multiply the features for that observation by that weight. So $$\begin{aligned} \hat{y}_i &= \begin{bmatrix} h_0(x_i) & h_1(x_i) & h_2(x_i) & \dots & h_D(x_i) \end{bmatrix}\begin{bmatrix} w_0 \\ w_1 \\ w_2 \\ \vdots \\ w_D \end{bmatrix} \\ &= \textbf{h}(\textbf{x}_i)^T \textbf{w}\end{aligned}$$
So our RSS for multiple regression is going to be: $$\begin{aligned}RSS(\textbf{w}) &= \sum_{i = 1}^N (y_i - \textbf{h}(\textbf{x}_i)^T \textbf{w})^2 \\ &= (\textbf{y} - \textbf{Hw})^T (\textbf{y} - \textbf{Hw}) \end{aligned}$$ So why are these two things equivalent? Well, we’re gonna break up the explanation into parts. We know that \(\hat{\textbf{y}}\), the vector of all of our \(N\) predicted observations, is equal to \(\textbf{H}\) times \(\textbf{w}\), or \(\hat{\textbf{y}} = \textbf{Hw}\), which implies: $$\textbf{y} - \textbf{Hw} = \textbf{y} - \hat{\textbf{y}}$$ This is the equivalent of looking at our vector of actual observed values and subtracting our vector of predicted values. So we take all our house sale prices, and we look at all the predicted house prices, given a set of parameters \(\textbf{w}\), and we subtract them. What is that vector? $$ \textbf{y} - \hat{\textbf{y}} = \begin{bmatrix}residual_1 \\ residual_2\\ \vdots \\residual_N \end{bmatrix}$$ That vector is the vector of residuals, because the result is the difference between our first house sale and our predicted house sale, which we call the residual for the first prediction, and likewise for the second, and all the way up to our \(N^\text{th}\) observation. So the term \(\textbf{y} - \textbf{Hw}\) is equivalent to the vector of the residuals from our predictions. So, $$\begin{aligned} (\textbf{y} - \textbf{Hw})^T (\textbf{y} - \textbf{Hw}) &= \begin{bmatrix}residual_1 & residual_2 & \dots & residual_N \end{bmatrix} \begin{bmatrix}residual_1 \\ residual_2\\ \vdots \\residual_N \end{bmatrix} \\ &= (residual_1^2 + residual_2^2 + \dots + residual_N^2 ) \\ &= \sum _{i=1}^N residual_i^2 \\ &= RSS(\textbf{w}) \end{aligned} $$ By definition, that is exactly what the residual sum of squares is, using these \(\textbf{w}\) parameters. Closed form solution for Multiple Regression Now we’re onto the final important step of the derivation, which is taking the gradient.
The gradient was important both for our closed form solution as well as, of course, for the gradient descent algorithm. So, the gradient is $$\begin{aligned} \nabla RSS(\textbf{w}) &= \nabla[ (\textbf{y} - \textbf{Hw})^T (\textbf{y} - \textbf{Hw})] \\ &= -2\textbf{H}^T(\textbf{y} - \textbf{Hw}) \end{aligned}$$ From calculus we know that at the minimum the gradient will be zero. So, for the closed form solution we take our gradient, set it equal to zero, and solve for \(\textbf{w}\): $$\begin{aligned} \nabla RSS(\textbf{w}) = -2\textbf{H}^T(\textbf{y} - \textbf{Hw}) &= 0 \\ -2\textbf{H}^T \textbf{y} + 2\textbf{H}^T\textbf{Hw} &= 0 \\ \textbf{H}^T\textbf{Hw} &= \textbf{H}^T\textbf{y} \\ \hat{\textbf{w}} &= (\textbf{H}^T \textbf{H})^{-1} \textbf{H}^T\textbf{y} \end{aligned}$$ We have a whole collection of different parameters, \(w_0\), \(w_1\) and all the way up to \(w_D\), multiplying all the features we’re using in our multiple regression model. And in one line we are able to write the solution to the fit using matrix notation. This motivates why we went through all this work to write things in matrix notation: it allows us to have this nice closed form solution for all of our parameters written very compactly. Gradient Descent solution for Multiple Regression The other alternative approach, maybe more useful and simpler, is the gradient descent method, where we’re walking down the surface of the residual sum of squares and trying to get to the minimum. Of course, we might overshoot it and go back and forth, but that’s the general idea of this iterative procedure.
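A small numpy sketch of this closed form (the housing-style numbers below are synthetic, invented purely for illustration): build \(\textbf{H}\) with a constant feature, then solve the normal equations \(\textbf{H}^T\textbf{H}\hat{\textbf{w}} = \textbf{H}^T\textbf{y}\) with a linear solve rather than an explicit inverse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: constant feature plus two inputs (sqft, bathrooms)
N = 200
sqft = rng.uniform(500, 5000, N)
baths = rng.integers(1, 5, N).astype(float)
H = np.column_stack([np.ones(N), sqft, baths])    # N x (D+1) feature matrix

w_true = np.array([50_000.0, 180.0, 25_000.0])    # invented "true" weights
y = H @ w_true + rng.normal(0.0, 1_000.0, N)      # targets with noise eps_i

# Closed-form fit: w_hat = (H^T H)^{-1} H^T y
w_hat = np.linalg.solve(H.T @ H, H.T @ y)
```

Solving the linear system is numerically preferable to forming \((\textbf{H}^T\textbf{H})^{-1}\) explicitly; for badly conditioned feature matrices, `np.linalg.lstsq` is safer still.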
$$\begin{aligned}&\text{while not converged:} \\ &\qquad \textbf{w}^{(t+1)} \leftarrow \textbf{w}^{(t)} - \eta \nabla RSS(\textbf{w}^{(t)}) \\ &\qquad\phantom{\textbf{w}^{(t+1)}} \leftarrow \textbf{w}^{(t)} + 2\eta\textbf{H}^T(\textbf{y} - \textbf{H}\textbf{w}^{(t)})\end{aligned}$$ What this version of the algorithm is doing is taking our entire \(\textbf{w}\) vector, all the regression coefficients in our model, and updating them all at once using the matrix notation shown here. Now that we have finished the theoretical part of the tutorial, you can see the code and try to understand the different blocks of the code. Also published on Medium.
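A matching numpy sketch of the gradient descent update (again on synthetic data; the step size \(\eta\), tolerance, and iteration cap are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: feature matrix H with a constant feature, targets y
N = 200
x1 = rng.uniform(0, 1, N)
x2 = rng.uniform(0, 1, N)
H = np.column_stack([np.ones(N), x1, x2])
w_true = np.array([1.0, 2.0, -3.0])
y = H @ w_true + rng.normal(0.0, 0.01, N)

def gradient_descent(H, y, eta=0.001, tol=1e-10, max_iter=100_000):
    """Iterate w^{(t+1)} = w^{(t)} + 2*eta*H^T (y - H w^{(t)}) until the
    step size falls below tol (declared 'converged') or max_iter is hit."""
    w = np.zeros(H.shape[1])
    for _ in range(max_iter):
        grad = -2.0 * H.T @ (y - H @ w)   # gradient of RSS(w)
        w_new = w - eta * grad
        if np.max(np.abs(w_new - w)) < tol:
            return w_new
        w = w_new
    return w

w_hat = gradient_descent(H, y)
```

For a fixed \(\eta\) the iteration converges only if \(\eta\) is small relative to the largest eigenvalue of \(2\textbf{H}^T\textbf{H}\); in practice one tunes the step size or uses a line search.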
We will use the value of $10^{12}\,\text{kg}$, proposed in the OP, for the black hole initial mass for estimates. This value is large enough that the evaporation time through Hawking radiation for such a black hole is longer than the current age of the Universe; moreover, the accretion rates if such a black hole is placed inside the Earth would exceed the loss of mass through Hawking radiation by at least an order of magnitude, so such a black hole would be gradually consuming Earth, with its mass growing exponentially. The characteristic time for such growth through accretion can be estimated through the Eddington luminosity limit: $$\tau_\text{E} = \frac{\eta}{4\pi} \, \frac{\sigma_\text{T} c}{ G m_p} \simeq 2.6\times10^7 \,\text{yr}.$$ Here $\sigma_\text{T}$ is the Thomson scattering cross-section, $m_p$ is the mass of a proton, and $\eta$ is the efficiency of conversion of accreting mass into radiation, which we assume to be about $5\,\%$. Of course, mass absorption would slow our black hole down, but the main mechanism responsible for the dampening of black hole oscillations is dynamical friction (a.k.a. gravitational drag), because our black hole loses velocity not only through direct absorption of mass/energy but also through long-range gravitational interaction with the surrounding medium. The characteristic decay timescale of such a process is shorter (for oscillations which would reach the surface of the Earth) and can be estimated as $$\tau_\text{d}=\frac{v^3}{9\pi G^2 M \rho \ln \Lambda} \simeq 2.5\times 10^6 \times\left(\frac{M}{10^{12}\,\text{kg}} \right)^{-1}\left(\frac{v}{8 \, \text{km/s}}\right)^3 \, \text{yr},$$ where $M$ is the black hole mass, $v$ is its average velocity (for which we assume a value of $8\,\text{km/s}$), $\rho$ is the average density of the material through which the black hole is moving ($\rho\simeq 5 \, \text{g/cm}^3$), and $\ln \Lambda$ is the gravitational Coulomb logarithm.
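A quick numerical sketch of both estimates (my own; the answer does not give $\ln\Lambda$, so $\ln\Lambda \approx 10$ is an assumption chosen to land near its $2.5\times10^6$ yr figure):

```python
import math

# Physical constants (SI)
G       = 6.674e-11    # m^3 kg^-1 s^-2
c       = 2.998e8      # m/s
m_p     = 1.6726e-27   # kg, proton mass
sigma_T = 6.6524e-29   # m^2, Thomson cross-section
YR      = 3.156e7      # s per year

eta = 0.05             # radiative efficiency, as stated in the answer

# Eddington-limited e-folding timescale for the black hole mass
tau_E = eta / (4 * math.pi) * sigma_T * c / (G * m_p) / YR

# Dynamical-friction decay timescale for the quoted fiducial values
M   = 1e12             # kg, black hole mass
v   = 8e3              # m/s, average velocity
rho = 5e3              # kg/m^3, average density of the surrounding rock
lnL = 10               # assumed Coulomb logarithm (not given in the answer)
tau_d = v**3 / (9 * math.pi * G**2 * M * rho * lnL) / YR
```

This yields $\tau_\text{E}$ of a couple $\times 10^7$ yr and $\tau_\text{d}$ of a couple $\times 10^6$ yr, consistent with the quoted values.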
We see that the dampening effect from the dynamical friction is proportional to the black hole mass (decay timescales are shorter for larger black holes), and is inversely proportional to the cube of the black hole velocity, so once the black hole has lost most of its velocity and is oscillating around the core, the dampening is greatly increased and the decay timescales become much shorter. Within a few million years our black hole would be oscillating with much smaller amplitude deep in the Earth's core, where it will keep growing, doubling in mass every dozen million years. And by the time the black hole accretion rates become large enough for the accompanying radiation to heat the Earth's surface enough to make it uninhabitable, several dozen million years will have passed.
Descriptive modelling of utterance transformations in chains: short-term linguistic evolution in a large-scale online experiment Sébastien Lerique IXXI, École Normale Supérieure de Lyon HBES 2018, Amsterdam Camille Roth médialab, Science Po, Paris Adamic et al. (2016) Online opinion dynamics Transmission chains for cultural evolution theories Short-term cultural evolution Reviews by Mesoudi and Whiten (2008) Whiten et al. (2016) Leskovec et al. (2009) Boyd & Richerson (1985, 2005) Sperber (1996) Hierarchicalisation (Mesoudi and Whiten, 2004) Social & survival information (Stubbersfield et al., 2015) Negativity (Bebbington et al., 2017) Manual Automated (computational) In vitro In vivo Moussaïd et al. (2015) Lauf et al. (2013) Lerique & Roth (2018) Danescu-Niculescu-Mizil et al. (2012) Quotes in 2011 news on Strauss-Kahn Claidière et al. (2014) Cornish et al. (2013) Word list recall (Zaromb et al. 2006) Simple sentence recall (Potter & Lombardi 1990) Evolution of linguistic content No ecological content Experiment setup Control over experimental setting Fast iterations Scale similar to in vivo Browser-based Story transformations At Dover, the finale of the bailiffs' convention. Their duties, said a speaker, are "delicate, dangerous, and insufficiently compensated." depth in branch At Dover, the finale of the bailiffs convention,their duty said a speaker are delicate, dangerous and detailed At Dover, at a Bailiffs convention. a speaker said that their duty was to patience, and determination In Dover, at a Bailiffs convention, the speaker said that their duty was to patience. In Dover, at a Bailiffs Convention, the speak said their duty was to patience At Dover, the finale of the bailiffs' convention. Their duties, said a speaker, are "delicate, dangerous, and insufficiently compensated." 
At Dover, the finale of the bailiffs convention,their duty said a speaker are delicate, dangerous and detailed Modelling transformations Needleman and Wunsch (1970) AGAACT- | ||-G-AC-G AGAACTGACG Finding her son, Alvin, 69, hanged, Mrs Hunt, of Brighton, was so depressed she could not cut him down Finding her son Arthur 69 hanged Mrs Brown from Brighton was so upset she could not cut him down Finding her son Alvin 69 hanged Mrs Hunt of - - Brighton, was so depressed she could not cut him down Finding her son Arthur 69 hanged Mrs - - Brown from Brighton was so upset she could not cut him down Apply to utterances using NLP At Dover, the finale of the bailiffs convention, their duty said a speaker are delicate, dangerous and detailed At Dover, at a Bailiffs convention. a speaker said that their duty was to patience, and determination At Dover the finale of the - - bailiffs convention - - - - their duty At Dover - - - - at a Bailiffs convention a speaker said that their duty said a speaker are delicate dangerous - - - and detailed - - - - - - - was to patience and - determination At Dover the finale of the - - bailiffs convention |-Exchange-1------| their duty At Dover - - - - at a Bailiffs convention a speaker said that their duty said a speaker are delicate dangerous - - - and detailed -|-Exchange-1------------------------| was to patience and - determination said a speaker are delicate dangerous |-E2----||E2| a speaker - - - said that said -said that \(\hookrightarrow E_1\) \(\hookrightarrow E_2\) Extend to build recursive deep alignments Deep sequence alignments Transformation diagrams Results Deletion Insertion Replacement Position in \(u\) \(|u|_w\) Number of operations vs. utterance length Susceptibility vs. position in utterance Deletions tend to gate other operations Insertions relate to preceding deletions Stubbersfield et al. (2015) Bebbington et al.
(2017) Links the low-level with contrasted outcomes Conclusion In vivo & oral applications (social networks) Parsimonious explanations of higher level evolution Semantic parses, long-lived chains with recurring changes Lots to do Quantitative analysis of changes Inner structure of transformations WEIRD participants Written text No controlled context Caveats Very Soon™ on arXiv.org Challenges with meaning Can you think of anything else, Barbara, they might have told me about that party? I've spoken to the other children who were there that day. S B Abuser The Devil's Advocate (1997) ? Strong pragmatics (Scott-Phillips, 2017) Access to context Theory of the constitution of meaning Challenges Seeds #participants #root utterances tree size Duration Spam rate Usable reformulations 53 49 2 x 70 54 50 25/batch 48 49 70 64min 43min 37min/batch 22.4% + 3.5% 0.8% + 0.6% 1% + 0.1% 1980 2411 3506 Pilot 1 Exp. A Exp. A' Exp. A Memorable/non-memorable quote pairs (Danescu-Niculescu-Mizil et al., 2012) Exp. A' Nouvelles en trois lignes (Fénéon, 1906) “Hanging on to the door, a traveller a tad overweight caused his carriage to topple, in Bromley, and fractured his skull.” “Three bears driven down from the heights of the Pyrenees by snow have been decimating the sheep of the valley.” “A dozen hawkers who had been announcing news of a nonexistent anarchist bombing at King's Cross have been arrested.” Live experiment Alignment optimisation \(\theta_{open}\) \(\theta_{extend}\) \(\theta_{mismatch}\) \(\theta_{exchange}\) by hand All transformations Hand-coded training set size? Train the \(\theta_*\) on hand-coded alignments Simulate the training process: imagine we know the optimal \(\theta\) 1. Sample \(\theta^0 \in [-1, 0]^3\) to generate artificial alignments for all transformations 2. From those, sample \(n\) training alignments 3. Brute-force \(\hat{\theta}_1, ..., \hat{\theta}_m\) estimators of \(\theta_0\) 4. 
Evaluate the number of errors per transformation on the test set Test set 10x 10x \(\Longrightarrow\) 100-200 hand-coded alignments yield \(\leq\) 1 error/transformation Gap open cost \(\rightarrow \theta_{open}\) Gap extend cost \(\rightarrow \theta_{extend}\) Item match-mismatch Example data Immediately after I become president I will confront this economic challenge head-on by taking all necessary steps immediately after I become a president I will confront this economic challenge Immediately after I become president, I will tackle this economic challenge head-on by taking all the necessary steps This crisis did not develop overnight and it will not be solved overnight the crisis did not developed overnight, and it will be not solved overnight original This, crisis, did, not, develop, overnight, and, it, will, not, be, solved, overnight this, crisis, did, not, develop, overnight, and, it, will, not, be, solved, overnight this, crisis, did, not, develop, overnight, and, it, will, not, be, solved, overnight crisi, develop, overnight, solv, overnight tokenize lowercase & length > 2 stopwords stem The crisis didn't happen today won't be solved by midnight. crisi, happen, today, solv, midnight d = 0,6 Utterance-to-utterance distance Aggregate trends Size reduction Transmissibility Variability Lexical evolution - POS Step-wise Susceptibility Lexical evolution (1) Step-wise Susceptibility Feature variation Lexical evolution (2) Along the branches Descriptive modelling of utterance transformations in chains: short-term linguistic evolution in a large-scale online experiment By Sébastien Lerique
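The Needleman-Wunsch-style alignments these slides rely on can be sketched in a few lines. Below is a minimal global aligner with a single linear gap cost; this is a simplification of the talk's model, which tunes separate gap-open, gap-extend, mismatch and exchange costs \(\theta_*\), and the scores here are illustrative stand-ins:

```python
# Minimal Needleman-Wunsch global alignment over two sequences.
# Scores (match/mismatch/gap) are illustrative, not the tuned theta_* costs.
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    n, m = len(a), len(b)
    # score[i][j] = best score aligning a[:i] with b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    # Traceback: rebuild the two gapped sequences ('-' marks a gap).
    out_a, out_b = [], []
    i, j = n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + (
                match if a[i - 1] == b[j - 1] else mismatch):
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            out_a.append(a[i - 1]); out_b.append('-'); i -= 1
        else:
            out_a.append('-'); out_b.append(b[j - 1]); j -= 1
    return ''.join(reversed(out_a)), ''.join(reversed(out_b))
```

Affine gap penalties (distinct open and extend costs) and the detection of exchanged blocks would be layered on top of this same dynamic program.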
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional... no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to working with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only). Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine. This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $\operatorname{rank}(A) = \operatorname{rank}(\mathbb{Z}^k)$, right? For four proper fractions $a, b, c, d$, X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if $a, b, c$ are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right. Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this on a classification of groups of order $p^2qr$. The order of $H$ there should be $qr$, and it is present as $G = C_{p}^2 \rtimes H$. I want to know whether we can determine the structure of $H$ from the given data. Like can we think $H=C_q \times C_r$ or something like that? When we say it embeds into $GL(2,p)$, does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities? When considering finite groups $G$ of order $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be the Fitting subgroup of $G$.
Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/\phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$. Now suppose $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$. In this case how can I write $G$ using notations/symbols? Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$? First question: Then it is $G= F \rtimes (C_p \times C_q)$. But how do we write $F$? Do we have to think of all the possibilities for $F$ of order $pr$ and write $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$, etc.? As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how do we write $G$ using notations? It is also mentioned there that we can distinguish between 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q \mid (p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$, and for every word $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group, one of the $u_i$ is a subword of $w$. If you have such a presentation there's a trivial algorithm to solve the word problem: take a word $w$, check if it has some $u_i$ as a subword, in that case replace it by $v_i$, and keep doing so until you hit the trivial word or find no $u_i$ as a subword. There is good motivation for such a definition here. So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$. It has to be an isometry fixing a geodesic $\gamma$ whose endpoints at the boundary are the same as the endpoints of $\widetilde{\alpha}$. Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straight-line homotopy, with the straight lines taken to be hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so it projects to a homotopy downstairs of $\alpha$ to the image of $\gamma$ (which is a geodesic in $S$), and you have your desired representative. I don't know how to interpret this coarsely in $\pi_1(S)$. @anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and an offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk; it's all economy of scale. @ParasKhosla Yes, I am Indian, and trying to get into some good master's program in math.
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. The first branch studies graphs in connection with linear algebra; especially, it studies the spectrum of the adjacency matrix, or the Lap... I can probably guess that they are using symmetries and permutation groups on graphs in this course. For example, orbits and studying the automorphism groups of graphs. @anakhro I have heard really good things about Palka. Also, if you do not worry about a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix! Got a simple question: I gotta find the kernel of the linear transformation $F(P)=xP''(x) + (x+1)P'''(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$, since only polynomials of degree at most 1 would give the zero polynomial in this case. @chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + \text{const} \implies G = C(1+x)e^{-x}$, which is not a polynomial for $C \neq 0$, so $G = 0$ and thus $P = ax + b$. Could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
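That kernel computation is easy to double-check mechanically. A small sketch representing elements of $\mathbb{R}_3[x]$ as coefficient lists (the helper names are mine, not from the chat):

```python
# Represent P in R_3[x] by its coefficients [p0, p1, p2, p3] (p_k multiplies x^k).
def deriv(p):
    """Formal derivative of a coefficient list."""
    return [k * p[k] for k in range(1, len(p))] or [0]

def poly_mul(p, q):
    """Product of two coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def poly_add(p, q):
    """Sum of two coefficient lists (padded to equal length)."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def F(p):
    """F(P) = x P'' + (x+1) P'''."""
    p2 = deriv(deriv(p))
    p3 = deriv(p2)
    return poly_add(poly_mul([0, 1], p2), poly_mul([1, 1], p3))
```

Applying `F` to a generic cubic $ax^3+bx^2+cx+d$ gives $6ax^2+(6a+2b)x+6a$, which vanishes exactly when $a=b=0$, confirming $\ker(F)=\{ax+b\}$.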
2.2.5 - Measures of Spread Variance and standard deviation are measures of variability. The standard deviation is the most commonly used measure of variability when data are quantitative and approximately normally distributed. When computing the standard deviation by hand, it is necessary to first compute the variance. The standard deviation is equal to the square root of the variance. Here, you will learn how to compute these values by hand. After this lesson, you will always be computing standard deviation using software such as Minitab Express. Standard Deviation Roughly the average difference between individual data values and the mean. The standard deviation of a sample is denoted as \(s\). The standard deviation of a population is denoted as \(\sigma\). Sample Standard Deviation \(s=\sqrt{\dfrac{\sum (x-\overline{x})^{2}}{n-1}}\) In order to compute the standard deviation for a sample we first compute deviations. The sum of the squared deviations (SS) divided by \(n-1\) is the variance (\(s^2\)). The square root of the variance is the standard deviation: \(\sqrt{s^2}=s\). Deviation An individual score minus the mean. Sum of Squared Deviations Deviations squared and added together. This is also known as the sum of squares or SS. Variance Approximately the average of all of the squared deviations; for a sample, represented as \(s^{2}\). Sum of Squares \(SS={\sum (x-\overline{x})^{2}}\) Sample Variance \(s^{2}=\dfrac{\sum (x-\overline{x})^{2}}{n-1}\) There are a number of methods for calculating the standard deviation. If you look through different textbooks or search online, you may find different formulas and procedures. To compute the standard deviation for a sample, we will use the formulas above and the following steps: Step 1: Compute the sample mean: \(\overline{x} = \frac{\sum x}{n}\). Step 2: Subtract the sample mean from each individual value: \(x-\overline{x}\); these are the deviations.
Step 3: Square each deviation: \((x-\overline{x})^{2}\); these are the squared deviations. Step 4: Add the squared deviations: \(\sum (x-\overline{x})^{2}\); this is the sum of squares. Step 5: Divide the sum of squares by \(n-1\): \(\frac{\sum (x-\overline{x})^{2}}{n-1}\); this is the sample variance \((s^{2})\). Step 6: Take the square root of the sample variance: \(\sqrt{\frac{\sum (x-\overline{x})^{2}}{n-1}}\); this is the sample standard deviation. Example: Hours Spent Studying Section A professor asks a sample of 7 students how many hours they spent studying for the final. Their responses are: 5, 7, 8, 9, 9, 11, and 13. Step 1: Compute the mean \(\overline{x} = \dfrac{\sum x}{n}=\dfrac{5+7+8+9+9+11+13}{7}=8.857\) Step 2: Compute the deviations \(x\) \(x - \overline{x}\) 5 \(5 - 8.857 = -3.857\) 7 \(7 - 8.857 = -1.857\) 8 \(8 - 8.857 = -0.857\) 9 \(9 - 8.857 = 0.143\) 9 \(9 - 8.857 = 0.143\) 11 \(11 - 8.857 = 2.143\) 13 \(13 - 8.857 = 4.143\) Step 3: Square the deviations \(x\) \(x - \overline{x}\) \((x-\overline{x})^{2}\) 5 \(5 - 8.857 = -3.857\) \((-3.857)^{2} = 14.876\) 7 \(7 - 8.857 = -1.857\) \((-1.857)^{2} = 3.448\) 8 \(8 - 8.857 = -0.857\) \((-0.857)^{2} = 0.734\) 9 \(9 - 8.857 = 0.143\) \((0.143)^{2} = 0.020\) 9 \(9 - 8.857 = 0.143\) \((0.143)^{2} = 0.020\) 11 \(11 - 8.857 = 2.143\) \((2.143)^{2} = 4.592\) 13 \(13 - 8.857 = 4.143\) \((4.143)^{2} = 17.164\) Step 4: Sum the squared deviations \(SS=\sum (x-\overline{x})^{2}=14.876+3.448+0.734+0.020+0.020+4.592+17.164=40.854\) The sum of squares is 40.854. Step 5: Divide by \(n - 1\) to compute the variance \(s^{2}=\dfrac{\sum (x-\overline{x})^{2}}{n-1}=\dfrac{40.854}{7-1}=6.809\) The variance is 6.809. Step 6: Take the square root of the variance \(s=\sqrt{s^{2}}=\sqrt{6.809}=2.609\) The standard deviation is 2.609.
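The six steps above can be mirrored directly in code; the Python standard library's statistics module computes the same \(n-1\) (sample) versions, so it can serve as a cross-check:

```python
import statistics

data = [5, 7, 8, 9, 9, 11, 13]    # hours spent studying

# Step 1: sample mean
xbar = sum(data) / len(data)
# Steps 2-4: deviations, squared deviations, and their sum (SS)
ss = sum((x - xbar) ** 2 for x in data)
# Step 5: sample variance (divide by n - 1)
s2 = ss / (len(data) - 1)
# Step 6: sample standard deviation
s = s2 ** 0.5

# The stdlib agrees with the step-by-step computation:
assert abs(s2 - statistics.variance(data)) < 1e-9
assert abs(s - statistics.stdev(data)) < 1e-9
```

Note that `statistics.pvariance` and `statistics.pstdev` would instead give the population versions (dividing by \(n\)).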
Karl Theodor Wilhelm Weierstrass (German: Weierstraß; 31 October 1815 – 19 February 1897) was a German mathematician often cited as the "father of modern analysis". Despite leaving university without a degree, he studied mathematics and trained as a teacher, eventually teaching mathematics, physics, botany and gymnastics. Weierstrass formalized the definition of the continuity of a function, and used it and the concept of uniform convergence to prove the Bolzano–Weierstrass theorem and Heine–Borel theorem. Biography Weierstrass was born in Ostenfelde, part of Ennigerloh, Province of Westphalia. [1] Weierstrass was the son of Wilhelm Weierstrass, a government official, and Theodora Vonderforst. His interest in mathematics began while he was a Gymnasium student at Theodorianum in Paderborn. He was sent to the University of Bonn upon graduation to prepare for a government position. Because his studies were to be in the fields of law, economics, and finance, he was immediately in conflict with his hopes to study mathematics. He resolved the conflict by paying little heed to his planned course of study, but continued private study in mathematics. The outcome was to leave the university without a degree. After that he studied mathematics at the University of Münster (which even at this time was very famous for mathematics), and his father was able to obtain a place for him in a teacher training school in Münster. Later he was certified as a teacher in that city. During this period of study, Weierstrass attended the lectures of Christoph Gudermann and became interested in elliptic functions. In 1843 he taught in Deutsch-Krone in West Prussia, and from 1848 he taught at the Lyceum Hosianum in Braunsberg.
Besides mathematics he also taught physics, botany and gymnastics. [1] Weierstrass may have had an illegitimate child named Franz with the widow of his friend Carl Wilhelm Borchardt. [2] After 1850 Weierstrass suffered from a long period of illness, but was able to publish papers that brought him fame and distinction. In 1856 he took a chair at the Gewerbeinstitut, which later became the Technical University of Berlin. In 1864 he became professor at the Friedrich-Wilhelms-Universität Berlin, which later became the Humboldt Universität zu Berlin. He was immobile for the last three years of his life, and died in Berlin from pneumonia. Mathematical contributions Soundness of calculus Weierstrass was interested in the soundness of calculus. At the time, there were somewhat ambiguous definitions regarding the foundations of calculus, and hence important theorems could not be proven with sufficient rigour. While Bolzano had developed a reasonably rigorous definition of a limit as early as 1817 (and possibly even earlier), his work remained unknown to most of the mathematical community until years later, and many had only vague definitions of limits and continuity of functions. Delta-epsilon proofs are first found in the works of Cauchy in the 1820s. [3] [4] Cauchy did not clearly distinguish between continuity and uniform continuity on an interval. Notably, in his 1821 Cours d'analyse, Cauchy argued that the (pointwise) limit of (pointwise) continuous functions was itself (pointwise) continuous, a statement interpreted as being incorrect by many scholars. The correct statement is rather that the uniform limit of continuous functions is continuous (also, the uniform limit of uniformly continuous functions is uniformly continuous). This required the concept of uniform convergence, which was first observed by Weierstrass's advisor, Christoph Gudermann, in an 1838 paper, where Gudermann noted the phenomenon but did not define it or elaborate on it.
Weierstrass saw the importance of the concept, and both formalized it and applied it widely throughout the foundations of calculus. The formal definition of continuity of a function, as formulated by Weierstrass, is as follows: \(f(x)\) is continuous at \(x = x_0\) if \(\forall\ \varepsilon > 0\ \exists\ \delta > 0\) such that for every \(x\) in the domain of \(f\), \(|x-x_0| < \delta \Rightarrow |f(x) - f(x_0)| < \varepsilon\). Using this definition and the concept of uniform convergence, Weierstrass was able to write proofs of several theorems such as the intermediate value theorem, the Bolzano–Weierstrass theorem, and the Heine–Borel theorem. Calculus of variations Weierstrass also made significant advancements in the field of calculus of variations. Selected works Zur Theorie der Abelschen Funktionen (1854) Theorie der Abelschen Funktionen (1856) Abhandlungen-1 // Math. Werke. Bd. 1. Berlin, 1894 Abhandlungen-2 // Math. Werke. Bd. 2. Berlin, 1895 Abhandlungen-3 // Math. Werke. Bd. 3. Berlin, 1903 Vorl. ueber die Theorie der Abelschen Transcendenten // Math. Werke. Bd. 4. Berlin, 1902 Vorl. ueber Variationsrechnung // Math. Werke. Bd. 7. Leipzig, 1927 Honours and awards The lunar crater Weierstrass is named after him. See also List of things named after Karl Weierstrass. References ^ a b O'Connor, J. J.; Robertson, E. F. (October 1998). "Karl Theodor Wilhelm Weierstrass". School of Mathematics and Statistics, University of St Andrews, Scotland. Retrieved 7 September 2014. ^ Biermann, Kurt-R.; Schubring, Gert (1996). "Einige Nachträge zur Biographie von Karl Weierstraß [Some postscripts to the biography of Karl Weierstrass]". History of Mathematics. San Diego, CA: Academic Press. pp. 65–91. ^ Grabiner, Judith V. (March 1983). "Who Gave You the Epsilon? Cauchy and the Origins of Rigorous Calculus" (PDF). The American Mathematical Monthly 90 (3): 185–194. External links
Karl Weierstrass at the Mathematics Genealogy Project. Digitalized versions of Weierstrass's original publications are freely available online from the library of the Berlin-Brandenburgische Akademie der Wissenschaften. Works by Karl Weierstrass at Project Gutenberg. Works by or about Karl Weierstrass at Internet Archive.
I would like your help to understand the concept of expansion of an information structure in the incomplete information game at pp. 6-9 of this paper. Let me summarise the game as described in the paper. There are $N\in \mathbb{N}$ players, with $i$ denoting a generic player. There is a finite set of states $\Theta$, with $\theta$ denoting a generic state. A basic game $G$ consists of: for each player $i$, a finite set of actions $A_i$, where we write $A\equiv A_1\times A_2\times ... \times A_N$, and a utility function $u_i: A\times \Theta \rightarrow \mathbb{R}$; and a full support prior $\psi\in \Delta(\Theta)$. An information structure $S$ consists of: for each player $i$, a finite set of signals $T_i$, where we write $T\equiv T_1\times T_2\times ... \times T_N$; and a signal distribution $\pi: \Theta \rightarrow \Delta(T)$. A decision rule of the incomplete information game $(G,S)$ is a mapping $$ \sigma: T\times \Theta\rightarrow \Delta(A) $$ Expansion: Consider two information structures, $S^1\equiv (T^1, \pi^1)$ and $S^2\equiv (T^2, \pi^2)$. We say that $S^*\equiv (T^*, \pi^*)$ is a combination of $S^1$ and $S^2$ if $T_i^*=T_i^1\times T_i^2$ $\forall i$, and $\pi^*:\Theta \rightarrow \Delta(T^1\times T^2)$ has $\pi^1$ and $\pi^2$ as marginals. An information structure $S^*$ is an expansion of an information structure $S^1$ if there exists an information structure $S^2$ such that $S^*$ is a combination of $S^1$ and $S^2$. My question: The game, as described by the authors, seems to assume that, before receiving the signal $T_i$, each player $i$ knows nothing about what the realisation of the state will be. I call this the baseline level of information assumed. (For example, in other contexts, one may assume that the state is a vector of size $N\times 1$ and, before receiving the signal $T_i$, each player $i$ knows the realisation of the $i$th component of such a vector.
This would correspond to another kind of baseline level of information.) Let $\underline{S}$ denote the information structure that is totally uninformative, i.e., one that does not add anything to the baseline level of information assumed (also called DEGENERATE at p. 26 of the linked paper). In other words, $\underline{S}$ consists of (a) for each player $i$, a finite set of signals $T_i$, where we write $T\equiv T_1\times T_2\times ... \times T_N$, and (b) a signal distribution $\pi: \Theta \rightarrow \Delta(T)$ such that $\pi(\cdot|\theta)=\tilde{\pi}$ $\forall \theta \in \Theta$ for some $\tilde{\pi}\in \Delta(T)$. In other words, the conditional probability is equal to the unconditional one, and our belief about the probability distribution of the state is never updated. Notice that there are many ways to characterise the uninformative information structure (just by varying $T$ and $\tilde{\pi}$). Let $\mathcal{S}$ denote the collection of all possible information structures. More precisely, $$ \mathcal{S}\equiv \{S \mid T \text{ is a separable metric space}, \text{ $\pi:\Theta \rightarrow \Delta(T)$ is a probability measure on $(T,\mathcal{B}(T))$}\} $$ where $\mathcal{B}(\cdot)$ denotes the Borel sigma algebra. Note that $\mathcal{S}$ contains all possible ways to characterise the uninformative information structure. Question: Can we show that, for a given $\underline{S}$, each $S\in \mathcal{S}$ is an expansion of $\underline{S}$ (including $S=\underline{S}$)? This seems to me to hold in light of Theorem 1 combined with the passage at p. 26 of the paper: "Now consider the case where the original information structure is degenerate (there is only one signal which represents the prior over the states of the world). In this case, the set of Bayes correlated equilibria correspond to joint distributions of actions and states that could arise under rational choice by a decision maker with any information structure."
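One concrete way to see why combining any structure with an uninformative one satisfies the definition is that the product coupling always has the required marginals. A toy numerical sketch (the distributions and signal sets below are made up for illustration, not taken from the paper):

```python
# Check that the product coupling pi*(t1, t2 | theta) = pi1(t1|theta) * pi2(t2|theta)
# has pi1 and pi2 as its marginals -- the condition defining a combination.
theta_states = [0, 1]
T1 = [0, 1]          # signals of an uninformative structure
T2 = [0, 1, 2]       # signals of an arbitrary structure

pi1 = {0: [0.5, 0.5], 1: [0.5, 0.5]}              # same for every theta: uninformative
pi2 = {0: [0.7, 0.2, 0.1], 1: [0.1, 0.2, 0.7]}    # informative about theta

def combine(p1, p2):
    """One valid pi*: make the two signals conditionally independent given theta."""
    return {th: {(t1, t2): p1[th][t1] * p2[th][t2] for t1 in T1 for t2 in T2}
            for th in theta_states}

def marginal1(p, th):
    return [sum(p[th][t1, t2] for t2 in T2) for t1 in T1]

def marginal2(p, th):
    return [sum(p[th][t1, t2] for t1 in T1) for t2 in T2]

pistar = combine(pi1, pi2)
```

The definition of a combination only pins down the two marginals, so any other coupling of \(\pi^1\) and \(\pi^2\) would serve equally well; the conditionally independent one is just the simplest to write down.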
Quick question... For some reason I'm having trouble finding an identity or discussion for the commutator of the gamma matrices at the moment... i.e. $$\gamma^u\gamma^v-\gamma^v \gamma^u$$ but I am not finding this anywhere. I have an idea of what it may be, but then again I'm not always right. Can anyone fill me in here? (I already know the anticommutator, i.e. $$\gamma^u\gamma^v+\gamma^v\gamma^u=2g^{uv}I.)$$ Although the Clifford algebra $\{\gamma^\mu,\gamma^\nu\}$ is the most famous, there is an expression for the commutator: $$[\gamma^\mu,\gamma^\nu] = 2\gamma^\mu \gamma^\nu - 2 \eta^{\mu\nu}$$ The matrix defined by $[\gamma^\mu,\gamma^\nu]$ actually has a purpose: it forms a representation of the Lorentz algebra. If we define $S^{\mu\nu}$ as $1/4$ the commutator, then we have $$[S^{\mu\nu},S^{\rho\sigma}] = \eta^{\nu\rho}S^{\mu\sigma} - \eta^{\mu\rho}S^{\nu\sigma} + \eta^{\mu\sigma}S^{\nu\rho} - \eta^{\nu\sigma}S^{\mu\rho} = \eta^{\rho[\nu}S^{\mu]\sigma} + \eta^{\sigma [\mu}S^{\nu]\rho}$$ which is the Lorentz algebra. One can verify this by simply using the first commutator, and the rule for the commutator involving a product. There is a particularly important use for the commutator: defining $\sigma^{\mu\nu} = \frac{i}{2} [\gamma^\mu,\gamma^\nu]$, the action of a spin-$\frac32$ particle is given by $$\mathcal L = -\frac{1}{2}\bar{\psi}_\mu \left( \varepsilon^{\mu\lambda \sigma \nu} \gamma_5 \gamma_\lambda \partial_\sigma -im\sigma^{\mu\nu}\right)\psi_\nu,$$ which can be used to describe the superpartner to the graviton, namely the gravitino, thus making it necessary for supergravity theories. The OP wrote in a comment on the commutators of gamma matrices: "to clarify a discussion of what it represents would be useful. I read its related to the Lie Algebra somewhere but as to further details (as in details beyond being a commutator)... not finding them."
In some sense, the Dirac gamma matrices can be identified with mutually orthogonal unit vectors (orts) of the Cartesian basis in 3+1 spacetime, with their anticommutators corresponding to scalar products of the orts (this approach is used, e.g., in the book "The Theory of Spinors" by Cartan, the discoverer of spinors). Then the non-vanishing commutators of gamma matrices (say, in the form $\sigma^{\mu\nu}=\frac{1}{2}[\gamma^\mu,\gamma^\nu]$) can be identified with the so-called bivectors (2-dimensional planes in the spacetime spanned by two orts). @TheDarkSide asked if the commutator is useful anywhere. Some uses are mentioned in other answers, but let me tell you how it was useful for me. Some time ago, I showed that the Dirac equation (which is a system of four first-order equations for four components of the Dirac spinor) is generally equivalent to just one fourth-order equation for one component of the Dirac spinor (http://akhmeteli.org/wp-content/uploads/2011/08/JMAPAQ528082303_1.pdf, published in the Journal of Mathematical Physics). Recently, I derived a relativistically covariant form of the equation (https://arxiv.org/abs/1502.02351, eq. (27)), where the following linear combination of the commutators of gamma matrices is heavily used: $F=\frac{1}{2}F_{\mu\nu}\sigma^{\mu\nu}$, where $F_{\mu\nu}$ is the electromagnetic field. Ok, $\gamma^u\gamma^v=-\gamma^v\gamma^u$ for $u\neq v$. Therefore $\gamma^u\gamma^v-\gamma^v\gamma^u=2\gamma^u\gamma ^v$ for $u \neq v$. The above answers are good, yet I'm surprised that no one has mentioned that the commutator of the Dirac matrices is required in the description of ANY fermion in a general non-flat space. In such a space the Dirac equation reads (utilizing the tetrad formulation): $$\left(i\gamma^{a}e_{a}^{\mu}D_{\mu}-m\right)\psi=0$$ Where: $$D_{\mu}=\partial_{\mu}-\frac{i}{4}\omega_{\mu}^{ab}\sigma_{ab}$$ where $\omega$ is the so-called spin connection and $\sigma$ is defined as in JamalS's answer above.
If you want to do quantum mechanics with fermions in curved spacetime, you will certainly need to utilize the commutator of Dirac matrices.
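The anticommutation relation and the commutator identity quoted in this thread are easy to verify numerically in any concrete representation. A sketch using the Dirac representation with \(\eta = \mathrm{diag}(1,-1,-1,-1)\):

```python
import numpy as np

# Dirac representation of the gamma matrices.
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),     # Pauli x
         np.array([[0, -1j], [1j, 0]], dtype=complex),  # Pauli y
         np.array([[1, 0], [0, -1]], dtype=complex)]    # Pauli z

gamma = [np.block([[I2, Z2], [Z2, -I2]])]                # gamma^0
gamma += [np.block([[Z2, s], [-s, Z2]]) for s in sigma]  # gamma^1..gamma^3

eta = np.diag([1.0, -1.0, -1.0, -1.0]).astype(complex)
I4 = np.eye(4, dtype=complex)

for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        comm = gamma[mu] @ gamma[nu] - gamma[nu] @ gamma[mu]
        # Clifford algebra: {gamma^mu, gamma^nu} = 2 eta^{mu nu} I
        assert np.allclose(anti, 2 * eta[mu, nu] * I4)
        # Hence [gamma^mu, gamma^nu] = 2 gamma^mu gamma^nu - 2 eta^{mu nu} I
        assert np.allclose(comm, 2 * gamma[mu] @ gamma[nu] - 2 * eta[mu, nu] * I4)
```

The second assertion is an algebraic consequence of the first (substitute \(\gamma^\nu\gamma^\mu = 2\eta^{\mu\nu} - \gamma^\mu\gamma^\nu\)), so it holds in every representation, not just this one.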
Assume $$g: R^n \times R^m \rightarrow R^n$$ $$h: R^n \times R^m \rightarrow R$$ $$(x,y) \in R^n \times R^m$$ I would like to show that the following vectors are linearly independent: \begin{equation} \left[ \begin{array}{c} \frac{\partial g_1}{\partial x_1}\\ \vdots\\ \frac{\partial g_1}{\partial x_n}\\ \frac{\partial g_1}{\partial y_1}\\ \vdots\\ \frac{\partial g_1}{\partial y_m}\\ \end{array} \right] \dots \left[ \begin{array}{c} \frac{\partial g_n}{\partial x_1}\\ \vdots\\ \frac{\partial g_n}{\partial x_n}\\ \frac{\partial g_n}{\partial y_1}\\ \vdots\\ \frac{\partial g_n}{\partial y_m}\\ \end{array} \right] \left[ \begin{array}{c} \frac{\partial h}{\partial x_1}\\ \vdots\\ \frac{\partial h}{\partial x_n}\\ \frac{\partial h}{\partial y_1}\\ \vdots\\ \frac{\partial h}{\partial y_m}\\ \end{array} \right] \end{equation} I know, that the matrix with the columns \begin{equation} \left[ \begin{array}{c} \frac{\partial g_1}{\partial x_1}\\ \vdots\\ \frac{\partial g_1}{\partial x_n} \end{array} \right] \dots \left[ \begin{array}{c} \frac{\partial g_n}{\partial x_1}\\ \vdots\\ \frac{\partial g_n}{\partial x_n} \end{array} \right] \end{equation} already is invertible. Because of that, I know that the first $n$ vectors are linearly independent. To show that all $n+1$ vectors are linearly independent, I have to show that \begin{equation} \left[ \begin{array}{c} \frac{\partial g_1}{\partial y_1}\\ \vdots\\ \frac{\partial g_1}{\partial y_m}\\ \end{array} \right] \dots \left[ \begin{array}{c} \frac{\partial g_n}{\partial y_1}\\ \vdots\\ \frac{\partial g_n}{\partial y_m}\\ \end{array} \right] \left[ \begin{array}{c} \frac{\partial h}{\partial y_1}\\ \vdots\\ \frac{\partial h}{\partial y_m}\\ \end{array} \right] \end{equation} also is linearly independent, right? This is equivalent to $\nabla_y h= g_y^\top a$ with $a \in R^n$ not having any solutions. How can I find conditions for which this is true?
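Numerically, independence of such a collection of gradients at a given point can be probed by stacking them as columns and computing the matrix rank. A toy sketch with \(n = 2\), \(m = 1\) and made-up functions \(g\) and \(h\) (not from the question):

```python
import numpy as np

# Toy instance with n = 2, m = 1. The functions are invented for illustration:
#   g(x, y) = (x1 + y, x2 - y),   h(x, y) = x1*x2 + y**2
# Each column below is the gradient of one component w.r.t. (x1, x2, y).
def gradient_columns(x1, x2, y):
    dg1 = [1.0, 0.0, 1.0]         # grad of g_1
    dg2 = [0.0, 1.0, -1.0]        # grad of g_2
    dh = [x2, x1, 2.0 * y]        # grad of h
    return np.column_stack([dg1, dg2, dh])

# Full column rank (= n + 1 = 3) means the gradients are independent at this point.
M = gradient_columns(1.0, 2.0, 3.0)
rank = int(np.linalg.matrix_rank(M))
```

Note the rank can change from point to point: in this toy example it drops to 2 wherever \(2y = x_2 - x_1\), which is exactly where the gradient of \(h\) becomes a linear combination of the first two columns.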
2.2.8 - z-scores Often we want to describe one observation in relation to the distribution of all observations. We can do this using a z-score. z-score Distance between an individual score and the mean in standard deviation units; also known as a standardized score. z-score \(z=\dfrac{x - \overline{x}}{s}\) \(x\) = original data value \(\overline{x}\) = mean of the original distribution \(s\) = standard deviation of the original distribution This equation could also be rewritten in terms of population values: \(z=\frac{x-\mu}{\sigma}\) z-distribution A bell-shaped distribution with a mean of 0 and standard deviation of 1, also known as the standard normal distribution. Example: Milk Section A study of 66,831 dairy cows found that the mean milk yield was 12.5 kg per milking with a standard deviation of 4.3 kg per milking (data from Berry, et al., 2013). A cow produces 18.1 kg per milking. What is this cow’s z-score? \(z=\frac{x-\overline{x}}{s} =\frac{18.1-12.5}{4.3}=1.302\) This cow’s z-score is 1.302; her milk production was 1.302 standard deviations above the mean. A cow produces 12.5 kg per milking. What is this cow’s z-score? \(z=\frac{x-\overline{x}}{s} =\frac{12.5-12.5}{4.3}=0\) This cow’s z-score is 0; her milk production was the same as the mean. A cow produces 8 kg per milking. What is this cow’s z-score? \(z=\frac{x-\overline{x}}{s} =\frac{8-12.5}{4.3}=-1.047\) This cow’s z-score is -1.047; her milk production was 1.047 standard deviations below the mean. Example: BMI of Boys Section A recent study examined the relationship between sedentary behavior and academic performance in youth. In a sample of 582 boys, the average weight was 49.8 kg with a standard deviation of 15.7 kg (data from Esteban-Cornejo, et al., 2015). A boy in this sample weighs 73.35 kg. What is this boy's z-score? \(z=\frac{x-\overline{x}}{s} =\frac{73.35-49.8}{15.7}=1.5\) This boy's z-score is 1.5; he weighs 1.5 standard deviations above the mean. A boy in this sample weighs 38.5 kg. 
What is this boy's z-score? \(z=\frac{x-\overline{x}}{s} =\frac{38.5-49.8}{15.7}=-0.720\) This boy's z-score is -0.720; he weighs 0.720 standard deviations below the mean. Computing z-scores Section Type in the answer you think is correct - then click the 'Check' button to see how you did. Click the right arrow to proceed to the next question. When you have completed all of the questions you will see how many you got right and the correct answers. For each question, compute the z-score. Berry, D. P., Coyne, J., Boughlan, B., Burke, M., McCarthy, J., Enright, B., Cromie, A. R., & McParland, S. (2013). Genetics of milking characteristics in dairy cows. Animal, 7(11), 1750-1758. Esteban-Cornejo, I., Martinez-Gomez, D., Sallis, J. F., Cabanas-Sanchez, V., Fernandez-Santos, J., Costro-Pinero, J., & Veiga, O. L. (2015). Objectively measured and self-reported leisure-time sedentary behavior and academic performance in youth: The UP&DOWN Study. Preventive Medicine, 77, 106-111.
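The z-score computations in this lesson reduce to a single line of code; a minimal sketch using the milk-yield example:

```python
def z_score(x, mean, sd):
    """Distance between a value and the mean, in standard-deviation units."""
    return (x - mean) / sd

# Milk-yield examples (mean 12.5 kg, sd 4.3 kg per milking):
cow_high = round(z_score(18.1, 12.5, 4.3), 3)   # 1.302 sd above the mean
cow_avg = z_score(12.5, 12.5, 4.3)              # exactly 0: equal to the mean
cow_low = round(z_score(8, 12.5, 4.3), 3)       # 1.047 sd below the mean
```

The same function handles the population form \(z=(x-\mu)/\sigma\); only the interpretation of the mean and standard deviation arguments changes.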
Browse Dissertations and Theses - Mathematics by Title
Now showing items 850-869 of 1147

- (1970)
- (1988) Let Y$\sp{\rm (n)}$ be the 1 x n matrix containing indeterminate entries $\{$Y$\sb1,\dots$,Y$\sb{\rm n}\}$ and X$\sp{\rm (n)}$ be the n x n alternating matrix containing indeterminate entries $\{$X$\sb{\rm ij}\vert$1 $\leq$ ...
- (1980) This thesis concerns the relationship between pairs of projectively equivalent Riemannian or Lorentz metrics that share some property along a hypersurface of a manifold. The first chapter is devoted to the construction of ...
- (1993) A holomorphic mapping f from a bounded domain D in $\doubc\sp{n}$ to a bounded domain $\Omega$ in $\doubc\sp{N}$ is proper if the sequence $\{f(zj)\}$ tends to the boundary of $\Omega$ for every sequence $\{zj\}$ which ...
- (1990) A holomorphic mapping f from a bounded domain $\Omega$ in C$\sp{\rm n}$ to a bounded domain $\Omega\sp\prime$ in C$\sp{\rm N}$ is proper if and only if (f(z$\sb\nu$)) tends to the boundary b$\Omega\sp\prime$ for each ...
- (2014-09-16) Let $\mathcal{A}$ be a finite subset of $\mathbb{N}$ including $0$ and $f_\mathcal{A}(n)$ be the number of ways to write $n=\sum_{i=0}^{\infty}\epsilon_i2^i$, where $\epsilon_i\in\mathcal{A}$. The sequence $\left(f_\mat ...
- (2012-05-22) In this dissertation, we consider an applied problem, namely, pursuit-evasion games. These problems are related to robotics, control theory and computer simulations. We want to find the solution curves of differential ...
- (2001) In Chapter 5, we prove two other equations on page 45 of Ramanujan's lost notebook by using the Bauer-Muir transformation and by using the method of successive approximation. Also, we prove several continued fractions of ...
- (2017-07-10) We study tau-functions given as matrix elements for the action of loop groups, $\widehat{GL_n}$ on $n$-component fermionic Fock space. In the simplest case, $n=2$, the tau-functions are equal to Hankel determinants and ...
- (2004) The second part of the thesis is devoted to the qualitative analysis of weighted operators, where the weights are obtained by almost everywhere sampling in a stationary stochastic process. The major theme of our investigation ...
- (2010-08-20) In this thesis, we first use the ${\mathbb C^*}^2$-action on the Hilbert scheme of two points on a Hirzebruch surface to compute all one-pointed and some two-pointed Gromov-Witten invariants via virtual localization, then ...
- (1967)
- (2017-04-21) We introduce and study quasi-elliptic cohomology, a theory related to Tate K-theory but built over the ring $\mathbb{Z}[q^{\pm}]$. In Chapter 2 we build an orbifold version of the theory, inspired by Devoto's equivariant ...
So what are spin-networks? Briefly, they are graphs with representations ("spins") of some gauge group (generally SU(2) or SL(2,C) in LQG) living on each edge. At each non-trivial vertex, one has three or more edges meeting up. What is the simplest purpose of the intertwiner? It is to ensure that angular momentum is conserved at each vertex. For the case of a four-valent vertex we have four spins: $(j_1,j_2,j_3,j_4)$. There is a simple visual picture of the intertwiner in this case. Picture a tetrahedron enclosing the given vertex, such that each edge pierces precisely one face of the tetrahedron. Now, the natural prescription for what happens when a surface is punctured by a spin is to associate the Casimir of that spin $ \mathbf{J}^2 $ with the puncture. The Casimir for spin $j$ has eigenvalues $ j (j+1) $. You can also see these as energy eigenvalues for the quantum rotor model. These eigenvalues are identified with the area associated with a puncture. In order for the said edges and vertices to correspond to a consistent geometry it is important that certain constraints be satisfied. For instance, for a triangle we require that the edge lengths satisfy the triangle inequality $ a + b > c $ (and its permutations) and the angles should add up to $ \angle a + \angle b + \angle c = \kappa \pi$, with $\kappa = 1$ if the triangle is embedded in a flat space and $\kappa \ne 1$ denoting the deviation of the space from zero curvature (positively or negatively curved). In a similar manner, for a classical tetrahedron, it is now the areas of the faces which should satisfy a "closure" constraint. For a quantum tetrahedron these constraints translate into relations between the operators $j_i$ which endow the faces with area. Now, for a triangle, specifying its three edge lengths $(a,b,c)$ completely fixes the angles and there is no more freedom. However, specifying all four areas of a tetrahedron does not fix all the freedom.
The tetrahedron can still be bent and distorted in ways that preserve the closure constraints (not so for a triangle!). These are the physical degrees of freedom that an intertwiner possesses - the various shapes that are consistent with a tetrahedron with face areas given by the spins, or more generally a polyhedron for n-valent vertices. Some of the key players in this arena include, among others, Laurent Freidel, Eugenio Bianchi, E. Magliaro, C. Perini, F. Conrady, J. Engle, Rovelli, R. Pereira, K. Krasnov and Etera Livine. I hope this provides some intuition for these structures. Also, I should add that at present I am working on a review article on LQG for and by "the bewildered". I reserve the right to use any or all of the contents of my answers to this and other questions on physics.se in said work, with proper acknowledgements to all who contribute with questions and comments. This legalese is necessary so nobody comes after me with a bullsh*t plagiarism charge when my article does appear :P

This post imported from StackExchange Physics at 2014-04-01 16:52 (UCT), posted by SE-user user346
I read on the Wikipedia page that bounded variation (BV) functions have only jump-type discontinuities. Why is that? Suppose at some $a\in\mathbb{R}$, the limit $\lim_{x\rightarrow a^+}f(x)$ doesn't exist (or is infinite). Why would such a function $f$ not be BV? If $f$ is unbounded near $a$, it can clearly not have bounded variation, since $$V(f,[a,b]) \geqslant \lvert f(x)-f(a)\rvert$$ for all $x \in (a,b]$. If $-\infty < m = \liminf\limits_{x \searrow a} f(x) < \limsup\limits_{x\searrow a} f(x) = M < +\infty$, let $\varepsilon = (M-m)/3$. Then we can find sequences $x_n \searrow a$ and $y_n \searrow a$ with $x_n > y_n > x_{n+1}$ and $f(y_n) < m+\varepsilon$, $f(x_n) > M-\varepsilon$ for all $n$, so $$V(f,[a,b]) \geqslant \sum_{n=1}^N \lvert f(x_n) - f(y_n)\rvert \geqslant N\cdot \varepsilon.$$ Letting $N\to \infty$, we see that then $V(f,[a,b]) = +\infty$.
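The oscillation argument above can be checked numerically. For $f(x) = \sin(1/x)$ on $(0,1]$ — a standard example where the one-sided limit at $0$ fails to exist — partial sums of the variation grow without bound. A small Python sketch (helper names are just for illustration):

```python
import math

def variation(f, pts):
    # Sum of |f(x_{k+1}) - f(x_k)| over an ordered list of sample points:
    # a lower bound for the total variation of f over [pts[0], pts[-1]]
    return sum(abs(f(pts[k + 1]) - f(pts[k])) for k in range(len(pts) - 1))

f = lambda x: math.sin(1.0 / x)

def extrema(N):
    # At x_n = 1/(pi/2 + n*pi) the function alternates between +1 and -1,
    # mirroring the sequences x_n, y_n used in the argument above
    ps = [1.0 / (math.pi / 2 + n * math.pi) for n in range(N)]
    ps.reverse()  # ascending order inside (0, 1]
    return ps

print(variation(f, extrema(10)))   # ≈ 2*(10-1)  = 18
print(variation(f, extrema(100)))  # ≈ 2*(100-1) = 198, growing linearly in N
```

Each consecutive pair of sample points contributes $|1-(-1)| = 2$, so the partial variation is $2(N-1)$, exactly the $N\cdot\varepsilon$-type lower bound from the proof.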
Stability analysis

The condition of initial stability for any floating structure is expressed using the metacentric heights (see stability):

\(\overline{GM_T} > 0, \qquad \overline{GM_L} > 0\)

where \(\overline{GM_T}\) is the transverse metacentric height and \(\overline{GM_L}\) is the longitudinal metacentric height. These heights can be computed as

\(\overline{GM} = \overline{KB} + \overline{BM} - \overline{KG}\)

where

\(K\) is the lowest point on the vertical going through the gravity center of the box, which means \(\overline{KG} = H/2\)

\(B\) is the buoyancy center of the box.

\(\overline{BM}\) is the transverse/longitudinal metacentric radius.

For a parallelepipedic box, these radii are given simply as

\(\overline{BM_T} = \dfrac{B^2}{12T}, \qquad \overline{BM_L} = \dfrac{L^2}{12T}\)

where \(T\) is the draught. For a parallelepipedic box of uniform density \(c = \dfrac{\rho_{parallelepiped}}{\rho_{water}}\), this draught is \(T = cH\). The buoyancy center of the box is located at the center of the immersed volume, so \(\overline{KB} = \dfrac{cH}{2}\). The conditions of initial stability thus yield

\(\dfrac{B^2}{12cH} + \dfrac{cH}{2} > \dfrac{H}{2}, \qquad \dfrac{L^2}{12cH} + \dfrac{cH}{2} > \dfrac{H}{2}\)

For the previous conditions (\((L,B,H) = (8,4,2)m\) and \(c = 0.5\)), these conditions are true for any density.

Unstable box

For a box of dimensions \((L,B,H) = (5,5,5)m\), the conditions are true only for the density values given in Fig. 28. For a density \(c = 0.5\) and an initial roll angle \(\phi = 2^{\circ}\), the metacentric heights computed on the box mesh are \(GM_T = GM_L = -0.41667\). Negative values induce unstable initial behavior of the box.

Linear approximation

In the linear approximation, the roll and pitch restoring coefficients are computed from the metacentric heights, \(\rho g V \overline{GM_T}\) and \(\rho g V \overline{GM_L}\), where \(V\) is the displacement volume. The roll solution in the linear approximation is given in Fig. 29.

Nonlinear approximation

In the nonlinear approximation, the hydrostatic force and torque are computed on the mesh, following the box in its motions. This means that the metacentric heights can also be computed for each position. A \(2^{\circ}\) initial roll angle will induce a roll and a pitch motion to reach a stable position, see below and Fig. 31. The damping coefficients are taken at \(10^5\) for the rotation degrees of freedom, in order to reduce the computation time. The metacentric heights are shown in Fig. 32: they both start at the negative value given above and finish at a positive value, indicating that the box has reached a stable position.

Box with a growing density

The same box, with a varying density, is considered along with the nonlinear hydrostatic approximation. Fig. 33, Fig. 34 and Fig. 35 illustrate the behavior of the box. We can find the two density values \(c_1 = 0.211\) and \(c_2 = 0.789\) for which the metacentric heights become negative. The box turns over slightly after these two density values, with a delay due to the inertia and damping forces. The first turnover ends in the orientation previously observed (roll at 45 degrees and pitch around 33 degrees). For the second turnover, the box recovers its initial orientation (zero roll and pitch) but with a 15 degrees yaw angle.
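As a cross-check of the numbers quoted above, the metacentric heights of a uniform parallelepipedic box follow directly from \(T = cH\), \(\overline{KB} = T/2\), \(\overline{KG} = H/2\) and \(\overline{BM_T} = B^2/12T\) (resp. \(\overline{BM_L} = L^2/12T\)). A short Python sketch (the function name is just for illustration):

```python
def metacentric_heights(L, B, H, c):
    # Uniform box floating in water: draught T = c*H, KB = T/2, KG = H/2,
    # BM_T = B^2/(12*T) (transverse), BM_L = L^2/(12*T) (longitudinal)
    T = c * H
    KB, KG = T / 2.0, H / 2.0
    GM_T = KB + B**2 / (12.0 * T) - KG
    GM_L = KB + L**2 / (12.0 * T) - KG
    return GM_T, GM_L

print(metacentric_heights(8, 4, 2, 0.5))  # both positive: stable
print(metacentric_heights(5, 5, 5, 0.5))  # ≈ (-0.41667, -0.41667): unstable
```

The cubic box reproduces the value \(GM_T = GM_L = -0.41667\) reported for the mesh computation.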
If you need a pdf version of these notes you can get it here. The 2014 video was corrupted... again. I have made some changes on my computer and hopefully the problem will not happen again.

In a transformer two coils are coupled by an iron core so that the flux through the two coils is the same. The iron core is usually layered with insulating materials to prevent eddy currents. When an AC voltage is applied to the primary coil the magnetic flux passing through it is related to the applied voltage by $V_{P}=N_{P}\frac{d\Phi_{B}}{dt}$ if we assume the coil has no resistance. The emf produced by the changing flux is $\mathcal{E}=-N_{P}\frac{d\Phi_{B}}{dt}$ and this means that the net potential drop in the coil is actually zero (as Kirchhoff's rules say it must be if the resistance of the coil is zero). The voltage induced in the secondary coil will have magnitude $V_{S}=N_{S}\frac{d\Phi_{B}}{dt}$. We can thus see that $\frac{V_{S}}{V_{P}}=\frac{N_{S}}{N_{P}}$. If we assume there is no power loss (which is fairly accurate) then $I_{P}V_{P}=I_{S}V_{S}$ and $\frac{I_{S}}{I_{P}}=\frac{N_{P}}{N_{S}}$. In the example of a transformer we assumed that the flux induced in the secondary coil was equal to the flux in the primary coil. In general for two coils the relationship between the flux in one coil due to the current in another is described by a parameter called the mutual inductance. $\Phi_{21}$ is the magnetic flux in each loop of coil 2 created by the current in coil 1.
The total flux in the second coil is then $N_{2}\Phi_{21}$ and is related to the current in coil 1, $I_{1}$ by $N_{2}\Phi_{21}=M_{21}I_{1}$ As, from Faraday's Law, the emf induced in coil 2 is $\mathcal{E}_{2}=-N_{2}\frac{d\Phi_{21}}{dt}$ so $\mathcal{E}_{2}=-M_{21}\frac{dI_{1}}{dt}$ The mutual inductance of coil 2 with respect to coil 1, $M_{21}$ does not depend on $I_{1}$, but it does depend on factors such as the size, shape and number of turns in each coil, their position relative to each other and whether there is some ferromagnetic material in the vicinity. In the reverse situation where a current flows in coil 2 $\mathcal{E}_{1}=-M_{12}\frac{dI_{2}}{dt}$ but in fact $M_{12}=M_{21}=M$ The mutual inductance is measured in Henrys ($\mathrm{H}$), $1\mathrm{H}=1\mathrm{\frac{Vs}{A}}=1\mathrm{\Omega s}$ To calculate the mutual inductance of the above situation we consider the magnetic field of a solenoid with cross-sectional area $A$ $B=\mu_{0}\frac{N_{1}}{l}I_{1}$ The magnetic flux through the loose coil due to the current in the solenoid is thus $\Phi_{21}=BA=\mu_{0}\frac{N_{1}}{l}I_{1}A$ and the mutual inductance is then $M=\frac{N_{2}\Phi_{21}}{I_{1}}=\frac{\mu_{0}N_{1}N_{2}A}{l}$ If we now consider a coil to which a time varying current is applied a changing magnetic field is produced, which induces an emf in the coil, which opposes the change in flux. The magnetic flux $\Phi_{B}$ passing through the coil is proportional to the current, and as we did for mutual inductance we can define a constant of proportionality between the current and the flux, the self-inductance $L$ $N\Phi_{B}=LI$ The emf $\mathcal{E}=-N\frac{d\Phi_{B}}{dt}=-L\frac{dI}{dt}$ The self-inductance is also measured in henrys. A component in a circuit that has significant inductance is shown by the symbol. When we draw this symbol it implies an inductor with negligible resistance (if the inductor has significant resistance we draw that as a resistor in series with the inductor). 
A large inductor does however reduce the AC current flowing through a circuit because the back emf generated opposes the applied potential, so that the total potential across the inductor is small, and the current is also small. The degree to which an inductor opposes an AC current is called the reactance (more on that later!). We can calculate the self-inductance of a solenoid from its field $B=\mu_{0}\frac{NI}{l}$. The flux in the solenoid is $\Phi_{B}=BA=\mu_{0}\frac{NIA}{l}$ so $L=\frac{N\Phi_{B}}{I}=\frac{\mu_{0}N^{2}A}{l}$. To find the inductance we need to know the total flux that is generated by the current. $\Phi_{B}=\int\vec{B}\cdot d\vec{A}$ From Ampere's law ($\oint\vec{B}\cdot d\vec{l}=\mu_{0}I$) $B=\frac{\mu_{0}I}{2\pi r}$ The magnetic flux through a rectangle of width $dr$ and length $l$ at a distance $r$ from the center is $d\Phi_{B}=B(l\,dr)=\frac{\mu_{0}I}{2\pi r}l\,dr$ The total flux is $\Phi_{B}=\int d\Phi_{B}=\frac{\mu_{0}Il}{2\pi}\int_{r_{1}}^{r_{2}}\frac{dr}{r}=\frac{\mu_{0}Il}{2\pi}\ln\frac{r_{2}}{r_{1}}$ so the inductance is $L=\frac{\Phi_{B}}{I}=\frac{\mu_{0}l}{2\pi}\ln\frac{r_{2}}{r_{1}}$ and the inductance per unit length is $\frac{L}{l}=\frac{\mu_{0}}{2\pi}\ln\frac{r_{2}}{r_{1}}$ When a time varying current $I$ is being carried in an inductor $L$ the power being supplied to the inductor is $P=I\mathcal{E}=LI\frac{dI}{dt}$ The amount of work done in a time $dt$ is $dW=P\,dt=LI\,dI$ so the work done in increasing the current from zero to $I$ is $W=\int dW=\int_{0}^{I}LI\,dI=\frac{1}{2}LI^{2}$ The work done in going from zero to $I$ is equivalent to the energy stored in the magnetic field $U=\frac{1}{2}LI^{2}$ The formula $U=\frac{1}{2}LI^{2}$ can be applied to a solenoid ($L=\frac{\mu_{0}N^{2}A}{l}$ and $B=\mu_{0}\frac{NI}{l}$): $U=\frac{1}{2}\frac{\mu_{0}N^{2}A}{l}(\frac{Bl}{\mu_{0}N})^{2}=\frac{1}{2}\frac{B^{2}}{\mu_{0}}Al$ As the volume of the solenoid is $Al$ the energy density is $u=\frac{1}{2}\frac{B^{2}}{\mu_{0}}$ This formula is
actually generally applicable to any magnetic field, although when we consider energy in a magnetic medium $u=\frac{1}{2}\frac{B^{2}}{\mu\mu_{0}}$ where $\mu$ is the relative permeability of the material. This can be compared to the energy density of an electric field $u=\frac{1}{2}\varepsilon_{0}E^{2}$, or $u=\frac{1}{2}K\varepsilon_{0}E^{2}$ in a dielectric material with dielectric constant $K$. When we take a resistor and an inductor in series and connect them to a battery then Kirchhoff's loop rule tells us that $V_{0}-IR-L\frac{dI}{dt}=0$, which we can rearrange and integrate: $\int_{I=0}^{I}\frac{dI}{V_{0}-IR}=\int_{0}^{t}\frac{dt}{L}$ $-\frac{1}{R}\ln(\frac{V_{0}-IR}{V_{0}})=\frac{t}{L}$ $I=\frac{V_{0}}{R}(1-e^{-t/\tau})=I_{0}(1-e^{-t/\tau})$ where $\tau=\frac{L}{R}$. If we then switch back to the closed loop that does not include the battery then Kirchhoff's loop rule gives us $L\frac{dI}{dt}+RI=0$ $\int_{I_{0}}^{I}\frac{dI}{I}=-\int_{0}^{t}\frac{R}{L}dt$ $\ln\frac{I}{I_{0}}=-\frac{R}{L}t$ $I=I_{0}e^{-t/\tau}$ We can compare this to RC circuits and we see that both have currents exponentially related to time, but the behavior is not quite the same. In our next lecture we will start to look at AC circuits, where the time dependence of the current in inductors and capacitors is important in determining the AC behavior of a circuit.
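As a numerical illustration of the formulas above (the solenoid dimensions and resistance are arbitrary example values), the self-inductance of a solenoid and the resulting LR time constant can be computed directly:

```python
import math

MU0 = 4 * math.pi * 1e-7  # permeability of free space, T*m/A

def solenoid_inductance(N, A, l):
    # L = mu0 * N^2 * A / l, from L = N*Phi_B/I with Phi_B = mu0*N*I*A/l
    return MU0 * N**2 * A / l

def lr_current(t, V0, R, L):
    # Current while charging through a battery: I = (V0/R)(1 - e^(-t/tau))
    tau = L / R
    return (V0 / R) * (1 - math.exp(-t / tau))

# Example solenoid: 1000 turns, cross-section 1 cm^2, length 10 cm
L = solenoid_inductance(1000, 1e-4, 0.1)  # ≈ 1.26 mH
tau = L / 10.0                            # time constant with R = 10 ohm
# After one time constant the current reaches 1 - 1/e ≈ 63% of V0/R
frac = lr_current(tau, 1.0, 10.0, L) / (1.0 / 10.0)
```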
The Split-plot ANOVA is perhaps the most traditional approach, for which hand calculations are not too unreasonable. It involves modeling the data using the linear model shown below:

Model: \(Y_{ijk} = \mu + \alpha_i + \beta_{j(i)}+ \tau_k + (\alpha\tau)_{ik} + \epsilon_{ijk}\)

Using this linear model we are going to assume that the data for treatment i for dog j at time k is equal to an overall mean \(\mu\) plus the treatment effect \(\alpha_i\), the effect of the dog within that treatment \(\beta_{j(i)}\), the effect of time \(\tau_k\), the effect of the interaction between time and treatment \((\alpha\tau)_{ik}\), and the error \(\varepsilon_{ijk}\). Such that:

\(\mu\) = overall mean
\(\alpha_i\) = effect of treatment i
\(\beta_{j(i)}\) = random effect of dog j receiving treatment i
\(\tau_{k}\) = effect of time k
\(\left( \alpha \tau \right)_{ik}\) = treatment by time interaction
\(\varepsilon_{ijk}\) = experimental error

Assumptions: We are going to make the following assumptions about the data:
1. The errors \(\varepsilon_{ijk}\) are independently sampled from a normal distribution with mean 0 and variance \(\sigma^2_{\epsilon}\).
2. The individual dog effects \(\beta_{j(i)}\) are also independently sampled from a normal distribution with mean 0 and variance \(\sigma^2_{\beta}\).
3. The effect of time does not depend on the dog; that is, there is no time by dog interaction. Generally, we need this assumption; otherwise the results would depend on which animal you were looking at - which would mean that we could not predict much for new animals.

With a random effect for dog and fixed effects for treatment and time, this is called a mixed effects model.
The analysis is carried out in the Analysis of Variance table shown below:

Source | d.f. | SS | MS | F
Treatment | \(a - 1\) | \(SS_{\text{treat}}\) | \(\dfrac{SS_{\text{treat}}}{a - 1}\) | \(\dfrac{MS_{\text{treat}}}{MS_{\text{error}(a)}}\)
Error (a) | \(N - a\) | \(SS_{\text{error}(a)}\) | \(\dfrac{SS_{\text{error}(a)}}{N - a}\) |
Time | \(t - 1\) | \(SS_{\text{time}}\) | \(\dfrac{SS_{\text{time}}}{t - 1}\) | \(\dfrac{MS_{\text{time}}}{MS_{\text{error}(b)}}\)
Treat x Time | \((a - 1)(t - 1)\) | \(SS_{\text{treat x time}}\) | \(\dfrac{SS_{\text{treat x time}}}{(a - 1)(t - 1)}\) | \(\dfrac{MS_{\text{treat x time}}}{MS_{\text{error}(b)}}\)
Error (b) | \((N - a)(t - 1)\) | \(SS_{\text{error}(b)}\) | \(\dfrac{SS_{\text{error}(b)}}{(N - a)(t - 1)}\) |
Total | \(Nt - 1\) | \(SS_{\text{total}}\) | |

where
a: the number of treatments
N: the total number of experimental units
t: the number of time points

The sources of variation include Treatment; Error (a); the effect of Time; the interaction between time and treatment; and Error (b). Error (a) is the effect of subjects within treatments and Error (b) is the individual error in the model. All of these add up to the total.
Sum of Squares Formulas Here are the formulas that are used to calculate the various Sums of Squares involved: \(\begin{array}{lll}SS_{total}& =& \sum_{i=1}^{a}\sum_{j=1}^{n_i}\sum_{k=1}^{t}Y^2_{ijk}-Nt\bar{y}^2_{...}\\SS_{treat} &= &t\sum_{i=1}^{a}n_i\bar{y}^2_{i..} - Nt\bar{y}^2_{...}\\SS_{error(a)}& =& t\sum_{i=1}^{a}\sum_{j=1}^{n_i}\bar{y}^2_{ij.} - t\sum_{i=1}^{a}n_i\bar{y}^2_{i..}\\SS_{time}& =& N\sum_{k=1}^{t}\bar{y}^2_{..k}-Nt\bar{y}^2_{...}\\SS_{\text{treat x time}} &=& \sum_{i=1}^{a}\sum_{k=1}^{t}n_i\bar{y}^2_{i.k} - Nt\bar{y}^2_{...}-SS_{treat} -SS_{time}\end{array}\) Mean Square (MS) is always derived by dividing the Sum of Squares term by the corresponding degrees of freedom. To get the main effects for the treatment we compare \(MS_{\text{treat}}\) to \(MS_{\text{error}(a)}\). We will compare these results with the results we get from the MANOVA, the next approach covered in this lesson.
Example 9-2:
Download the text file containing the data: dog1.txt

Using SAS
We will use the SAS program below to illustrate this procedure. Download the SAS Program here: dog2.sas. View the video explanation of the SAS code.

Using Minitab
Currently not available in Minitab.

Analysis
Run the SAS program, inspecting how the program applies this procedure. Note in the output where the values of interest are located. The results are copied from the SAS output into this table:

Source | d.f. | SS | MS | F
Treatment | 3 | 19.923 | 6.641 | 6.00
Error (a) | 32 | 35.397 | 1.106 |
Time | 3 | 6.204 | 2.068 | 11.15
Interaction | 9 | 3.440 | 0.382 | 2.06
Error (b) | 96 | 17.800 | 0.185 |
Total | 143 | 82.320 | |

Hypotheses Tests
Now that we have the results from the analysis, the first thing that we want to look at is the interaction between treatment and time. We want to determine here if the effect of treatment depends on time. Therefore, we will start with the interaction between treatment and time, or: \(H_0\colon (\alpha\tau)_{ik} = 0 \) for all \( i = 1,2, \dots, a;\) \(k = 1,2, \dots, t\). Here we need to look at the treatment by time interaction term, whose F-value is reported at 2.06. We want to compare this to an F-distribution with (a - 1)(t - 1) = 9 and (N - a)(t - 1) = 96 degrees of freedom. The numerator d.f. of 9 is tied to the source of variation due to the interaction, while the denominator d.f. is tied to the source of variation due to Error (b). We can reject \(H_0\) at level \(\alpha\) if \(F = \dfrac{MS_{\text{treat x time}}}{MS_{error(b)}} > F_{(a-1)(t-1), (N-a)(t-1), \alpha}\). Therefore, we want to compare this to an F with 9 and 96 degrees of freedom. Here we see that this is significant with a p-value of 0.0406.

Result: We can conclude that the effect of treatment depends on time (F = 2.06; d.f. = 9, 96; p = 0.0406).

Next Steps...
Because the interaction between treatment and time is significant, the next step in the analysis would be to further explore the nature of that interaction using something called profile plots (we will look at this later...). If the interaction between treatment and time was not significant, the next step in the analysis would be to test for the main effects of treatment and time. Let's suppose that we had not found a significant interaction, so that you can see what it would look like to consider the effects of treatment.

Consider testing the null hypothesis that there are no treatment effects, or \(H_0\colon \alpha_1 = \alpha_2 = \dots = \alpha_a = 0\). To test this null hypothesis, we compute the F-ratio between the Mean Square for Treatment and the Mean Square for Error (a). We then reject \(H_0\) at level \(\alpha\) if \(F = \dfrac{MS_{treat}}{MS_{error(a)}} > F_{a-1, N-a, \alpha}\). Here, the numerator degrees of freedom is equal to the number of degrees of freedom a - 1 = 3 for treatment, while the denominator degrees of freedom is equal to the number of degrees of freedom N - a = 32 for Error (a).

Result: We can conclude that the treatment significantly affects the mean coronary sinus potassium over the t = 4 sampling times (F = 6.00; d.f. = 3, 32; p = 0.0023).

Consider testing the effects of time: \(H_0\colon \tau_1 = \tau_2 = \dots = \tau_t = 0\). To test this null hypothesis, we compute the F-ratio between the Mean Square for Time and the Mean Square for Error (b). We then reject \(H_0\) at level \(\alpha\) if \(F = \dfrac{MS_{time}}{MS_{error(b)}} > F_{t-1, (N-a)(t-1), \alpha}\). Here, the numerator degrees of freedom is equal to the number of degrees of freedom t - 1 = 3 for time, while the denominator degrees of freedom is equal to the number of degrees of freedom (N - a)(t - 1) = 96 for Error (b).

Result: We can conclude that coronary sinus potassium varies significantly over time (F = 11.15; d.f. = 3, 96; p < 0.0001).
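The mean squares and F-ratios in the ANOVA table above can be reproduced directly from the SS and d.f. values copied from the SAS output; a minimal sketch:

```python
# (SS, d.f.) pairs copied from the ANOVA table in the text
sources = {
    "treatment":   (19.923, 3),
    "error_a":     (35.397, 32),
    "time":        (6.204, 3),
    "interaction": (3.440, 9),
    "error_b":     (17.800, 96),
}

# MS = SS / d.f. for each source
ms = {name: ss / df for name, (ss, df) in sources.items()}

# Split-plot F-ratios: treatment is tested against Error (a);
# time and the treatment-by-time interaction against Error (b)
F_treat = ms["treatment"] / ms["error_a"]    # ≈ 6.00
F_time = ms["time"] / ms["error_b"]          # ≈ 11.15
F_inter = ms["interaction"] / ms["error_b"]  # ≈ 2.06
```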
I (think) I've found the Heisenberg Lie algebra representation through quantization, where we have $q \mapsto q$ and $p \mapsto -i \hbar \frac{\partial}{\partial q}$. So this is only a Lie algebra representation $\rho_*: \mathfrak{h} \to \text{End}(V)$, where $V$ is the Hilbert space (representation space) and $\mathfrak{h}$ is the Heisenberg Lie algebra. How do I lift this to a Lie group representation $\rho: H \to \text{GL}(V)$, where $H$ is the Heisenberg group? And how do I show it's unitary?
I'm trying to fit a hierarchical model using JAGS and the rjags package. My outcome variable is y, which is a sequence of Bernoulli trials. I have 38 human subjects which are performing under two categories: P and M. Based on my analysis, every speaker has a probability of success in category P of $\theta_p$ and a probability of success in category M of $\theta_p\times\theta_m$. I'm also assuming that there is some community level hyperparameter for P and M: $\mu_p$ and $\mu_m$. So, for every speaker: $\theta_p \sim beta(\mu_p\times\kappa_p, (1-\mu_p)\times\kappa_p)$ and $\theta_m \sim beta(\mu_m\times\kappa_m, (1-\mu_m)\times\kappa_m)$ where $\kappa_p$ and $\kappa_m$ control how peaked the distribution is around $\mu_p$ and $\mu_m$. Also $\mu_p \sim beta(A_p, B_p)$, $\mu_m \sim beta(A_m, B_m)$. Here's my JAGS model:

model{
  ## y = N bernoulli trials
  ## Each speaker has a theta value for each category
  for(i in 1:length(y)){
    y[i] ~ dbern( theta[ speaker[i], category[i] ] )
  }
  ## Category P has theta Ptheta
  ## Category M has theta Ptheta * Mtheta
  ## No observed data for pure Mtheta
  ##
  ## Kp and Km represent how similar speakers are to each other
  ## for Ptheta and Mtheta
  for(j in 1:max(speaker)){
    theta[j,1] ~ dbeta(Pmu*Kp, (1-Pmu)*Kp)
    catM[j]    ~ dbeta(Mmu*Km, (1-Mmu)*Km)
    theta[j,2] <- theta[j,1] * catM[j]
  }
  ## Priors for Pmu and Mmu
  Pmu ~ dbeta(Ap, Bp)
  Mmu ~ dbeta(Am, Bm)
  ## Priors for Kp and Km
  Kp ~ dgamma(1, 1/50)
  Km ~ dgamma(1, 1/50)
  ## Hyperpriors for Pmu and Mmu
  Ap ~ dgamma(1, 1/50)
  Bp ~ dgamma(1, 1/50)
  Am ~ dgamma(1, 1/50)
  Bm ~ dgamma(1, 1/50)
}

The issue I have is that when I run this model with 5000 iterations for adapting, then take 1000 samples, Mmu and Km have converged to single values. I've been running it with 4 chains, and each chain doesn't have the same value, but within each chain there is just a single value. I'm pretty new to fitting hierarchical models using MCMC methods, so I'm wondering how bad this is.
Should I take this as a sign that this model is hopeless to fit, that something is wrong with my priors, or is this par for the course? Edit: In case it matters, the value for $\mu_m$ it converged to (averaged across chains) was 0.91 and $\kappa_m$ was 1.78
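One way to quantify "each chain stuck at its own value" is the Gelman-Rubin statistic (R-hat), which compares between-chain and within-chain variance; values far above 1 indicate exactly the non-mixing described above. A rough pure-Python sketch on synthetic chains (the function name and toy data are just for illustration):

```python
import random
import statistics

def gelman_rubin(chains):
    # Classic potential scale reduction factor:
    # W = mean within-chain variance, V = (n-1)/n * W + var(chain means)
    n = len(chains[0])
    means = [statistics.fmean(c) for c in chains]
    W = statistics.fmean(statistics.variance(c) for c in chains)
    V = (n - 1) / n * W + statistics.variance(means)
    return (V / W) ** 0.5

rng = random.Random(0)
# Four well-mixed chains sampling the same distribution
mixed = [[rng.gauss(0, 1) for _ in range(500)] for _ in range(4)]
# Four chains each stuck near a different value (like the Mmu/Km behavior)
stuck = [[rng.gauss(mu, 0.01) for _ in range(500)] for mu in (0.2, 0.9, 0.5, 0.7)]

print(gelman_rubin(mixed))  # close to 1
print(gelman_rubin(stuck))  # much larger than 1
```

With rjags, the same diagnostic is available via coda's gelman.diag() on the mcmc.list returned by coda.samples().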
EDIT: see below the original question

Having got a great task of re-TeX-ing, i.e. transforming a PDF original of a lost LaTeX source back into LaTeX, I've got the recommendation to use bussproofs.sty to recreate a bunch of proofs.

https://github.com/leanprover/logic_and_proof/blob/master/bussproofs.sty
https://www.math.ucsd.edu/~sbuss/ResearchWeb/bussproofs/BussGuide2_Smith2012.pdf

So far so good - it works nicely (generally spoken). However, as many of these proofs are pretty lengthy, there is need for reference to single lines of each proof. That's why it's necessary to enumerate each line, preferably with numbers left-aligned (see the original), automatically increasing, and with the possibility of starting from a certain number (not always from 1.) as some of the proofs have to be continued after a bigger portion of text - with something similar to \begin{enumerate}[label=\roman*.,start=3] or ...[resume]. The normal labelling for each line should be possible, to be able to refer to it. I cannot use the bussproofs labels for enumeration as a workaround because they are in the height of inference lines and are just next to the line (left or right aligned). This is what I could manage so far (without enumeration). A very short code example:

\documentclass[11pt]{article}
\usepackage{bussproofs}
\begin{document}
no enumeration of single lines of the proof :(
\begin{prooftree}
  \Axiom$\Gamma\left[\langle\mbox{A}\rangle\right] \fCenter\ \Rightarrow \mbox{C}$
  \RightLabel{($\ast$)}
  \UnaryInf$\Gamma\left[\mbox{A}\right] \fCenter\ \Rightarrow \mbox{C}$
\end{prooftree}
\end{document}

Does anybody have an idea how to set the left-aligned enumeration within these proofs?

EDIT: using the nice solution given by Alan Munn, some other problems occur: It's still a bit of rocket science for me as I am not familiar with tikz and related packages, but I guess that the most magic happens there...
The first tests with your solution lead me to the following observations:

(1) Wrong order of line enumeration: The bussproofs are kind of too smart, because as soon as proofs get more complicated, some axioms have to be specified on quite distant lines (compared to a simple straight-forward hierarchy). I include one of the simple (!) examples that already shows that the lines 1 and 2 are not in the correct order any more (blame it on the bussproofs!).

\def\ciOneL{$\circ_{1l}$}
\def\ciTwo{$\circ_2$}
\begin{prooftree}
  % switch to \Numberstrue automatically as soon as the first \lnum is used
  \AxiomC{\lnum v$_1$ $\Rightarrow$ tv$/_2$vp}  % 2. line left
  \AxiomC{\lnum v$_2$ $\Rightarrow$ tv}         % 1. line mid
  \AxiomC{np$_3$ $\Rightarrow$ np}              % 1. line right
  \RightLabel{\scriptsize{[$/_{1l}$E]}}
  \BinaryInfC{v$_2$ \ciOneL\ np$_3$ $\Rightarrow$ vp}  % 2. line mid
  \RightLabel{\scriptsize{[$/_2$E]}}
  \BinaryInfC{\lnum (v$_1$ \ciTwo\ (v$_2$ \ciOneL\ np$_3$)) $\Rightarrow$ tv}  % 3. line left
  \AxiomC{np$_2$ $\Rightarrow$ np}              % 3. line right
  \RightLabel{\scriptsize{[$/_{1l}$E]}}
  \BinaryInfC{\lnum ((v$_1$ \ciTwo\ (v$_2$ \ciOneL\ np$_3$)) \ciOneL\ np$_2$) $\Rightarrow$ vp \label{proofline:correct-word-order}}  % 4. line (mid)
  \RightLabel{\scriptsize{[MP]}}
  \UnaryInfC{\lnum ((v$_1$ \ciOneL\ np$_2$) \ciTwo\ (v$_2$ \ciOneL\ np$_3$)) $\Rightarrow$ vp}  % 5. line (mid)
\end{prooftree}
Applying MP in line \ref{proofline:correct-word-order} yields the correct word order.

I doubt that there is a simple solution to the mismatch of the order in which bussproofs puts the single axiom (and other) lines and the resulting line numbers. I have to see if I can "sell" this solution for the future readers of the document, hoping that they are not too irritated by the wrong order and pay more attention to a reference of the line in question as such and not the actual number of that line.

(2) If after the beginning of a prooftree there is no \Numbersfalse included, then at least one \lnum is needed (e.g.
starting the 1st axiom), otherwise TeXstudio issues an error message "Package pgf Error: No shape named n0 is known. \end{prooftree}". No big deal, just make sure that there is either a \Numbersfalse or at least one \lnum included.

(3) EDIT: partly solved: after adding % at the end of the \lnum definition lines, it works for \Numberstrue, but the extra space is now visible for \Numbersfalse. Some extra space is created at the beginning of each proof line that starts with \lnum: if you pay attention to e.g. the line that includes an A, you'll notice that it is somewhat shifted to the right instead of being centered (the same for B, D and J). Can this be re-adjusted somewhere?

(4) As I need a dot after the left-aligned numbers (e.g. "1." instead of "1", like in the original), I tried to find the right place (as a best guess) within that tikzpicture construct. It seems to be working when added after the at..., just after the \z, so making it to ... {\z.};. I suspect that this adds even more extra white space to (3).

(5) EDIT: "Overprinting" occurs because of side effects of \ResumeProof; it is not so clearly visible in the short sample document. I suspect that that "tikz magic" always counts from 1 (\foreach \z in {1,...,\theprooflinecount}), making all the unnecessary lines "overprinted" (=visible)... Is it somehow possible to limit the output to the lines in question only (highly probably >1, i.e. for the resumed proof for the lines e.g. 6. and 7.)? While experimenting with the dot addition problem (4) I noticed that this suspicious boldface-looking numbering of the second example proof "dissolves" in doubling the numbers - I cannot reproduce this revealing behaviour.
Effects are (almost in)visible in the second proof; the overprinting comes from the invocation of \lnum after \ResumeProof, producing output with all the left-aligned line numbers that are smaller than the resumed line number (but otherwise these are empty lines). (6) Defining a label within an axiom (or other bussproofs constructs) works nicely :-) That's why I added a sentence referencing it.
Let $M$ be a subspace of $R^n$ and $z\notin M$. Show that the orthogonal projection of $z$ onto $M$ is $\bar x$ if and only if: $(z-\bar x,x)=0,$ $\forall x \in M$. How can I prove it? I know that for a convex, closed and non-empty $C \subset R^n$, there is a unique $\bar x \in C$, the orthogonal projection of $z$ onto $C$, such that: $(x-\bar x,\bar x-z)\geq 0,$ $\forall x \in C$. But I don't know how to use this in the proof.
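Not a proof, but the stated characterization is easy to check numerically. The sketch below (my own toy subspace and vectors, using NumPy) verifies that the residual of the least-squares projection is orthogonal to every basis vector of $M$, and hence to all of $M$:

```python
# Numerical sanity check (not a proof): for the orthogonal projection
# x_bar of z onto a subspace M, the residual z - x_bar is orthogonal
# to every x in M.  The subspace and z here are arbitrary examples.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))   # columns span a 2-dim subspace M of R^5
z = rng.standard_normal(5)

# Orthogonal projection of z onto M = col(A): x_bar = A (A'A)^{-1} A' z
x_bar = A @ np.linalg.solve(A.T @ A, A.T @ z)

# (z - x_bar, x) should vanish for every x in M; checking the basis suffices
residual_inner = A.T @ (z - x_bar)
print(residual_inner)              # both entries numerically zero
```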
$$\int\frac{\sqrt{a+x}}{\sqrt{a}+\sqrt{x}}\, dx$$ A few days ago I asked a similar (looking) question where all the pluses here were minuses. It could be manipulated much more easily than this one using the difference of $2$ squares. The other method used to solve the previous problem was the substitution $x=a\cos^2 t$. Out of curiosity I have attempted to use the latter method to solve this integral, and made a little progress with it. Just to make sure this integral has a solution: WolframAlpha found a solution that was not too bad. Substituting $x=a\tan^2 t$, the integral becomes $$\int\frac{2a \sec^3 t \tan t}{1+\tan t}\, dt$$ From here I tried to perform integration by parts, where $2a$ is taken outside of the integral, and the numerator is split into $\sec t\tan t$ times $\sec^2 t$, as $u$ and $v'$ respectively. $v$ can easily be found but $u'$ is messy. Integration by parts would also need to be done again, which would have ended up really messy. So I tried to convert everything into sines and cosines, then letting $u$ equal $\cos t$, and $du$ equal $-\sin t\,dt$: $$\int\frac{2a\sin t}{\cos^3 t(\cos t +\sin t)}\, dt$$ $$\int\frac{-2a}{u^3\left(u+\sqrt{1-u^2}\right)}\, du$$ However that doesn't look very pretty either. I don't think it is possible to perform Partial Fraction Decomposition on it. What are the right paths to take to solve this integral? Are there any 'tricks of the trade' I have missed along the way?
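As a sanity check on the substitution step (my own test, with $a=1$): reading the substitution as $x=a\tan^2 t$, i.e. $t=\arctan\sqrt{x/a}$, the transformed integrand $2a\sec^3 t\tan t/(1+\tan t)$ should give the same definite integral, and numerically it does:

```python
# Numerical check (with a = 1) that the substitution x = a*tan(t)^2 turns
# the integrand sqrt(a+x)/(sqrt(a)+sqrt(x)) into 2a*sec(t)^3*tan(t)/(1+tan(t)).
import math
from scipy.integrate import quad

a = 1.0
original, _ = quad(lambda x: math.sqrt(a + x) / (math.sqrt(a) + math.sqrt(x)), 0, 1)

def transformed(t):
    sec = 1.0 / math.cos(t)
    return 2 * a * sec**3 * math.tan(t) / (1 + math.tan(t))

# x in [0, 1] corresponds to t in [0, arctan(sqrt(1/a))] = [0, pi/4]
substituted, _ = quad(transformed, 0, math.pi / 4)
print(original, substituted)   # the two values agree
```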
Today I ran into this simple problem, which seemed interesting to me. Given the illustration below, the problem is stated as: Two identical cylinders roll in between two identical planks. If the velocity of each cylinder is $\vec{v}$ and the velocity of the bottom plank is $\vec{u}$ ($|\vec{v}| > |\vec{u}|$), find the velocity the man standing on the top plank needs to have with respect to it [the top plank] in order to cover distance $s$ in $t$ seconds in the stationary reference frame (the observer's frame). Will the velocity vector point right or left? ($\vec{v}$ and $\vec{u}$ are both given with respect to the stationary reference frame.) All friction is to be neglected and the cylinders are assumed to be rolling without slipping. Interestingly, a naive approach would be to immediately write $$\vec{v_m} = \frac{\vec{s}}{t} - \vec{v}$$ since one might think that the velocity of the upper plank with respect to the stationary observer is $\vec{v}$. However, after inspecting the problem more thoroughly, I've come to the following conclusion: The velocity of the upper plank comes solely from the rotation of the cylinders. The angular velocity of each of the cylinders is $\omega = \frac{v-u}{R}$. From here it seems that the velocity of the upper plank with respect to the stationary reference frame might be $$\vec{u}' =\vec{v} - \vec{u}$$ And the sought velocity is: $$\vec{v_m} = \frac{\vec{s}}{t} - \vec{u'}$$ which seems to be OK intuitively (the greater the magnitude of $\vec{v}$, the smaller the relative speed of the man needs to be). However, it's still bugging me, making me believe the motion of the bottom plank affects the motion of the upper plank (other than in the way elaborated above). Is there something wrong with this reasoning? Note: Although this is not a school problem, I'll tag it as homework because this seems like the kind of problem that would appear in homework sets. 
EDIT: By "stationary reference frame" I mean the frame in which the observer is standing (e.g. Earth), observing the motion of the system.
6.4 - Practical Significance In the last lesson you learned how to identify statistically significant differences. If the p-value is less than the \(\alpha\) level (typically 0.05), then we say that the results are statistically significant. In other words, results are said to be statistically significant when the observed difference is large enough to conclude that it is unlikely to have occurred by chance. Practical significance refers to the magnitude of the difference. This is also known as the effect size. Results are practically significant when the difference is large enough to be meaningful in real life. Example: SAT-Math Scores Research question: Are SAT-Math scores at one college greater than the known population mean of 500? \(H_0: \mu = 500\) \(H_a: \mu >500\) Data are collected from a random sample of 1,200 students at that college. In that sample, \(\overline{x}=506\). The population standard deviation is known to be 100. A one-sample mean test was performed and the resulting p-value was 0.0188. Because \(p \leq \alpha\), the null hypothesis should be rejected and these results are statistically significant. There is evidence that the population mean is greater than 500. However, the difference is not practically significant because the difference between an SAT-Math score of 500 and an SAT-Math score of 506 is very small. With a standard deviation of 100, this difference is only \(\frac{506-500}{100}=0.06\) standard deviations. Example: Weight-Loss Program Researchers are studying a new weight-loss program. Using a large sample they construct a 95% confidence interval for the mean amount of weight loss after six months on the program to be [0.12, 0.20]. All measurements were taken in pounds. Note that this confidence interval does not contain 0, so we know that their results were statistically significant at a 0.05 alpha level. 
However, most people would say that the results are not practically significant because after six months on a weight-loss program we would want to lose more than 0.12 to 0.20 pounds. Example: Change in Self-Efficacy Research question: Do children who are given positive reinforcement in the form of verbal praise experience an increase in self-efficacy? \(H_0: \mu_d = 0\) \(H_a: \mu _d >0\) Where \(\mu_d\) is the change in self-efficacy, measured as the self-efficacy after the intervention minus the initial self-efficacy. Data were collected from a sample of 30 children at one school. In that sample the mean increase in self-efficacy ratings was 10 points with a standard deviation of 3 points. A one-sample mean test was conducted on the differences and resulted in a p-value < 0.0001. The null hypothesis was rejected, so the results were said to be statistically significant. To examine practical significance we would need to evaluate the magnitude of that increase. We know that the mean increase was 10 points, but without more information about the survey that was administered we don't know what that really means. We do, however, know that the standard deviation of the increase was 3. This means that the increase was \(\frac{10}{3}=3.333\) standard deviations. That is an increase of more than 3 standard deviations! Based on the mean increase and standard deviation of that increase, this appears to be a large increase in self-efficacy. Note that statistical significance is directly impacted by sample size. Recall that there is an inverse relationship between sample size and the standard error (i.e., standard deviation of the sampling distribution). Very small differences will be statistically significant with a very large sample size. Thus, when results are statistically significant it is important to also examine practical significance. Practical significance is not directly influenced by sample size. 
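The SAT-Math example's arithmetic can be reproduced in a few lines. This sketch (mine, not part of the lesson) redoes the one-sample z test and the standardized effect size:

```python
# Reproducing the SAT-Math example: one-sample z test with sigma known.
# The numbers come from the text above; the code is just a sketch.
from math import sqrt
from statistics import NormalDist

xbar, mu0, sigma, n = 506, 500, 100, 1200
z = (xbar - mu0) / (sigma / sqrt(n))
p_value = 1 - NormalDist().cdf(z)        # one-tailed, H_a: mu > 500

effect_size = (xbar - mu0) / sigma       # 0.06 standard deviations
print(round(z, 3), round(p_value, 4), effect_size)
```

Note how a statistically significant p-value (0.0188) coexists with a tiny effect size, which is exactly the lesson's point.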
Effect Size For some tests there are commonly used measures of effect size. For example, when comparing the difference in means we often compute Cohen's \(d\), which is the difference between the two groups in standard deviation units: \[d=\frac{\overline x_1 - \overline x_2}{s_p}\] Where \(s_p\) is the pooled standard deviation \[s_p= \sqrt{\frac{(n_1-1)s_1^2 + (n_2 -1)s_2^2}{n_1+n_2-2}}\] Below are commonly used standards when interpreting Cohen's \(d\):
Cohen's \(d\) of 0 - 0.2: little or no effect
Cohen's \(d\) of 0.2 - 0.5: small effect size
Cohen's \(d\) of 0.5 - 0.8: medium effect size
Cohen's \(d\) of 0.8 or more: large effect size
For correlation and regression we can compute \(r^2\), which is known as the coefficient of determination. This is the proportion of shared variation. We will learn more about \(r^2\) when we study simple linear regression and correlation at the end of this course.
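A minimal sketch of the Cohen's \(d\) computation with the pooled standard deviation (denominator \(n_1+n_2-2\)); the two samples' summary statistics below are invented purely for illustration:

```python
# Cohen's d with the pooled standard deviation.
# The summary statistics here are made up for illustration only.
from math import sqrt

def cohens_d(mean1, mean2, s1, s2, n1, n2):
    s_p = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / s_p

d = cohens_d(mean1=85, mean2=78, s1=10, s2=12, n1=30, n2=30)
print(round(d, 3))   # ~0.634, a medium effect by the usual standards
```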
On non-linear discrete boundary-value problems and semi-groups The following discrete boundary-value problem for the non-linear system $x_k=\varphi_k(x_{k-1},y_k),\,y_{k-1}=\psi_k(x_{k-1},y_k)$, $k=\overline{1,\,N},\,\,N<\infty,\,\,x_0=a,\,\,y_N=b,$ is considered. Here the functions $\varphi_k(x,y),\,\psi_k(x,y)\geq0$ are monotone with respect to the arguments $x,\,y\geq0$, satisfying the condition of dissipativity or conservativity: $\varphi_k(x,y)+\psi_k(x,y)\leq x+y+\gamma_k$, as well as two simple additional conditions. A relation of this problem to multistep processes is demonstrated. Existence and uniqueness of the minimal solution of the problem are proved. A semi-group approach to solving the problem is developed. The approach is related to V.~Ambartsumian's Principle of Invariance and R.~Bellman's method of On the Central Limit Theorem for Toeplitz Quadratic Forms of Stationary Sequences Let $X(t),$ $t = 0,\pm1,\ldots,$ be a real-valued stationary Gaussian sequence with spectral density function $f(\Lambda)$. The paper considers the question of applicability of the central limit theorem (CLT) to the Toeplitz-type quadratic form $Q_n$ in the variables $X(t)$, generated by an integrable even function $g(\Lambda)$. Assuming that $f(\Lambda)$ and $g(\Lambda)$ are regularly varying at $\Lambda=0$ of orders $\alpha$ and $\beta$ respectively, we prove the CLT for the standard normalized quadratic form $Q_n$ in the critical case $\alpha+\beta=1/2$. We also show that the CLT is not valid under the single condition that the asymptotic variance of $Q_n$ is separated from zero and infinity. Generalization of the theorem of de Montessus de Bollore We prove a generalization of the theorem of de Montessus de Bollore for a large class of functional series investigated in [14]. In particular, multivalued approximants for power series, as well as for series of Faber and Gegenbauer polynomials, are considered. Numerical results are presented. 
Minimization of Errors of the Polynomial-Trigonometric Interpolation with Shifted Nodes. The polynomial-trigonometric interpolation based on the Krylov approach for a smooth function given on $[-1, 1]$ is defined on the union of $m$ uniform grids, shifted relative to each other, with equal numbers of points. The asymptotic errors of the interpolation in both the uniform and $L_2$ metrics are investigated. It turns out that the corresponding errors can be minimized by an optimal choice of the shift parameters. The study of asymptotic errors is based on the concept of the "limit function" proposed by Vallee-Poussin. In the particular cases of unions of two and three uniform grids the limit functions are found explicitly and the optimal shift parameters are calculated using the MATHEMATICA 4.1 computer system. In various applications the problem of separating the original signal from the noise arises. In this paper we consider two cases which naturally arise in applied problems. In the first case, the original signal permits linear prediction from its past behavior. In the second case, the original signal consists of the values of some analytic function at points of the unit disk. In both cases the noise is assumed to be a stationary process with zero mean value. Let us note that the first case arises in the consideration of physical phenomena. The second case arises in identification problems for linear systems. RECONSTRUCTION OF CONVEX BODIES FROM PROJECTION CURVATURE RADIUS FUNCTION In this article we pose the problem of existence and uniqueness of a convex body for which the projection curvature radius function coincides with a given function. We find a necessary and sufficient condition that ensures a positive answer to both questions and suggest an algorithm for constructing the body. We also find a representation of the support function of a convex body in terms of projection curvature radii. 
Weighted Classes of Regular Functions Area Integrable Over the Unit Disc This preprint contains some generalizations of the main theorems of M.M.Djrbashian of 1945--1948, which laid the ground for the theory of $A^p_\alpha$ $($or initially $H^p(\alpha))$ spaces and for his factorization theory of the classes $N\{\omega\}$ exhausting all functions meromorphic in the unit disc. Also some later results on $A^p_\alpha$ spaces and Nevanlinna's weighted class are improved. The preprint contains the main analytic apparatus for generalizing almost all known results on $A^p_\alpha$ spaces within a new theory, where instead of $(1-r^2)^\alpha dr$ $(-1<\alpha<+\infty$, $0<r<1)$ some weights of the form $d\omega(r^2)$ are used. The obtained results make it evident that the theory of $A^p_\omega$ spaces and the factorization theory of M.M.Djrbashian are inseparable parts of a general theory of classes of regular functions associated with M.M.Djrbashian's general integro-differentiation. The author hopes that the publication of this preprint can lead to the clarification of some priority misunderstandings in the field.
This exercise is inspired by exercises 83 and 100 of Chapter 10 in Giancoli's book. A uniform disk ($R = 0.85$ m; $M = 21.0$ kg) has a rope wrapped around it. You apply a constant force $F = 35$ N to unwrap it (at the ground-disk contact point) while walking 5.5 m. Ignore friction. a) How much has the center of mass of the disk moved? Explain. Now derive a formula that relates the distance you have walked to how much rope has been unwrapped when: b) You don't assume rolling without slipping. c) You assume rolling without slipping. a) I have two different answers here, one of which I guess is wrong: a.1) There is only one force to consider in the direction of motion: $\vec F$. Thus the center of mass should also move forward. a.2) You are unwinding the rope off the spool and thus exerting a torque $FR$ (I am taking counterclockwise as positive); the net force exerted on the CM is zero and thus the wheel only spins and the center of mass doesn't move. The issue here is that my intuition tells me that there should only be spinning. I've been testing the idea with a paper roll and its CM does move forward, but I think this is due to the roll not being perfectly cylindrical; if the unwrapping paper were touching an icy ground only at a point, the roll's CM shouldn't move. 'What's your reasoning to assert that?' The tangential velocity points forwards at distance $R$ below the disk's CM, but this same tangential velocity points backwards at distance $R$ above the disk's CM, and thus translational motion is cancelled out. Actually, we note that opposite points on the rim have opposite tangential velocities (assuming there's no friction, so that the tangential speed is constant). My book assumes a.1) is OK. I say a.2) is OK. Who's right then? 
b) We can calculate the unwrapped distance by noting that the arc length is related to the radius through the enclosed angle (in radians): $$\Delta s = R \Delta \theta$$ Assuming constant angular acceleration and zero initial angular velocity: $$\Delta \theta = \frac{1}{2} \alpha t^2 = \frac{1}{2} \frac{\omega}{t} t^2 = \frac{1}{2} \omega t$$ By Newton's second law (rotation) we can solve for $\omega$ and then plug it into the above equation: $$\tau = FR = I \alpha = I \frac{\omega}{t} = \frac{1}{2} M R^2 \frac{\omega}{t}$$ $$\omega = \frac{2F}{M R}t$$ Let's plug it into the other equation: $$\Delta \theta = \frac{F}{M R}t^2$$ Hmm, we still have to eliminate $t$. Assuming constant acceleration, the kinematic equation gives (note I am using the time $t$ you take to walk 5.5 m, so that we know how much rope has been unwrapped in that time): $$t^2 = \frac{2M\Delta x}{F}$$ Plugging this into the $\Delta \theta$ equation: $$\Delta \theta = \frac{2\Delta x}{R}$$ Plugging that into the $\Delta s$ equation, we get the equation we wanted: $$\Delta s = 2 \Delta x$$ If we calculate both $v$ and $\omega$ we see that $v=R\omega$ does not hold, so the disk doesn't roll without slipping. c) Here $v=R\omega$ must hold. We know that in that case the tangential velocity must be related to the center of mass' velocity as follows: $$v = 2v_{cm}$$ Assuming that the person holding the rope moves at speed $2v_{cm}$, we get: $$\Delta x= 2 \Delta s$$ I get reversed equations in b) and c). How can we explain the difference between the two equations, beyond the fact of rolling without slipping?
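Plugging the given numbers ($F = 35$ N, $M = 21$ kg, $R = 0.85$ m, $\Delta x = 5.5$ m) into the part (b) formulas above confirms $\Delta s = 2\Delta x$ numerically (my own check, not part of the question):

```python
# Checking the part (b) algebra with the given numbers.
F, M, R, dx = 35.0, 21.0, 0.85, 5.5

t_sq = 2 * M * dx / F                 # from dx = (F/2M) t^2
dtheta = (F / (M * R)) * t_sq         # dtheta = (F/MR) t^2
ds = R * dtheta                       # unwrapped rope length
print(ds, 2 * dx)                     # both 11.0 m
```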
6.5 - Power The probability of rejecting the null hypothesis, given that the null hypothesis is false, is known as power. In other words, power is the probability of correctly rejecting \(H_0\). Power \(Power = 1-\beta\) \(\beta\) = probability of committing a Type II Error. The power of a test can be increased in a number of ways, for example (1) increasing the sample size; (2) when testing a mean or difference in means, decreasing the sample standard deviation(s); (3) increasing the effect size; or (4) increasing the alpha level. The relationship between \(\alpha\) and \(\beta\): If the sample size is fixed, then decreasing \(\alpha\) will increase \(\beta\). If we want both \(\alpha\) and \(\beta\) to decrease, then we should increase the sample size. On Your Own \(Power=P(rejecting\;H_{0}\mid H_{0}\;false)\) The probability of committing a Type II error is known as \(\beta\). \(\beta=P(failing\;to\;reject\;H_0\mid H_0\;false)\) \(Power+\beta=1\) \(Power=1-\beta\) If power increases then \(\beta\) must decrease. So, if the power of a statistical test is increased, for example by increasing the sample size, the probability of committing a Type II error decreases. No. When we perform a hypothesis test, we only set the size of Type I error (i.e., \(\alpha\)) and guard against it. Thus, we can only present the strength of evidence against the null hypothesis. We can sidestep the concern about Type II error if the conclusion never mentions that the null hypothesis is accepted. When the null hypothesis cannot be rejected, there are two possible cases: 1) the null hypothesis is true 2) the sample size is not large enough to reject the null hypothesis (i.e., statistical power is too low) The result of the study was to fail to reject the null hypothesis. In reality, the null hypothesis was false. This is a Type II error.
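As a sketch of these relationships (my own example, assuming a one-tailed one-sample z test with known \(\sigma\)), power rises with sample size while \(\beta = 1 - Power\) falls:

```python
# Power of a one-tailed one-sample z test (sigma known), as a function of n.
# mu0, mu_true, sigma are illustrative values, not from the lesson.
from math import sqrt
from statistics import NormalDist

def power(mu0, mu_true, sigma, n, alpha=0.05):
    z_crit = NormalDist().inv_cdf(1 - alpha)        # reject when z > z_crit
    shift = (mu_true - mu0) / (sigma / sqrt(n))     # noncentrality under mu_true
    return 1 - NormalDist().cdf(z_crit - shift)

for n in (25, 100, 400):
    p = power(mu0=500, mu_true=510, sigma=100, n=n)
    print(n, round(p, 3), "beta =", round(1 - p, 3))
```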
Factor scores are similar to the principal components in the previous lesson. Just as we plotted principal components against each other, a similar scatter plot of factor scores is also helpful. We also might use factor scores as explanatory variables in future analyses. It may even be of interest to use a factor score as the dependent variable in a future analysis. The methods for estimating factor scores depend on the method used to carry out the principal components analysis. The vector of common factors \(\mathbf{f}\) is of interest. There are m unobserved factors in our model and we would like to estimate those factors. Therefore, given the factor model: \(\mathbf{Y_i = \boldsymbol{\mu} + Lf_i + \boldsymbol{\epsilon_i}}; i = 1,2,\dots, n,\) we may wish to estimate the vectors of factor scores \(\mathbf{f_1, f_2, \dots, f_n}\) for each observation. Methods There are a number of different methods for estimating factor scores from the data. These include: Ordinary Least Squares Weighted Least Squares Regression method Ordinary Least Squares By default, this is the method that SAS uses if you use the principal component method. The difference between the \(j^{th}\) variable on the \(i^{th}\) subject and its value under the factor model is computed. The \(\mathbf{L}\)'s are the factor loadings and the \(\mathbf{f}\)'s are the unobserved common factors. The vector of common factors for subject i, \( \hat{\mathbf{f}}_i \), is found by minimizing the sum of the squared residuals: \[\sum_{j=1}^{p}\epsilon^2_{ij} = \sum_{j=1}^{p}(y_{ij}-\mu_j-l_{j1}f_1 - l_{j2}f_2 - \dots - l_{jm}f_m)^2 = (\mathbf{Y_i - \boldsymbol{\mu} - Lf_i})'(\mathbf{Y_i - \boldsymbol{\mu} - Lf_i})\] This is like a least squares regression, except that in this case we already have estimates of the parameters (the factor loadings) but wish to estimate the explanatory common factors. 
In matrix notation the solution is expressed as: \(\mathbf{\hat{f}_i = (L'L)^{-1}L'(Y_i-\boldsymbol{\mu})}\) In practice, we substitute our estimated factor loadings into this expression as well as the sample mean for the data: \(\mathbf{\hat{f}_i = \left(\hat{L}'\hat{L}\right)^{-1}\hat{L}'(Y_i-\bar{y})}\) Using the principal component method with the unrotated factor loadings, this yields: \[\mathbf{\hat{f}_i} = \left(\begin{array}{c} \frac{1}{\sqrt{\hat{\lambda}_1}}\mathbf{\hat{e}'_1(Y_i-\bar{y})}\\ \frac{1}{\sqrt{\hat{\lambda}_2}}\mathbf{\hat{e}'_2(Y_i-\bar{y})}\\ \vdots \\ \frac{1}{\sqrt{\hat{\lambda}_m}}\mathbf{\hat{e}'_m(Y_i-\bar{y})}\end{array}\right)\] where \(\hat{\lambda}_1, \dots, \hat{\lambda}_m\) and \(\mathbf{\hat{e}}_1, \dots, \mathbf{\hat{e}}_m\) are the first \(m\) eigenvalues and eigenvectors of the sample variance-covariance matrix. Weighted Least Squares (Bartlett) The difference between WLS and OLS is that the squared residuals are divided by the specific variances, as shown below. This gives more weight, in this estimation, to variables that have low specific variances. The factor model fits the data best for variables with low specific variances. The variables with low specific variances should give us more information regarding the true values of the factors. 
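A small numerical sketch of the OLS factor-score formula \(\mathbf{\hat{f}_i = (\hat{L}'\hat{L})^{-1}\hat{L}'(Y_i-\bar{y})}\); the loadings, mean vector, and observation below are made-up toy numbers:

```python
# OLS factor scores: f_hat = (L'L)^{-1} L'(y - ybar).
# All numbers here are invented for illustration (p = 4 variables, m = 2 factors).
import numpy as np

L = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.1, 0.9],
              [0.2, 0.7]])              # hypothetical estimated loadings
ybar = np.array([5.0, 4.0, 3.0, 6.0])   # sample mean vector
y_i = np.array([5.9, 4.9, 3.5, 6.5])    # one observation

f_hat = np.linalg.solve(L.T @ L, L.T @ (y_i - ybar))
print(f_hat)                             # estimated scores on the 2 factors
```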
Therefore, for the factor model: \(\mathbf{Y_i = \boldsymbol{\mu} + Lf_i + \boldsymbol{\epsilon_i}}\) we want to find \(\boldsymbol{f_i}\) that minimizes \( \sum\limits_{j=1}^{p}\frac{\epsilon^2_{ij}}{\Psi_j} = \sum\limits_{j=1}^{p}\frac{(y_{ij}-\mu_j - l_{j1}f_1 - l_{j2}f_2 -\dots - l_{jm}f_m)^2}{\Psi_j} = \mathbf{(Y_i-\boldsymbol{\mu}-Lf_i)'\Psi^{-1}(Y_i-\boldsymbol{\mu}-Lf_i)}\) The solution is given by this expression, where \(\mathbf{\Psi}\) is the diagonal matrix whose diagonal elements are equal to the specific variances: \(\mathbf{\hat{f}_i = (L'\Psi^{-1}L)^{-1}L'\Psi^{-1}(Y_i-\boldsymbol{\mu})}\) and can be estimated by substituting in the following: \(\mathbf{\hat{f}_i = (\hat{L}'\hat{\Psi}^{-1}\hat{L})^{-1}\hat{L}'\hat{\Psi}^{-1}(Y_i-\bar{y})}\) Regression Method This method is used for maximum likelihood estimates of factor loadings. We consider the vector of observed data together with the vector of common factors for the \(i^{th}\) subject. The joint distribution of the data \(\boldsymbol{Y}_i\) and the factors \(\boldsymbol{f}_i\) is \(\left(\begin{array}{c}\mathbf{Y_i} \\ \mathbf{f_i}\end{array}\right) \sim N \left[\left(\begin{array}{c}\mathbf{\boldsymbol{\mu}} \\ 0 \end{array}\right), \left(\begin{array}{cc}\mathbf{LL'+\Psi} & \mathbf{L} \\ \mathbf{L'} & \mathbf{I}\end{array}\right)\right]\) Using this we can calculate the conditional expectation of the common factor score \(\boldsymbol{f}_i\) given the data \(\boldsymbol{Y}_i\): \(E(\mathbf{f_i|Y_i}) = \mathbf{L'(LL'+\Psi)^{-1}(Y_i-\boldsymbol{\mu})}\) This suggests the following estimator, obtained by substituting in the estimates for \(\mathbf{L}\) and \(\mathbf{\Psi}\): \(\mathbf{\hat{f}_i = \hat{L}'\left(\hat{L}\hat{L}'+\hat{\Psi}\right)^{-1}(Y_i-\bar{y})}\) There is a small fix that is often applied to reduce the effects of an incorrect determination of the number of factors. It tends to give results that are a bit more stable: 
\(\mathbf{\tilde{f}_i = \hat{L}'S^{-1}(Y_i-\bar{y})}\)
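The regression-method estimator \(\mathbf{\hat{f}_i = \hat{L}'(\hat{L}\hat{L}'+\hat{\Psi})^{-1}(Y_i-\bar{y})}\) can be sketched the same way; again, the loadings and specific variances below are invented toy numbers:

```python
# Regression-method factor scores: f_hat = L'(LL' + Psi)^{-1}(y - ybar).
# All numbers are illustrative only.
import numpy as np

L = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.1, 0.9],
              [0.2, 0.7]])
psi = np.diag([0.18, 0.32, 0.18, 0.47])   # hypothetical specific variances
ybar = np.array([5.0, 4.0, 3.0, 6.0])
y_i = np.array([5.9, 4.9, 3.5, 6.5])

f_hat = L.T @ np.linalg.solve(L @ L.T + psi, y_i - ybar)
print(f_hat)
```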
The problem with the multivariate procedure outlined above is that it makes no assumptions regarding the temporal correlation structure of the data and hence may be overparameterized, leading to poor parameter estimates. The mixed model procedure allows us to look at temporal correlation functions involving a limited number of parameters. The mixed model procedure falls beyond the scope of this class; the following brief outline is intended to be just an overview. Approach 3 - Mixed Model Analysis The mixed model initially looks identical to the split-plot model considered earlier. \(Y_{ijk} = \mu + \alpha_i + \beta_{j(i)}+ \tau_k + (a\tau)_{ik} + \epsilon_{ijk}\) where \(\mu\) = overall mean \(\alpha_{i}\) = effect of treatment i \(\beta_{j(i)}\) = random effect of dog j receiving treatment i \(\tau_k\) = effect of time k \((a\tau)_{ik}\) = treatment by time interaction \(\epsilon_{ijk}\) = experimental error Assumptions The dog effects \(\beta_{j(i)}\) are independently sampled from a normal distribution with mean 0 and variance \(\sigma^2_\beta\). The errors \(\epsilon_{ijk}\) from different dogs are independently sampled from a normal distribution with mean 0 and variance \(\sigma^2_\epsilon\). The correlation between the errors for the same dog depends only on the difference in observation times: \(|k-k'|\). Several covariance and correlation functions are listed below. Compound Symmetry: \(cov(\epsilon_{ijk}, \epsilon_{ijk'}) = \sigma^2_\epsilon+\sigma^2_\beta\) if \(k=k'\) and \(\sigma^2_\beta\) otherwise. This is the default structure for split plots. Autoregressive: \(corr(\epsilon_{ijk}, \epsilon_{ijk'}) = \rho^{|k-k'|}\) Autoregressive Moving Average: \(corr(\epsilon_{ijk}, \epsilon_{ijk'}) = \left\{\begin{array}{cl}\gamma; & \text{if } |k-k'| = 1 \\ \gamma\rho^{|k-k'|-1}; & \text{if } |k-k'| \ge 2\end{array}\right.\) Toeplitz: \(corr(\epsilon_{ijk}, \epsilon_{ijk'}) = \rho(|k-k'|)\) Note! 
The autoregressive model is a special case of an autoregressive moving average model with \(\gamma = \rho\). The autoregressive moving average model is a special case of a Toeplitz model with \(\rho(|k-k'|) = \left\{\begin{array}{cl}\gamma; & \text{if } |k-k'| = 1 \\ \gamma\rho^{|k-k'|-1}; & \text{if } |k-k'| \ge 2\end{array}\right.\) Analysis Approach 1: If one model is a special case of another, they can be compared using the -2 log likelihood values in the output. The difference is approximately chi-squared with degrees of freedom equal to the difference between the numbers of estimated parameters. For example, when comparing the AR(1) model with the ARMA(1,1) model, the difference between their -2 log likelihood values is: \(237.426 - 237.329 = 0.097\) which is less than the chi-square critical value \(\chi^2_{1, 0.05} = 3.84\) (the df is 1 because there is one additional parameter estimated with the ARMA(1,1) model). This is not significant evidence to claim that the ARMA(1,1) fits better, so the AR(1) model would be preferred. Approach 2: Models that are not special cases of each other can be compared using AICC or BIC values from the output. Smaller values are better. For example, based on the AICC values for the CS, AR(1), ARMA(1,1), and Toeplitz models below, the AR(1) would be preferred:
Compound Symmetry: 243.9
AR(1): 243.6
ARMA(1,1): 245.7
Toeplitz: 247.8
Using SAS The syntax for using SAS's Mixed Model procedure can be seen in the program below. Note! The first instance of the mixed procedure (without the repeated statement) uses the compound symmetry structure. Download the SAS Program here: dog3.sas The general format for the mixed model procedure requires that the data are on separate lines for separate points in time. Except for the first model, each of the various models will have repeated statements. 
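The Approach 1 arithmetic can be reproduced directly; scipy's chi-square quantile gives the critical value (this check is mine, not part of the original lesson):

```python
# Likelihood-ratio comparison of AR(1) vs ARMA(1,1) from the text:
# difference in -2 log likelihood vs the chi-square critical value.
from scipy.stats import chi2

lrt = 237.426 - 237.329              # -2logL(AR1) minus -2logL(ARMA11)
crit = chi2.ppf(0.95, df=1)          # one extra parameter in ARMA(1,1)
print(round(lrt, 3), round(crit, 2)) # 0.097 vs 3.84 -> prefer AR(1)
```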
The second, third and fourth models contain the repeated statements, where subject is specified to be the dog within treatments, indicating within which units we have our repeated measures; in this case, within each of the dogs. This is followed by the type option, which specifies which model you want: here we set ar(1) for an autoregressive model, arma(1,1) for an ARMA(1,1) autoregressive moving average model, and toep, short for a Toeplitz model. Based on the AR(1) model above, the following hypothesis tests are obtained:
Effect | F | d.f. | p-value
Treatments | 6.09 | 3,32 | 0.0021
Time | 9.84 | 3,96 | < 0.0001
Treatment by Time | 1.89 | 9,96 | 0.0631
It appears to be common in the discussion of perturbative FRW cosmologies to choose a gauge using hypersurfaces for special values of some quantity, like surfaces of constant density $\rho$, constant inflaton field $\phi$, or zero spatial curvature $\Psi = 0$. What guarantees that this foliates spacetime? It seems clear that in general there may be local density spikes that appear and disappear, so the constant density surface isn't space-like. Further, I don't think monotonicity of these quantities (perhaps because we're assuming small perturbations?) is a sufficient condition to guarantee foliation because monotonicity itself doesn't appear to be a gauge-invariant condition. (If the density $\rho$ were to be monotonically decreasing with respect to some choice of time coordinate $t$, there is another set of coordinates $x$ and $t$ for which it is not.) Alternatively, am I wrong in thinking that this is expected to define a foliation in general? Maybe the only thing that matters is that you can define local gauge-invariant quantities (e.g., spatial curvature on constant-density hypersurfaces $-\zeta = \Psi + (H/\dot{\bar{\rho}})\delta \rho$), and it's not necessary that this defines a preferred coordinate system.This post imported from StackExchange Physics at 2015-09-06 15:12 (UTC), posted by SE-user Jess Riedel
8.2.3.1.2 : Example: Pulse Rate A research study measured the pulse rates of 57 college men and found a mean pulse rate of 70.4211 beats per minute with a standard deviation of 9.9480 beats per minute. Researchers want to know if the mean pulse rate for all college men is different from the current standard of 72 beats per minute. Pulse rates are quantitative. The sampling distribution will be approximately normally distributed because \(n \ge 30\). This is a two-tailed test because we want to know if the mean pulse rate is different from 72. \(H_{0}:\mu=72 \) \(H_{a}: \mu\neq 72 \) Test Statistic: One Group Mean \(t=\frac{\overline{x}-\mu_0}{\frac{s}{\sqrt{n}}}\) \(\overline{x}\) = sample mean \(\mu_{0}\) = hypothesized population mean \(s\) = sample standard deviation \(n\) = sample size \(t=\frac{\overline{x}-\mu_0}{\frac{s}{\sqrt{n}}}=\frac{70.4211-72}{\frac{9.9480}{\sqrt{57}}}=-1.198\) Our \(t\) test statistic is -1.198. \(df=n-1=57-1=56\) \(p=0.117981+0.117981=0.235962\) Given that the null hypothesis is true and \(\mu=72\), the probability of taking a random sample of \(n=57\) and finding a sample mean this extreme or more extreme is 0.235962. This is our p-value. Since \(p>.05\), we fail to reject the null hypothesis. There is not sufficient evidence to state that the mean pulse rate of college men is different from 72.
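The test statistic and p-value above can be reproduced from the summary statistics (a sketch using scipy):

```python
# One-sample t test from summary statistics (pulse-rate example).
from math import sqrt
from scipy.stats import t as t_dist

xbar, mu0, s, n = 70.4211, 72, 9.9480, 57
t_stat = (xbar - mu0) / (s / sqrt(n))
p_value = 2 * t_dist.sf(abs(t_stat), df=n - 1)   # two-tailed
print(round(t_stat, 3), round(p_value, 3))       # -1.198, 0.236
```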
Magnetic force is a consequence of the electromagnetic force and is caused by the motion of charges. We have learned that moving charges surround themselves with a magnetic field. With this context, the magnetic force can be described as a force that arises due to interacting magnetic fields. What is Magnetic Force? If we place a point charge of magnitude q in the presence of both a magnetic field B(r) and an electric field E(r), then the total force on the charge can be written as the sum of the electric force and the magnetic force acting on the object (F = F electric + F magnetic). Magnetic force can be defined as: The magnetic force between two moving charges may be described as the effect exerted upon either charge by a magnetic field created by the other. How do we find magnetic force? The magnitude of the magnetic force depends on how much charge is in how much motion in each of the objects and how far apart they are. Mathematically, we can write the magnetic force as F = q[v × B]. This combined electromagnetic force, F = q(E + v × B), is termed the Lorentz force. It is the combination of the electric and magnetic forces on a point charge due to electromagnetic fields. The interaction between the electric field and the magnetic field has the following features: The magnetic force depends upon the charge of the particle, the velocity of the particle and the magnetic field in which it is placed. The force on a negative charge is opposite in direction to the force on a positive charge moving with the same velocity. The magnitude of the force is calculated by the cross product of velocity and the magnetic field, given by q[v × B]. The resultant force is thus perpendicular to both the direction of the velocity and the magnetic field; the direction of the force is predicted by the right-hand thumb rule. In the case of static charges, the total magnetic force is zero. Magnetic Force on a Current-Carrying Conductor Let us now discuss the force due to the magnetic field in a straight current-carrying rod.
We consider a rod of uniform cross-sectional area A and length l. In the conducting rod, let the number density of mobile electrons be given by n. Then the total number of charge carriers is nAl, and I is the steady current in the rod. The drift velocity of each mobile carrier is assumed to be given as vd. When the conducting rod is placed in an external magnetic field of magnitude B, the force applied on the mobile charges or the electrons can be given as: \(F=(nAl)qv_d\times B\) Where q is the value of the charge on the mobile carrier. As nqvd is the current density j, and A×|nqvd| is the current I through the conductor, we can write: \(F = I\, l \times B\) Where l is the vector of magnitude equal to the length of the conducting rod, directed along the current. Solved Examples Q1. The direction of the current in a copper wire carrying a current of 6.00 A through a uniform magnetic field with magnitude 2.20T is from the left to right of the screen. The direction of the magnetic field is upward-left, at an angle of θ = 3π/4 radians from the current direction. Determine the magnitude and direction of the magnetic force acting on a 0.100 m section of the wire.
Solution: The magnitude of the magnetic force can be found using the formula: \(\vec{F}=ILB\sin \theta \,\hat{n}\) where, \(\vec{F}\) is the magnetic force vector (N) I is the current magnitude (A) \(\vec{L}\) is the length vector (m) L is the length of the wire (m) \(\vec{B}\) is the magnetic field vector (T) B is the magnetic field magnitude (T) θ is the angle between the length and magnetic field vectors (radians) \(\hat{n}\) is the cross product direction vector (unitless) Substituting the values, we get \(F = (6.00\, A)(0.100\, m)(2.20\, T)\sin(3\pi /4\,radians)\) \(F = (6.00\, A)(0.100\, m)(2.20\, T)(1/\sqrt{2})\) \(F = (6.00\, A)(0.100\, m)(2.20\, \frac{kg}{A\cdot s^{2}})(1/\sqrt{2})\) \(F = (6.00)(0.100\, m)(2.20\, \frac{kg}{s^{2}})(1/\sqrt{2})\) \(F = (6.00)(0.100)(2.20)(1/\sqrt{2})\,kg\cdot m/{s^{2}}\) \(F \simeq 0.933\,\,kg\cdot m/s^2\) The force on the 0.100 m section of wire has a magnitude of 0.933 N. We use the "right-hand rule" to find the direction of the force vector. The direction of the current is to the right, so point the right index finger in that direction. The magnetic field points upward-left, so curl your fingers up. Your thumb would be pointing away from the page. This means that the direction of the force vector is out of the page.
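The same numbers can be checked with an explicit cross product. The sketch below (plain Python; the coordinates are my own choice, with the current along +x and the page in the x–y plane) reproduces both the magnitude and the out-of-the-page direction.

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

I = 6.00                       # current (A)
L = (0.100, 0.0, 0.0)          # length vector along the current, +x = "right" (m)
theta = 3 * math.pi / 4        # angle between current and field
B = (2.20 * math.cos(theta),   # field points upward-left in the page plane (T)
     2.20 * math.sin(theta),
     0.0)

# F = I (L x B)
F = tuple(I * c for c in cross(L, B))

print(round(math.hypot(*F), 3))   # 0.933 -- magnitude in newtons
print(F[2] > 0)                   # True  -- +z component, i.e. out of the page
```

Only the z-component of F is nonzero, which is exactly the right-hand-rule conclusion in the solution above.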
Wikipedia only lists two problems under "unsolved problems in computer science": What are other major problems that should be added to this list? Rules: Only one problem per answer Provide a brief description and any relevant links Can multiplication of $n$ by $n$ matrices be done in $O(n^2)$ operations? The exponent of the best known upper bound even has a special symbol, $\omega$. Currently $\omega$ is approximately 2.376, by the Coppersmith-Winograd algorithm. A nice overview of the state of the art is Sara Robinson, Toward an Optimal Algorithm for Matrix Multiplication, SIAM News, 38(9), 2005. Update: Andrew Stothers (in his 2010 thesis) showed that $\omega < 2.3737$, which was improved by Virginia Vassilevska Williams (in a July 2014 preprint) to $\omega < 2.372873$. These bounds were both obtained by a careful analysis of the basic Coppersmith-Winograd technique. Further Update (Jan 30, 2014): François Le Gall has proved that $\omega < 2.3728639$ in a paper published in ISSAC 2014 (arXiv preprint). Is Graph Isomorphism in P? The complexity of Graph Isomorphism (GI) has been an open question for several decades. Stephen Cook mentioned it in his 1971 paper on NP-completeness of SAT. Determining whether two graphs are isomorphic can usually be done quickly, for instance by software such as nauty and saucy. On the other hand, Miyazaki constructed classes of instances for which nauty provably requires exponential time. Read and Corneil reviewed the many attempts to tackle the complexity of GI up to that point: The Graph Isomorphism Disease, Journal of Graph Theory 1, 339–363, 1977.
GI is not known to be in co-NP, but there is a simple randomized protocol for Graph Non-Isomorphism (GNI). So GI (= co-GNI) is believed to be "close to" NP ${}\cap{}$ co-NP. On the other hand, if GI is NP-complete, then the Polynomial Hierarchy collapses. So GI is unlikely to be NP-complete. (Boppana, Håstad, Zachos, Does co-NP Have Short Interactive Proofs?, IPL 25, 127–132, 1987) Shiva Kintali has a nice discussion of the complexity of GI at his blog. László Babai proved that Graph Isomorphism can be solved in quasipolynomial time. Is Factoring in $\mathsf{P}$? Is there a pivoting rule for the simplex algorithm that yields worst-case polynomial running time? More generally, is there any strongly polynomial algorithm for linear programming? The exponential-time hypothesis (ETH) asserts that solving SAT requires exponential time, i.e. $2^{\Omega(n)}$ time. ETH implies many things, for instance that SAT is not in P, so ETH implies P ≠ NP. See Impagliazzo, Paturi, Zane, Which Problems Have Strongly Exponential Complexity?, JCSS 63, 512–530, 2001. ETH is widely believed, but likely to be difficult to prove, as it implies many other complexity class separations. Immerman and Vardi show that fixed-point logic captures PTIME on the class of ordered structures. One of the biggest open problems in descriptive complexity theory is whether the dependency on the order can be removed: Is there a logic that captures PTIME? Put simply, a logic capturing PTIME is a programming language for graph problems that works directly on the graph structure and does not have access to the encoding of the vertices and edges, such that the following hold: If there is no logic that captures PTIME, then $P \neq NP$ since NP is captured by existential second-order logic. A logic capturing PTIME would provide a possible attack on P vs NP. Is the unique games conjecture true?
And: Given that there are sub-exponential time approximation algorithms for Unique Games, where does the problem ultimately rest in terms of the complexity landscape? The permanent versus determinant question is interesting because of two facts. First, the permanent of a 0-1 matrix counts the number of perfect matchings in a bipartite graph. Therefore computing the permanent of such a matrix is #P-Complete. At the same time, the definition of the permanent is very close to that of the determinant, ultimately differing only by a simple sign change. Determinant calculations are well known to be in P. Studying the difference between the permanent and the determinant, and how many determinant calculations are required to compute the permanent, speaks to P versus #P. Can we compute the FFT in much less than $O(n \log n)$ time? In the same (very) general vein, there are many questions of improving the run-times of many classical problems or algorithms: e.g., can all-pairs-shortest-paths (APSP) be solved in $O(n^{3-\epsilon})$ time? Edit: APSP runs in time $O\!\left(\frac{n^3}{2^{\Omega(\log n)^{1/2}}}\right)$ "where additions and comparisons of reals are unit cost (but all other operations have typical logarithmic cost)": http://arxiv.org/pdf/1312.6680v2.pdf Or more generally: Is any online dynamic binary search tree O(1)-competitive? A linear time deterministic algorithm for the minimum spanning tree problem. NP versus co-NP The NP versus co-NP question is interesting because NP ≠ co-NP implies P ≠ NP (as P is closed under complement). It also relates to "duality": the separation between finding/verifying examples and finding/verifying counterexamples. In fact, proving that a question is in both NP and co-NP is our first good evidence that a problem that seems to be outside of P is also likely not NP-Complete. Are there problems that cannot be solved efficiently by parallel computers? Problems that are P-complete are not known to be parallelizable.
P-complete problems include Horn-SAT and Linear Programming. But proving that this is the case would require separating some notion of parallelizable problems (such as NC or LOGCFL) from P. Computer processor designs are increasing the number of processing units, in the hope that this will yield improved performance. If fundamental algorithms such as Linear Programming are inherently not parallelizable, then there are significant consequences. Do all propositional tautologies have polynomial-size Frege proofs? Arguably the major open problem of proof complexity: demonstrate super-polynomial size lower bounds on propositional proofs (also called Frege proofs). Informally, a Frege proof system is just a standard propositional proof system for proving propositional tautologies (the kind one learns in a basic logic course), having axioms and deduction rules, where proof-lines are written as formulas. The size of a Frege proof is the number of symbols it takes to write down the proof. The problem then asks whether there is a family $(F_n)_{n=1}^\infty$ of propositional tautological formulas for which there is no polynomial $ p $ such that the minimal Frege proof size of $ F_n $ is at most $ p(|F_n|)$, for all $ n=1,2,\ldots$ (where $ |F_n| $ denotes the size of the formula $ F_n $). Formal definition of a Frege proof system Definition (Frege rule) A Frege rule is a sequence of propositional formulas $ A_0(\overline x),\ldots,A_k(\overline x) $, for $ k \ge 0 $, written as $ \frac{A_1(\overline x), \ldots,A_k(\overline x)}{A_0(\overline x)}$. In case $ k = 0 $, the Frege rule is called an axiom scheme. A formula $ F_0 $ is said to be derived by the rule from $ F_1,\ldots,F_k $ if $ F_0,\ldots,F_k $ are all substitution instances of $ A_0,\ldots,A_k $, for some assignment to the $ \overline x $ variables (that is, there are formulas $B_1,\ldots,B_n $ such that $F_i = A_i(B_1/x_1,\ldots,B_n/x_n) $ for all $ i=0,\ldots,k $).
The Frege rule is said to be sound if whenever an assignment satisfies the formulas in the upper side $A_1,\ldots,A_k $, then it also satisfies the formula in the lower side $ A_0 $. Definition (Frege proof) Given a set of Frege rules, a Frege proof is a sequence of formulas such that every proof-line is either an axiom or was derived by one of the given Frege rules from previous proof-lines. If the sequence terminates with the formula $ A $, then the proof is said to be a proof of $ A $. The size of a Frege proof is the total size of all the formulas in the proof. A proof system is said to be implicationally complete if for every set of formulas $ T $, if $ T $ semantically implies $ F $, then there is a proof of $ F $ using (possibly) axioms from $ T $. A proof system is said to be sound if it admits proofs of only tautologies (when not using auxiliary axioms, like in the $ T $ above). Definition (Frege proof system) Given a propositional language and a finite set $ P $ of sound Frege rules, we say that $ P $ is a Frege proof system if $ P $ is implicationally complete. Note that a Frege proof is always sound since the Frege rules are assumed to be sound. We do not need to work with a specific Frege proof system, since a basic result in proof complexity states that every two Frege proof systems, even over different languages, are polynomially equivalent [Reckhow, PhD thesis, University of Toronto, 1976]. Establishing lower bounds on Frege proofs could be viewed as a step towards proving $NP \neq coNP$, since if this is true then no propositional proof system (including Frege) can have polynomial size proofs for all tautologies. Can we compute the edit distance between two strings of length $n$ in sub-quadratic time, i.e., in time $O(n^{2-\epsilon})$ for some $\epsilon>0$? Are there truly subquadratic-time algorithms (meaning $O(n^{2-\delta})$ time for some constant $\delta>0$) for 3SUM-hard problems?
In 2014, Grønlund and Pettie described a deterministic algorithm for 3SUM itself that runs in time $O(n^2/(\log n/\log \log n)^{2/3})$. Although this is a major result, the improvement over $O(n^2)$ is only (sub)logarithmic. Moreover, no similar subquadratic algorithms are known for most other 3SUM-hard problems. BQP = P? Also: NP contained in BQP? I know this violates the rules by having two questions in the answer, but when taken with the P vs NP question, they are not necessarily independent questions. (Informally, if you have all problems in EXP on a table, and you pick one up uniformly at random, what is the probability that the problem you chose is also in NP? This question has been formalized by the notion of resource-bounded measure. It is known that P has measure zero within EXP, i.e., the problem you picked up from the table is almost surely not in P.) Approximating Metric TSP to within a factor smaller than 220/219 is NP-hard (Papadimitriou and Vempala, 2006 [PS]). To my knowledge this is the best known lower bound. The upper bound on approximability was recently lowered to $13/9$ (Mucha 2011, "13/9-approximation for Graphic TSP" [PDF]) Shannon proved in 1949 that if you pick a Boolean function at random, it has exponential circuit complexity with probability almost one. The best lower bound we have so far for an explicit Boolean function $f:\{0,1\}^n \to \{0,1\}$ is $5n - o(n)$, by K. Iwama, O. Lachish, H. Morizumi, and R. Raz. What is the query complexity of testing triangle-freeness in dense graphs (i.e., distinguishing triangle-free graphs from those $\epsilon$-far from being triangle-free)? The known upper bound is a tower of exponentials in $1/\epsilon$, while the known lower bound is only mildly superpolynomial in $1/\epsilon$. This is a pretty basic question in extremal graph theory/additive combinatorics that has been open for nearly 30 years. Separate NEXP from BPP. People tend to believe BPP = P, but no one can separate NEXP from BPP.
I know the OP asked for only one problem per post, but the RTA (Rewriting Techniques and their Applications) 1 and TLCA (Typed Lambda Calculi and their Applications) conferences both maintain lists of open problems in their fields 2. These lists are quite useful, as they also include pointers to previous work done on attempting to solve these problems. Derandomization of the Polynomial Identity Testing problem The problem is the following: Given an arithmetic circuit computing a polynomial $P$, is $P$ identically zero? This problem can be solved in randomized polynomial time but is not known to be solvable in deterministic polynomial time. Related is Shub and Smale's $\tau$ conjecture. Given a polynomial $P$, we define its $\tau$-complexity $\tau(P)$ as the size of the smallest arithmetic circuit computing $P$ using the sole constant $1$. For a univariate polynomial $P\in\mathbb Z[x]$, let $z(P)$ be its number of real roots. Prove that there exists a universal constant $c$ such that for every $P\in\mathbb Z[x]$, $z(P)\le (1+\tau(P))^c$. Is there a Quantum PCP theorem? I particularly like problem #5: Are there terms untypable in $F_ω$ but typable with help of positive recursive types? Is the discrete logarithm problem in P? Let $G$ be a cyclic group of order $q$ and $g,h \in G$ such that $g$ is a generator of $G$. The problem of finding $n \in \mathbb{N}$ such that $g^n = h$ is known as the discrete logarithm problem (DLP). Is there a (classical) algorithm for solving the DLP in worst-case polynomial-time in the number of bits of $q$? There are variations of DLP which are believed to be easier, but are still unsolved. The computational Diffie-Hellman problem (CDH) asks for finding $g^{a b}$ given $g, g^a$ and $g^b$. The decisional Diffie-Hellman problem (DDH) asks for deciding, given $g, g^a, g^b, h \in G$, if $g^{a b} = h$. Clearly DLP is hard if CDH is hard, and CDH is hard if DDH is hard, but no converse reductions are known, except for some groups. 
The assumption that DDH is hard is key to the security of some cryptosystems, such as ElGamal and Cramer-Shoup. Parity games are two-player infinite-duration graph games, whose natural decision problem is in NP and co-NP, and whose natural search problem is in PPAD and PLS. Can parity games be solved in polynomial time? (More generally, a long-standing major open question in mathematical programming is whether P-matrix Linear Complementarity Problems can be solved in polynomial time.) The area of parameterized complexity has its own load of open problems. Consider decision problems that take an input of size $n$ together with a parameter $k$ (say, the size of the solution sought). Many, MANY, combinatorial problems exist in this form. Parameterized complexity considers an algorithm to be "efficient" if its running time is upper bounded by $f(k)n^c$ where $f$ is an arbitrary function and $c$ is a constant independent of $k$. In comparison notice that all such problems can be easily solved in $n^{O(k)}$. This framework models the cases in which we are looking for a small combinatorial structure and we can afford exponential run-time with respect to the size of the solution/witness. A problem with such an algorithm (e.g. vertex cover) is called Fixed Parameter Tractable (FPT). Parameterized complexity is a mature theory and has both strong theoretical foundations and appeal for practical applications. Decision problems interesting for such theory form a very well structured hierarchy of classes with natural complete problems: $$ FPT \subseteq W[1] \subseteq W[2] \subseteq \ldots \subseteq W[i] \subseteq W[i+1] \subseteq \ldots W[P] $$ Of course it is open whether any of these inclusions is strict. Notice that if $FPT=W[1]$ then SAT has a subexponential algorithm (this is nontrivial). This last statement connects parameterized complexity with the $ETH$ mentioned above. Also notice that investigating such collapses is not an empty exercise: proving that $W[1]=FPT$ is equivalent to proving that there is a fixed parameter tractable algorithm for finding $k$-cliques.
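As a concrete illustration of a fixed-parameter tractable algorithm, the classic bounded search tree for $k$-vertex-cover runs in $O(2^k \cdot m)$ time, exponential only in the parameter $k$. A minimal sketch (my own example, not from any of the cited papers):

```python
def has_vertex_cover(edges, k):
    """Return True iff the graph has a vertex cover of size <= k.

    Branch on an arbitrary uncovered edge (u, v): any cover must
    contain u or v, so try both choices, decrementing the budget k.
    The search tree has depth at most k, hence O(2^k * m) time.
    """
    if not edges:
        return True          # no edges left: the empty set covers everything
    if k == 0:
        return False         # edges remain but no budget left
    u, v = edges[0]
    # Branch 1: put u in the cover, remove all edges incident to u
    rest_u = [e for e in edges if u not in e]
    # Branch 2: put v in the cover, remove all edges incident to v
    rest_v = [e for e in edges if v not in e]
    return has_vertex_cover(rest_u, k - 1) or has_vertex_cover(rest_v, k - 1)

triangle = [(0, 1), (1, 2), (0, 2)]
print(has_vertex_cover(triangle, 1))  # False: one vertex covers only two edges
print(has_vertex_cover(triangle, 2))  # True
```

The running time is $f(k)\,n^c$ with $f(k) = 2^k$, exactly the shape of bound that defines FPT; no such algorithm is known (or believed to exist) for $k$-clique.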
July 6th, 2015, 07:28 AM # 3 Newbie Joined: Jun 2015 From: Sweden, Kalmar Posts: 7 Thanks: 1 I think the correct answer is the coefficient in front of x^17 in 2*((x^1+x^2+...+x^9)^0+(x^1+x^2+...+x^9)^1+(x^1+x^2+...+x^9)^2+...) = 2/(1-x-x^2-...-x^9) which is 129920 July 6th, 2015, 03:17 PM # 5 Math Team Joined: Dec 2013 From: Colombia Posts: 7,685 Thanks: 2666 Math Focus: Mainly analysis and algebra I agree: 64960 First, expand the set of digits to include "10", "11", ..., "17". Now count how many numbers of length $n$ in this expanded digit set have a digit sum of 17. To do this, lay out 17 stones in a line and add $(n-1)$ dividers between them so that no two dividers are adjacent (as that would represent a "zero" digit). We read off the digits by counting the number of stones between dividers. There are $16 \choose n-1$ ways to create numbers like this, and it works for $n \in \{1,2,\ldots,17\}$. Now we count how many of those numbers contain "digits" greater than 9. Note that each can contain at most one such "digit", otherwise the digit sum would be at least 20, not 17 as stipulated. To count these, we create numbers with a digit sum of 17-9=8, and then add 9 to each digit in turn. The same process for generating the numbers with a digit sum of 8 works, so our total count is $n{7 \choose n-1}$ numbers. This works for $n \in \{1,2,\ldots,8\}$, since any number containing a "digit" greater than 9 and having a digit sum of 17 can have at most 8 digits. Thus our total, $T$, is given by$$T = \sum_{n = 1}^{17} {16 \choose n-1} - \sum_{n = 1}^{8} n{7 \choose n-1}$$ The first term is simply the sum of the entries in the 16th row of Pascal's triangle, and is thus equal to $2^{16}$. The second term is worked out by hand - I got 576, giving the total I gave above. Last edited by v8archie; July 6th, 2015 at 03:24 PM.
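The count above can be verified mechanically. Numbers whose digits are all non-zero and sum to 17 correspond exactly to compositions of 17 into parts 1–9, which a short dynamic program counts; it matches the stones-and-dividers total $2^{16} - 576 = 64960$:

```python
def count_compositions(total, max_part=9):
    """Number of ordered sequences of parts in 1..max_part summing to total.

    Each such composition is the digit string of exactly one positive
    integer with all digits non-zero, so f(17) counts the numbers asked for.
    """
    f = [0] * (total + 1)
    f[0] = 1  # the empty composition
    for s in range(1, total + 1):
        f[s] = sum(f[s - d] for d in range(1, min(max_part, s) + 1))
    return f[total]

count = count_compositions(17)
print(count)                  # 64960
print(count == 2**16 - 576)   # True
```
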
July 7th, 2015, 05:05 AM # 6 Math Team Joined: Dec 2013 From: Colombia Posts: 7,685 Thanks: 2666 Math Focus: Mainly analysis and algebra Further to the above, we can write $$\begin{aligned} 2\sum_{n = 1}^8 n{7 \choose n-1} &= 2\sum_{n = 0}^7 (n+1){7 \choose n} = \sum_{n = 0}^7 (n+1){7 \choose n} + \sum_{n = 0}^7 (n+1){7 \choose n} \\ &= \sum_{n = 0}^7 (n+1){7 \choose n} + \sum_{n = 0}^7 (n+1){7 \choose 7-n} \\ &= \sum_{n = 0}^7 (n+1){7 \choose n} + \sum_{n = 0}^7 (7-n+1){7 \choose n} \\ &= 9\sum_{n = 0}^7 {7 \choose n} \\ &= 9 \cdot 2^7 \\ \sum_{n = 1}^8 n{7 \choose n-1} &= 9 \cdot 2^6 \end{aligned}$$ It is easy to generalise this result for digit sums small enough that the second term doesn't double-count numbers. August 27th, 2015, 09:39 PM # 7 Newbie Joined: Jul 2009 From: England Posts: 25 Thanks: 0 How many positive integers have non-zero digits that add up to 17 in base 10? There are many ways to solve the above problem. In the paper below I have used two methods. 1, By using the Fibonacci numbers in base 3 we can see as we go up through the bases to base 10 how these numbers relate to this problem. 2, I have written a program to count the numbers whose digits add up to 17. https://www.scribd.com/doc/274558500...t-Add-Up-to-17 August 28th, 2015, 04:57 AM # 8 Math Team Joined: Dec 2013 From: Colombia Posts: 7,685 Thanks: 2666 Math Focus: Mainly analysis and algebra That document is over a thousand pages long! How did your program efficiently filter the decimals containing zeros? August 29th, 2015, 04:55 AM # 9 Math Team Joined: Apr 2010 Posts: 2,780 Thanks: 361 In his loops, digits are from 1 to some upperbound, depending on the amount of digits of the number being checked. One could also look at the partitions of 17; pick those whose highest term is less than ten. Then for all those partitions, get all permutations of the terms (=digits) and sort if you like.
I encountered some problems when implementing the cloth simulation algorithm from Baraff & Witkin 98's Large Steps in Cloth Simulation. Baraff & Witkin 98 Consider the cloth as a particle system in 3D, which consists of N particles. Equation of motion In each time step, the implicit integration of the particle system is: $$ \begin{pmatrix} \Delta \mathbf x \\ \Delta \mathbf v \\ \end{pmatrix} = h\begin{pmatrix} \mathbf v_0 + \Delta \mathbf v \\ \mathbf M^{-1} \mathbf f(\mathbf x_0 + \Delta \mathbf x, \mathbf v_0 + \Delta \mathbf v) \\ \end{pmatrix} $$ where $h$ is the time step (scalar) The linear system Rewrite the aforementioned equation of motion into $\mathbf A \Delta \mathbf v = \mathbf b$ form (take the 1st order Taylor expansion of $\mathbf f(\mathbf x_0 + \Delta \mathbf x, \mathbf v_0 + \Delta \mathbf v)$, and introduce the constraint matrix $\mathbf S$, which changes the mass matrix from $\mathbf M^{-1}$ to $\mathbf M^{-1}\mathbf S$); we have: $$ (\mathbf I - h\mathbf M^{-1} \mathbf S\frac{\partial \mathbf f}{\partial \mathbf v} - h^2\mathbf M^{-1} \mathbf S\frac{\partial \mathbf f}{\partial \mathbf x})\Delta \mathbf v = h\mathbf M^{-1} \mathbf S(\mathbf f_0 + h\frac{\partial \mathbf f}{\partial \mathbf x} \mathbf v_0) + \mathbf z $$ Where all of the matrices are treated as $N\times N $ block matrices, each block of size $3\times 3 $; vectors as $N$-block vectors, each block of size $3$ (with N the number of 3D forces/positions/velocities).
$\mathbf I$ is the identity matrix $ \mathbf M = diag(\mathbf M_1, \mathbf M_2, ..., \mathbf M_N) $, $\mathbf M_i = diag(m_i, m_i, m_i)$, mass of each particle $ \mathbf S = diag(\mathbf S_1, \mathbf S_2, ..., \mathbf S_N) $, $ \mathbf S_i = \begin{cases} \mathbf I, & \text{$ndof(i) = 3$} \\ \mathbf I - \mathbf p_i \mathbf p_i^T, & \text{$ndof(i) = 2$} \\ \mathbf I - \mathbf p_i \mathbf p_i^T - \mathbf q_i \mathbf q_i^T, & \text{$ndof(i) = 1$} \\ \mathbf 0, & \text{$ndof(i) = 0$} \end{cases}$, $\mathbf p$ and $\mathbf q$ are two orthogonal constraint directions, $\mathbf S$ is the constraint matrix (Baraff98 chapter 5.1) $\frac{\partial \mathbf f}{\partial \mathbf v}$ and $\frac{\partial \mathbf f}{\partial \mathbf x}$ are force derivatives, which are symmetric matrices. Solving the linear system The paper solves the aforementioned linear system by the modified preconditioned conjugate gradient method (Baraff98 chapter 5.2) for $\Delta \mathbf v$, then updates the velocity $\mathbf v$ and position $\mathbf x$ of each particle. My Questions Solving $\mathbf A\mathbf x=\mathbf b$ with the PCG method requires the matrix $\mathbf A$ to be symmetric and positive-definite, where according to the aforementioned system, $\mathbf A = (\mathbf I - h\mathbf M^{-1} \mathbf S\frac{\partial \mathbf f}{\partial \mathbf v} - h^2\mathbf M^{-1} \mathbf S\frac{\partial \mathbf f}{\partial \mathbf x})$. Symmetry In block view: treating $\mathbf A$ as an $N\times N$ block matrix, $\mathbf M^{-1}$ and $\mathbf S$ are both block diagonal matrices, which makes $\mathbf A$ a block matrix of symmetric blocks. While in normal view: with $\mathbf A$ as a $3N\times 3N$ matrix, $\mathbf S$ is symmetric, but the product of $\mathbf M^{-1} \mathbf S$ and $\frac{\partial \mathbf f}{\partial \mathbf v}$ (or $\frac{\partial \mathbf f}{\partial \mathbf x}$) makes $\mathbf A$ not symmetric. Question 1: should I treat $\mathbf A\mathbf x=\mathbf b$ as a block matrix rather than a normal matrix? If not, how do I make $\mathbf A$ symmetric?
I then modified the system to $\mathbf A^T\mathbf A\mathbf x=\mathbf A^T\mathbf b$, where $\mathbf A^T\mathbf A$ is symmetric. Question 2: How can the filter procedure (Baraff98 chapter 5.3) be made compatible with $\mathbf A^T\mathbf A\mathbf x=\mathbf A^T\mathbf b$? positive-definite The blocks $\mathbf S_i$ in $\mathbf S$ may become zero blocks when $ndof(i) = 0$, which makes $\mathbf A^T\mathbf A$ a positive-semidefinite matrix. To apply the PCG method, according to Baraff98 chapter 5.3: The CG method (technically, the preconditioned CG method) takes a symmetric positive semi-definite matrix $\mathbf A$, a symmetric positive definite preconditioning matrix $\mathbf P$ of the same dimension as $\mathbf A$, a vector $\mathbf b$ and iteratively solves $\mathbf A \Delta \mathbf v = \mathbf b$. Question 3: How do I find a symmetric and positive-definite preconditioning matrix $\mathbf P$, given that $\mathbf A^T\mathbf A$ is symmetric but only positive semidefinite? Some docs I also referred to:
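For reference, here is my own NumPy transcription of the modified PCG loop from Baraff98 §5.3 — treat it as a sketch, not the canonical implementation. The filter keeps constrained components out of the Krylov iteration, a diagonal preconditioner is used, and with no constraints it reduces to textbook PCG (which the sanity check below exercises). Note also that one common answer to Question 1 in the literature is to premultiply the system by $\mathbf M$, or to use the prefiltered matrix $\mathbf S \mathbf A \mathbf S + \mathbf I - \mathbf S$ (Ascher & Boxerman), rather than forming the normal equations $\mathbf A^T \mathbf A$, which squares the condition number.

```python
import numpy as np

def apply_filter(v, S):
    """Project out constrained directions: v_i <- S_i v_i."""
    return np.einsum('nij,nj->ni', S, v)

def modified_pcg(A, b, S, z, tol=1e-10, max_iter=1000):
    """Modified preconditioned conjugate gradient, after Baraff98 section 5.3.

    A : (3N, 3N) system matrix, b : (N, 3) right-hand side,
    S : (N, 3, 3) per-particle constraint blocks,
    z : (N, 3) prescribed velocity change for constrained dofs.
    """
    N = b.shape[0]
    matvec = lambda v: (A @ v.ravel()).reshape(N, 3)
    P = np.diag(A).reshape(N, 3)          # diagonal (Jacobi) preconditioner
    Pinv = 1.0 / P
    dv = z.copy()
    fb = apply_filter(b, S)
    delta0 = np.sum(fb * (P * fb))
    r = apply_filter(b - matvec(dv), S)
    c = apply_filter(Pinv * r, S)
    delta_new = np.sum(r * c)
    for _ in range(max_iter):
        if delta_new <= tol**2 * delta0:
            break
        q = apply_filter(matvec(c), S)
        alpha = delta_new / np.sum(c * q)
        dv += alpha * c
        r -= alpha * q
        s = Pinv * r
        delta_old = delta_new
        delta_new = np.sum(r * s)
        c = apply_filter(s + (delta_new / delta_old) * c, S)
    return dv

# Unconstrained sanity check: S = identity blocks -> plain PCG solving A dv = b
rng = np.random.default_rng(0)
N = 8
B = rng.standard_normal((3 * N, 3 * N))
A = B @ B.T + 3 * N * np.eye(3 * N)       # symmetric positive definite
b = rng.standard_normal((N, 3))
S = np.tile(np.eye(3), (N, 1, 1))
dv = modified_pcg(A, b, S, np.zeros((N, 3)))
print(np.allclose((A @ dv.ravel()).reshape(N, 3), b, atol=1e-6))  # True
```

With nontrivial $\mathbf S_i$ blocks the same loop leaves the constrained components of $\Delta\mathbf v$ fixed at the corresponding components of $\mathbf z$, which is exactly the role of the filter in the paper.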
At the moment there are deep seas and high mountains. But imagine that the land elevation of the Earth is equal everywhere. How deep would the ocean be in that case? An approximation can be obtained quite simply by dividing the volume of water in the oceans by the surface area of an ellipsoid with a smooth surface representing the idealized Earth in your question. The volume of Earth's oceans, seas and bays is $1.332 \times 10^9 \text{ km}^3$. The surface area of the oblate ($c < a$) spheroid is: $$S = 2 \pi a^2 \left( 1 + \frac{1 - e^2}{e}\tanh^{-1} e \right)$$ where $e^2 = 1 - \frac{c^2}{a^2}$. This gives us $\approx 0.51 \times 10^9 \text{ km}^2$. Dividing the volume of the oceans by this result gives us $\approx 2.6 \text{ km}$. Note: Earth is not a sphere. An ellipsoid is a better representation of our Earth. Nevertheless, the answer to your question would have been approximately the same had I used a sphere instead, as suggested in the title of your question.
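The arithmetic is easy to reproduce; the sketch below (plain Python, using WGS84-like radii, which are an assumption on my part since the answer does not state which values it used) recomputes the surface area and the mean depth.

```python
import math

a = 6378.137        # equatorial radius (km)
c = 6356.752        # polar radius (km)
V_ocean = 1.332e9   # volume of oceans, seas and bays (km^3)

# Eccentricity and surface area of the oblate spheroid
e = math.sqrt(1 - c**2 / a**2)
S = 2 * math.pi * a**2 * (1 + (1 - e**2) / e * math.atanh(e))

depth = V_ocean / S
print(round(S / 1e9, 2))   # 0.51 -- x 10^9 km^2, matching the answer
print(round(depth, 1))     # 2.6  -- km of water over a smoothed Earth
```
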
Prosthaphaeresis Formulas/Sine plus Sine Theorem $\sin \alpha + \sin \beta = 2 \sin \left({\dfrac {\alpha + \beta} 2}\right) \cos \left({\dfrac {\alpha - \beta} 2}\right)$ Proof \(\displaystyle 2 \sin \left({\frac {\alpha + \beta} 2}\right) \cos \left({\frac {\alpha - \beta} 2}\right) = 2 \, \frac {\sin \left({\dfrac {\alpha + \beta} 2 + \dfrac {\alpha - \beta} 2}\right) + \sin \left({\dfrac {\alpha + \beta} 2 - \dfrac {\alpha - \beta} 2}\right)} 2\) (Simpson's Formula for Sine by Cosine) \(\displaystyle = \sin \frac {2 \alpha} 2 + \sin \frac {2 \beta} 2 = \sin \alpha + \sin \beta\) $\blacksquare$ Also reported as This result is also sometimes reported as: $\dfrac {\sin \alpha + \sin \beta} 2 = \sin \left({\dfrac {\alpha + \beta} 2}\right) \cos \left({\dfrac {\alpha - \beta} 2}\right)$ The word prosthaphaeresis or prosthapheiresis is a neologism coined some time in the $16$th century from the two Greek words: With the advent of machines to aid the process of arithmetic, this word now has only historical significance.
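The identity is easy to spot-check numerically; a quick sketch in Python:

```python
import math
import random

# Sample random angle pairs and compare both sides of
# sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2)
random.seed(1)
for _ in range(1000):
    a = random.uniform(-10, 10)
    b = random.uniform(-10, 10)
    lhs = math.sin(a) + math.sin(b)
    rhs = 2 * math.sin((a + b) / 2) * math.cos((a - b) / 2)
    assert math.isclose(lhs, rhs, abs_tol=1e-12), (a, b)
print("identity holds on all sampled angle pairs")
```
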
In the typical undergrad curriculum for mathematics, you will find at least these two concepts that often lack motivation and proper intuition: determinants and compactness. Here is my take on the latter. Compactness The usual definition of a compact space is as follows: A space is compact if every open cover contains a finite subcover. While this may often be a useful definition, I would argue that it is in no way an intuitive definition when you first learn about it. Soon after that, you will learn about sequentially compact spaces: A space is sequentially compact if every infinite sequence has a convergent subsequence. This is much more relatable: By the time you get to point-set topology, you have seen plenty of arguments involving sequences. You are probably much more familiar with convergence of sequences than with the elusive notion of an open set. Now the bad news is that sequential compactness is not generally equivalent to compactness (the two notions do agree for metric spaces, though). This is further proof that there is no god, but also shows us that what we are dealing with is essentially an issue of size: Compactness is not equivalent to sequential compactness because some spaces are just too large to be characterized by their sequences alone. What if there was a nice equivalent notion for general spaces that is just as intuitive? Well, you’re in luck: There is. 1 Ultrafilters Some topology classes include a section about filters and ultrafilters, but I have yet to find one that puts in some effort to actually explain some intuition for them (references welcome! 2). In the following, let $(X, \tau)$ be a topological space. A filter on $X$ is a non-empty subset $F \subseteq \tau$ with the following properties for all open sets $U, V \in \tau$: If $U \subseteq V$ and $U \in F$, then $V \in F$. If $U, V \in F$, then $U \cap V \in F$. $\emptyset \notin F$.
Additionally, a filter $F$ is an ultrafilter if it is maximal w.r.t. inclusion among filters. Establishing the existence of interesting ultrafilters is not difficult, but requires stronger assumptions like the axiom of choice. 3 This statement is usually known as the ultrafilter lemma and is mostly used to say that a filter can be extended to an ultrafilter (not necessarily a unique one). There is also the notion of an ultrafilter on a set, which is the same as an ultrafilter on that set equipped with the discrete topology (i.e. all subsets are open). The typical definition for ultrafilters that you will likely find elsewhere is the one for sets. You can use that one, too, for the most part, but you will notice that all of its properties are determined by the open sets it contains, so we might just as well limit ourselves to them. What are ultrafilters? A filter is a way of zooming in on a region of the space $X$, starting by looking at the whole space. Let me explain: If $U \in F$, then the region that we are zooming in on is contained in $U$. We may say that we pass through $U$ while zooming in. The properties of a filter merely ensure that we are zooming somewhere in a consistent fashion: If we pass through $U$ (i.e. $U \in F$), then we also pass through any superset of $U$. If we pass through $U$ and through $V$, then we also pass through $U \cap V$. We cannot pass through $\emptyset$ because it is empty! Here are some examples: The set $F = \lbrace X \rbrace$ is a filter. It describes the process of zooming in until we see the whole space, i.e. not zooming in at all. For any point $x \in X$, the filter $F_{x} = \lbrace U \subseteq X \mid x \in U \rbrace$ is called the principal filter at $x$. It describes the process of zooming in on the point $x$. For any set $V \subseteq X$, the filter $F_{V} = \lbrace U \subseteq X \mid V \subseteq U \rbrace$ describes the process of zooming in on the set $V$ as a whole.
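The three filter axioms are easy to check mechanically. Here is a small Python sketch (the helper names `powerset` and `is_filter` and the three-element example set are my own choices, for illustration) that verifies the principal filter really is a filter, working over the discrete topology so that every subset counts as open:

```python
from itertools import combinations

def powerset(xs):
    """All subsets of xs, as frozensets."""
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def is_filter(F, X):
    """Check the three filter axioms over the discrete topology on X."""
    subsets = powerset(X)
    if not F or frozenset() in F:          # non-empty, and emptyset not in F
        return False
    for U in F:                            # upward closure
        for V in subsets:
            if U <= V and V not in F:
                return False
    for U in F:                            # closed under intersection
        for V in F:
            if U & V not in F:
                return False
    return True

X = {0, 1, 2}
F_x = {U for U in powerset(X) if 1 in U}   # principal filter at the point 1
print(is_filter(F_x, X))                   # True
```

Dropping any axiom breaks it: the family containing only $\lbrace 1 \rbrace$ fails upward closure, for instance.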
Note how $V$ isn’t required to be open - the construction still makes sense. Now, ultrafilters are filters that zoom in as much as possible. The following three subsections present one of my favorite ways to look at ultrafilters in the context of ultrafilters on sets. Who Am I? Are you familiar with the game Who Am I? It is a two-player game where player A (she) secretly chooses a person. Now player B (he) repeatedly asks yes/no-questions about the person: “Are you still alive?” - “Are you a mathematician?” etc. and player A keeps answering truthfully until player B figures out who player A thought of. Now imagine that you have an infinite set $X$ of people for player A to choose from. Call the person chosen by her $p$. Instead of asking whether the person is a mathematician, player B now asks whether the person is contained in the set $M$ of all mathematicians. In fact, he can ask for each set $U \subset X$ whether or not $p \in U$. 4 The set of all sets containing $p$ is an ultrafilter on the set $X$, namely $F_{p}$ - the principal ultrafilter at $p$. By asking questions, player B is zooming in on $p$. Conversely, each ultrafilter $F$ gives rise to a way for player A to answer player B’s questions: If $U \in F$, then player A should answer “yes” when asked whether her person is in $U$. The properties of ultrafilters make sure that we actually zoom in on something by answering the questions consistently: If we said that $p \in U$ and are asked about $V \supseteq U$, we must also answer that $p \in V$. If we said that $p \in U$ and $p \in V$, then we must surely also say that $p \in U \cap V$. Whatever we thought of surely is not contained in the empty set. The ultrafilter property of maximality merely ensures that player A has an answer for every set, i.e. question. In fact, in the setting of ultrafilters on a set, a filter $F$ is an ultrafilter if and only if for every $U \subseteq X$ we have either $U \in F$ or $(X \setminus U) \in F$.
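The dichotomy at the end makes ultrafilters on a finite set easy to enumerate by brute force. A sketch (helper names are my own) confirming that on a three-element set every ultrafilter is principal, which matches the claim below that only principal ultrafilters fail to be free:

```python
from itertools import chain, combinations

def powerset(xs):
    """All subsets of xs, as frozensets."""
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def is_ultrafilter(F, X):
    """Non-empty, no emptyset, closed under intersection, and the
    dichotomy: exactly one of U, X\\U in F. Upward closure follows
    from the dichotomy plus intersection closure."""
    Xf = frozenset(X)
    if not F or frozenset() in F:
        return False
    for U in F:
        for V in F:
            if U & V not in F:
                return False
    return all((U in F) != ((Xf - U) in F) for U in powerset(X))

X = {0, 1, 2}
PS = powerset(X)
families = chain.from_iterable(combinations(PS, r) for r in range(len(PS) + 1))
ultras = [set(F) for F in families if is_ultrafilter(set(F), X)]
principals = [{U for U in PS if x in U} for x in X]
print(len(ultras))                           # 3
print(all(F in principals for F in ultras))  # True
```

Of course this only works because the set is finite; the free ultrafilters discussed next live on infinite sets and cannot be written down explicitly.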
Cheating the game Luckily for player A, player B does not have all day for playing games, so she can assume that he will not be able to ask her about every subset of $X$. But she does not know when player B has his next appointment, so she should be prepared to answer any finite number of questions. This opens up an interesting possibility: Player A does not have to choose a person $p \in X$ at all! It is absolutely sufficient to choose a consistent way of answering any finite number of questions. Consistency is required to prevent player B from catching her cheating! Maybe it is a good idea to switch back to ultrafilters for a second: Ultrafilters have the so-called finite intersection property (FIP), meaning that any finite number of sets from an ultrafilter have non-empty intersection. As long as the intersection of all answers given so far is non-empty, player A’s person could be in there; if the intersection were empty, she would have answered inconsistently (that is, her answers would contradict each other). The fun part is that for most ultrafilters $F$ we have $\bigcap_{U \in F} U = \emptyset$. Such ultrafilters are known as free ultrafilters. In fact, for ultrafilters on a set, only principal ultrafilters are not free. So while player A can answer any finite number of questions truthfully using a free ultrafilter, she did not choose any person at all. Ultrafilters as generalized points Given all that, it is very reasonable to look at ultrafilters as complete and consistent specifications of points of $X$: Each ultrafilter describes a point of $X$ by listing the set-theoretic properties that we want our hypothetical point to have. 5 Some specifications just happen to not specify actual points. We can even go a step further and refer to the ultrafilters as generalized points of $X$ (not in the categorical sense).
For an ultrafilter $F$ and $U \subseteq X$, you should (morally) think that if $U \in F$, then $F \in U$ (I’m absolutely serious about this!), so the ultrafilter is specifying a point contained in $U$. Hence each actual point $x \in X$ is a generalized point by identifying it with the corresponding principal ultrafilter $F_{x}$. An ultrafilter $F$ is then free if for any finite set $U \subseteq X$ it is the case that $U \notin F$ - which means that $F$ as a generalized point is not in $U$. Coming back to the game of Who Am I, this makes perfect sense: If player B manages to find out that player A’s chosen person is one of finitely many, he can just ask about them one after another - meaning that player A can only answer consistently if her (allegedly) chosen person is in that set, meaning that the ultrafilter must be principal (and therefore not free). All this means that free ultrafilters do not describe actual points and are only contained in infinite sets. They zoom in on a region of space where there could be a point, but there is none. This is by the way a generalization of how Cauchy-sequences work: They also zoom in on a region of space (meaning that all points of the sequence fit into an area of ever decreasing size), but they do not necessarily zoom in on something - there are non-convergent Cauchy-sequences. Ultrafilters and Cauchy-sequences both share the property that they are going somewhere, but not necessarily getting somewhere! Convergence for Ultrafilters Now back to topology. If ultrafilters describe the process of zooming in on something, there should be a notion of convergence. Here it is: A filter $F$ converges to a point $x \in X$ if every neighborhood $U$ of $x$ is contained in $F$. This makes sense: In terms of generalized points, the ultrafilter is closing in on $x$ if it is contained in every neighborhood of $x$. A principal ultrafilter $F_{x}$ therefore converges to $x$.
Ultrafilters on $X$ converge to at most one point if and only if $X$ is Hausdorff. This is immediate from the fact that in a Hausdorff space any two points $x, y \in X$ have disjoint neighborhoods - and an ultrafilter can only contain one of them (or rather: be contained in one of them). Compactness via Ultrafilters What does all of this have to do with compactness? Well, here is a well-known characterization of compactness: A space is compact if and only if each ultrafilter on it converges to at least one point. This is not really surprising once you consider the dual of the usual definition and notice that it looks suspiciously like the finite intersection property. It would make much more sense to take this as the definition of compactness and regard the usual statement as a technical characterization (unless you are concerned about using somewhat stronger axioms). With this as a definition, I put forward the following: Compactness is to topological spaces as completeness is to metric spaces. Ultrafilters are the purely topological equivalent of Cauchy-sequences: They both zoom in on part of the space, and a space can only be called truly nice if they don’t zoom into emptiness. Once again: Compactness is topological completeness. To illustrate this: You know that $\mathbb{R} \; \simeq \; ]0,1[$. Completing the right hand side (in the metric sense) yields $[0,1]$, which is homeomorphic to a special compactification of $\mathbb{R}$: the two-point compactification. The free ultrafilters on $\mathbb{R}$ that contain no bounded set converge to one of the two added points, depending on whether they contain the set $\mathbb{R}^{+} = \lbrace x \in \mathbb{R} \mid x > 0 \rbrace$ or the set $\mathbb{R}^{-} = \lbrace x \in \mathbb{R} \mid x < 0 \rbrace$.
In $\mathbb{R}$, our left hand side, there are of course no non-convergent Cauchy-sequences left, so completing it as a metric space does not change it – but that is the beauty of ultrafilters: They characterize gaps in your space topologically. I like to think about compactness by imagining what would happen if I were to pour water into the space. Where would it leak? If it doesn’t leak at all, the space is compact. As a topological space, $\mathbb{R}$ leaks water on both ends. For this thought experiment, it doesn’t matter that the ends of $\mathbb{R}$ are infinitely far away, because topology does not care about that. Something like $\mathbb{Q} \cap [0, 1]$ is even worse: It leaks water everywhere and hence is not compact. Non-convergent ultrafilters describe these holes. You can of course also use nets instead of sequences; nets solve the sizing issue by allowing sequences indexed by sets larger than $\mathbb{N}$. I prefer to talk about ultrafilters instead because I find them even more intuitive than sequences. ↩ At first I thought that this is a good reason not to teach them in an introductory topology class, but then again people teach Tychonoff’s theorem, which has the same problem. Also, the ultrafilter lemma is strictly weaker than the axiom of choice. ↩ Instead of allowing questions about every set (or rather property), you can also restrict the questions to some Boolean subalgebra of $\mathcal{P}(X)$. This takes you into the wonderful realm of Stone duality. ↩ Ultrafilters only care about open sets. Therefore, the topology specifies which sets are relevant for our specification. There should probably be a reference here to pointfree topology etc. but I’m not too familiar with that, yet. ↩
There are two players 1 and 2, playing a two-period public good contribution game. The per-period benefits from the public good to the players are $b_1$ and $b_2$ respectively, which are common knowledge. For the provision of the public good only one player needs to contribute. Once a player contributes and the good is provided, both can derive the benefits. The public good lasts for only one period and contributions are required in each period to renew it. The per-period cost of contribution for player 1 is private information, which can take two values: $\underline c$ (low type, with probability $p$) and $\overline c$ (high type, with probability $1 − p$), for which the following condition holds: $\underline c < b_1 < \overline c$. However, the cost of contribution for player 2 takes the value $c_2$, which is common knowledge and is lower than its benefit, i.e. $c_2 < b_2$. The game is played as per the following sequence: First period: Only player 1 moves. He can choose to ‘Contribute’ ($C$) or ‘Not Contribute’ ($N$) towards the first-period provision of the public good. If player 1 chooses $N$ then both get 0; otherwise, if player 1 chooses $C$, then both players get their benefits but only player 1 pays the cost. Note that player 2 observes player 1’s action of either $C$ or $N$ but not its type (i.e. the private information regarding costs). Second period: The game is played sequentially, where player 2 moves first and chooses $C$ or $N$. Player 1 observes it and decides on $C$ or $N$. If at least one of them contributes then both players get their benefits $b_i$ and those who contribute pay their respective costs. If no one contributes, each gets 0. Players discount their payoffs by $\delta$, which is common to both. In answering the questions below, you need to write strategies and beliefs precisely. (a) Find an equilibrium where different types of player 1 choose different actions in the first period (separating equilibrium).
(b) Find an equilibrium where different types of player 1 choose the same action in the first period (pooling equilibrium). (c) Find a hybrid equilibrium where the low type of player 1 in the first period chooses $C$ with probability $\alpha$ and $N$ with probability $(1 − \alpha)$, and player 2 in the second period randomizes between $C$ with probability $\beta$ and $N$ with probability $(1 − \beta)$.
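Before hunting for the equilibria it can help to tabulate discounted payoffs along a pure play path of the game described above. The sketch below is only illustrative: the helper `payoffs` and the numeric values for $b_1, b_2, c_2, \delta$ and the two cost types are hypothetical choices satisfying the stated conditions $\underline c < b_1 < \overline c$ and $c_2 < b_2$, not part of the problem.

```python
def payoffs(c1, a1, a2_2, a1_2, b1=1.0, b2=1.0, c2=0.5, delta=0.9):
    """Discounted payoffs (u1, u2) along one pure play path.
    c1   : player 1's realized per-period cost
    a1   : player 1's period-1 action, 'C' or 'N'
    a2_2 : player 2's period-2 action (player 2 moves first)
    a1_2 : player 1's period-2 action, used only if player 2 chose 'N'
    """
    u1 = u2 = 0.0
    if a1 == 'C':                      # period 1: only player 1 can provide
        u1 += b1 - c1
        u2 += b2
    if a2_2 == 'C':                    # period 2: player 2 provides
        u1 += delta * b1
        u2 += delta * (b2 - c2)
    elif a1_2 == 'C':                  # player 1 provides after observing N
        u1 += delta * (b1 - c1)
        u2 += delta * b2
    return u1, u2

# hypothetical numbers: low type cost 0.3 < b1 = 1, high type cost 1.5 > b1
print(payoffs(0.3, 'C', 'N', 'C'))   # low type provides in both periods
print(payoffs(1.5, 'N', 'C', 'N'))   # high type free-rides on player 2
```

Comparing such paths against one-shot deviations, with second-period behavior driven by player 2's posterior belief, is the substance of parts (a)-(c).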
In Peskin & Schroeder section 9.2, they derive the two-point function in the path integral formalism: $$\langle \Omega | \mathcal{T} \left\{ \hat{\phi}(x_1)\hat{\phi}(x_2)\right\} | \Omega \rangle = \frac{\int \mathcal{D}\phi \ \phi(x_1) \phi(x_2) e^{ i\int d^4 x\ \mathcal{L} }}{\int \mathcal{D}\phi \ e^{ i\int d^4 x\ \mathcal{L} }}. $$ The trick to derive this is to insert the identity $$1 = \int \mathcal{D}\phi\ |\phi\rangle \langle \phi|$$ between the operators $\hat{\phi}(x_i)$. Then we can change the operator for regular functions using: $$ \hat{\phi}(x_i) |\phi_i\rangle = \phi(x_i) |\phi_i\rangle.$$ My first question is: what are the states that form the complete orthogonal basis $|\phi\rangle$? The authors never seem to specify it. It cannot be just any complete orthogonal basis since these states seem to be eigenstates of the field operator $$\hat{\phi}(x) = \int \frac{d^3p}{(2\pi)^3} \frac{1}{\sqrt{2E_p}}(\hat{a}_p e^{i p\cdot x} + \hat{a}_p^\dagger e^{-i p\cdot x}).$$ From what I understand, the states $|\phi\rangle$ represent all the possible classical field configurations (classical as in well-defined at all points in space at a given time, with no uncertainty) over which we integrate between two boundary states. But I don't see how these classical states are the eigenstates of $\hat{\phi}(x)$. Is there a simple expression for $|\phi\rangle$ in terms of e.g. creation/annihilation operators? Actually what bothers me is that eigenstates of the field operator are supposed to be coherent states, which form an overcomplete set. Which means that if the $|\phi\rangle$'s are coherent states we cannot write the identity as the combination above since the states are not orthogonal (see section 8.1.3 of this document). My second question is: is it possible that coherent states might be the eigenstates of another type of "field operator", not the one above?
(Solved: see edit below) Note that in the given link they don't seem to define the operator for which the coherent state is an eigenstate. (Related: 148200 and 109343. The answer in the first link doesn't really answer the question "what is $|\phi\rangle$?" and the second link mentions coherent states only, which as I mentioned are not orthogonal and therefore cannot be the states used in the derivation by Peskin & Schroeder.) EDIT: As @Mane.andrea suggested in the comments, I checked out section 4.1 of Condensed Matter Field Theory by Altland & Simons. It seems that they define the coherent state as the eigenstate of the annihilation operators $\hat{a}_i$ specifically, i.e. the positive-frequency part of the field above. So the answer to my 2nd question seems to be yes: coherent states are the eigenstates of a different "field operator".
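For the single-mode analogue, the standard computation showing that a coherent state $|\alpha\rangle = e^{-|\alpha|^2/2}\, e^{\alpha \hat a^\dagger} |0\rangle$ is an eigenstate of the annihilation operator (the fact that is generalized mode by mode in the field-theory setting) goes as follows:

```latex
\begin{aligned}
\hat a\,|\alpha\rangle
  &= e^{-|\alpha|^2/2}\,\hat a\, e^{\alpha \hat a^\dagger}|0\rangle
   = e^{-|\alpha|^2/2}\,\bigl[\hat a,\, e^{\alpha \hat a^\dagger}\bigr]|0\rangle
     && \text{since } \hat a|0\rangle = 0 \\
  &= e^{-|\alpha|^2/2}\,\alpha\, e^{\alpha \hat a^\dagger}|0\rangle
   = \alpha\,|\alpha\rangle ,
     && \text{using } \bigl[\hat a,\, e^{\alpha \hat a^\dagger}\bigr]
        = \alpha\, e^{\alpha \hat a^\dagger} .
\end{aligned}
```

The commutator identity follows by expanding the exponential and applying $[\hat a, (\hat a^\dagger)^n] = n (\hat a^\dagger)^{n-1}$ term by term. No such exponential-of-$\hat a^\dagger$ construction yields eigenstates of the full Hermitian operator $\hat\phi(x)$; those are instead Gaussian functionals in the field basis, which is why the two notions should not be conflated.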
A particle moves along the x-axis so that at time t its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? So I know that we need the velocity for that, and we can get that by taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$. How could I find the intervals?

Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. I believe I have numerically discovered that $$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$ but cannot ...

So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$. Anyways, part (a) is about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$. Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow. Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$. Well, we do know what the eigenvalues are... The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$.

Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker "a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
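For the velocity question above: $v(t) = 3t^2 - 12t + 9 = 3(t-1)(t-3)$ is an upward-opening parabola, so it is negative exactly between its roots, and the particle moves left on $(1, 3)$. A quick numerical check (pure Python, variable names my own):

```python
import math

# v(t) = 3t^2 - 12t + 9; the particle moves left where v(t) < 0
def v(t):
    return 3 * t**2 - 12 * t + 9

# quadratic formula for 3t^2 - 12t + 9 = 0
a, b, c = 3, -12, 9
disc = math.sqrt(b * b - 4 * a * c)          # sqrt(36) = 6
t1, t2 = (-b - disc) / (2 * a), (-b + disc) / (2 * a)
print(t1, t2)                                # 1.0 3.0

# an upward parabola is negative strictly between its roots: 1 < t < 3
print(v(0.5), v(2), v(4))                    # 3.75 -3 9
```

Sampling one point in each of the three intervals, as the last line does, is exactly the sign-chart method usually taught for this kind of problem.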
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: It should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD...

Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work. @TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer. Writing it now) Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the Euclidean norm) I thought that this would somehow result from the isomorphism

@AlessandroCodenotti Actually, such an $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to it is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, the weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$. Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal, I guess. Hmm, I'm stuck again

O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1). For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set-theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism