This post contains all the important definitions that you need for GCE O Level Physics (equivalent to an American High School Diploma). If you do not recognise any of the terms listed here, you should review the respective topic (click on each of the sub-headers). For a list of the important equations for O Level Physics, please visit the O Level Formula List. Please drop a comment below if any important definition is missing from this post.

- Scalar quantities are quantities in which only the magnitude is stated; the direction is either not applicable or not specified.
- Vector quantities are quantities in which both the magnitude and the direction must be stated.
- Distance travelled by an object is the length of the path taken.
- Displacement is the shortest distance from the initial to the final position of an object.
- Speed is the distance moved per unit time.
- Velocity (v) of an object is the rate of change of displacement with respect to time.
- Acceleration of an object is the rate of change of velocity with respect to time.
- Newton’s First Law states that an object will continue in its state of rest or of uniform motion in a straight line as long as there is no net force acting on it.
- Newton’s Second Law states that when a resultant force acts on an object of constant mass, the object will accelerate in the direction of the resultant force.
- Newton’s Third Law states that if object A exerts a force on object B, then object B will also exert an equal and opposite force on object A.
- The moment of a force (torque) is the turning effect of the force about a pivot; it is the product of the force and the perpendicular distance from the pivot to the line of action of the force.
- The centre of mass of a body is an imaginary point at which the entire mass of the body seems to act.
- The centre of gravity of a body is an imaginary point at which the entire weight of the body seems to act.
- Mass is defined as the amount of matter in an object.
- Weight is defined as the gravitational force acting on an object.
- Inertia is defined as the reluctance of an object to change its state of rest or motion, due to its mass.
- A gravitational field is a region in which a mass experiences a force due to gravitational attraction.
- Gravitational field strength is defined as the gravitational force acting per unit mass.
- Density ($\rho$) is defined as the mass of a substance per unit volume.
- Terminal velocity is the highest velocity attainable by a falling object; it is reached when the air resistance balances the object's weight.
- Pressure is defined as the perpendicular force acting per unit area of a surface.
- Boyle’s Law states that the volume of a fixed mass of gas at constant temperature is inversely proportional to the pressure applied to the gas.
- The principle of conservation of energy states that energy cannot be created or destroyed, but can be converted from one form to another; the total amount of energy of an isolated system remains constant.
- Kinetic energy, $E_{k}$, is the energy a body possesses by virtue of its motion.
- Gravitational potential energy is defined as the amount of work done in raising a body to a height h above a reference level.
- Power is defined as the rate of doing work, or equivalently the rate at which energy is converted with respect to time.
- Melting is the process in which energy absorbed by a substance changes its state from solid to liquid, without a change in temperature.
- Solidification is the process in which energy removed from a substance changes its state from liquid to solid, without a change in temperature.
- Boiling is the process in which energy absorbed by a substance changes it from the liquid state to the gaseous state, without a change in temperature.
- Condensation is the process in which energy removed from a substance changes it from the gaseous state to the liquid state, without a change in temperature.
- Heat capacity, C, of a body is defined as the amount of heat (Q) required to raise its temperature (θ) by one degree, without a change of state.
- Specific heat capacity, c, of a body is defined as the amount of heat (Q) required to raise the temperature (θ) of a unit mass of it by one degree, without a change of state.
- Specific latent heat of fusion of a substance is defined as the amount of heat required to change a unit mass of the substance from the solid to the liquid state, without any change in temperature.
- Specific latent heat of vaporisation of a substance is defined as the amount of heat required to change a unit mass of the substance from the liquid to the gaseous state, without a change in temperature.
- Conduction is the transfer of thermal energy from one place to another without any flow of the medium.
- Convection is the transfer of thermal energy from one place to another by means of convection currents in a fluid (liquid or gas), due to a difference in density.
- Radiation is the transfer of thermal energy from one place to another by means of electromagnetic waves, without the need for an intervening material medium.
- Amplitude is the maximum displacement from the rest or central position, in either direction.
- Frequency (f) is defined as the number of complete waves produced per unit time.
- Wavelength (λ) is the distance between corresponding points of two consecutive waves.
- Speed of wave propagation is defined as the distance travelled by a wave per unit time.
- Period (T) is defined as the time taken to produce one complete wave.
- The first law of reflection states that the incident ray, the reflected ray and the normal to the surface all lie in the same plane.
- The second law of reflection states that the angle of incidence is equal to the angle of reflection.
- Refraction of light is the change in direction (bending) of light rays when they pass from one optically transparent medium to another.
- The first law of refraction states that the incident ray, the refracted ray and the normal to the interface all lie in the same plane.
- The second law of refraction states that for two given media, the ratio $\frac{\sin{i}}{\sin{r}} = \, \text{constant}$, where $i$ is the angle of incidence and $r$ is the angle of refraction.
- Electric current is defined as the rate of flow of charge.
- Electromotive force (e.m.f.) of a source is defined as the work done by the source in driving a unit charge around a complete circuit.
- Potential difference across a component is defined as the work done to drive a unit charge through the component.
- Ohm’s Law states that the current flowing through a metallic conductor is directly proportional to the potential difference across it, provided that the physical conditions remain constant.
- Faraday’s Law of electromagnetic induction states that an electromotive force (e.m.f.) is induced in a closed circuit when the magnetic field around the circuit changes.
- Lenz’s Law states that the direction of the induced e.m.f., and hence the induced current in a closed circuit, is always such that it opposes the change producing it.
- Isotopes are atoms of the same element which have the same number of protons but different numbers of neutrons.
- Radioactive decay refers to the process in which α-particles and β-particles are emitted by an unstable nucleus (one containing too many neutrons or protons) of an element in order to form a more stable nucleus of another element.
- The half-life of a sample of a radioactive isotope is defined as the time taken for half of the original unstable radioactive nuclei to decay.
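The second law of refraction (Snell's law) can be checked numerically; here is a small sketch (the refractive index n = 1.5 for glass and the 30° angle are assumed example values, not from the post):

```python
import math

def refraction_angle(i_deg, n):
    """Angle of refraction r (in degrees) for light entering a medium of
    refractive index n from air, using sin(i)/sin(r) = n."""
    return math.degrees(math.asin(math.sin(math.radians(i_deg)) / n))

# Light entering glass (n ~ 1.5) at 30 degrees bends towards the normal,
# giving an angle of refraction of roughly 19.5 degrees.
r = refraction_angle(30, 1.5)
```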
Introduction: This is a slightly edited and generalised version of a question I asked on the Physics Stack Exchange website. This question has a twin brother asked here on MO, only now we consider values of the Riemann zeta function $\zeta(n)$ where the function diverges: $n \leq 1$. I am especially interested in physical interpretations, but I value geometrical/probabilistic and other interpretations highly as well. Recently, I also learned that some divergent series have a combinatorial interpretation as well. See this post on "The Everything Seminar". I am curious about such interpretations of divergent series too. Body: For my bachelor's thesis, I am investigating divergent series. Apart from the mathematical theory behind them (which I find fascinating), I am also interested in their applications in physics. Currently, I am studying the divergent series that arise when considering the Riemann zeta function at negative arguments. The Riemann zeta function can be analytically continued; by doing this, finite constants can be assigned to the divergent series. For $n \geq 1$, we have the formula: $$ \zeta(-n) = - \frac{B_{n+1}}{n+1} . $$ This formula can be used to find $\zeta(-1) = \sum_{n=1}^{\infty} n = - \frac{1}{12}$. This identity is used in bosonic string theory to find the so-called "critical dimension" $d = 26$; for more info, one can consult the relevant Wikipedia page. Similarly, $\zeta(-3) = \sum_{n=1}^{\infty} n^3 = \frac{1}{120}$. This identity is used in the calculation of the energy per unit area between metallic plates that arises in the Casimir effect. Furthermore, the sum $\sum_{n=0}^{\infty}2^n$ converges to $-1$ in the 2-adic number system. I guess this could allow a geometric interpretation of this divergent sum, to a certain extent. My first question is: do more of these values of the Riemann zeta function at negative arguments arise in physics/geometry/probability theory? If so: which ones, and in what context? 
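As a sanity check on the formula $\zeta(-n) = -B_{n+1}/(n+1)$, the Bernoulli numbers can be computed with exact rational arithmetic (a quick sketch of my own, not part of the original question):

```python
from fractions import Fraction
from math import comb

def bernoulli(m):
    """Bernoulli numbers B_0..B_m via the standard recurrence
    sum_{j=0}^{n} C(n+1, j) * B_j = 0, using the B_1 = -1/2 convention."""
    B = [Fraction(1)]
    for n in range(1, m + 1):
        s = sum(comb(n + 1, j) * B[j] for j in range(n))
        B.append(-s / (n + 1))
    return B

def zeta_neg(n, B):
    """zeta(-n) = -B_{n+1} / (n + 1) for n >= 1."""
    return -B[n + 1] / (n + 1)

B = bernoulli(6)
# zeta(-1) = -1/12, zeta(-3) = 1/120, zeta(-5) = -1/252
```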
Furthermore, I consider summing powers of the Riemann zeta function at negative arguments. I try to do this by means of Faulhaber's formula. Let's say, for example, we want to compute $$p = \Big( \sum_{k=1}^{\infty} k \Big)^3 . $$ If we set $a = 1 + 2 + 3 + \dots + n = \frac{n(n+1)}{2}$, then from Faulhaber's formula we find that $$\frac{4a^3 - a^2}{3} = 1^5 + 2^5 + 3^5 + \dots + n^5 , $$ from which we can deduce that $$ p = a^3 = \frac{ 3 \cdot \sum_{k=1}^{\infty} k^5 + a^2 }{4} .$$ Since we can also sum the divergent series arising from the Riemann zeta function at negative arguments by means of Ramanujan summation (which produces the same results as analytic continuation), and the Ramanujan summation method is linear, we find that the Ramanujan ($R$) or regularised sum of $p$ amounts to $$R(p) = R(a^3) = \frac{3}{4} R\Big(\sum_{k=1}^{\infty} k^5\Big) + \frac{1}{4} R(a^2) . $$ Again, we know from Faulhaber's formula that $a^2 = \sum_{k=1}^{\infty} k^3$, so $R(a^2) = \zeta(-3) = \frac{1}{120}$, and therefore $$R(p) = \frac{3}{4} \Big(- \frac{1}{252} \Big) + \frac{ \frac{1}{120} }{4} = - \frac{1}{1120} . $$ My second (bunch of) question(s) is: Do powers of these zeta values at negative arguments arise in physics/probability theory/geometry? If so, how? Are they summed in a manner similar to the process I just described, or in a different manner? If the latter is the case, which other summation method is used? Do powers of divergent series arise in physics in general? If so: which ones, and in what context? My third and last (bunch of) question(s) is: which other divergent series arise in physics/probability theory/geometry (not just considering (powers of) the Riemann zeta function at negative arguments)? I know there are whole books on renormalisation and/or regularisation in physics. 

(This post imported from StackExchange at 2014-04-07 13:22 (UCT), posted by SE-user Max Muller)
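The final arithmetic can be verified with exact rationals (a quick sketch, not part of the original question):

```python
from fractions import Fraction

# R(p) = (3/4) * zeta(-5) + (1/4) * zeta(-3), with
# zeta(-5) = -1/252 and zeta(-3) = 1/120 from analytic continuation.
R_p = Fraction(3, 4) * Fraction(-1, 252) + Fraction(1, 4) * Fraction(1, 120)
# R_p equals -1/1120
```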
However, for the sake of my bachelor's thesis I would like to know some concrete examples of divergent series that arise in physics which I can study. It would also be nice if you could mention some divergent series which have defied summation by any summation method that physicists (or mathematicians) currently employ. Please also indicate how these divergent series arise in physics, or how they can be geometrically/probabilistically/combinatorially interpreted.
I am covering the classic literature on predictions of the Cabibbo angle and other relationships in the mass matrix. As you may remember, this research was all the rage in the late seventies, after it was noticed that $\tan^2 \theta_c \approx m_d/m_s$. A typical paper of that age was Wilczek and Zee, Phys Lett 70B, pp. 418-420. The technique was to use an $SU(2)_L \times SU(2)_R \times \dots$ model and impose some discrete symmetry on the right multiplets. Most papers managed to predict $\theta_c$, and some models with three generations or more (remember the third generation was a new insight in the mid-late seventies) were able to produce additional phases in relationship with the masses. Now, what I am interested in is papers and models that also include some prediction of mass relationships, alone, or cases where $\theta_c$ is fixed by the model and then some mass relationship follows. A typical case here is Harari-Haut-Weyers (spires). It puts in place a symmetry structure such that the masses of up, down and strange are fixed to: $m_u=0, \quad {m_d\over m_s} = {2- \sqrt 3 \over 2 + \sqrt 3}$. Of course, in such a case $\theta_c$ is fixed to 15 degrees. But also $m_u=0$, which is an extra prediction even if the fixing of the Cabibbo angle were ad hoc. OK, so my question is: are there other models in this theme containing predictions for quark masses? Or was Harari et al. an exception until the arrival of Koide models?

(This post has been migrated from A51.SE)
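As a side note, the Harari-Haut-Weyers mass ratio is consistent with the relation $\tan^2 \theta_c = m_d/m_s$ at $\theta_c = 15^\circ$, since $\tan^2 15^\circ = (2-\sqrt 3)^2 = 7 - 4\sqrt 3 = {2-\sqrt 3 \over 2+\sqrt 3}$ exactly. A quick numerical check (my own sketch, not from any of the papers):

```python
import math

theta_c = math.radians(15)
mass_ratio = (2 - math.sqrt(3)) / (2 + math.sqrt(3))

# tan^2(15 deg) equals (2 - sqrt 3)/(2 + sqrt 3) = 7 - 4*sqrt(3) exactly
assert abs(math.tan(theta_c) ** 2 - mass_ratio) < 1e-12
```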
The "solution" to the quadratic equation makes no sense to me. Assuming the Van der Waals equation with b=0, I agree with the equation: $$\mathrm{PV}_m^2 - 24\mathrm{V}_m + 2 = 0\tag{1}$$ I'll point out that I'm following the notation of the given solution, but $\mathrm{V}_m$ seems odd to me; I'd think that $\mathrm{V}_m$ would be reserved to mean the molar volume at STP. But Equation 1 has two unknowns, $\mathrm{P}$ and $\mathrm{V}_m$. Letting $a' = P$, $b' = -24$, and $c' = 2$, the appropriate quadratic formula is of course $$\mathrm{V}_m = \dfrac{-b' \pm\sqrt{b'^2 - 4a'c'}}{2a'}\tag{2}$$ so $$\mathrm{V}_m = \dfrac{-(-24) \pm\sqrt{(-24)^2 - 4(\mathrm{P})(2)}}{2(\mathrm{P})}\tag{3}$$ $$\mathrm{V}_m = \dfrac{24 \pm\sqrt{(-24)^2 - 8\mathrm{P}}}{2\mathrm{P}}\tag{4}$$ $$\mathrm{V}_m = \dfrac{12 \pm\sqrt{144 - 2\mathrm{P}}}{\mathrm{P}}\tag{5}$$ So we have two unknowns and one equation; thus the problem is unsolvable without additional information. Note that this would still be the case if the simple ideal gas equation, PV=nRT, had been used: the ideal gas equation could be solved only to $\mathrm{PV}_m=24$. For equation 5 there are three cases for the square root term $\sqrt{144 - 2\mathrm{P}}$: (1) If the term under the square root is negative, i.e. $144 - 2\mathrm{P} < 0$, then both roots are imaginary. (2) If the square root term is equal to zero: $\sqrt{144 - 2\mathrm{P}} = 0$, so $144 - 2\mathrm{P} = 0$, giving $\mathrm{P} = 72$ and therefore $\mathrm{V}_m = \dfrac{12}{72} = 0.167$. (3) If the square root term is greater than zero, then P < 72, and $\mathrm{V}_m$ has two real roots.
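The three cases can be checked numerically; a small sketch (the sample pressures are arbitrary illustrations, not from the problem):

```python
import math

def molar_volume_roots(P):
    """Real roots of P*V_m^2 - 24*V_m + 2 = 0,
    i.e. V_m = (12 +/- sqrt(144 - 2P)) / P."""
    disc = 144 - 2 * P
    if disc < 0:
        return []            # both roots imaginary
    if disc == 0:
        return [12 / P]      # repeated root, only at P = 72
    s = math.sqrt(disc)
    return [(12 + s) / P, (12 - s) / P]

# P = 72 gives the single root 1/6; P = 70 gives two roots, 0.2 and 1/7
```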
Let $f, g$ be functions defined in a neighbourhood of $x_0$ (except possibly at $x_0$ itself), with $f(x) \ge g(x)$. Given the limit $\lim_{x \to x_0}g(x) = \infty$, prove that $\lim_{x \to x_0}f(x) = \infty$. It seems logical that if $g(x)$ approaches some value where its height (limit) is infinite, any other function above $g(x)$ goes to infinity too in this area, but I don't know the right approach to proving this. Some help? :)
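For reference, a sketch of one natural way to formalise the comparison argument (written out by me, not part of the original question):

```latex
\begin{proof}[Sketch]
Let $M > 0$ be arbitrary. Since $\lim_{x \to x_0} g(x) = \infty$, there exists
$\delta > 0$ such that $g(x) > M$ whenever $0 < |x - x_0| < \delta$.
For those same $x$, the hypothesis gives $f(x) \ge g(x) > M$.
Since $M$ was arbitrary, $\lim_{x \to x_0} f(x) = \infty$.
\end{proof}
```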
In TextMate's LaTeX bundle, you have 4 different ways of calling snippets (I will give a very understandable example; $0 is the placement of the cursor): 1. LaTeX symbol based on current word, i.e. if you write s and use this command you get \sigma $0. 2. Command based on current word, i.e. if you write s and use this you get \sum_{$1}^{$2} $0. 3. Environment based on current word, i.e. if you write s and use this you get \begin{s} $0 \end{s} (or whatever you have defined before). 4. And, at last, snippets (true snippets, as simple as in Sublime Text), i.e. if you write mat and press the Tab key you get \begin{${1:pvbPVBsmall}matrix} $0 \end{matrix} (exactly, or at least in the same way as in Sublime Text). I wish you could add this option to your program, I mean, one (configurable) key combination for each of these four kinds of snippets. Anyway, I await your answer (saying yes or no, but please tell me you read this). In case of NO as the answer, I would really appreciate hearing your reason. Ah, and it's a very, very nice program.
I'm currently working through the renewal limit theorem proof from "Einführung in die Wahrscheinlichkeitstheorie und Statistik" by Ulrich Krengel and having trouble understanding the first step; everything else is fine. He defines: $f_m := f_{jj}^{(m)} =$ the probability of reaching state $j$ again for the first time after time $m$; $p_{jj}^{(n)} :=$ the probability of reaching state $j$ again after time $n$; $u_n := p_{jj}^{(n)} = \sum_{m=1}^n f_m u_{n-m}$; $\lambda := \limsup_n u_{n}$. For a sequence $n_k \rightarrow \infty$ with $u_{n_k} \rightarrow \lambda$ and for every $m \geq 1$ we have: $$\lambda = \lim_{k\rightarrow \infty} u_{n_k} = \lim_k\Big(f_m u_{n_k-m}+\sum_{1\leq s\leq n_k,\, s\neq m} f_s u_{n_k-s}\Big) \leq \liminf_k(f_m u_{n_k-m}) +\limsup_k \Big(\sum_{1\leq s\leq n_k,\, s\neq m} f_s u_{n_k-s}\Big)$$ I don't really understand why he sets $\lambda = \limsup_n u_n = \lim_k u_{n_k}$ and later bounds the $\lim$ itself by a sum of a $\liminf$ and a $\limsup$. Any help would be greatly appreciated.
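For intuition, the renewal equation $u_n = \sum_{m=1}^n f_m u_{n-m}$ can simply be iterated numerically; the renewal limit theorem being proved says $u_n \to 1/\mu$ with $\mu = \sum_m m f_m$. A small sketch with an assumed first-return distribution $f_1 = f_2 = 1/2$ (so $\mu = 3/2$), chosen only for illustration:

```python
def renewal_sequence(f, N):
    """Iterate u_n = sum_{m=1}^{n} f_m * u_{n-m}, with u_0 = 1.
    f[m-1] holds the first-return probability f_m."""
    u = [1.0]
    for n in range(1, N + 1):
        u.append(sum(f[m - 1] * u[n - m]
                     for m in range(1, min(n, len(f)) + 1)))
    return u

u = renewal_sequence([0.5, 0.5], 50)
# u_n converges to 1/mu = 2/3 (here mu = 1*0.5 + 2*0.5 = 1.5)
```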
Consider the following problem. We're given a circuit $C$ with $n$ binary inputs and $n$ binary outputs, computing some boolean function $f_C : \mathbb{Z}_2^n \rightarrow \mathbb{Z}_2^n$. We assume for simplicity that $C$ only contains binary AND/OR gates, except for the output gates that can be OR gates of unbounded fan-in. We want to ensure that $C$ is 'disruption-resistant' in the sense that modifying a small number of gates does not change the output. Formally, fix an integer $k$ and a circuit $C$. Call a '$k$-disruption' of $C$ a circuit $C'$ obtained from $C$ by modifying $k$ gates, except for output gates. Say that $C$ is '$k$-resistant' if, for any $k$-disruption $C'$ of $C$, it holds that $f_{C'} = f_C$, i.e. the two circuits compute the same function. Consider the function $\chi_k : \mathbb{Z}_2^k \rightarrow \mathbb{Z}_2^k$ such that $\chi_k(x) = y$ with $y_j = 1 \Leftrightarrow \sum_{i = 1}^{k} x_i \geq j$. Given two tuples $u,v \in \mathbb{Z}_2^k$, say that $v$ is an $l$-perturbation of $u$ if $d_H(u,v) \leq l$, where $d_H$ denotes Hamming distance. Consider the following question: given $l < k$, can we find a circuit $X_{k,l}$ such that for each $k$-disruption $C$ of $X_{k,l}$, for each $u \in \mathbb{Z}_2^k$, there is an $l$-perturbation $v$ of $u$ such that $f_C(u) = \chi_k(v)$? That is to say, $C$ would count the one-entries of $u$ up to some additive error $l$. Suppose that we can find such a circuit $X_{4k,k}$ for any integer $k$. We can then solve the first problem using replication as follows. Take $4k$ copies $C_1,\ldots,C_{4k}$ of the given circuit $C$, and let gate $g_{i,j}$ be the $j$th output gate of $C_i$. For each $j \in [n]$, insert a copy of the circuit $X_{4k,k}$ with input connected to the gates $g_{1,j},\ldots,g_{4k,j}$, and with output gates labeled by $h_{1,j},\ldots,h_{4k,j}$. Finally, let the output gate for the $i$th bit be an OR of the outputs $h_{2k-1,i},\ldots,h_{4k,i}$. Let $\Gamma$ denote the resulting circuit. 
It is then easy to see that for any $k$-disruption $C'$ of $\Gamma$, there are at least $3k$ circuits $C_i$ that are not affected. Thus, if we consider a fixed input $x$, for each $i$th output bit the following holds: if the gates $g_{1,i},\ldots,g_{4k,i}$ evaluate to a tuple $t$ in $\Gamma$ and to a tuple $t'$ in $C'$, then $t'$ is a $k$-perturbation of $t$, and by definition of the circuit $X_{4k,k}$ it then computes a value $\chi_{4k}(t'')$ with $t''$ a $2k$-perturbation of $t$. As this circuit has $4k$ outputs, it follows that taking the OR of the last $2k+1$ values yields the desired result. It is thus desirable to construct a circuit $X_{k,l}$ fulfilling the above requirements. I'm working on it, but I welcome any ideas or advice you may have.
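The counting function $\chi_k$ itself is easy to state in code; a small illustration of its behaviour (not an actual circuit construction):

```python
def chi(x):
    """chi_k on a bit list x: output bit y_j = 1 iff the number of ones
    in x is at least j (j = 1..k). Equivalently, it sorts the input bits
    into decreasing order."""
    k = len(x)
    ones = sum(x)
    return [1 if ones >= j else 0 for j in range(1, k + 1)]

# Three ones among four inputs turn on exactly the first three outputs:
# chi([1, 0, 1, 1]) == [1, 1, 1, 0]
```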
## Convex Model Solvers

Model solvers are implementations which solve for the parameters/coefficients that determine the prediction of a model. Below is a list of all model solvers currently implemented; they are all sub-classes/sub-traits of the top-level optimization API. Refer to the wiki page on optimizers for more details on extending the API and writing your own optimizers.

### Gradient Descent

The bread and butter of any machine learning framework, the `GradientDescent` class in the `dynaml.optimization` package provides gradient based optimization primitives for solving optimization problems of the form shown below.

#### Gradients

| Name | Class | Equation |
| --- | --- | --- |
| Logistic Gradient | `LogisticGradient` | $L = \frac{1}{n} \sum_{k=1}^n \log(1+\exp(-y_k w^T x_k)),\ y_k \in \{-1, +1\}$ |
| Least Squares Gradient | `LeastSquaresGradient` | $L = \frac{1}{n} \sum_{k=1}^n \|w^{T} x_k - y_k\|^2$ |

#### Updaters

| Name | Class | Equation |
| --- | --- | --- |
| $L_1$ Updater | `L1Updater` | $R = \|w\|_{1}$ |
| $L_2$ Updater | `SquaredL2Updater` | $R = \frac{1}{2} \|w\|^2$ |
| BFGS Updater | `SimpleBFGSUpdater` | |

```scala
val data: Stream[(DenseVector[Double], Double)] = ...
val num_points = data.length
val initial_params: DenseVector[Double] = ...

val optimizer = new GradientDescent(
  new LogisticGradient,
  new SquaredL2Updater)

val params = optimizer.setRegParam(0.002).optimize(
  num_points, data, initial_params)
```

### Quasi-Newton (BFGS)

The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is a Quasi-Newton second order optimization method. To calculate an update to the parameters, it requires calculation of the inverse Hessian $\mathit{H}^{-1}$ as well as the gradient at each iteration.

```scala
val optimizer = QuasiNewtonOptimizer(
  new LeastSquaresGradient,
  new SimpleBFGSUpdater)

val data: Stream[(DenseVector[Double], Double)] = ...
val num_points = data.length
val initial_params: DenseVector[Double] = ...

val params = optimizer.setRegParam(0.002).optimize(
  num_points, data, initial_params)
```

### Regularized Least Squares

This subroutine solves the regularized least squares optimization problem as shown below.

```scala
val num_dim = ...
val designMatrix: DenseMatrix[Double] = ...
val response: DenseVector[Double] = ...

val optimizer = new RegularizedLSSolver()
val x = optimizer.setRegParam(0.05).optimize(
  designMatrix.nrow,
  (designMatrix, response),
  DenseVector.ones[Double](num_dim))
```

### Back propagation with Momentum

This is the most common learning method for supervised training of feed forward neural networks; the edge weights are adjusted using the generalized delta rule.

```scala
val data: Seq[(DenseVector[Double], DenseVector[Double])] = _

//Input, Hidden, Output
val num_units_by_layer = Seq(5, 8, 3)
val acts = Seq(VectorSigmoid, VectorTansig)
val breezeStackFactory = NeuralStackFactory(num_units_by_layer)(acts)

//Random variable which samples layer weights
val stackInitializer = GenericFFNeuralNet.getWeightInitializer(
  num_units_by_layer)

val opt_backprop = new FFBackProp(breezeStackFactory)
val learned_stack = opt_backprop.optimize(
  data.length, data, stackInitializer.draw)
```

Deprecated back propagation API:

```scala
val data: Stream[(DenseVector[Double], DenseVector[Double])] = ...
val initParam = FFNeuralGraph(
  num_inputs = data.head._1.length,
  num_outputs = data.head._2.length,
  hidden_layers = 1,
  List("logsig", "linear"), List(5))

val optimizer = new BackPropogation()
  .setNumIterations(100)
  .setStepSize(0.01)

val newparams = optimizer.optimize(data.length, data, initParam)
```

### Conjugate Gradient

The conjugate gradient method is used to solve linear systems of the form $Ax = b$ where $A$ is a symmetric positive definite matrix.

```scala
val num_dim = ...
val A: DenseMatrix[Double] = ...
val b: DenseVector[Double] = ...

//Solves A.x = b
val x = ConjugateGradient.runCG(
  A, b,
  DenseVector.ones[Double](num_dim),
  epsilon = 0.005,
  MAX_ITERATIONS = 50)
```

### Dual LSSVM Solver

The LSSVM solver solves the linear system that results from the application of the Karush-Kuhn-Tucker conditions to the LSSVM optimization problem.

```scala
val data: Stream[(DenseVector[Double], Double)] = ...
val kernelMatrix: DenseMatrix[Double] = ...
val initParam = DenseVector.ones[Double](num_points + 1)

val optimizer = new LSSVMLinearSolver()
val alpha = optimizer.optimize(
  num_points,
  (kernelMatrix, DenseVector(data.map(_._2).toArray)),
  initParam)
```

### Committee Model Solver

The committee model solver aims to find the optimum values of the weights applied to the predictions of a set of base models. The weights are calculated from the sample correlation matrix $C$ of the errors of all the base models, computed on the training data.

```scala
val optimizer = new CommitteeModelSolver()
//Data structure containing, for each training point, the couple
//(predictions from base models as a vector, actual target)
val predictionsTargets: Stream[(DenseVector[Double], Double)] = ...
val params = optimizer.optimize(
  num_points,
  predictionsTargets,
  DenseVector.ones[Double](num_of_models))
```
This question is somewhat related to this one. As Alan has already said, following the actual light path through each layer leads to more physically accurate results. I will base my answer on a paper by Andrea Weidlich and Alexander Wilkie ("Arbitrarily Layered Micro-Facet Surfaces") that I have read and partially implemented. In the paper, the authors assume that the distance between two layers is smaller than the radius of a differential area element. This assumption simplifies the implementation because we do not have to calculate intersection points separately for each layer; in fact, we assume that all intersection points over the layers are just the same point. According to the paper, two problems must be solved in order to render a multilayered material. The first is to properly sample the layers, and the second is to find the resulting BSDF generated by the combination of the multiple BSDFs found along the sampling path. UPDATE: Actually, I have adopted a different method to implement the evaluation of this layered model. While I have kept the idea of considering the intersection points to be the same point along the layers, I have computed the sampling and the final BRDF differently: for sampling, I have used ordinary ray tracing through the layers (using Russian Roulette to select between reflection/refraction when that's the case); for the final BRDF evaluation, I just multiply each BRDF traversed by the ray path (weighting the incident radiances according to the cosine of the incident ray). Sampling In this first stage we will determine the actual light path through the layers. When a light ray is moving from a less dense medium, e.g. air, to a denser medium, e.g. glass, part of its energy is reflected and the remaining part is transmitted. You can find the amount of energy that is reflected through the Fresnel reflectance equations. 
So, for instance, if the Fresnel reflectance of a given dielectric is 0.3, we know that 30% of the energy is reflected and 70% will be transmitted: When the light ray is moving from a denser to a less dense medium, the same principle described by the Fresnel reflectance applies. However, in this specific case, total internal reflection (a.k.a. TIR) might also happen if the angle of the incident ray is above the critical angle. In the case of TIR, 100% of the energy is reflected back into the material: When light hits a conductor or a diffuse surface, it will always be reflected (with the direction of reflection depending on the type of the BRDF). In a multilayer material, the resulting light path will be the aggregate result of all those possibilities. Thus, in the case of a 3-layer material, assuming that the first and second layers are dielectrics and the third layer is diffuse, we might end up, for instance, with the following light path (a tree, actually): We can simulate this type of interaction using recursion and weighting each light path according to the actual reflectance/transmittance at the corresponding incident points. A problem regarding the use of recursion in this case is that the number of rays increases with the depth of the recursion, concentrating computational effort on rays that individually might contribute almost nothing to the final result. On the other hand, the aggregate result of those individual rays at deep recursion levels can be significant and should not be discarded. In this case, we can use Russian Roulette (RR) in order to avoid branching and to probabilistically end light paths without losing energy, but at the cost of a higher variance (noisier result). In this case, the result of the Fresnel reflectance, or the TIR, will be used to randomly select which path to follow. For instance: As can be seen, TIR or Fresnel reflectance might keep some rays bouncing indefinitely among layers. 
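The reflect/transmit decision described above can be sketched as follows (my own illustration, using the Schlick approximation to the Fresnel reflectance; the function names are hypothetical, not from the paper):

```python
import math
import random

def fresnel_schlick(cos_theta, r0):
    """Schlick's approximation to the Fresnel reflectance.
    r0 is the reflectance at normal incidence."""
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

def sample_interface(cos_theta, r0, rng=random):
    """Russian-roulette choice at a dielectric interface:
    reflect with probability equal to the Fresnel reflectance,
    transmit otherwise."""
    if rng.random() < fresnel_schlick(cos_theta, r0):
        return "reflect"
    return "transmit"

# At normal incidence (cos_theta = 1) the reflectance is exactly r0,
# so roughly a fraction r0 of the sampled paths are reflected.
```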
As far as I know, Mitsuba implements plastic as a two-layer material, and it uses a closed-form solution for this specific case that accounts for an infinite number of light bounces among layers. However, Mitsuba also allows for the creation of multilayer materials with an arbitrary number of layers, in which case it imposes a maximum number of internal bounces, since no closed-form solution seems to exist for the general case. As a side effect, some energy can be lost in the rendering process, making the material look darker than it should. In my current multilayer material implementation I allow for an arbitrary number of internal bounces at the cost of longer rendering times (well... actually, I've implemented only two layers: one dielectric and one diffuse :). An additional option is to mix branching and RR. For instance, the initial rays (at shallow recursion levels) might contribute substantially to the final image. Thus, one might choose to branch only at the first one or two intersections, using only RR afterwards. This is, for example, the approach used by smallpt. An interesting point regarding multilayered materials is that individual reflected/transmitted rays can be importance sampled according to the corresponding BRDFs/BTDFs of each layer. Evaluating the Final BSDF Considering the following light path computed using RR: We can evaluate the total amount of radiance $L_r$ reflected by a multilayer BSDF considering each layer as an individual object and applying the same approach used in ordinary path tracing (i.e. the radiance leaving a layer will be the incident radiance for the next layer). 
The final estimator can thus be represented by the product of each individual Monte Carlo estimator: $$ L_r = \left( \frac{fr_1 \cos \theta_1}{pdf_1} \left( \frac{fr_2 \cos \theta_2}{pdf_2} \left( \frac{fr_3 \cos \theta_3}{pdf_3} \left( \frac{fr_2 \cos \theta_4}{pdf_2} \left( \frac{L_i fr_1 \cos \theta_5}{pdf_1} \right)\right)\right)\right)\right)$$ Since all terms of the estimator are multiplied, we can simplify the implementation by computing the final BSDF and $pdf$ and factoring out the $L_i$ term: $$fr = fr_1 \cdot fr_2 \cdot fr_3 \cdot fr_2 \cdot fr_1$$ $$pdf = pdf_1 \cdot pdf_2 \cdot pdf_3 \cdot pdf_2 \cdot pdf_1$$ $$\cos \theta= \cos \theta_1 \cdot \cos \theta_2 \cdot \cos \theta_3 \cdot \cos \theta_2 \cdot \cos \theta_1$$ $$ L_r = \left( \frac{fr \cos \theta}{pdf} \right) L_i$$ The paper by Andrea Weidlich and Alexander Wilkie also takes absorption into consideration, i.e. each light ray might be attenuated according to the absorption factor of each transmissive layer and its thickness. I have not included absorption in my renderer yet, but it is represented by just one scalar value, which is evaluated according to Beer's law. Alternate approaches The Mitsuba renderer uses an alternate representation for multilayered materials based on the "tabulation of reflectance functions in a Fourier basis". I have not dug into it yet, but it might be of interest: "A Comprehensive Framework for Rendering Layered Materials" by Wenzel Jakob et al. There is also an expanded version of this paper.
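The factored estimator can be expressed directly in code; a tiny sketch of the bookkeeping (the numeric values are hypothetical, just for illustration):

```python
from math import prod

def layered_estimator(frs, pdfs, cosines, L_i):
    """Monte Carlo estimator for a layered light path: the product of the
    per-bounce BSDF values and cosines, divided by the product of the
    sampling pdfs, applied to the incident radiance L_i."""
    return prod(frs) * prod(cosines) / prod(pdfs) * L_i

# A 5-bounce path like the one in the text (fr1, fr2, fr3, fr2, fr1):
L_r = layered_estimator(
    frs=[0.9, 0.8, 0.5, 0.8, 0.9],
    pdfs=[1.0, 1.0, 0.5, 1.0, 1.0],
    cosines=[1.0] * 5,
    L_i=2.0)
```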
First, I do not see why this is a Bayesian construction. Indeed, if you do mean that the density is known, it cannot possibly be a Bayesian construction. Consider the case where $f(x)=280{x^3}(1-x)^4,\ x\in[0,1]$. There is no unknown parameter here. This is both your prior and your posterior, as no amount of data will alter anything. If this were a Bayesian construction then there would have to be some uncertain parameter, but there is no uncertain parameter. This is a beta distribution with $\alpha=4$ and $\beta=5$. The prior is forced to be $\Pr(\alpha=4;\beta=5)=1$. The question is "what is x?" The only uncertainty is in the valuation. This is a Frequentist problem. EDIT In the case where it is drawn from an unknown distribution, you are facing two options, even if the distribution is known with certainty to the actors. The first is to use Bayesian non-parametric methods; the second is to use Frequentist non-parametric methods. Depending on what I wanted to accomplish, I would choose one or the other. The Bayesian method will be coherent, and so you could place gambles on it. It will also likely be very difficult to implement. There cannot be a Bayesian solution that is free of its prior. Such a thing does not exist. It might be that it is uninformative, but it must exist. The alternative is to use Fisher's failed method of fiducial statistics. The Frequentist method will minimize the maximum loss you could experience from making a choice based on the data by using an incorrect inference. It will also allow you to control for power. It will usually be far simpler to implement. Bayesian non-parametric methods are potentially infinite dimensional constructions and you would need to do a bit of reading on them. A simple approximation, though, would be to use the beta distribution because of its incredible flexibility, although you could use any high-degree polynomial that stays above the axis, since your bounding guarantees that a constant of integration exists.
You would then perform model selection. As long as you believe it is unimodal, the bounding on both sides guarantees that a mean exists. Even though your distribution is unknown, it is guaranteed to have moments. The t-test is probably inappropriate because the bounding is so tight, but you could use the empirical quantiles to test significance. If you felt you needed the higher moments, the method of moments is always available. Finally, in either case, you have kernel methods available to you. You cannot avoid a prior using Bayesian methods, but the greatest advantage of Frequentist statistics is the ability to solve problems when you cannot form a prior.
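As a concrete illustration of the beta approximation suggested above, here is a method-of-moments fit for a beta distribution on bounded data. This is a sketch under my own assumptions (the function name `beta_mom` is hypothetical, and method of moments is just one simple way to fit the beta family the answer recommends):

```python
def beta_mom(data):
    """Method-of-moments estimates for Beta(alpha, beta) on data in (0, 1).

    Matches the sample mean and (population) variance to the beta
    distribution's mean m = a/(a+b) and variance m(1-m)/(a+b+1).
    Requires variance < m(1-m), which holds for any non-degenerate sample.
    """
    n = len(data)
    m = sum(data) / n
    v = sum((x - m) ** 2 for x in data) / n
    common = m * (1 - m) / v - 1   # this is (alpha + beta)
    return m * common, (1 - m) * common  # (alpha_hat, beta_hat)
```

A kernel density estimate, as mentioned at the end of the answer, would be the non-parametric alternative when the beta shape is too restrictive.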
In this question, I'm continuing to explore the tools used/presented in Lars Hansen's Econometrica paper "Dynamic Valuation Decomposition within Stochastic Economies" (2012). This might be an easy question, but I can't quite see it. In the paper linked above, the factorization is presented in which one component is a martingale. See p. 937. On this page it presents this formula and says Given a solution to (21), I construct a martingale via $$ \widetilde M_t = \exp(-\rho t) M_t \left [ \frac{e(X_t)}{e(X_0)} \right ] $$ which is itself a multiplicative functional. Maybe it's easy, but I just don't see right away how to show that $\widetilde M_t$ is a martingale. How can I show this? NOTE: This question is related to the following two questions:
8.1.2.1 - Normal Approximation Method Formulas Here we will be using the five step hypothesis testing procedure to compare the proportion in one random sample to a specified population proportion using the normal approximation method. In order to use the normal approximation method, the assumption is that both \(n p_0 \geq 10\) and \(n (1-p_0) \geq 10\). Recall that \(p_0\) is the population proportion in the null hypothesis.

| Research Question | Is the proportion different from \(p_0\)? | Is the proportion greater than \(p_0\)? | Is the proportion less than \(p_0\)? |
| --- | --- | --- | --- |
| Null Hypothesis, \(H_{0}\) | \(p = p_0\) | \(p = p_0\) | \(p = p_0\) |
| Alternative Hypothesis, \(H_{a}\) | \(p \neq p_0\) | \(p > p_0\) | \(p < p_0\) |
| Type of Hypothesis Test | Two-tailed, non-directional | Right-tailed, directional | Left-tailed, directional |

Where \(p_0\) is the hypothesized population proportion that you are comparing your sample to. When using the normal approximation method we will be using a z test statistic. The z test statistic tells us how far our sample proportion is from the hypothesized population proportion in standard error units. Note that this formula follows the basic structure of a test statistic that you learned last week: \(test\;statistic=\frac{sample\;statistic-null\;parameter}{standard\;error}\) Test statistic: One Group Proportion \(z=\frac{\widehat{p}- p_0 }{\sqrt{\frac{p_0 (1- p_0)}{n}}}\) where \(\widehat{p}\) = sample proportion, \(p_{0}\) = hypothesized population proportion, and \(n\) = sample size. Given that the null hypothesis is true, the p-value is the probability that a randomly selected sample of n would have a sample proportion as different, or more different, than the one in our sample, in the direction of the alternative hypothesis. We can find the p-value by mapping the test statistic from step 2 onto the z distribution. Note that p-values are also symbolized by \(p\). Do not confuse this with the population proportion, which shares the same symbol.
We can look up the \(p\)-value using Minitab Express by constructing the sampling distribution. Because we are using the normal approximation here, we have a \(z\) test statistic that we can map onto the \(z\) distribution. Recall, the z distribution is a normal distribution with a mean of 0 and standard deviation of 1. If we are conducting a one-tailed (i.e., right- or left-tailed) test, we look up the area of the sampling distribution that is beyond our test statistic. If we are conducting a two-tailed (i.e., non-directional) test there is one additional step: we need to multiply the area by two to take into account the possibility of being in the right or left tail. We can decide between the null and alternative hypotheses by examining our p-value. If \(p \leq \alpha\) reject the null hypothesis. If \(p>\alpha\) fail to reject the null hypothesis. Unless stated otherwise, assume that \(\alpha=.05\). When we reject the null hypothesis our results are said to be statistically significant. Based on our decision in step 4, we will write a sentence or two concerning our decision in relation to the original research question.
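The test statistic and p-value steps above can be sketched in a few lines. This is an illustrative Python version, not part of the course materials; it uses the complementary error function for the standard normal upper-tail area, \(P(Z > z) = \tfrac{1}{2}\,\mathrm{erfc}(z/\sqrt{2})\):

```python
from math import sqrt, erfc

def one_prop_z(p_hat, p0, n, tail="two"):
    """One-group proportion z test via the normal approximation.

    Returns (z, p_value) for tail in {"two", "right", "left"}.
    """
    z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)

    def upper(t):  # P(Z > t) for standard normal Z
        return 0.5 * erfc(t / sqrt(2))

    if tail == "two":
        # double the one-tail area, per the extra step described above
        return z, 2 * upper(abs(z))
    if tail == "right":
        return z, upper(z)
    return z, 1 - upper(z)  # left-tailed
```

For example, a sample proportion of 0.6 against \(p_0 = 0.5\) with \(n = 100\) gives \(z = 2.0\) and a two-tailed p-value of about 0.0455, which would be rejected at \(\alpha = .05\).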
[texhax] Multiple column math equations with tags / multicols questions Johan Glimming glimming at kth.se Thu Aug 12 20:43:08 CEST 2004 Hello Thanks. I tried to improve the code, but I am unable to use the \tag outside a gather-environment (see code below; removing #2 fails if \tag{..} is in #1 instead). Also I have to tweak math mode for some obscure reason - I have to use ensuremath before invoking the command. My main idea is to have this: \begin{multieqn} x^2+x = 1 \tag{$\alpha$} \\ \dfrac{x^3+x^2}{e^x} = 2 \tag{2nd one} \\ ... \end{multieqn} Is there a way of reusing the \\-separators to insert an occurrence of my \multieqn on every separate line here? With kind regards / Yours Sincerely, Johan Glimming --- \documentclass{article} \usepackage{amsmath} \newlength{\eqwidth} \newcommand{\multieqn}[2]{\settowidth{\eqwidth}{$$#1$$}\begin{minipage}{ .3\textwidth} \begin{gather} \ensuremath{#1} \tag{#2} \end{gather} \end{minipage}\hfill} \begin{document} \multieqn{\ensuremath{1^2=1}}{test} \multieqn{\ensuremath{2=2}}{2nd} \multieqn{\ensuremath{another~beast~equation}}{3rd} % two problems: \tag not handled nicely % and ensuremath work-around needed. \end{document} More information about the texhax mailing list
Surface plasmon polariton excitation in Kretschmann configuration 9 reviews Excitation of surface plasmon polaritons at the gold-air interface in Kretschmann configuration. Tutorial models for COMSOL Webinar "Simulating Graphene-Based Photonic and Optoelectronic Devices" 68 reviews Basic tutorial models for COMSOL Webinar "Simulating Graphene-Based Photonic and Optoelectronic Devices" by Prof. Alexander Kildishev, Purdue University, USA Validation with a meshless method... Shape of a static meniscus pinned at the contact line from Young-Laplace equation 4 reviews This is a simple example of equation-based modeling where the static Young-Laplace equation - [Delta P] = [surface tension] * [divergence of the surface normal vector] - is solved to determine the... Maxwell-Wagner Model of Blood Permittivity 2 reviews The Maxwell-Wagner model is used to explain frequency dispersion, which takes place for the permittivity of various kinds of suspensions. In particular, this phenomenon is observed in blood. The... Laserwelding 2 reviews Laser Welding of PMMA with 1 W Laser. Convection dominated Convection-Diffusion Equation by upwind discontinuous Galerkin (dG) method 3 reviews We consider the convection-diffusion equation with a very small diffusion coefficient $\mu$: \[ -\mu\Delta u + \boldsymbol{\beta}\cdot\nabla u = f \ \mathrm{in}\ \Omega, \qquad u = g(x,y) \ \mathrm{on}\ ... \] Microsphere resonator 8 reviews This model reproduces the simulation results from: http://dx.doi.org/10.1063/1.4801474 Solutions were stripped so you will have to run the simulation to see the results. That may take a while... 2D Directional Coupler 8 reviews A simplification of the 3D directional coupler using the RF Module and Boundary Mode Analysis. Just download and compute to see the results. Made with Comsol version 4.4.0.248. Enjoy!
Material: Water H2O 3 reviews Just open and save the material to your own User-Defined Library, or copy the Interpolations to your own material. Hale and Querry 1973 - Water; n,k 0.2-200 µm; 25 °C Data from:... Material: Fused Silica with Sellmeier refractive index 2 reviews Material Fused Silica, just open and save the material to your own User-Defined Library, or copy the equation to your own material. Refractive index data (0.21-3.71 µm) based on the Sellmeier equation...
Those looking for the Quick Vibe Estimator will find the tool in AB-031: Vibration Motor Calculators – ERMs and LRAs Well, 1 G is equal to the acceleration from gravity: $$1G = 9.8 \frac{m}{s^{2}}$$ What we feel as vibrations is simply the object being repeatedly displaced at a very high frequency. But why do we express the vibration amplitude as acceleration (G) instead of a force (N) or the displacement (mm)? Why not use Displacement (mm) or Force (N)? Vibration motors are not used on their own – they're attached to a product/device/piece of equipment that is intended to vibrate. Therefore we are interested in the whole system (motor + target mass). We measure the vibration amplitude by mounting the motor on a known target mass and reading the results from an accelerometer, see more here. This helps us plot our Typical Performance Characteristics graph. The force of a vibration motor is governed by the equation: $$F = m \times r \times \omega ^{2}$$ Where \(F\) is the force, \(m\) is the mass of the eccentric mass on the motor (not the whole system), \(r\) is the eccentricity of the eccentric mass, and \( \omega \) is the angular frequency of rotation. We can see that the vibration force of the motor doesn't take the target mass into consideration. As you can imagine, a much heavier object would require more force to generate the same acceleration as a small and light object. This means that when using the same motor on the two objects, the vibration amplitude would feel much smaller in the heavy object – even though the motor has the same force. Another aspect of the motor is the vibration frequency: $$ f = \frac{Motor \: Speed \:(RPM)}{60}$$ The displacement is directly affected by the vibration frequency. Due to the cyclical nature of vibration devices, for every force acting upon the system, there is eventually an equal and opposite force. With high-frequency vibrations, the time between the opposing forces is reduced, which means the system has less time to be displaced.
In addition (similar to the force above), heavier objects are displaced less by the same force. If you 'Normalise' G, what about Force or Displacement? It's true, we provide a Typical Normalised Vibration Amplitude. This means our acceleration measurements are adjusted to reflect the resulting vibration amplitude for a 100 g mass (it makes for easy comparison between different models). We could calculate normalised ratings for force and displacement, including the rated frequency. However, as our test systems measure acceleration (G), and providing either the normalised force (N) or the normalised displacement (mm) would require additional calculations, it makes the most sense to use G. For quick calculations, or to demonstrate the above, use the Quick Vibe Estimator (link at top of the page). Try entering random information to start, then try these tests: What values are affected by Target Mass? Can you adjust the Vibration Displacement without affecting the Vibration Force or Acceleration? What value affects all three resulting measurements?
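The two formulas above (force from the eccentric mass, and acceleration felt by the whole system) can be combined in a short sketch. All names and numbers here are illustrative examples of the relationships described, not data for any particular motor:

```python
from math import pi

def vibration_force(eccentric_mass_kg, eccentricity_m, speed_rpm):
    """F = m * r * omega^2, with omega in rad/s derived from the motor speed."""
    omega = 2 * pi * speed_rpm / 60.0
    return eccentric_mass_kg * eccentricity_m * omega ** 2

def vibration_amplitude_g(force_n, target_mass_kg):
    """Acceleration of the whole system (motor + target mass), expressed in G."""
    return force_n / target_mass_kg / 9.8
```

For instance, a 0.5 g eccentric mass at 1 mm eccentricity spinning at 12,000 RPM gives roughly 0.79 N of force; on a 100 g target mass that is about 0.81 G, and the same motor on a heavier target would show a proportionally smaller amplitude.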
Piecewise Continuous Function with One-Sided Limits is Riemann Integrable Contents Theorem Let $f$ be piecewise continuous with one-sided limits on $\left[{a \,.\,.\, b}\right]$. Then $f$ is Riemann integrable on $\left[{a \,.\,.\, b}\right]$. We are given that $f$ is piecewise continuous with one-sided limits on $\left[{a \,.\,.\, b}\right]$. The result follows from Bounded Piecewise Continuous Function is Riemann Integrable. $\blacksquare$ We are given that $f$ is piecewise continuous with one-sided limits on $\closedint a b$. Therefore, there exists a finite subdivision $\left\{{x_0, x_1, \ldots, x_n}\right\}$ of $\closedint a b$, where $x_0 = a$ and $x_n = b$, such that for all $i \in \set {1, 2, \ldots, n}$: $f$ is continuous on $\left({x_{i - 1} \,.\,.\, x_i}\right)$ $\displaystyle \lim_{x \mathop \to {x_{i - 1} }^+} f \left({x}\right)$ and $\displaystyle \lim_{x \mathop \to {x_i}^-} f \left({x}\right)$ exist. For all $k \in \left\{{1, 2, \ldots, n}\right\}$, let $\map P k$ be the proposition: $f$ is Riemann integrable on $\left[{x_0 \,.\,.\, x_k}\right]$. Basis for the Induction $\map P 1$ is the case: $f$ is Riemann integrable on $\left[{x_{i - 1} \,.\,.\, x_i}\right]$ for an arbitrary $i \in \left\{{1, 2, \ldots, k}\right\}$. Piecewise continuity with one-sided limits of $f$ for the case $n = 1$ means that: $f$ is continuous on $\left({x_{i - 1} \,.\,.\, x_i}\right)$ $\displaystyle \lim_{x \mathop \to {x_{i - 1} }^+} f \left({x}\right)$ and $\displaystyle \lim_{x \mathop \to {x_i}^-} f \left({x}\right)$ exist. By Integrability Theorem for Functions Continuous on Open Intervals, $f$ is Riemann integrable on $\left[{x_{i - 1} \,.\,.\, x_i}\right]$. Thus $\map P 1$ is seen to hold. This is the basis for the induction. Induction Hypothesis Now it needs to be shown that, if $\map P k$ is true, where $k \ge 1$, then it logically follows that $\map P {k + 1}$ is true. 
So this is the induction hypothesis: $f$ is Riemann integrable on $\left[{x_0 \,.\,.\, x_k}\right]$. from which it is to be shown that: $f$ is Riemann integrable on $\left[{x_0 \,.\,.\, x_{k + 1} }\right]$. Induction Step This is the induction step: By definition of a piecewise continuous function with one-sided limits, for every $i \in \left\{ {1, 2, \ldots, k, k + 1}\right\}$: $f$ is continuous on $\left({x_{i - 1} \,.\,.\, x_i}\right)$ the one-sided limits $\displaystyle \lim_{x \mathop \to x_{i - 1}^+} f \left({x}\right)$ and $\displaystyle \lim_{x \mathop \to x_i^-} f \left({x}\right)$ exist. We have that $f$ is Riemann integrable on $\left[{x_0 \,.\,.\, x_k}\right]$ (by the induction hypothesis) and on $\left[{x_k \,.\,.\, x_{k + 1} }\right]$ (by the same argument as the basis for the induction). Therefore, by Existence of Integral on Union of Adjacent Intervals: $f$ is Riemann integrable on $\left[{x_0 \,.\,.\, x_k}\right] \cup \left[{x_k \,.\,.\, x_{k + 1} }\right]$. We have that: $\left[{x_0 \,.\,.\, x_{k + 1} }\right] = \left[{x_0 \,.\,.\, x_k}\right] \cup \left[{x_k \,.\,.\, x_{k + 1} }\right]$ Accordingly, $f$ is Riemann integrable on $\left[{x_0 \,.\,.\, x_{k + 1} }\right]$. So $\map P k \implies \map P {k + 1}$ and the result follows by the Principle of Mathematical Induction. Therefore: $f$ is Riemann integrable on $\closedint a b$. $\blacksquare$
Let $K$ be a field (of characteristic $\neq 2$ if that matters) and $V$ a finite dimensional vector space over $K$, $\dim(V)=n$. Let $g$ be a symmetric bilinear form on $V$. By $Cl(V,g)$ I denote the Clifford algebra associated to $V$ and $g$: $Cl(V,g):=T(V)/I$, where $T(V)$ is the tensor algebra of $V$ and $I$ is the two-sided ideal generated by $\{x\otimes x + g(x,x)1 \mid x\in V\}$. My question regards the proof of the following claim. Claim: The linear map $i\colon V\rightarrow Cl(V,g)$ that maps $v\in V$ to $[v]\in Cl(V,g)$ is injective. It is often mentioned that the injectivity follows from the above construction of the Clifford algebra; see e.g. http://en.wikipedia.org/wiki/Clifford_algebra. In "Dirac Operators in Riemannian Geometry" (Friedrich) the injectivity of the above map is stated as a corollary of the existence of $Cl(V,g)$ (but not proved). This appears to me as if the injectivity were a simple fact that is easily proven; however, I don't know a "simple" proof. Am I missing something? I know of two ways to prove the claim: one can use representation theory (see https://mathoverflow.net/questions/68378/clifford-algebra-non-zero), or one can first prove that the dimension of $Cl(V,g)$ is $2^{n}$, then show that the elements $[e_{i_1}]\cdot\ldots\cdot[e_{i_k}]$, $1\le i_1<\ldots <i_k\le n$, $1\le k\le n$, together with $1$, generate $Cl(V,g)$ and therefore must be a basis; the injectivity of $i$ then follows since it maps basis vectors to basis vectors. So my question is: Question: Are there easier (more elementary) ways to prove that $i$ is injective than those I mentioned?
Definition:Symmetry (Relation) Contents Definition Let $\mathcal R \subseteq S \times S$ be a relation in $S$. $\mathcal R$ is symmetric if and only if: $\tuple {x, y} \in \mathcal R \implies \tuple {y, x} \in \mathcal R$ $\mathcal R$ is asymmetric if and only if: $\tuple {x, y} \in \mathcal R \implies \tuple {y, x} \notin \mathcal R$ $\mathcal R$ is antisymmetric if and only if: $\tuple {x, y} \in \mathcal R \land \tuple {y, x} \in \mathcal R \implies x = y$ that is: $\set {\tuple {x, y}, \tuple {y, x} } \subseteq \mathcal R \implies x = y$ Note the difference between: an asymmetric relation, in which the fact that $\tuple {x, y} \in \mathcal R$ means that $\tuple {y, x}$ is definitely not in $\mathcal R$, and: an antisymmetric relation, in which there may be instances of both $\tuple {x, y} \in \mathcal R$ and $\tuple {y, x} \in \mathcal R$, but if there are, then it means that $x$ and $y$ have to be the same object. The word symmetry comes from Greek συμμετρεῖν (symmetreîn) meaning measure together. Also see Results about symmetry of relations can be found here.
Detecting groups of vehicles travelling together as a convoy is an important problem in military and law enforcement applications. We present a method for identifying these groups travelling together over time using short-range sensors distributed over a road network. Results of experiments on simulated and real data show that our algorithm does detect and report convoys in real-time. The figure shows an example convoy found in a real dataset. The system presented uses license plate recognition (LPR) cameras distributed across a city in order to receive information about where a vehicle was recognized and the time of recognition. Using this data we construct a semi-Markov model describing how vehicles move between the state (sensor) locations. We then construct two models: one for vehicles travelling independently, and one for vehicles travelling in a convoy, where transitions in the semi-Markov process are correlated. From these two models, a sequential hypothesis test is constructed which updates the likelihood ratio of the process travelling as a convoy as samples of each vehicle are received. The image to the right shows example sensor locations overlaid on a city grid. This model represents the joint random process, Z, of two random objects X and Y travelling through the state space (set of sensors). Each state has a corresponding latitude and longitude location. We have no knowledge of where the vehicles are in between samples at sensor locations, leading to the Markov model where it is assumed they do not move until observed next. Each joint process is a combination of two individual processes travelling through the Markov chain.
Each process is characterized by observations at each individual state and the time of the transition to that state; more specifically, $$ \mathbf{X} = \left\lbrace X(t^x_0) = x_0, X(t^x_1) = x_1,...,X(t^x_{n_x(t)}) = x_{n_x(t)} \right\rbrace $$ where $n_x(t)$ is the number of observations of $X$ at time $t$, or $$ n_x(t) = \max \lbrace k : t_{k}^{x} \leq t \rbrace . $$ The model then defines the joint process as $$ Z(t) = \left[ X(t^x_{n_x(t)}), Y(t^y_{n_y(t)}) \right] $$ which captures the current state of objects $X$ and $Y$ at the time $t$. We now provide a method of defining the probability that two objects, $X$ and $Y$, are travelling independently in model $H_0$ (which is simply the Markov probability for each object). We also define a density describing the two objects travelling dependently, in model $H_1$, taking into account the physical properties of the transitions being made. We then define a sequential hypothesis test as $$ \Lambda \left( Z(t_i) = z_i | Z(t_{i-1}) = z_{i-1} \right) = \frac{\Pr \left( Z(t_i) = z_i | Z(t_{i-1}) = z_{i-1}, H_1 \right)}{\Pr \left( Z(t_i) = z_i | Z(t_{i-1}) = z_{i-1}, H_0 \right)} . $$ The decision rules for this likelihood ratio are the following: $$ \begin{cases} \Lambda \left( Z(t_i) = z_i | Z(t_{i-1}) = z_{i-1} \right) < \eta_0 & \text{decide } H_0 \\ \eta_0 \leq \Lambda \left( Z(t_i) = z_i | Z(t_{i-1}) = z_{i-1} \right) < \eta_1 & \text{decide ``need more data''} \\ \eta_1 \leq \Lambda \left( Z(t_i) = z_i | Z(t_{i-1}) = z_{i-1} \right) & \text{decide } H_1 \end{cases} $$ where the quantities $\eta_0$ and $\eta_1$ are chosen to control the probability of false detection ($P_{FD}$) and probability of detection ($P_{D}$). An example detected convoy can be seen in the following: The example path of a detected convoy through space. The z-axis denotes the minutes since the track of the pair started. The x and y-axes denote latitude and longitude.
This is the same convoy as in the path plot; however, instead of plotting the actual latitude and longitude we have plotted the states where the object was observed. Lawlor, S. and M. G. Rabbat, "Detecting Convoys in Networks of Short-Range Sensors", 2014 Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, California, 11/2014. Lawlor, S., "Detecting Convoys in Networks of Short-Range Sensors", Electrical and Computer Engineering, Masters of Engineering, Montreal, Quebec, McGill University, pp. 73, 08/2013.
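The decision rules above amount to multiplying per-transition likelihood ratios and comparing the running product against the two thresholds. A minimal sketch under my own assumptions (in the actual system the per-step probabilities under $H_0$ and $H_1$ would come from the two semi-Markov transition models; here they are just supplied numbers):

```python
def sequential_test(likelihood_pairs, eta0=0.1, eta1=10.0):
    """Sequential hypothesis test over successive joint transitions.

    likelihood_pairs: iterable of (p_H1, p_H0), the probability of the
    observed transition under the convoy model H1 and the independent
    model H0. Returns (decision, running_ratio).
    """
    ratio = 1.0
    for p1, p0 in likelihood_pairs:
        ratio *= p1 / p0
        if ratio < eta0:
            return "H0", ratio      # decide: travelling independently
        if ratio >= eta1:
            return "H1", ratio      # decide: travelling as a convoy
    return "need more data", ratio
```

The thresholds `eta0` and `eta1` play the role of $\eta_0$ and $\eta_1$ and would be chosen to meet the target $P_{FD}$ and $P_D$.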
The question is about combinatorics. I have no idea on how to start solving the problem. Please guide me. $(a)\ 2mn+ {m \choose 2}$ $(b)\ \frac{1}{2}m(m-1)+n(2m+n-1)$ $(c)\ {m \choose 2}+2{n \choose 2}$ (d) none of these I tried solving this question, but I couldn't even find a starting point. Here are some examples of the maximal number of points of intersection for $n,m\in\{1,2,3,4,5\}$: The number of ways you can choose a pair from $n$ circles is $\binom n2$. Each such pair intersects in at most two points. The number of ways you can choose a pair from $m$ lines is $\binom m2$. Each such pair intersects in at most one point. The number of ways you can choose one circle and one line from $n$ circles and $m$ lines is $n\cdot m$. Each such pair intersects in at most two points. So in total we have: $$ \begin{align} 2\binom n2+\binom m2+2n\cdot m&=n(n-1)+\frac{m(m-1)}{2}+2n\cdot m\\ &=\tfrac 12m(m-1)+n(2m+n-1) \end{align} $$ which leads directly to the correct answer.
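The counting argument above can be checked directly: pairs of circles meet in at most 2 points, pairs of lines in at most 1, and each line-circle pair in at most 2. A small sketch (illustrative, not from the original answer):

```python
from math import comb

def max_intersections(n_circles, m_lines):
    """Maximal number of intersection points of n circles and m lines."""
    return (2 * comb(n_circles, 2)   # each circle pair: at most 2 points
            + comb(m_lines, 2)       # each line pair: at most 1 point
            + 2 * n_circles * m_lines)  # each line-circle pair: at most 2
```

Checking a few values against option (b), $\tfrac 12 m(m-1)+n(2m+n-1)$, confirms the closed forms agree.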
I've created two (poorly) drawn graphics to illustrate my point. Two unit vectors, $\hat i$ and $\hat j$, in their standard orientation of \begin{bmatrix} 1 & 0 \\ 0 & 1\\ \end{bmatrix} form an enclosed area like this: Please note that this is merely a graphic to illustrate my point, and salient details of the vectors' size and coordinate system have been omitted. Why, however, is the enclosed area not this, as it seems more intuitive to me?: The vectors in both graphics are of the same magnitudes and only appear different due to my lack of artistic skills, but in this one I have half the enclosed area. This seems more intuitive to me because it's now a shape using the two length projections of $\hat i$ and $\hat j$ instead of using all the enclosed area. Is it convention? Or is there something else to it? Let me know if my question is unclear and I'll try to clarify, but I'm essentially asking why we establish that the enclosed area of the matrix I've stated above forms a square and not a triangle, when I imagine you would need $4$ lengths to create a square, and $2$ to create a triangle. What contradicts my logic is that by that logic the area of a triangle would be $base \cdot height$, but I can't explain why beyond that point, as visually it would make sense to me.
Let $V=\mathcal{C}([0,1])$ be the space of all complex valued continuous functions with the norm $||f||_{\infty}=\sup_{x\in[0,1]}|f(x)|$. Then, $V$ with the norm $||\cdot||_{\infty}$ is a Banach space. I'm having problems understanding some part of the proof. I will proceed with the proof of the previous statement up to the point where the question arises. Given a Cauchy sequence $(f_n)_{n\in \mathbb{N}}\in V$, that is to say, for all $\varepsilon >0$ there exists $N\in \mathbb{N}$ such that for all $n,m \geq N$ we have that $||f_n-f_m||_\infty \leq \varepsilon$; we need to find a candidate $f$ for the limit of $(f_n)_{n\in \mathbb{N}}$. Fix $x\in[0,1]$ and note that for all $n,m\geq N$ we have that $$|f_n(x)-f_m(x)|\leq \sup_{z\in[0,1]}|f_n(z)-f_m(z)|=||f_n-f_m||_\infty \leq \varepsilon.$$ Thus, $(f_n(x))_{n\in \mathbb{N}}$ is a Cauchy sequence in $\mathbb{C}$ and since $\mathbb{C}$ is complete we know that $\lim_{n\to \infty}f_n(x)$ exists. Then, our candidate will be $f(x):=\lim_{n\to \infty}f_n(x)$. Now, we want to prove that $f$ is continuous. Fixing $x\in[0,1]$ and letting $\varepsilon >0$ we can choose $N\in \mathbb{N}$ such that for all $n,m\geq N$ we have that $||f_n-f_m||_\infty \leq \varepsilon/3$. Also, as $f_N\in (f_n)_{n\in \mathbb{N}}\subset V$, for this $\varepsilon >0$ there exists $\delta >0$ such that if $|x-y|\leq \delta$ then $|f_N(x)-f_N(y)|\leq \varepsilon/3$. Furthermore, for any $n,m\geq N$ and $y\in \mathbb{C}$ such that $|x-y|\leq \delta$ we have that \begin{align*} |f(x)-f(y)| =&|f(x)-f_n(x)+f_n(x)-f_N(x)+f_N(x)\\ & -f_N(y)+f_N(y)-f_m(y)+f_m(y)-f(y)| \\ \leq&|f(x)-f_n(x)|+|f_n(x)-f_N(x)|+|f_N(x)-f_N(y)|\\ & +|f_N(y)-f_m(y)|+|f_m(y)-f(y)| \end{align*} I think it's important to do it that way since then $\delta$ could depend on $N$ but not on $n$. The second and the third terms are less than or equal to $\varepsilon/3$. However, I don't think I can use the same argument with the fourth, since $y$ may not be in $[0,1]$.
The only thing I know is that $|x-y|\leq \delta$. If I can prove that the fourth term is less than or equal to $\varepsilon/3$, then I know how to finish the proof.
McGill University Boston University Purdue University Lebedev Physical Institute Crimean Astrophysical Observatory Department of Electronics and Nanoengineering Instituto de Astrofisica de Andalucia (Spain) ITMO University University of Arizona Series: 35th International Cosmic Ray Conference, ICRC2017. The astroparticle physics conference. 12-20 July 2017, Bexco, Busan, Korea, PoS proceedings of science Feng, Q., Jorstad, S. G., Marscher, A. P., Lister, M. L., Kovalev, Y. Y., Pushkarev, A. B., Savolainen, T., Agudo, I., Molina, S. N., Gomez, J. L., Larionov, V. M., Borman, G. A., Mokrushina, A. A. & Smith, P. S. 2017, Multiwavelength observations of the blazar BL Lacertae: a new fast TeV gamma-ray flare, in 35th International Cosmic Ray Conference, ICRC2017. The astroparticle physics conference. 12-20 July 2017, Bexco, Busan, Korea. PoS proceedings of science, Sissa, Proceedings of the 35th International Cosmic Ray Conference (ICRC 2017), International Cosmic Ray Conference, Busan, Korea, Republic of, 10/07/2017. Abstract: Observations of fast TeV $\gamma$-ray flares from blazars reveal the extreme compactness of emitting regions in blazar jets. Combined with very-long-baseline radio interferometry measurements, they probe the structure and emission mechanism of the jet. We report on a fast TeV $\gamma$-ray flare from BL Lacertae observed by VERITAS, with a rise time of about 2.3 hours and a decay time of about 36 minutes. The peak flux at $>$200 GeV measured with the 4-minute binned light curve is $(4.2 \pm 0.6) \times 10^{-6} \;\text{photons} \;\text{m}^{-2}\, \text{s}^{-1}$, or $\sim$180% of the Crab Nebula flux. Variability in GeV $\gamma$-ray, X-ray, and optical flux, as well as in optical and radio polarization, was observed around the time of the TeV $\gamma$-ray flare. A possible superluminal knot was identified in the VLBA observations at 43 GHz.
The flare constrains the size of the emitting region, and is consistent with several theoretical models with stationary shocks.
8.3.1.1. - Example: Change in Knowledge An educational research study is designed so that participants complete a measure of demonstrated knowledge twice. The researcher wants to estimate the change in scores from the first to second administrations (i.e., pre- and post-test). Data are paired by participant. The researcher subtracted pre-test scores from the post-test scores and found a mean increase of 6.560 with a standard deviation of 3.867 for \(n=100\). She wants to construct a 95% confidence interval for the mean difference. First, we'll find the appropriate multiplier. \(df=n-1=100-1=99\) For a 95% confidence interval: \(t_{df=99}=1.984\) \(6.560 \pm 1.984 \left(\frac{3.867}{\sqrt{100}}\right)=6.560 \pm 0.767=[5.793, 7.327]\) We are 95% confident that the difference between post- and pre-test scores is between 5.793 and 7.327. Data from Zimmerman, W. A. (2015). Impact of Instructional Materials Eliciting Low and High Cognitive Load on Self-Efficacy and Demonstrated Knowledge (Unpublished doctoral dissertation). The Pennsylvania State University, University Park, PA.
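The computation above can be reproduced in a couple of lines. This is an illustrative sketch, not part of the original example; the multiplier 1.984 is the \(t\) value for \(df=99\) quoted in the text:

```python
from math import sqrt

def t_confidence_interval(mean_diff, sd_diff, n, t_multiplier):
    """CI for a mean difference: mean +/- t * (sd / sqrt(n))."""
    half_width = t_multiplier * sd_diff / sqrt(n)
    return mean_diff - half_width, mean_diff + half_width
```

With the study's values (mean 6.560, SD 3.867, \(n=100\), \(t=1.984\)) this returns approximately (5.793, 7.327), matching the interval in the example.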
Sessions Monday, July 6 2009, Plenary Monday, Working group 1 Monday, Working group 2 Monday, Working group 3 Tuesday, July 7, 2009, Plenary Tuesday, Working group 1 Tuesday, Working group 2 Tuesday, Working group 3 Wednesday, July 8, 2009, Plenary Thursday, July 9, 2009, Plenary Thursday, Working group 1 Thursday, Working group 2 Thursday, Working group 3 Friday, July 10, 2009, Plenary session poster Monday, July 6 2009, Plenary Effective field theories - past and future PoS(CD09)001 pdf pion pion scattering lengths measurement at NA48-CERN PoS(CD09)002 pdf Investigation of pi+ pi- and pK atoms at DIRAC PoS(CD09)003 pdf Spontaneous chiral symmetry breaking on the lattice PoS(CD09)004 pdf Monday, Working group 1 Light quark masses PoS(CD09)005 pdf Results from ETMC in the light-quark sector PoS(CD09)006 pdf MILC results for light pseudoscalars PoS(CD09)007 pdf Evidence for pi-K atoms with DIRAC PoS(CD09)008 pdf Updated results from PIBETA and overview of the PEN experiment PoS(CD09)009 pdf Pion form factor from RBC and UKQCD PoS(CD09)010 pdf Pion form factors from lattice QCD with exact chiral symmetry PoS(CD09)011 pdf Experimental opportunities of ChPT at J-PARC PoS(CD09)012 pdf Monday, Working group 2 Analyticity constrained pion-nucleon analysis PoS(CD09)013 pdf Isospin-breaking corrections to the pion-nucleon scattering lengths PoS(CD09)014 pdf Pion-nucleon scattering around the delta resonance PoS(CD09)015 pdf A method to measure the Kbar-N scattering length in lattice QCD PoS(CD09)016 pdf Two-flavour ChPT for hyperons PoS(CD09)018 pdf A Bayesian approach to chiral extrapolations PoS(CD09)019 pdf Mesons and glueballs in chiral approach and AdS/QCD PoS(CD09)020 pdf Recent QCD results on the strange hadron systems PoS(CD09)021 pdf S = 1 pentaquarks in QCD sum rules PoS(CD09)022 pdf Monday, Working group 3 Few-nucleon scattering experiments PoS(CD09)023 pdf Chiral EFT for nuclear forces with Delta isobar degrees of freedom PoS(CD09)024 pdf NN potentials from IR 
chiral EFT PoS(CD09)025 pdf Chiral 3NF and neutron-deuteron scattering PoS(CD09)026 pdf Nuclear lattice simulations PoS(CD09)027 pdf New approach to NN with perturbative pions PoS(CD09)028 pdf Nuclear interactions and the space-like structure of the pion PoS(CD09)029 pdf Pion reactions with few-nucleon systems PoS(CD09)030 pdf Tuesday, July 7, 2009, Plenary ChPT in the meson sector PoS(CD09)031 pdf Kaons on the lattice PoS(CD09)032 pdf Kaon physics: Recent experimental results PoS(CD09)033 pdf Experimental information on Vus PoS(CD09)034 Lifetime Measurement of the pi^0 Meson and the QCD Chiral Anomaly PoS(CD09)035 pdf Tuesday, Working group 1 Dispersion sum rules for pion polarizabilities PoS(CD09)036 pdf Pion polarizabilities: No conflict between dispersion theory and ChPT PoS(CD09)037 pdf Prospects for the Primakoff measurement of the pion polarizability at COMPASS PoS(CD09)038 pdf Chiral expansions of the pi^0 decay amplitude PoS(CD09)039 pdf Precision Measurement of Electroproduction of pi^0 Near Threshold: A Test of Chiral QCD Dynamics PoS(CD09)040 pdf Precise tests of chiral perturbation theory from Ke4 decays by the NA48/2 experiment PoS(CD09)041 pdf The Standard model prediction for K_e2/K_mu2 and pi_e2/pi_mu2 PoS(CD09)042 pdf Electromagnetic effects in $\eta\rightarrow 3 \pi$ PoS(CD09)043 pdf Study of the η→3π0 decay with the Crystal Ball at MAMI PoS(CD09)044 eta, eta' physics at KLOE PoS(CD09)045 pdf Studies of the eta meson decays with WASA PoS(CD09)046 pdf A new dispersive analysis of $\eta\rightarrow 3 \pi$ PoS(CD09)047 pdf Search of new physics via eta rare decays PoS(CD09)048 pdf Tuesday, Working group 2 Nucleon Spin Structure in the Resonance Region PoS(CD09)049 pdf Complex mass renormalization in EFT PoS(CD09)050 pdf Different nature of rho and a1 PoS(CD09)051 pdf Origin of resonances in chiral dynamics PoS(CD09)052 pdf Baryonic resonances dynamically generated from the interaction of vector mesons with stable baryons of the decuplet PoS(CD09)053
pdf Photoproduction of neutral pions PoS(CD09)054 pdf The unexpected role of D-waves in low-energy neutral pion photoproduction PoS(CD09)055 pdf A gauge invariant chiral unitary framework for kaon photo- and electroproduction on the proton PoS(CD09)056 pdf Threshold pion electroproduction at large momentum transfers PoS(CD09)057 pdf Neutron spin sum rules and spin polarizabilities at low Q2 PoS(CD09)058 pdf Compton scattering from the proton: An analysis using the delta expansion up to N3LO PoS(CD09)059 pdf Nucleon Spin Polarisabilities from Polarised Deuteron Compton Scattering PoS(CD09)060 pdf Tuesday, Working group 3 Recent few-body studies at TUNL: Experimental results and challenges PoS(CD09)061 pdf M1 properties of the few-nucleon systems PoS(CD09)062 pdf Subtractive renormalization of the chiral effective theory NN potentials up to next-to-next-to-leading order PoS(CD09)064 pdf Hadronic atoms PoS(CD09)065 pdf Calculations of electromagnetic (and a few weak) reactions of light nuclei in chiral effective theory PoS(CD09)066 pdf Hadron-hadron interactions in lattice QCD PoS(CD09)067 Nuclear forces from lattice QCD PoS(CD09)068 pdf Unitarized chiral dynamics in three-body systems PoS(CD09)069 pdf Wednesday, July 8, 2009, Plenary Pion physics on the lattice PoS(CD09)070 pdf Isospin symmetry breaking PoS(CD09)071 pdf Effective theories for magnetic systems PoS(CD09)072 pdf Effective theories of electroweak symmetry breaking PoS(CD09)073 pdf Thursday, July 9, 2009, Plenary Baryons on the lattice PoS(CD09)074 Baryon chiral perturbation theory PoS(CD09)075 pdf Nuclear forces on the lattice PoS(CD09)076 pdf Effective field theory for nuclear forces PoS(CD09)077 pdf More effective theory of nuclear forces PoS(CD09)078 pdf Thursday, Working group 1 Theory of the hadronic light-by-light contribution to muon g-2 PoS(CD09)079 pdf Hadronic light-by-light scattering in the muon g-2: a new short-distance constraint on pion exchange PoS(CD09)080 pdf Some aspects of isospin 
breaking in kaon decays PoS(CD09)081 pdf Light quark results from a mixed lattice action PoS(CD09)082 Strange quark mass from tau decays PoS(CD09)083 Investigations on the properties of the f0(600) and f0(980) resonances in gamma gamma to pi pi Process PoS(CD09)084 pdf Chiral low-energy couplings from lattice computations in the epsilon regime PoS(CD09)085 pdf Chiral low-energy constants from tau data PoS(CD09)086 pdf Determination of LECs and testing ChPT at order p6 (NNLO) PoS(CD09)087 pdf Lattice study of ChPT beyond QCD PoS(CD09)088 pdf Relations between SU(2) and SU(3)-LECs in ChPT at two-loop level PoS(CD09)089 pdf Thursday, Working group 2 Recent results on GPD/DVCS experiments at CLAS PoS(CD09)090 pdf Moments of generalized parton distribution functions viewed from baryon ChPT PoS(CD09)091 Delta electromagnetic form factors and quark transverse charge densities from lattice QCD PoS(CD09)092 pdf Baryon structure in chiral effective field theory on the light front PoS(CD09)093 pdf Nucleon electromagnetic form factor ratio at low Q2: The JLab experimental program PoS(CD09)094 pdf An exemplary case of chiral EFT for resonances: Delta (1232) PoS(CD09)095 pdf Hadron masses and decay constants from lattice QCD at realistic quark masses PoS(CD09)096 Electromagnetic structure of the low-lying baryons in covariant chiral perturbation theory PoS(CD09)097 pdf Parity-violating electron scattering and strangeness form factors of the nucleon PoS(CD09)098 pdf SU(3)-breaking corrections to the hyperon vector coupling f1(0) in covariant baryon chiral perturbation theory PoS(CD09)099 pdf Thursday, Working group 3 Parity violation from few nucleon systems PoS(CD09)100 Study of the GDH sum rule of 3He at HIGS PoS(CD09)101 pdf Study of the GDH sum rule of 3He at JLab PoS(CD09)102 Lattice QCD simulations of baryon-baryon interactions PoS(CD09)103 Few-body systems and the pionless effective field theory PoS(CD09)104 pdf Universal correlations in pion-less EFT with the resonating 
group model: three and four nucleons PoS(CD09)105 pdf Differential cross section for neutron-proton bremsstrahlung at 200 MeV PoS(CD09)106 New results on photodisintegration of 4He PoS(CD09)107 pdf Electromagnetic currents from chiral EFT PoS(CD09)108 pdf Friday, July 10, 2009, Plenary Single-nucleon experiments PoS(CD09)109 pdf Nucleon form factors PoS(CD09)110 pdf Few-body reactions at low energies PoS(CD09)111 pdf Experimental results from MAMI PoS(CD09)112 pdf Closing talk: Chiral dynamics 2009 PoS(CD09)113 pdf session poster Dispersive $K \pi$ vector form factor and fits to $\tau\rightarrow K \pi \nu_\tau$ and $K_{e3}$ data PoS(CD09)114 Consistency checks between OPE condensates and low-energy couplings PoS(CD09)115 pdf Chiral dynamics predictions for $\eta'\rightarrow \eta \pi \pi$ PoS(CD09)117 pdf The electroweak model based on the nonlinearly realized gauge group PoS(CD09)118 pdf RGE in resonance chiral theory: the pi pi vector form factor PoS(CD09)119 pdf Cusps in $\eta'\rightarrow \eta \pi \pi$ decays PoS(CD09)120 pdf Weinberg sum rules and NLO in 1/N_C PoS(CD09)121 pdf Construction of the $\eta\rightarrow 3\pi$ (and $K\rightarrow 3\pi$) amplitudes using a dispersive approach PoS(CD09)122 pdf
Publication Info Title: Existence of the solution to a nonlocal-in-time evolutional problem Type: Article Status: Published Journal: Nonlinear Anal. Model. Control 19 (3), 432-447 Extended summary This work is devoted to the study of a nonlocal-in-time evolutional problem for a first order differential equation in a Banach space. \begin{equation} \label{bp1:eq} u'_t + Au=f(t), \quad t \in [0,T] \end{equation} \begin{equation} \label{bp1:nc} u(0)+\sum\limits_{k=1}^{n}\alpha_k u(t_k) =u_0,\quad 0 < t_1 < t_2 < \ldots < t_n \leq T. \end{equation} Sufficient conditions guaranteeing the existence of a solution to \eqref{bp1:eq} — \eqref{bp1:nc} were formulated by Byszewski and Liang. According to them, the solution of \eqref{bp1:eq} — \eqref{bp1:nc} exists if the coefficients from \eqref{bp1:nc} satisfy the following inequality \begin{equation}\label{estLiang2002} \sum\limits_{i=1}^{n} \left|\alpha_i\right|e^{-\rho t_i} \leq 1. \end{equation} Our primary approach, although it stems from the convenient technique based on the reduction of a nonlocal problem to its classical initial value analogue, uses a more advanced analysis: a validation of the correctness of the general solution representation given by \begin{equation} \label{bp1IntRed} u(t)=e^{-At} B^{-1} \left[ u_0 - \sum\limits_{i=1}^n \alpha_i \int\limits_0^{t_i} e^{-A (t_i-\tau)}f(\tau) d\tau\right] +\int\limits_0^t{e^{-A(t-\tau)}f(\tau)}d\tau. \end{equation} Here $B(z)$ is defined as follows: \begin{equation} \label{zerosExp} B(z)=1+\sum_{k=1}^n{\alpha_k e^{-t_k z}}. \end{equation} Such an approach allows us to reduce the given existence problem to the problem of locating the zeros of the entire function $B(z)$. It results in necessary and sufficient conditions for the existence of a generalized (mild) solution to the given nonlocal problem: Theorem 1. Let $A$ be a strongly positive linear operator with the spectral parameters $(\rho, \theta)$, and let $f(t) \in L^1((0,T),X)$ be a given function.
Then the generalized solution \eqref{bp1IntRed} exists if and only if the set of zeros $\mathrm{Ker}(B(z)) \equiv \left\{z: B(z)=0,\ z \in \mathbb{C} \right\}$ of $B(z)$, associated with nonlocal condition \eqref{bp1:nc}, satisfies the inclusion \begin{equation}\label{zerosB} \mathrm{Ker}(B(z))\subset \mathbb{C} \backslash \Sigma. \end{equation} Example 1. To demonstrate the application of Theorem 1, let us consider the following nonlocal problem \begin{equation} \begin{array}{l} u'_t+Au=f(t), \quad t \in [0,T]\\ u(0)+\alpha_1 u(t_1) =u_0,\quad 0<t_1\leq T. \end{array}\label{exNCP1p} \end{equation} Condition \eqref{zerosB} for problem \eqref{exNCP1p} is equivalent to the following inequality: \begin{equation}\label{estNC1p} \left|\mathrm{Arg}\left(-\frac{1}{\alpha_1}\right)\right|>\left(\ln\left|{\alpha_1}\right|-t_1\rho\right)\tan\theta. \end{equation} The comparison of sufficient condition \eqref{estLiang2002} against necessary and sufficient condition \eqref{estNC1p} is given in Figure 1, where we depict three sets of admissible values of $\alpha_1 \in \C$ for $t_1=1$: $\cbox{gray80}$ — estimate \eqref{estLiang2002} ($\theta=\pi/4$), $\cbox{gray60}$ — estimate \eqref{estNC1p} ($\theta=\pi/4$) and $\cbox{gray40}$ — estimate \eqref{estNC1p} ($\theta=\pi/6$). Observe that for the operator $A$ with spectral parameters $(1,\pi/4)$ ($\rho=1$, $\theta=\pi/4$) the set of admissible $\alpha_1$ obtained from \eqref{estNC1p} (the interior of the region coloured in $\cbox{gray60}$) contains, as a subset, the admissible set obtained by \eqref{estLiang2002} (coloured in $\cbox{gray80}$). The latter set remains the same for the whole family of sectorial operator coefficients with some fixed $\rho$ and $\forall \theta \in [0, \pi/2]$, since \eqref{estLiang2002} is independent of $\theta$. In reality, however, the admissible set grows larger as $\theta$ becomes smaller.
Check, for example, the corresponding set for the case $\theta=\pi/6$ obtained using \eqref{estNC1p}, which is coloured in $\cbox{gray40}$ in Figure 1. In the limiting case $\theta=0$, when $A$ is self-adjoint, this set becomes equal to $\C \backslash (-\infty; -e^{t_1\rho})$. A mapping \begin{equation} \label{bp_transf1} \varphi(z)=\exp(-z/Q), \quad Q=\mbox{LCM}(\mu_1,\mu_2, \ldots, \mu_n) \end{equation} transforms \eqref{zerosExp} into the following form \begin{equation} \label{zerosPol} P(z)=1+\suml_{k=1}^{n}\alpha_k z^{c_k}. \end{equation} It is well known that \eqref{bp_transf1} is a one-to-one conformal mapping of $\Omega_Q$ onto $\Phi$ (see Figure 2 b)). Conditions guaranteeing that all roots of $P(z)$ lie outside $\Phi$, or equivalently that $\Ker{B(z)} \subset D_{Q} \backslash \Omega_Q$, would be necessary and sufficient for the existence and uniqueness of solution \eqref{bp1IntRed}. The majority of results on such conditions for polynomials are devoted to the situation when a circle is considered in place of $\Phi$. That is why we first encircle $\Phi$ and then use readily available zero-free conditions for that circle. Such an approach makes the resulting conditions only sufficient for all $\theta \in [0,\pi/2)$, except for the limiting case $\theta = \pi/2$ when $\Phi$ is a circle by construction. For any given operator $A$ with spectral parameters $(\rho,\theta)$ the boundary of $\Phi$ can be parametrized as follows $$ \partial \Phi = \left\{\exp\left(\frac{-Z(x)}{Q}\right): x\in[0,+\infty]\right\}, $$ where $Z(x)$ is a parametrization of $\partial\Omega_Q$: $$ Z(x)=\rho+x+i \cdot\left\{ \begin{array}{ll} x \tan{\theta},&\quad x\tan{\theta}< Q\pi,\\ Q\pi,&\quad x\tan{\theta}\geq Q\pi. \end{array} \right. $$ A closer look at the expression for $\partial \Phi$ reveals that the vertical diameter of $\Phi$ is proportional to the magnitude of the spectral angle, while the horizontal diameter of $\Phi$ is inversely proportional to $\rho$.
This observation suggests describing the encompassing circle as the circumcircle of a triangle with the vertices $$ B=\max_{z \in \partial \Phi}\Re(z)+0i=\exp(-\rho/Q), $$ and $C_{1/2} \in \partial \Phi$, which are symmetric with respect to the real axis. The coordinates of $C_{1}$ are chosen to maximize the distance $|O_1-B|$ under the constraint $|O_1-B|^2=|O_1-C_i|^2$, where $O_1$ is the circumcentre of $\triangle_{BC_1C_2}$. Using the definition of $Z(x)$, \eqref{bp_transf1} and some basic facts from calculus, we reduce the mentioned maximization problem to the following equation \begin{equation} \begin{split} \exp\left(-\frac{2x}{Q}\right)\left[\cos\left(\frac{x\tan\theta}{Q}\right)-\tan(\theta)\sin\left(\frac{x\tan\theta}{Q}\right)\right]+ \\ +\cos\left(\frac{x\tan(\theta)}{Q}\right)+\tan(\theta)\sin\left(\frac{x\tan\theta}{Q}\right) = 2\exp\left(-\frac{x}{Q}\right). \end{split} \label{eq_maxdist} \end{equation} It has a positive solution for $Q\in \N$ and $\forall \theta \in [0,\pi/2]$. Assume that $x_d$ is a solution of \eqref{eq_maxdist}; then $$ C_{1/2}=\varphi(\rho+x_d \pm i x_d \tan\theta), \quad O_1= \frac{\varphi(2\rho)-\Re(C_1)^2-\Im(C_1)^2}{2\left(\varphi(\rho)-\Re(C_1)\right)}, $$ while the radius of the circumcircle is $r=\varphi(\rho)-O_1$ (the picture of $\Phi$ and its encompassing circle, along with their inverse images, is shown in Figure 2). In addition to representation \eqref{zerosPol} we introduce two alternative forms of the polynomial related to nonlocal condition \eqref{bp1:nc}: with the given circle transformed to the unit circle centered at the origin \begin{equation}\label{zerosPol1} P_1(z')= P(O_1+rz')=\suml_{k=0}^{c_n} \alpha'_k z'^k, \end{equation} and with the given circle transformed to the circle of radius $r$ centered at the origin \begin{equation}\label{zerosPol2} P_2(z'')=P(O_1+z'')=\suml_{k=0}^{c_n} \alpha''_k z''^k. \end{equation} Next we recall some known results concerning zero-free region estimates for polynomials.
Definition 1. Given $P^\star(z)=z^n\overline{P(1/\overline{z})}$, we define the Schur transform $T$ of the polynomial $P(z)$ by $$ \begin{array}{rl} TP(z):=&\overline{\alpha_0}P(z)-\alpha_n P^\star(z)\\ =& \suml_{k=0}^{n-1} (\overline{\alpha_0} \alpha_k - \alpha_n \overline{\alpha_{n-k}})z^k. \end{array} $$ Theorem 2. Let $P(z)$ be a polynomial of degree $n>0$. All zeros of $P(z)$ lie in the exterior of the circle $|z| \leq 1$ if and only if for all $k=1,2, \ldots, n$ \begin{equation} \gamma_k > 0, \label{shurCrit} \end{equation} where $\gamma_k := T^k P(0)$ and $T^k P= T(T^{k-1} P)$. Lemma 1. All zeros of $P(z)$ lie in the region $$ |z|\geq \frac{|\alpha_0|}{|\alpha_0|+M}, $$ where $M=\max\limits_{1\leq k \leq n}|\alpha_k|$. Lemma 2. All zeros of $P(z)$ satisfy the inequality $$ |z|\geq \frac{|\alpha_0|}{\left[|\alpha_0|+M^q \right]^{1/q}},\quad M=\left(\sum\limits_{k=1}^n |\alpha_k|^p\right)^{1/p},\quad p,\ q \in \mathbb{R}_+,\quad \frac{1}{p}+\frac{1}{q}=1. $$ Lemma 3. All zeros of $P(z)$ belong to the region $$ |z|\geq \frac{1}{2}\min\limits_{\alpha_i\neq 0 }\left\{\left|\frac{\alpha_0}{\alpha_1}\right|, \left|\frac{\alpha_0}{\alpha_2}\right|^{1/2}, \ldots, \left|\frac{\alpha_0}{\alpha_{n-1}}\right|^{1/(n-1)}, \left|\frac{2\alpha_0}{\alpha_n}\right|^{1/n}\right\}. $$ Lemma 4. All zeros of $P(z)$ belong to the region $|z|\geq \max\{V_1^{-1},V_2^{-1}\}$, where $$ V_1=\cos{\frac{\pi}{n+1}}+\frac{|\alpha_n|}{2|\alpha_0|}\left(\left|\frac{\alpha_1}{\alpha_n}\right|+\sqrt{1+\suml_{k=1}^{n-1}\left|\frac{\alpha_k}{\alpha_n}\right|^2}\right), $$ $$ V_2=\frac{1}{2}\left(\left|\frac{\alpha_1}{\alpha_0}\right|+\cos\frac{\pi}{n}\right)+ \frac{1}{2}\left[\left(\left|\frac{\alpha_1}{\alpha_0}\right|-\cos\frac{\pi}{n}\right)^2+ \left(1+\left|\frac{\alpha_n}{\alpha_0}\right|\sqrt{1+\suml_{k=2}^{n-1}\left|\frac{\alpha_k}{\alpha_n}\right|^2}\right)^2\right]^{1/2}.
$$ By combining the estimates given by Theorem 2 or Lemmas 1 — 4 with Theorem 1, we obtain new sufficient conditions for the existence and uniqueness of the solution to \eqref{bp1:eq} — \eqref{bp1:nc}. Theorem 3. Assume that $A$, $f(t)$ and $u_0$ satisfy the conditions of Theorem 1. The generalized solution \eqref{bp1IntRed} of nonlocal problem \eqref{bp1:eq} — \eqref{bp1:nc} exists if any of the following holds. $\exists\ z > 1$ which, along with the coefficients of $P(\varphi(\rho) z)$ from \eqref{zerosPol}, satisfies the conditions of Theorem 2 or at least one of Lemmas 1 — 4. $\exists\ z > 1$ which, along with the coefficients of $P_1(z)$ from \eqref{zerosPol1}, satisfies the conditions of Theorem 2 or at least one of Lemmas 1 — 4. $\exists\ z > r$ which, along with the coefficients of $P_2(z)$ from \eqref{zerosPol2}, satisfies the conditions of at least one of Lemmas 1 — 4. Example 2. Let us again consider the nonlocal problem \eqref{bp1:eq} — \eqref{bp1:nc} with operator coefficient $A$ ($\theta=\theta_0$, $\rho=0$) and the Bitsadze–Samarskii-type nonlocal condition $u(0)+\alpha_1 u(t_1)=\alpha_2 u(t_2)$. In the case of the given two-point nonlocal condition, $\Ker(B(z))$ cannot be found in a closed form. Estimate \eqref{estLiang2002} (with $\rho=0$) yields $$ |\alpha_1|+|\alpha_2| \leq 1. $$ Meanwhile, the application of proposition 1 from Theorem 3 together with the Schur–Cohn algorithm with $\theta_0=\pi/2$ leads us to the system of inequalities \begin{equation}\label{eqintshura2} \left\{ \begin{array}{l} |\alpha_2|<1,\\ |1-\alpha_2^2|>|\alpha_1(1-\alpha_2)|. \end{array}\right. \end{equation} Here we assumed that $t_1=1,\ t_2=2$ and $Q=1$. Solutions of \eqref{eqintshura2} are graphically compared with the set of pairs $(\alpha_1, \alpha_2)$ satisfying \eqref{estLiang2002} in Figure 3 a). To clarify the dependence on $t_i$, we present in Figure 3 b) a similar comparison for $t_1=3,\ t_2=8$.
Our approach apparently gives more general conditions than \eqref{estLiang2002} or its particular case from the works of Byszewski, even though we made sufficient conditions \eqref{eqintshura2} independent of $\theta_0$. Proposition 2 of Theorem 3 ought to be more advantageous for operator coefficients with smaller $\theta$. Let us fix $\theta_0 =\pi/3$. We get $$ O_1 = 0.3950734246, \quad r = 0.6049265754 $$ for the parameters of the encompassing circle and $$ \begin{array}{ll} P_1(z') &\approx 0.37 \alpha_2 {z'}^2+(0.6 \alpha_1+0.48 \alpha_2) z'+1+0.4 \alpha_1+0.16 \alpha_2, \\ P_2(z'') &\approx \alpha_2 {z''}^2+(\alpha_1+0.79 \alpha_2) {z''}+1+0.4 \alpha_1+0.16 \alpha_2 \end{array} $$ for polynomials \eqref{zerosPol1}, \eqref{zerosPol2} correspondingly. Application of the mentioned proposition along with \eqref{shurCrit} (setting $t_1=1,\ t_2=2$ as before) gives us the set of admissible $(\alpha_1,\alpha_2)$ depicted in Figure 4. One can see that this set contains both admissible parameter sets obtained from proposition 1 of the same theorem and from condition \eqref{estLiang2002}. Materials Figures Codes Maple file circumcircle_and_preimage.mws to generate Figure 1 Maple file 1pnc_alpha_complex.mws to generate Figure 2 Maple file 2pnc_tShura_alpha_real.mws to generate Figure 3 This code implements the zero-free region tests presented in Lemmas 1—4, the Schur–Cohn algorithm and the calculation of the encompassing circle parameters. It allows one to obtain inequalities for admissible parameter sets based on the various propositions of Theorem 3.
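The accompanying code is in Maple; purely as an illustration, the Schur–Cohn criterion of Theorem 2 can be sketched in Python as follows (function names are mine; the criterion requires each $\gamma_k$ to be real and positive):

```python
def schur_transform(coeffs):
    """One step of the Schur transform T applied to P(z) = a_0 + ... + a_n z^n."""
    a0c, an = coeffs[0].conjugate(), coeffs[-1]
    n = len(coeffs) - 1
    return [a0c * coeffs[k] - an * coeffs[n - k].conjugate() for k in range(n)]

def zeros_outside_unit_disk(coeffs):
    """Theorem 2: all zeros of P lie in |z| > 1 iff gamma_k = T^k P(0) > 0, k = 1..n."""
    p = list(coeffs)
    for _ in range(len(coeffs) - 1):
        p = schur_transform(p)
        gamma = complex(p[0])
        if not (gamma.real > 0 and abs(gamma.imag) < 1e-12):
            return False
    return True

# P(z) = (1 + z/2)(1 + 2z/5) = 1 + 0.9z + 0.2z^2 has zeros -2 and -5/2, both outside
print(zeros_outside_unit_disk([1.0, 0.9, 0.2]))  # True
# P(z) = 1 + 2z has its zero at -1/2, inside the unit disk
print(zeros_outside_unit_disk([1.0, 2.0]))       # False
```
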
A diameter of a circle is a line segment that passes through the centre of the circle and has both endpoints on the circumference. The diameter divides the circle into two equal parts, each known as a semicircle. The centre of a circle is the midpoint of its diameter; it divides the diameter into two equal parts, each of which is a radius of the circle. The radius is therefore half the diameter. A chord of a circle is a straight line segment whose endpoints both lie on the circle, so the diameter is the longest possible chord of any circle. Relation between Radius and Diameter: The distance from the centre to any point on the circumference of a circle is a fixed distance, known as the radius of the circle. Therefore, the relation between radius and diameter is \(Diameter = 2 \times Radius\) Formulas for Area and Circumference in Terms of Diameter: Circumference: We know Circumference = \(2 \pi r\). It can be re-written as \((2r) \times \pi = d \times \pi\). Area: We know Area = \(\pi r^{2}\). It can be re-written as \(\frac{\pi (2r)^{2}}{4} = \frac{\pi}{4} d^{2}\). Example 1: Given, radius of the circle (r) = 8 cm. Diameter of the circle = 2r = 2 $\times$ 8 cm = 16 cm. Now Area = \(\pi r^{2} \approx 3.14 \times 8^{2}\) = 200.96 cm\(^2\). Example 2: Given, area of the circle (A) = 154 cm\(^2\). We know Area (A) = \(\pi r^{2}\), so \(r = \sqrt{A/\pi} = 7\) cm (taking \(\pi \approx 22/7\)). Now the circumference of the circle is \(2 \pi r = 2 \times \frac{22}{7} \times 7\) = 44 cm.
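Both worked examples can be checked in a few lines of Python (a sketch; the approximations π ≈ 3.14 and π ≈ 22/7 match the ones used in the text):

```python
import math

def circle_from_radius(r, pi=math.pi):
    """Return (diameter, circumference, area) for a given radius."""
    d = 2 * r
    return d, pi * d, pi * d ** 2 / 4

# Example 1: r = 8 cm, with pi = 3.14
d, c, a = circle_from_radius(8, pi=3.14)
print(d, a)  # 16 200.96

# Example 2: A = 154 cm^2, with pi = 22/7
A = 154
r = math.sqrt(A / (22 / 7))  # 7.0
C = 2 * (22 / 7) * r         # 44.0
```
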
@Mathphile I found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet, and neither a reason why the expression cannot be prime for odd n, although there are far more even cases without a known factor than odd cases. @TheSimpliFire That's what I'm thinking about. I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it. It is really "too elementary", but I like surprises, if they're good. It is in fact difficult; I did not understand all the details either. But the ECM method is analogous to the p-1 method, which works well when there is a factor p such that p-1 is smooth (has only small prime factors). Brocard's problem is a problem in mathematics that asks to find integer values of n and m for which $n!+1=m^2$, where n! is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan. Brown numbers: Pairs of the numbers (n, m) that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers: (4,5), (5,11... $\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur for $n$ satisfying either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474. Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, $m\ne 10^k\pm1$ for $n>10$.
If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function. The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation} Thus if $n!$ has $k$ trailing zeros, then $n\in(4k,\infty)$. Since $m=2\cdot5^{2^k}-1\pmod{10^k}$ and $2\cdot16^{5^k}-1\pmod{10^k}$ have at most $k$ digits, $m^2-1$ has at most $2k$ digits by the conditions in the Corollary. The Corollary therefore follows if $n!$ has more than $2k$ digits for $n>10$. From equation $(4)$, $n!$ has at least the same number of digits as $(4k)!$.
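The trailing-zero count $k$ and the bound $k<n/4$ can be checked directly (a small Python sketch):

```python
def trailing_zeros_factorial(n):
    """Trailing zeros of n! via Legendre's formula: sum of floor(n / 5^i)."""
    k, p = 0, 5
    while p <= n:
        k += n // p
        p *= 5
    return k

print(trailing_zeros_factorial(100))  # 24, and indeed 24 < 100/4 = 25
```
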
Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\end{equation} Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$ where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation} Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation} Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Then after considerable algebra it is found that \begin{align*}f'(x)&=2x(8\pi x)^{\frac1{4x}}+\frac14(8\pi x)^{\frac1{4x}}(1-\ln(8\pi x))\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x-\frac18\ln(8\pi x)\right)>0\end{align*} for $x>0$ as $\min\{x-\frac18\ln(8\pi x)\}>0$ in the domain. Thus $f$ is monotonically increasing in $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the inequality in equation $(8)$ holds. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$ We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction - which is better) @TheSimpliFire Hey! with $4\pmod {10}$ and $0\pmod 4$ then this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have that $5m_1=2(m_2-m_1)$ which means $m_1$ is even. 
We get $4\pmod {20}$ now :P Yet again a conjecture! Motivated by Catalan's conjecture and a recent question of mine, I conjecture that for distinct, positive integers $a,b$, the only solution to the equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5)$. It is anticipated that there will be much fewer solutions for incr...
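A quick brute-force search supports the conjecture in a small range (illustrative only; it proves nothing about larger $a,b$):

```python
def solutions(limit):
    """All pairs of distinct positive integers (a, b) <= limit with a^b - b^a == a + b."""
    return [(a, b)
            for a in range(1, limit + 1)
            for b in range(1, limit + 1)
            if a != b and a ** b - b ** a == a + b]

print(solutions(30))  # [(2, 5)], since 2^5 - 5^2 = 7 = 2 + 5
```
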
I am aware there are other proofs of this statement, but I am interested in the argument outlined here on pages 62-63. Corollary II.2.2.9. Let $A$ and $B$ be $C^*$-algebras and $\phi:A \rightarrow B$ be an injective $*$-homomorphism. Then $\phi$ is isometric, i.e. $||\phi(x)|| = ||x||$ for all $x \in A$. The proof goes as follows: WLOG we may assume $A,B$ are commutative (I got this), and it is obvious from II.2.2.4 (as below). Theorem II.2.2.4. If $A$ is a commutative $C^*$-algebra, then the Gelfand transform is an isometric $*$-isomorphism from $A$ onto $C_0(\hat{A})$. How does II.2.2.4 imply II.2.2.9?
Aika is a neural network simulation algorithm that uses neurons to represent a wide variety of linguistic concepts and connects them via synapses. In contrast to most other neural network architectures, Aika does not employ layers to structure its network. Synapses can connect arbitrary neurons with each other, but the network is not fully connected either. Like in other artificial neural networks (ANNs), the synapses are weighted. By choosing the weights and the threshold (i.e. the bias) accordingly, neurons can take on the characteristics of boolean logic gates such as an and-gate or an or-gate. To compute the activation value of a neuron, the weighted sum over its input synapses is computed. Then the bias value \(b\) is added to this sum and the result is sent through an activation function \(\varphi\). $$net_j = b_j + \sum\limits_{i=0}^N{x_i w_{ij}}$$ $$y_j = \varphi (net_j)$$ Depending on the type of neuron, different activation functions are used. One commonly used activation function in Aika is the rectified hyperbolic tangent function, which is basically the positive half of the \(\tanh()\) function. \[\varphi(x) = \begin{cases} 0 & : x \leq 0 \\ \tanh(x) & : x > 0 \end{cases}\] The activation functions are chosen in a way that clearly distinguishes between active and inactive neurons. Only activated neurons are processed. These activations are expressed not only by a real-valued number but also by an activation object. The advantage of having activation objects is that, through them, Aika is able to cope with the relational structure of natural language text by making each activation relate to a specific segment of text. In a way these activations can be seen as text annotations that specify their start and end character. Words, phrases and sentences are in relation to each other through their sequential order.
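The two formulas above can be sketched in a few lines of Python (illustrative only; the and-gate weights and function names are my own example values, not Aika's API):

```python
import math

def rectified_tanh(x):
    """Aika-style rectified hyperbolic tangent: the positive half of tanh."""
    return math.tanh(x) if x > 0 else 0.0

def neuron_output(inputs, weights, bias):
    """net_j = b_j + sum(x_i * w_ij), sent through the activation function."""
    net = bias + sum(x * w for x, w in zip(inputs, weights))
    return rectified_tanh(net)

# An and-gate-like neuron: it only fires when both inputs are active
print(neuron_output([1, 1], [10.0, 10.0], -15.0))  # close to 1
print(neuron_output([1, 0], [10.0, 10.0], -15.0))  # 0.0
```
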
The assignment of text ranges and word positions to activations is a simple yet powerful representation of the relational structure of text and avoids some of the shortcomings of other representations such as bag of words or sliding window. Since the activations are propagated through the network, synapses need to be able to manipulate the text range and the word position while the activations are passed on to the next neuron. One common problem when processing text is that of cyclic dependencies. In the example 'jackson cook' it is impossible to decide which word has to be resolved first, the forename or the surname, since both depend on each other: the word jackson can be recognized as a forename when the next word is a surname, and the word cook can be recognized as a surname when the previous word is a forename. To tackle this problem Aika employs non-monotonic logic and is therefore able to derive multiple interpretations that are mutually exclusive. These interpretations are then weighted and only the strongest interpretation is returned as the result. Consider the following network, which is able to determine whether a word that has been recognized in a text is a forename, surname, city name, or profession. If, for instance, the word "jackson" has been recognized in a text, it will trigger further activations in the two jackson entity neurons. Since both are connected through the inhibiting neuron, only one of them can be active in the end. Aika will therefore generate two interpretations. But these interpretations are not limited to a single neuron. For instance, if the word neuron cook gets activated too, the jackson forename entity and the cook surname entity will be part of the same interpretation. The forename and surname category links in this example form a positive feedback loop which reinforces this interpretation. New interpretations are spawned if both the input and the output of a negative recurrent synapse get activated.
In this case a conflict is generated. An interpretation is a conflict-free set of activations. Therefore, if there are no conflicts during the processing of an input data set, only one interpretation will exist and the search for the best interpretation will end immediately. On the other hand, if there are conflicts between activations, a search needs to be performed which selects or excludes individual activations and tests how these changes affect the overall weight sum of all activations. This sum is also called the objective function \(f\) and can be stated in the following way: $$f = \sum\limits_{j \in Acts}{\min (-g_j, net_j)}$$ $$g_j = \sum\limits_{i = 0, w_{ij} < 0, w_{ij} \in Recurrent}^N{w_{ij}}$$ The value \(g_j\) is simply the sum of all negative feedback synapses. The intention behind this objective function is to measure a neuron's ability to overcome the inhibiting input signals of other neurons. Synapses in Aika consist not just of a weight value but also of properties that specify relations between synapses. These relations can either be used to impose a constraint on the matching input text ranges or on the dependency structure of the input activations. Biological neurons seem to achieve such relation matching through the timing of the firing patterns of their action potentials. The complete list of synapse properties looks as follows:

new Synapse.Builder()
    .setSynapseId(0)
    .setNeuron(inputNeuronA)
    .setWeight(10.0)
    .setRecurrent(false)

new Synapse.Builder()
    .setSynapseId(1)
    .setNeuron(inputNeuronB)
    .setWeight(10.0)
    .setRecurrent(false)

new Relation.Builder()
    .setFrom(0)
    .setTo(1)
    .setRelation(new Equals(END, BEGIN)),
new Relation.Builder()
    .setFrom(0)
    .setTo(OUTPUT)
    .setRelation(new Equals(BEGIN, BEGIN)),
new Relation.Builder()
    .setFrom(1)
    .setTo(OUTPUT)
    .setRelation(new Equals(END, END))

Unlike other ANNs, Aika allows a bias value to be specified per synapse. These bias values are simply summed up and added to the neuron's bias value.
The property recurrent states whether this synapse is part of a feedback loop. Depending on the weight of this synapse, such a feedback loop can be either positive or negative. This property is an important indicator for the interpretation search. Range relations define a relation either to another synapse or to the output range of this activation. The range relations consist of four comparison operations between the begin and end of the current synapse's input activation and the begin and end of the linked synapse's input activation. The next property is the range output, which consists of two boolean values. If these are set to true, the range begin and the range end are propagated to the output activation. The last two properties are used to establish a dependency structure among activations. The instance relation defines whether this synapse and the linked synapse have a common ancestor or depend on each other in either direction. During the evaluation of this relation, only those synapses are followed which are marked with the identity flag. Depending on the weight of the synapse and the bias sum of the outgoing neuron, Aika distinguishes two types of synapses: conjunctive synapses and disjunctive synapses. Conjunctive synapses are stored in the output neuron as inputs, and disjunctive synapses are stored in the input neuron as outputs. The reason for this is that disjunctive neurons like the inhibitory neuron or the category neurons may have a huge number of input synapses, which makes it expensive to load them from disk. By storing those synapses on the input neuron side, only those synapses that are needed by a currently activated input neuron have to stay in memory. There are two main types of neurons in Aika: excitatory neurons and inhibitory neurons. The biological role models for these neurons are the spiny pyramidal cell and the aspiny stellate cell in the cerebral cortex. 
The pyramidal cells usually exhibit an excitatory characteristic, and some of them possess long-ranging axons that connect to other parts of the brain. The stellate cells, on the other hand, are usually inhibitory interneurons with short axons which form circuits with nearby neurons. These two types of neurons also have different electrical signatures. Stellate cells usually react to a constant depolarising current by firing action potentials with a relatively constant frequency during the entire stimulus. In contrast, most pyramidal cells are unable to maintain a constant firing rate. Instead, they fire rapidly at the beginning of the stimulus and then reduce the frequency even if the stimulus stays strong. This slowdown over time is called adaptation. Aika tries to mimic this behaviour by using different activation functions for the different types of neurons. Since Aika is not a spiking neural network like its biological counterpart, we only have the neuron's activation value, which can roughly be interpreted as the firing frequency of a spiking neuron. In a sense, the earlier described activation function based on the rectified tanh function quite nicely captures the adaptation behaviour of a pyramidal cell: an increase of a weak signal has a strong effect on the neuron's output, while an increase of an already strong signal has almost no effect. Furthermore, if the input of the neuron does not surpass a certain threshold, the neuron will not fire at all. For inhibitory neurons Aika uses the rectified linear unit function (ReLU). Especially for strongly disjunctive neurons like the inhibitory neuron, ReLU has the advantage of propagating its input signal exactly as it is, without distortion or loss of information.
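The two activation functions described above can be sketched in a few lines (a minimal illustration of the described behaviour, not Aika's implementation):

```python
import math

def rectified_tanh(x):
    # Saturating activation for excitatory neurons: a weak input produces a
    # strong response, a strong input adds little more (adaptation-like).
    return max(0.0, math.tanh(x))

def relu(x):
    # Pass-through for inhibitory neurons: the signal above zero is
    # propagated exactly as it is, without distortion.
    return max(0.0, x)

for x in [-1.0, 0.5, 2.0, 5.0]:
    print(x, round(rectified_tanh(x), 3), relu(x))
```

Note how `rectified_tanh` saturates near 1.0 for large inputs while `relu` keeps growing linearly — exactly the contrast drawn in the text.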
CentralityBin ()
CentralityBin (const char *name, Float_t low, Float_t high)
CentralityBin (const CentralityBin &other)
virtual ~CentralityBin ()
CentralityBin & operator= (const CentralityBin &other)
Bool_t IsAllBin () const
const char * GetListName () const
virtual void CreateOutputObjects (TList *dir, Int_t mask)
virtual Bool_t ProcessEvent (const AliAODForwardMult *forward, UInt_t triggerMask, Bool_t isZero, Double_t vzMin, Double_t vzMax, const TH2D *data, const TH2D *mc, UInt_t filter, Double_t weight)
virtual Double_t Normalization (const TH1I &t, UShort_t scheme, Double_t trgEff, Double_t &ntotal, TString *text) const
virtual void MakeResult (const TH2D *sum, const char *postfix, bool rootProj, bool corrEmpty, Double_t scaler, Int_t marker, Int_t color, TList *mclist, TList *truthlist)
virtual void End (TList *sums, TList *results, UShort_t scheme, Double_t trigEff, Double_t trigEff0, Bool_t rootProj, Bool_t corrEmpty, Int_t triggerMask, Int_t marker, Int_t color, TList *mclist, TList *truthlist)
Int_t GetColor (Int_t fallback=kRed+2) const
void SetColor (Color_t colour)
TList * GetResults () const
const char * GetResultName (const char *postfix="") const
TH1 * GetResult (const char *postfix="", Bool_t verbose=true) const
void SetDebugLevel (Int_t lvl)
void SetSatelliteVertices (Bool_t satVtx)
virtual void Print (Option_t *option="") const
const Sum * GetSum (Bool_t mc=false) const
Sum * GetSum (Bool_t mc=false)
const TH1I * GetTriggers () const
TH1I * GetTriggers ()
const TH1I * GetStatus () const
TH1I * GetStatus ()

Calculations done per centrality. These objects are only used internally and are never streamed. We do not make dictionaries for this (and derived) classes as they are constructed on the fly. Definition at line 701 of file AliBasedNdetaTask.h. Calculate the Event-Level normalization. 
The full event level normalization for trigger \(X\) is given by \begin{eqnarray*} N &=& \frac{1}{\epsilon_X} \left(N_A+\frac{N_A}{N_V}(N_{-V}-\beta)\right)\\ &=& \frac{1}{\epsilon_X}N_A \left(1+\frac{1}{N_V}(N_T-N_V-\beta)\right)\\ &=& \frac{1}{\epsilon_X}N_A \left(1+\frac{N_T}{N_V}-1-\frac{\beta}{N_V}\right)\\ &=& \frac{1}{\epsilon_X}N_A \left(\frac{1}{\epsilon_V}-\frac{\beta}{N_V}\right) \end{eqnarray*} where
\(\epsilon_X=\frac{N_{T,X}}{N_X}\) is the trigger efficiency evaluated in simulation,
\(\epsilon_V=\frac{N_V}{N_T}\) is the vertex efficiency evaluated from the data,
\(N_X\) is the Monte-Carlo truth number of events of type \(X\),
\(N_{T,X}\) is the Monte-Carlo truth number of events of type \(X\) which were also triggered as such,
\(N_T\) is the number of data events that were triggered as type \(X\) and had a collision trigger (CINT1B),
\(N_V\) is the number of data events that were triggered as type \(X\), had a collision trigger (CINT1B), and had a vertex,
\(N_{-V}\) is the number of data events that were triggered as type \(X\), had a collision trigger (CINT1B), but no vertex,
\(N_A\) is the number of data events that were triggered as type \(X\), had a collision trigger (CINT1B), and had a vertex in the selected range,
\(\beta=N_a+N_c-N_e\) is the number of control triggers that were also triggered as type \(X\),
\(N_a\) is the number of beam-empty events also triggered as type \(X\) (CINT1-A or CINT1-AC),
\(N_c\) is the number of empty-beam events also triggered as type \(X\) (CINT1-C),
\(N_e\) is the number of empty-empty events also triggered as type \(X\) (CINT1-E).
Note that if \( \beta \ll N_A\) the last term can be ignored and the expression simplifies to \[ N = \frac{1}{\epsilon_X}\frac{1}{\epsilon_V}N_A \]
Parameters
t Histogram of triggers
scheme Normalisation scheme
trgEff Trigger efficiency
ntotal On return, the total number of events to normalise to. 
text If non-null, filled with the normalization calculation
Returns \(N_A/N\) or a negative number in case of errors.
Definition at line 1770 of file AliBasedNdetaTask.cxx.
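The normalization formula above can be sanity-checked numerically. The sketch below (function name and sample numbers are illustrative, not taken from the AliROOT source) evaluates the full expression and confirms that, with \(\beta = 0\), it reduces to the simplified form \(N = N_A/(\epsilon_X \epsilon_V)\):

```python
# Sketch of the event-level normalization
#   N = (1/eps_X) * (N_A + (N_A/N_V) * (N_-V - beta))
# with illustrative (made-up) event counts.

def event_normalization(n_a, n_v, n_minus_v, eps_x, beta=0.0):
    return (n_a + (n_a / n_v) * (n_minus_v - beta)) / eps_x

# With beta = 0 this should equal N_A / (eps_X * eps_V),
# where eps_V = N_V / N_T and N_T = N_V + N_-V.
n_a, n_v, n_minus_v, eps_x = 800.0, 900.0, 100.0, 0.8
full = event_normalization(n_a, n_v, n_minus_v, eps_x)
eps_v = n_v / (n_v + n_minus_v)
simplified = n_a / (eps_x * eps_v)
print(round(full, 6), round(simplified, 6))
```

Both evaluations agree, mirroring the algebra in the derivation above.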
"Abstract: We prove, once and for all, that people who don't use superspace are really out of it. This includes QCDers, who always either wave their hands or gamble with lettuce (Monte Zuma calculations). Besides, all nonsupersymmetric theories have divergences which lead to problems with things like renormalons, instantons, anomalons, and other phenomenons. Also, they can't hide from gravity forever." Can a gravitational field possess momentum? A gravitational wave can certainly possess momentum just like a light wave has momentum, but we generally think of a gravitational field as a static object, like an electrostatic field. "You have to understand that compared to other professions such as programming or engineering, ethical standards in academia are in the gutter. I have worked with many different kinds of people in my life, in the U.S. and in Japan. I have only encountered one group more corrupt than academic scientists: the mafia members who ran Las Vegas hotels where I used to install computer equipment." So I've got a small bottle that I filled up with salt. I put it on the scale and its mass is 83 g. I've also got a jug that holds 500 g of water. I put the bottle in the jug and it sank to the bottom. I have to figure out how much salt to take out of the bottle such that the weight force of the bottle equals the buoyancy force. For the buoyancy do I: density of water * volume of water displaced * gravity acceleration? So: mass of bottle * gravity = volume of water displaced * density of water * gravity? @EmilioPisanty The measurement operators that I suggested in the comments of the post are fine, but I additionally would like to control the width of the Poisson distribution (much like we can do for the normal distribution using the variance). Do you know whether this can be achieved while still maintaining the completeness condition $$\int A^{\dagger}_{C}A_CdC = 1$$? 
As a workaround while this request is pending, there exist several client-side workarounds that can be used to enable LaTeX rendering in chat, including: ChatJax, a set of bookmarklets by robjohn to enable dynamic MathJax support in chat, commonly used in the Mathematics chat room. An altern... You're always welcome to ask. One of the reasons I hang around in the chat room is because I'm happy to answer this sort of question. Obviously I'm sometimes busy doing other stuff, but if I have the spare time I'm always happy to answer. Though as it happens I have to go now - lunch time! :-) @JohnRennie It's possible to do it using the energy method. We just need to carefully write down the potential function, which is $U(r)=\frac{1}{2}\frac{mg}{R}r^2$ with zero point at the center of the earth. Anonymous Also I don't particularly like this SHM problem because it causes a lot of misconceptions. The motion is SHM only under particular conditions :P I see with concern that the close queue has not shrunk considerably in the last week and is still at 73 items. This may be an effect of increased traffic but not increased reviewing, or something else, I'm not sure. Not sure about that, but the converse is certainly false :P Derrida has received a lot of criticism from the experts on the fields he tried to comment on. I personally do not know much about postmodernist philosophy, so I shall not comment on it myself. I do have strong affirmative opinions on textual interpretation, made disjoint from authorial intent, however, which is a central part of Deconstruction theory. But I think that dates back to Heidegger. I can see why a man of that generation would lean towards that idea. I do too.
4.2 - Introduction to Confidence Intervals In Lesson 4.1 we learned how to construct sampling distributions when population values were known. In real life, we don't typically have access to the whole population. In these cases we can use the sample data that we do have to construct a confidence interval to estimate the population parameter with a stated level of confidence. This is one type of statistical inference. Confidence Interval A range computed using sample statistics to estimate an unknown population parameter with a stated level of confidence Example: Statistical Anxiety The statistics professors at a university want to estimate the average statistics anxiety score for all of their undergraduate students. It would be too time consuming and costly to give every undergraduate student at the university their statistics anxiety survey. Instead, they take a random sample of 50 undergraduate students at the university and administer their survey. Using the data collected from the sample, they construct a 95% confidence interval for the mean statistics anxiety score in the population of all university undergraduate students. They are using \(\bar{x}\) to estimate \(\mu\). If the 95% confidence interval for \(\mu\) is 26 to 32, then we could say, “we are 95% confident that the mean statistics anxiety score of all undergraduate students at this university is between 26 and 32.” In other words, we are 95% confident that \(26 \leq \mu \leq 32\). This may also be written as \(\left [ 26,32 \right ]\). At the center of a confidence interval is the sample statistic, such as a sample mean or sample proportion. This is known as the point estimate. The width of the confidence interval is determined by the margin of error. The margin of error is the amount that is subtracted from and added to the point estimate to construct the confidence interval. 
Point Estimate: Sample statistic that serves as the best estimate for a population parameter
Margin of Error: Half of the width of a confidence interval; equal to the multiplier times the standard error
General Form of Confidence Interval: \(sample\ statistic \pm margin\ of\ error\), where \(margin\ of\ error=multiplier(standard\ error)\)
The margin of error will depend on two factors: the level of confidence, which determines the multiplier, and the value of the standard error. In Lesson 2 you first learned about the Empirical Rule, which states that approximately 95% of observations on a normal distribution fall within two standard deviations of the mean. Thus, when constructing a 95% confidence interval your textbook uses a multiplier of 2.
General Form of 95% Confidence Interval: \(sample\ statistic\pm2\ (standard\ error)\)
Example: Proportion of Dog Owners. At the beginning of the Spring 2017 semester a representative sample of 501 STAT 200 students were surveyed and asked if they owned a dog. The sample proportion was 0.559. Bootstrapping methods, which we will learn later in this lesson, were used to compute a standard error of 0.022. We can use this information to construct a 95% confidence interval for the proportion of all STAT 200 students who own a dog. 0.559 ± 2(0.022) = 0.559 ± 0.044 = [0.515, 0.603]. I am 95% confident that the population proportion is between 0.515 and 0.603.
Example: Mean Height. In a random sample of 525 Penn State World Campus students the mean height was 67.009 inches with a standard deviation of 4.462 inches. The standard error was computed to be 0.195. Construct a 95% confidence interval for the mean height of all Penn State World Campus students. 95% confidence interval: 67.009 ± 2(0.195) = 67.009 ± 0.390 = [66.619, 67.399]. I am 95% confident that the mean height of all Penn State World Campus students is between 66.619 inches and 67.399 inches. 
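The arithmetic in the two examples above follows the general form directly; a short sketch (the function name is ours, not the textbook's) makes it explicit:

```python
# General form: sample statistic +/- multiplier * standard error,
# applied to the dog-owners example (proportion 0.559, SE 0.022).

def confidence_interval(statistic, standard_error, multiplier=2):
    margin_of_error = multiplier * standard_error
    return statistic - margin_of_error, statistic + margin_of_error

lo, hi = confidence_interval(0.559, 0.022)
print(round(lo, 3), round(hi, 3))  # 0.515 0.603
```

The same call with `confidence_interval(67.009, 0.195)` reproduces the height example, [66.619, 67.399].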
4.2.1 - Interpreting Confidence Intervals Confidence intervals are often misinterpreted. The logic behind them may be a bit confusing. Remember that when we're constructing a confidence interval we are estimating a population parameter when we only have data from a sample. We don't know if our sample statistic is less than, greater than, or approximately equal to the population parameter. And, we don't know for sure if our confidence interval contains the population parameter or not. The correct interpretation of a 95% confidence interval is that "we are 95% confident that the population parameter is between X and X." Example: Correlation Between Height and Weight At the beginning of the Spring 2017 semester a sample of World Campus students were surveyed and asked for their height and weight. In the sample, Pearson's r = 0.487. A 95% confidence interval was computed of [0.410, 0.559]. The correct interpretation of this confidence interval is that we are 95% confident that the correlation between height and weight in the population of all World Campus students is between 0.410 and 0.559. Example: Seatbelt Usage A sample of 12th grade females was surveyed about their seatbelt usage. A 95% confidence interval for the proportion of all 12th grade females who always wear their seatbelt was computed to be [0.612, 0.668]. The correct interpretation of this confidence interval is that we are 95% confident that the proportion of all 12th grade females who always wear their seatbelt in the population is between 0.612 and 0.668. Example: IQ Scores A random sample of 50 students at one school was obtained and each selected student was given an IQ test. These data were used to construct a 95% confidence interval of [96.656, 106.422]. The correct interpretation of this confidence interval is that we are 95% confident that the mean IQ score in the population of all students at this school is between 96.656 and 106.422. 
4.2.2 - Applying Confidence Intervals A confidence interval contains a range of acceptable estimates of the population parameter. Values in a confidence interval are reasonable estimates for the true population value. Values not in the confidence interval are not reasonable estimates for the population value. Example: Correlation Between Height and Weight At the beginning of the Spring 2017 semester a sample of World Campus students were surveyed and asked for their height and weight. In the sample, Pearson's r = 0.487. A 95% confidence interval was computed of [0.410, 0.559]. Research question: Is there evidence of a positive correlation between height and weight in the population of all World Campus students? The entire confidence interval is greater than zero which means that all reasonable estimates of the population correlation are positive. Yes, there is evidence of a positive correlation between height and weight in the population of all World Campus students. Example: Seatbelt Usage A sample of 12th grade females was surveyed about their seatbelt usage. A 95% confidence interval for the proportion of all 12th grade females who always wear their seatbelt was computed to be [0.612, 0.668]. Research question: Is there evidence that the proportion of all 12th grade females who always wear their seatbelt is different from 0.65? The value of 0.65 is contained within our confidence interval. This means that 0.65 is a reasonable value of the population proportion. We cannot conclude that the population proportion is different from 0.65. Example: IQ Scores A random sample of 50 students at one school was obtained and each selected student was given an IQ test. These data were used to construct a 95% confidence interval of [96.656, 106.422]. Research question: Is there evidence that the mean IQ score at this school is different from the known national average of 100? The 95% confidence interval contains 100. 
This means that 100 is a reasonable estimate for the mean IQ score of students at this school. There is not evidence that the mean at this school is different from 100.
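The decision rule used in these three examples reduces to a containment check; a tiny helper (the name is ours, for illustration) captures it:

```python
# A hypothesized value inside the confidence interval is a reasonable
# estimate of the parameter; a value outside the interval is not.

def is_reasonable_estimate(interval, value):
    lo, hi = interval
    return lo <= value <= hi

# IQ example: 100 lies inside [96.656, 106.422], so we cannot conclude
# the school mean differs from the national average of 100.
print(is_reasonable_estimate((96.656, 106.422), 100))  # True

# Correlation example: 0 lies below [0.410, 0.559], so there is
# evidence of a positive correlation.
print(is_reasonable_estimate((0.410, 0.559), 0))  # False
```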
In Srednicki's QFT Chapter 9, he first computed the vacuum expectation value of the field $\varphi(x)$ without including the counterterms in $\mathcal L$, then he found that the VEV is not zero, so he included the linear counterterm $Y\varphi$ to cancel the nonzero terms in the VEV. He computed the $O(g)$ term in $Y$ and said the $O(g^3)$ term in $Y$ can also be determined if we sum up the corresponding diagrams at $O(g^3)$. Here I have a question: he seems to have ignored the single-source diagrams which contain the vertex corresponding to the other counterterm $$\mathcal L_c=-\frac12(Z_\varphi-1)\partial^\mu\varphi\partial_\mu\varphi-\frac12(Z_m-1)m^2\varphi^2.$$ Diagrams containing this vertex do not appear at $O(g)$ in the VEV, so the $O(g)$ term in $Y$ doesn't change, but new diagrams containing this vertex do appear at $O(g^3)$ in the VEV, so the $O(g^3)$ term in $Y$ will change if we include the new vertex. Is Srednicki wrong to ignore the effect of this vertex on the VEV? In Srednicki's QFT Chapter 9, he first computed the vacuum expectation value of the field $\varphi(x)$ without including the counterterms in $\mathcal L$, then he found that the VEV is not zero, so he included the linear counterterm $Y\varphi$ to cancel the nonzero terms in the VEV. He computed the $O(g)$ term in $Y$ and said the $O(g^3)$ term in $Y$ can also be determined if we sum up the corresponding diagrams at $O(g^3)$. $Y$ is a function of $g$. We can expand it in a series in $g$, i.e. $$Y(g) = y_1 g + y_3 g^3 + \cdots$$ When we want to determine the value of $y_1$, the counterterms can be neglected because their contributions are of order $O(g^3)$. But when we want to determine the value of $y_3$ and higher order terms, diagrams with counterterms must be included to ensure that the higher order terms of the $\rm VEV$ vanish. In this way $Y(g)$ can be calculated order by order. That is how perturbative quantum field theory works. Srednicki's book says: Thus, at $O(g^3)$, we sum up the diagrams of figs. 
9.4 and 9.12, and then add to $Y$ whatever $O(g^3)$ term is needed to maintain $\langle 0|\phi(x)|0\rangle = 0$. In this way we can determine the value of $Y$ order by order in powers of $g$. Figs. 9.4 and 9.12 do not include diagrams with counterterms, so it may be an oversight by the author.
Here begins a new series, akin to the “Silly Proofs” series: obscure results which are cool, but which you probably haven’t heard of. Suppose we’ve got a pair of measurable spaces (sets with a \(\sigma\)-algebra on them) \(X, Y\). We make \(X \times Y\) measurable by taking the \(\sigma\)-algebra generated by sets of the form \(A \times B\). In the case where we have topologies on \(X\) and \(Y\) and are giving them their Borel algebras, we might suppose this agrees with the Borel algebra of the product. Alas, ’tis not so! It does in the second countable case, but in general not: Theorem (Nedoma’s Pathology): Let \(X\) be a measurable space with \(|X| > 2^{\aleph_0}\). Then the product algebra on \(X^2\) does not contain the diagonal. In particular, if \(X\) is Hausdorff then the diagonal is a closed set in the product topology which is not contained in the product algebra. The proof proceeds by way of two lemmas: Lemma: Let \(X\) be a set, \(\mathcal{A} \subseteq P(X)\) and \(U \in \sigma(\mathcal{A})\). Then there exist countably many \(A_1, A_2, \ldots \in \mathcal{A}\) with \(U \in \sigma(A_1, A_2, \ldots)\). Proof: The set of \(U\) satisfying the conclusion of the lemma is a \(\sigma\)-algebra containing \(\mathcal{A}\). Lemma: Let \(U \subseteq X^2\) be measurable. Then \(U\) is the union of at most \(2^{\aleph_0}\) sets of the form \(A \times B\). Proof: By the preceding lemma we can find \(A_n\) with \(U \in \sigma (\{ A_m \times A_n \})\). For \(x \in \{0, 1\}^{\mathbb{N}}\), define \(B_x = \bigcap C_n\) where \(C_n = A_n\) if \(x_n = 1\) and \(A_n^c\) otherwise. Sets of the form \(B_x \times B_y\) then form a partition of \(X^2\). Thus the sets which can be written as a union of sets of the form \(B_x \times B_y\) form a \(\sigma\)-algebra. This contains each of the \(A_m \times A_n\), and so contains \(U\). Thus \(U = \bigcup \{ B_x \times B_y : B_x \times B_y \subseteq U \}\). There are at most \(2^{\aleph_0}\) sets in this union. Hence the desired result. 
Finally we have the proof of the theorem: Let \(D\) be the diagonal. Suppose \(D\) is measurable. Then \(D\) is the union of at most \(2^{\aleph_0}\) sets of the form \(A \times B\). Because \(|D| > 2^{\aleph_0}\) at least one of these sets must contain two points. Say \((u, u)\) and \((v, v)\). But then it also contains \((u, v) \not\in D\). This is a contradiction. QED To be honest, this theorem doesn’t seem that useful to me. But knowing about it lets you avoid a potential pitfall – when you’re dealing with measures on large spaces (e.g. on \(\{0, 1\}^{\kappa}\), which it’s really important to be able to do when you’re playing with certain forcing constructions) things are significantly less well behaved than you might hope.
There are several possibilities for your confusion - and you bring up several related concepts (work, potential energy, kinetic energy). When this happens I think it's helpful to isolate one thing which you know must absolutely be true. The definition of work (for a constant force) is $$W=\vec{F}\cdot \Delta \vec{x}=F\Delta x \cos\theta.$$ First ask "what is the work due to gravity?". Then $F=mg$, $\Delta x =x_f-x_i=$ 25 m (call this $h$), but the angle between $\vec{F}$ and $\Delta \vec{x}$ is 180$^\circ$, so $\cos(180^\circ)=-1$, and $W_g=-mgh=-1000$ J (the work due to gravity, as you've correctly identified). In my view, this makes the most intuitive sense of all the things you've said. The force of gravity is acting to pull the object downward, so it's doing negative work on the object. So, what now? We have various options for understanding the rest of your question. Let's try conservation of energy: $$\Delta P+\Delta K=W$$ OK, but when you write this equation, you need to know what "the system" is - so you can complete the statement, "I am looking for the change in potential and kinetic energy of the system". In this case, "the system" is the block, and the work being done on the block is the work by your hand. That's what makes the block go up, that's what adds energy to the system. So, the work done by your hand is opposite the work done by gravity, and $W_h=mgh$. Now, in order to understand $\Delta K$, we need to know what the initial and final velocities are (which we have not used yet, or even needed). You've said they are both zero - fine, then we understand very well that $$\Delta P=W_h=mgh$$ Slight aside: How can you tell when the change in potential energy is zero? Potential energy is how much energy an object "could use to do something". When you raise a book a certain distance, the book wants to move downward. Therefore, it has gained potential energy "to do something". 
This makes sense to me, but the definition of work is less ambiguous, so that's why I gave that first.
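To make the numbers concrete: the answer quotes \(h = 25\) m and \(W_g = -1000\) J, which is consistent with \(m = 4\) kg and \(g = 10\ \mathrm{m/s^2}\) (assumed values — the original does not state them):

```python
import math

# Assumed values, chosen so that W_g = -mgh = -1000 J as in the answer.
m, g, h = 4.0, 10.0, 25.0
theta = math.radians(180)          # gravity points opposite the upward displacement
W_g = m * g * h * math.cos(theta)  # W = F * dx * cos(theta)
print(W_g)  # -1000.0
```

With the block at rest before and after, the hand's work is the opposite, \(W_h = +mgh = +1000\) J, matching \(\Delta P = W_h\).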
According to Cardy's lectures on conformal field theory, the general form of an operator product expansion is $$ \phi_{i}(r_{i})\cdot\phi_{j}(r_{j})=\sum_{k}C_{ijk}(r_{i}-r_{j})\phi_{k}((r_{i}+r_{j})/2), $$ see page 5, Eq. (4). I don't understand why the fields on the right-hand side of this equation necessarily depend only on $(r_{i}+r_{j})/2$. In principle it seems to me that they could depend on both $r_{i}$ and $r_{j}$, or equivalently on both $(r_{i}+r_{j})/2$ and $r_{i}-r_{j}$. Take the simple example of vertex operators, which fulfill $$ :e^{i\alpha\phi(z)}::e^{i\beta\phi(w)}: = e^{-\alpha\beta\langle\phi(z)\phi(w)\rangle} :e^{i\alpha\phi(z)+i\beta\phi(w)}: $$ Typically we have $e^{-\alpha\beta\langle\phi(z)\phi(w)\rangle}\propto\ln(z-w)$, from which I can understand why the $C_{ijk}$ depend only on $z-w$. But it is not clear to me why the field $e^{i\alpha\phi(z)+i\beta\phi(w)}$ depends only on $(z+w)/2$. I would be very happy if someone could explain this point to me.
In this module: Random number generator – using volatile functions – with number of observations \(n\) switch; Dynamic range as a statistical data reference; Dynamic range as a chart series; Creating an X Y scatter chart.
Random number generator: Pseudo random numbers \(\text{N(0,1) i.i.d.}\) are generated by the Excel RAND function and then passed as the probability argument to the NORM.S.INV function. The correlation structure is incorporated by using the Cholesky decomposition method.
Cholesky decomposition: Generate \(Z_1,Z_2 \sim \text{N(0,1) i.i.d.}\). Set \(X=Z_1\) and \(Y=\rho Z_1 + Z_2 \sqrt{1-\rho^2}\), where \(\rho\) is the correlation coefficient.
The X Y data worksheet: \(Z_1,Z_2\) are generated on the worksheet named X Y data. Pseudo random numbers \(\text{N(0,1) i.i.d.}\) are generated with the nested NORM.S.INV(RAND()) formula.
Cell B3: =IF(ROW()-2<=NoObs,NORM.S.INV(RAND()),"")
Cell C3: =IF(ROW()-2<=NoObs,rho*B3+SQRT(1-rho^2)*NORM.S.INV(RAND()),"")
where rho and NoObs are links to the selector and validator panel. Copy B3:C3 and paste to the range B4:C5002. Name B3:B5002 as X, and C3:C5002 as Y.
Summary statistics and the X-Y scatter plot: Refer to the X Y Interface worksheet shown in figure 1 (Excel Web App #1).
The selector / validator panel: D5, the correlation selector, is set up by Data Validation. Select Data > Data Tools > Data Validation. In the Data Validation dialog box, Settings tab, set Validation criteria; Allow: Decimal; Data: between; Minimum: -1; Maximum: 1. In the Data Validation dialog box, Input Message tab, set Title: Correlation value; Input message: Enter rho in the range ... 
-1 <= rho <= +1
The "No of points" validator: D6, the "No of points" validator, is set up by Data Validation. Select Data > Data Tools > Data Validation. In the Data Validation dialog box, Settings tab, set Validation Criteria; Allow: List; Source: 625,1250,2500,5000. In the Data Validation dialog box, Input Message tab, set Title: Number of observations; Input message: 625, 1250, 2500, 5000, ... Select 625 to reduce worksheet recalculation time.
The descriptive statistics panel: Note: INDIRECT(D$10) uses the value in cell D10, X, to point to the range named X. Formulas in the summary statistics section:
C11: =MIN(INDIRECT(D$10))
C12: =MAX(INDIRECT(D$10))
C13: =AVERAGE(INDIRECT(D$10))
C14: =VAR.S(INDIRECT(D$10))
C15: =STDEV.S(INDIRECT(D$10))
C16: =SKEW(INDIRECT(D$10))
C17: =KURT(INDIRECT(D$10))
C18: =COUNT(INDIRECT(D$10))
C19: =MAX(INDIRECT(D$10))-MIN(INDIRECT(D$10))
C21: =COVARIANCE.S(X,Y)
C22: =CORREL(X,Y)
C23: =COVARIANCE.S(X,Y)/(STDEV.S(X)*STDEV.S(Y))
The X Y scatter plot: To select the data, type XV,YV in the Name Box. If you are using the Web App file, you will need to Unhide the X Y data worksheet before you select the data. Create the chart on that worksheet, then move it to the X Y interface worksheet. Insert the chart - follow the ribbon sequence: Insert > Charts > Charts Dialog Launcher > All Charts tab > X Y (Scatter) > Scatter. Click OK to display the chart shown in figure 2. Using the Insert > Chart sequence does not retain the reference to the dynamic range names; instead a static name is applied (see figure 4). When a vector length shorter than the static range is selected, the X Y chart collapses as shown in figure 3. 
The x axis (the Horizontal (Value) Axis) displays the observation numbers, with observation 626 onwards displaying as blank (the static vector being 1,250 data points in length). To display the Edit Series dialog box, use the ribbon sequence Design > Data > Select Data to display the Select Data Source dialog box; then, in Legend Entries (Series), select Series1 and click Edit. The Edit Series dialog box will now be displayed - figure 4. In step 1, the dynamic ranges XV and YV were selected, but these were overwritten by Series X values: ='X Y data'!$B$3:$B$1252 and Series Y values: ='X Y data'!$C$3:$C$1252, as shown in the Edit Series dialog box in figure 4. The series references need to be replaced with links to the dynamic vectors. Link the chart series to the dynamic range - by editing the series values. Set Series X values: ='xlf-x-y-scatter.xlsx'!XV, and Series Y values: ='xlf-x-y-scatter.xlsx'!YV. Format the X Y chart - from the Chart Tools tab, select Format > Current Selection > Chart Elements box, then the Chart Area element. From the same ribbon tab, select Size, and set Height: 12.5 cm, and Width: 12.5 cm. A two vector X Y plot has only one series, and the Series name value in figure 4 was left blank, thus the default name is Series1. Select Series1 from the Chart Element box. Click Format Selection to display the Format Data Series task pane. This is the Office 2013 implementation of an Office 2010 dialog box. An example is shown in figure 5. Select Marker > Marker Options, then Built-in, and set Size: 2. Select Series Options > Horizontal (Value) Axis. This will display the Format Axis task pane. Select Axis Options (the three column icon), then set Bounds; Minimum: -4.0, and Maximum: 4.0; Vertical axis crosses; Axis value: 0.0. Repeat step 10 for the Vertical (Value) Axis. Select the axis from the Axis Options > Elements List box - figure 6. Add a trendline - to display the Format Trendline task pane. 
Select the Fill & Line tab, then set Line: Solid; Color: Standard Colors, Red; Width: 1.5 pt; Dash type: Solid; then Close the task pane. Hide the chart title - Chart Tools > Design > Add Chart Elements > Chart Title > None. Cut and paste the chart to the X Y interface worksheet. The completed chart appears in figure 7.

Selected Excel functions used in this module and associated statistical functions:

Excel function | Description
NORM.S.INV(probability) [2010] | Returns the inverse of the standard normal cumulative distribution \(\text{N(0,1) i.i.d.}\)
RAND() | Returns an evenly distributed random number \(0 \le x \lt 1\). The RAND function takes no arguments.
ROW([reference]) | Returns the row number of a reference.
SQRT(number) | Returns the positive square root of a number \(\sqrt{x}; \quad x^{0.5}\)

This example was developed in Excel 2013 Pro 64 bit. Last modified: 27 Mar 2017, 7:15 pm [Australian Eastern Time (AET)]
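Outside Excel, the same pipeline — simulated standard-normal X and Y vectors, then the COVARIANCE.S / STDEV.S / CORREL summary panel — can be sketched in a few lines. A Python analogue, assuming the workbook builds the correlated Y column as y = ρ·x + √(1 − ρ²)·z from two independent N(0,1) draws (that construction is my assumption, not taken from the worksheet):

```python
import math
import random

def simulate_xy(n, rho, seed=42):
    """Generate n (x, y) pairs of standard normals with target correlation rho,
    mirroring NORM.S.INV(RAND()) columns combined as y = rho*x + sqrt(1-rho^2)*z."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        z = rng.gauss(0.0, 1.0)
        xs.append(x)
        ys.append(rho * x + math.sqrt(1.0 - rho * rho) * z)
    return xs, ys

def sample_corr(xs, ys):
    """Sample correlation: COVARIANCE.S(X,Y) / (STDEV.S(X) * STDEV.S(Y))."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / (n - 1))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / (n - 1))
    return cov / (sx * sy)

xs, ys = simulate_xy(5000, rho=0.7)
r = sample_corr(xs, ys)  # should land near the target 0.7
```

The C23 formula in the summary panel is exactly the last line of `sample_corr`, and agrees with CORREL up to rounding.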
(Sorry, I was asleep at that time but forgot to log out, hence the apparent lack of response.) Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by k; see this PSE post for details: http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
I am having a hard time with a selective precipitation problem. A mixture of $\ce{BaSO4}$ and $\ce{BaS2O3}$ is shaken with pure water until a saturated solution is formed. Both solids remain in excess. What is $[\ce{Ba^{2+}}]$ in the saturated solution? $K_\mathrm{sp}(\ce{BaSO4}) = 9 \times 10^{-11}$ and $K_\mathrm{sp}(\ce{BaS2O3}) = 4 \times 10^{-10}$. My solution: Because the $K_\mathrm{sp}$ of $\ce{BaSO4} < \ce{BaS2O3}$, $\ce{BaS2O3}$ will dissolve first. When $\ce{BaS2O3}$ dissolves, the concentration $[\ce{Ba^{2+}}] = [\ce{S2O3^2-}] = \sqrt{K_\mathrm{sp}(\ce{BaS2O3})} = 2 \times 10^{-5}$. That means, in the solution, we have: \begin{align} [\ce{Ba^2+}] &= 2 \times 10^{-5}\\ [\ce{S2O3^2-}] &= 2 \times 10^{-5}\\ [\ce{SO4^2-}] &= \frac{K_\mathrm{sp}(\ce{BaSO4})}{[\ce{Ba^2+}]} = \frac{9 \times 10^{-11}}{2 \times 10^{-5}} = 4.5 \times 10^{-6} \end{align} But the given solution is $\ce{[Ba^2+]} = \sqrt{4.9 \times 10^{-10}}$. What have I missed? EDIT 1 : \begin{align} \ce{BaSO4 &<=> Ba^2+ + SO4^2- }\\ \ce{BaS2O3 &<=> Ba^2+ + S2O3^2- } \end{align} The solubility product equations: \begin{align} K_\mathrm{sp}(\ce{BaSO4}) &= [\ce{Ba^2+}] \times [\ce{SO4^2-}] \\ K_\mathrm{sp}(\ce{BaS2O3}) &= [\ce{Ba^2+}] \times [\ce{S2O3^2-}] \end{align} When the solids dissolve, they produce $\ce{Ba^2+}$, $\ce{SO4^2-}$ and $\ce{S2O3^2-}$ ions.
$\ce{BaSO4}$ will stop dissolving when the $\ce{Ba^2+}$ ion concentration in solution, multiplied by the $\ce{SO4^2-}$ ion concentration, equals the solubility product of $\ce{BaSO4}$, which is $K_\mathrm{sp}(\ce{BaSO4}) = 9 \times 10^{-11}$. At the same time, $\ce{BaS2O3}$ will stop dissolving when the $\ce{Ba^2+}$ ion concentration in solution, multiplied by the $\ce{S2O3^2-}$ ion concentration, equals the solubility product of $\ce{BaS2O3}$, which is $K_\mathrm{sp}(\ce{BaS2O3}) = 4 \times 10^{-10}$. The solubility of $\ce{BaSO4}$ in water is \begin{align} s &= \sqrt{K_\mathrm{sp}(\ce{BaSO4})}\\ s &= \sqrt{9 \times 10^{-11}} \\ s &= 3 \times 10^{-5.5} \end{align} The solubility of $\ce{BaS2O3}$ in water is \begin{align} s &= \sqrt{K_\mathrm{sp}(\ce{BaS2O3})}\\ s &= \sqrt{4 \times 10^{-10}}\\ s &= 2 \times 10^{-5} \end{align} Now, when the solids are mixed in water, there should be $[\ce{SO4^2-}] = 3 \times 10^{-5.5}$ and $[\ce{S2O3^2-}] = 2 \times 10^{-5}$. I am now confused about which ion concentration I should use to determine $[\ce{Ba^2+}]$. EDIT 2: \begin{align} K_\mathrm{sp}(\ce{BaSO4}) &= [\ce{Ba^2+}] \times [\ce{SO4^2-}] \\ \frac{K_\mathrm{sp}(\ce{BaSO4})}{[\ce{Ba^2+}]} &= [\ce{SO4^2-}] \\ \end{align} and \begin{align} K_\mathrm{sp}(\ce{BaS2O3}) &= [\ce{Ba^2+}] \times [\ce{S2O3^2-}] \\ \frac{K_\mathrm{sp}(\ce{BaS2O3})}{[\ce{Ba^2+}]} &= [\ce{S2O3^2-}] \\ \end{align} From @santimirandarp's answer, I got that: \begin{align} [\ce{Ba^2+}] &= [\ce{S2O3^2-}] + [\ce{SO4^2-}]\\ [\ce{Ba^2+}] &= \frac{K_\mathrm{sp}(\ce{BaS2O3})}{[\ce{Ba^2+}]} + \frac{K_\mathrm{sp}(\ce{BaSO4})}{[\ce{Ba^2+}]}\\ [\ce{Ba^2+}] &= \frac{4 \times 10^{-10}}{[\ce{Ba^2+}]} + \frac{9 \times 10^{-11}}{[\ce{Ba^2+}]}\\ [\ce{Ba^2+}] &= \frac{4.9 \times 10^{-10}}{[\ce{Ba^2+}]} \\ [\ce{Ba^2+}]^2 &= 4.9 \times 10^{-10} \\ [\ce{Ba^2+}] &= \sqrt{4.9 \times 10^{-10}} \\ \end{align} My problem is solved. Thank you!
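The charge-balance step in EDIT 2 can be verified numerically. A small Python sketch of the same arithmetic (the variable names are mine):

```python
import math

KSP_BASO4 = 9e-11
KSP_BAS2O3 = 4e-10

# Charge balance: [Ba2+] = [SO4 2-] + [S2O3 2-].
# With both solids saturated: [SO4 2-] = Ksp1/[Ba2+] and [S2O3 2-] = Ksp2/[Ba2+],
# hence [Ba2+]^2 = Ksp1 + Ksp2.
ba = math.sqrt(KSP_BASO4 + KSP_BAS2O3)   # about 2.21e-5 M

sulfate = KSP_BASO4 / ba
thiosulfate = KSP_BAS2O3 / ba

# both solubility products and the charge balance hold simultaneously
assert math.isclose(ba * sulfate, KSP_BASO4)
assert math.isclose(ba * thiosulfate, KSP_BAS2O3)
assert math.isclose(ba, sulfate + thiosulfate)
```

The point of the exercise: neither pure-water solubility alone gives the answer, because the common ion $\ce{Ba^2+}$ couples the two equilibria.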
Example 7-11: Spouse Data (Question 2) Section Next we will return to the second question posed earlier in this lesson. Question 2: Do the husbands and wives accurately perceive the responses of their spouses? To understand this question, let us return to the four questions asked of each husband and wife pair. The questions were: What is the level of passionate love you feel for your partner? What is the level of passionate love your partner feels for you? What is the level of companionate love you feel for your partner? What is the level of companionate love your partner feels for you? Notice that these questions are all paired. The odd-numbered questions ask how each person feels about their spouse, while the even-numbered questions ask how each person thinks their spouse feels towards them. The question that we are investigating now asks about perception, so here we are trying to see if the husbands accurately perceive the responses of their wives and, conversely, if the wives accurately perceive the responses of their husbands. In more detail, we may ask: Does the husband's answer to question 1 match the wife's answer to question 2? (Here we are asking if the wife accurately perceives the amount of passionate love her husband feels towards her.) Secondly, does the wife's answer to question 1 match the husband's answer to question 2? (Here we are asking if the husband accurately perceives the amount of passionate love his wife feels towards him.) Similarly, does the husband's answer to question 3 match the wife's answer to question 4? And does the wife's answer to question 3 match the husband's answer to question 4? To address the research question we need to define four new variables as follows: \(Z_{i1} = X_{1i1} - X_{2i2}\) - (for the \(i^{th}\) couple, the husband's response to question 1 minus the wife's response to question 2.)
\(Z_{i2} = X_{1i2} - X_{2i1}\) - (for the \(i^{th}\) couple, the husband's response to question 2 minus the wife's response to question 1.) \(Z_{i3} = X_{1i3} - X_{2i4}\) - (for the \(i^{th}\) couple, the husband's response to question 3 minus the wife's response to question 4.) \(Z_{i4} = X_{1i4} - X_{2i3}\) - (for the \(i^{th}\) couple, the husband's response to question 4 minus the wife's response to question 3.) These Z's can then be collected into a vector. We can then calculate the sample mean for that vector... \(\mathbf{\bar{z}} = \dfrac{1}{n}\sum_{i=1}^{n}\mathbf{Z}_i\) as well as the sample variance-covariance matrix... \(\mathbf{S}_Z = \dfrac{1}{n-1}\sum_{i=1}^{n}\mathbf{(Z_i-\bar{z})(Z_i-\bar{z})'}\) Here we will make the usual assumptions about the vector \(Z_{i}\) containing the responses for the \(i^{th}\) couple: The \(Z_{i}\)'s have common mean vector \(\mu_{Z}\) The \(Z_{i}\)'s have common variance-covariance matrix \(\Sigma_{Z}\) Independence. The \(Z_{i}\)'s are independently sampled. Multivariate Normality. The \(Z_{i}\)'s are multivariate normally distributed.
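The construction of the Z's and their sample moments is mechanical. A small numpy sketch with invented husband/wife response matrices (the data and the 1-5 response scale are made up for illustration; only the difference structure follows the text):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30  # couples

# invented responses to questions 1-4 (columns), one row per couple
husband = rng.integers(1, 6, size=(n, 4)).astype(float)
wife = rng.integers(1, 6, size=(n, 4)).astype(float)

# Z_i1 = husband Q1 - wife Q2, Z_i2 = husband Q2 - wife Q1, etc.
z = np.column_stack([
    husband[:, 0] - wife[:, 1],
    husband[:, 1] - wife[:, 0],
    husband[:, 2] - wife[:, 3],
    husband[:, 3] - wife[:, 2],
])

z_bar = z.mean(axis=0)         # sample mean vector
s_z = np.cov(z, rowvar=False)  # sample variance-covariance matrix (n-1 denominator)
```

`np.cov(z, rowvar=False)` computes exactly the \(S_Z\) formula above, dividing by n − 1.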
Question 2 is equivalent to testing the null hypothesis that the mean \(\mu_{Z}\) is equal to 0, against the alternative that \(\mu_{Z}\) is not equal to 0, as expressed below: \(H_0\colon \boldsymbol{\mu}_Z = \mathbf{0}\) against \(H_a\colon \boldsymbol{\mu}_Z \ne \mathbf{0}\) We may then carry out the paired Hotelling's T-square test using the usual formula, with the sample mean vector \(\mathbf{\bar{z}}\) replacing the mean vector \(\mathbf{\bar{y}}\) from our previous example, and the sample variance-covariance matrix \(S_{Z}\) replacing the sample variance-covariance matrix \(S_{Y}\), also from our previous example: \(T^2 = n\mathbf{\bar{z}'S^{-1}_Z\bar{z}}\) We can then form the F-statistic as before: \(F = \frac{n-p}{p(n-1)}T^2 \sim F_{p, n-p}\) Under \(H_{0} \colon \mu_{Z} = 0\), we will reject the null hypothesis at level \(\alpha\) if this F-statistic exceeds the critical value from the F-table with p and n - p degrees of freedom evaluated at \(\alpha\): \(F > F_{p, n-p, \alpha}\) Using SAS The analysis may be carried out using the SAS program as shown below. The SAS program resembles the program spouse.sas used to address question #1. Download the SAS program: spouse2.sas. View the video below to see how to find Hotelling's \(T^{2}\) using the SAS statistical software application. The output contains on its first page a printing of the data for each of the matrices, including the transformations d1 through d4. Page two contains the output from the iml procedure, which carries out the Hotelling's \(T^{2}\) test. Again, we can see that \(\mu_{0}\) is defined to be a vector of 0's. The sample mean vector is given under YBAR. S is our sample variance-covariance matrix for the Z's. Using Minitab At this time Minitab does not support this procedure. Analysis The Hotelling's \(T^{2}\) statistic comes out to be approximately 6.43, with a corresponding F of 1.44 on 4 and 26 degrees of freedom.
The p-value is 0.249, which exceeds 0.05, therefore we do not reject the null hypothesis at the 0.05 level. Conclusion In conclusion, we can state that there is no statistically significant evidence against the hypothesis that the husbands and wives accurately perceive the attitudes of their spouses. Our evidence includes the following statistics: (\(T^{2}\) = 6.43; F = 1.44; d.f. = 4, 26; p = 0.249).
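The arithmetic relating T², F and the rejection decision can be reproduced directly. A Python sketch using the n = 30 couples and p = 4 difference variables from this example (the critical value F(4, 26; 0.05) ≈ 2.74 is an assumed table lookup; a stats library would give the exact p-value):

```python
def t2_to_f(t2, n, p):
    """Convert Hotelling's T-square to an F statistic: F = (n - p) / (p (n - 1)) * T^2."""
    return (n - p) / (p * (n - 1)) * t2

n, p = 30, 4          # 30 couples, 4 difference variables
t2 = 6.43             # Hotelling's T-square from the SAS output
f_stat = t2_to_f(t2, n, p)           # about 1.44, on (p, n - p) = (4, 26) d.f.

F_CRIT_4_26_05 = 2.74                # assumed F-table value at alpha = 0.05
reject_h0 = f_stat > F_CRIT_4_26_05  # False: do not reject H0
```

Since 1.44 falls well below the critical value, the decision matches the p = 0.249 conclusion above.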
"Abstract: We prove, once and for all, that people who don't use superspace are really out of it. This includes QCDers, who always either wave their hands or gamble with lettuce (Monte Zuma calculations). Besides, all nonsupersymmetric theories have divergences which lead to problems with things like renormalons, instantons, anomalons, and other phenomenons. Also, they can't hide from gravity forever." Can a gravitational field possess momentum? A gravitational wave can certainly possess momentum just like a light wave has momentum, but we generally think of a gravitational field as a static object, like an electrostatic field. "You have to understand that compared to other professions such as programming or engineering, ethical standards in academia are in the gutter. I have worked with many different kinds of people in my life, in the U.S. and in Japan. I have only encountered one group more corrupt than academic scientists: the mafia members who ran Las Vegas hotels where I used to install computer equipment. " so Ive got a small bottle that I filled up with salt. I put it on the scale and it's mass is 83g. I've also got a jup of water that has 500g of water. I put the bottle in the jug and it sank to the bottom. I have to figure out how much salt to take out of the bottle such that the weight force of the bottle equals the buoyancy force. For the buoyancy do I: density of water * volume of water displaced * gravity acceleration? so: mass of bottle * gravity = volume of water displaced * density of water * gravity? @EmilioPisanty The measurement operators than I suggested in the comments of the post are fine but I additionally would like to control the width of the Poisson Distribution (much like we can do for the normal distribution using variance). Do you know that this can be achieved while still maintaining the completeness condition $$\int A^{\dagger}_{C}A_CdC = 1$$? 
As a workaround while this request is pending, there exist several client-side workarounds that can be used to enable LaTeX rendering in chat, including: ChatJax, a set of bookmarklets by robjohn to enable dynamic MathJax support in chat. Commonly used in the Mathematics chat room. An altern... You're always welcome to ask. One of the reasons I hang around in the chat room is because I'm happy to answer this sort of question. Obviously I'm sometimes busy doing other stuff, but if I have the spare time I'm always happy to answer. Though as it happens I have to go now - lunch time! :-) @JohnRennie It's possible to do it using the energy method. We just need to carefully write down the potential function, which is $U(r)=\frac{1}{2}\frac{mg}{R}r^2$ with zero point at the center of the earth. Anonymous Also I don't particularly like this SHM problem because it causes a lot of misconceptions. The motion is SHM only under particular conditions :P I see with concern that the close queue has not shrunk considerably in the last week and is still at 73 items. This may be an effect of increased traffic but not increased reviewing, or something else; I'm not sure. Not sure about that, but the converse is certainly false :P Derrida has received a lot of criticism from the experts on the fields he tried to comment on. I personally do not know much about postmodernist philosophy, so I shall not comment on it myself. I do have strong affirmative opinions on textual interpretation, made disjoint from authorial intent, however, which is a central part of Deconstruction theory. But I think that dates back to Heidegger. I can see why a man of that generation would lean towards that idea. I do too.
I am not a fluid dynamicist, and I really just began thinking about this problem as my curiosity drew me into building an answer for the question What really allows airplanes to fly?. It is very clear from the answers to the following questions that viscous flows cannot be everywhere irrotational. Moreover, some handwaving justification is given in answer to the first question that a low-viscosity fluid can be irrotational. Now, the spin angular momentum per unit volume of a fluid is the vector $\rho\,\nabla \wedge \vec{v}$. So the assumption of irrotational flow is the assumption of a lack of spin angular momentum. I can accept that it is reasonable in some cases to take this to be true. But is there a deeper theoretical justification as to why and when a fluid's spin angular momentum should be nought? In electromagnetism, we have specific equations - the Ampère and Faraday laws, i.e. $\nabla\wedge\vec{H} = \vec{J}+\partial_t\vec{D}$ and $\nabla\wedge\vec{E} = -\partial_t\vec{B}$ - for the "source" of such vorticity in the fields. At an elementary level, we can see that in electrostatics conservation of energy demands that $\oint\vec{E}\cdot \mathrm{d} \vec{r} = 0$, so we immediately grasp irrotationalhood for the electrostatic field. Is there any such analogue in fluid dynamics? The Navier-Stokes equation doesn't seem to "split" the velocity field up into curl and divergence terms as the Maxwell equations do; or, to put it more pithily, the Maxwell equations say that the source distribution must be an exact form $\mathrm{d} \vec{F} = \mu_0 \vec{J}$ and we can thus "invert" $\mathrm{d}$, to within a constant (which we can set to nought on the grounds that it would have infinite energy). Are there any other ways to "split" the Navier-Stokes equation as Maxwell's equations are, to shed intuitive light on the nature of an irrotational flow?
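One concrete reading of "irrotational": any velocity field of the form v = ∇φ has identically zero curl, which is the content of the potential-flow assumption. A quick numerical confirmation of ∇ ∧ (∇φ) = 0 on a 2-D grid (the choice of potential here is arbitrary):

```python
import numpy as np

# arbitrary smooth scalar potential phi(x, y)
x = np.linspace(-1.0, 1.0, 101)
y = np.linspace(-1.0, 1.0, 101)
X, Y = np.meshgrid(x, y, indexing="ij")
phi = np.sin(2 * X) * np.cos(3 * Y) + X * Y

# velocity field v = grad(phi), central differences in the interior
vx, vy = np.gradient(phi, x, y)

# 2-D curl (z component): d(vy)/dx - d(vx)/dy
dvy_dx = np.gradient(vy, x, axis=0)
dvx_dy = np.gradient(vx, y, axis=1)
curl_z = dvy_dx - dvx_dy

# zero up to floating-point roundoff: a gradient flow carries no vorticity
max_curl = np.abs(curl_z).max()
```

By contrast, the rigid-rotation field v = (−y, x) has curl 2 everywhere, so any vorticity in a real flow must come from something other than a single-valued potential.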
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
UPDATE (March 14, 2019, 1:18 p.m.): On Thursday, Google announced that one of its employees, Emma Haruka Iwao, had found nearly 9 trillion new digits of pi, setting a new record. Humans have now calculated the never-ending number to 31,415,926,535,897 (get it?) — about 31.4 trillion — decimal places. It’s a Pi Day miracle! Previously, we published a story about humans’ pursuit of pi’s infinite string of digits. To celebrate Pi Day, and the extra 9 trillion known digits, we’ve updated that story below. Depending on your philosophical views on time and calendars and so on, today is something like the 4.5 billionth Pi Day that Earth has witnessed. But that long history is nothing compared to the infinity of pi itself. A refresher for those of you who have forgotten your seventh-grade math lessons 1: Pi, or the Greek letter \(\pi\), is a mathematical constant equal to the ratio of a circle’s circumference to its diameter — C/d. It lurks in every circle, and equals approximately 3.14. (Hence Pi Day, which takes place on March 14, aka 3/14.) But the simplicity of its definition belies pi’s status as the most fascinating, and most studied, number in the history of the world. While treating pi as equal to 3.14 is often good enough, the number really continues on forever, a seemingly random series of digits ambling infinitely outward and obeying no discernible pattern — 3.14159265358979…. That’s because it’s an irrational number, meaning that it cannot be represented by a fraction of two whole numbers (although approximations such as 22/7 can come close). But that hasn’t stopped humanity from furiously chipping away at pi’s unending mountain of digits. We’ve been at it for millennia. People have been interested in the number for basically as long we’ve understood math. The ancient Egyptians, according to a document that also happens to be the world’s oldest collection of math puzzles, knew that pi was something like 3.1. 
A millennium or so later, an estimate of pi showed up in the bible: The Old Testament, in 1 Kings, seems to imply that pi equals 3: “And he made a molten sea, ten cubits from the one brim to the other: it was round all about … and a line of thirty cubits did compass it round about.” Archimedes, the greatest mathematician of antiquity, got as far as 3.141 by around 250 B.C. Archimedes approached his calculation of pi geometrically, by sandwiching a circle between two straight-edged regular polygons. Measuring polygons was easier than measuring circles, and Archimedes measured pi-like ratios as the number of the polygons’ sides increased, until they closely resembled circles. Meaningful improvement on Archimedes’s method wouldn’t come for hundreds of years. Using the new technique of integration, mathematicians like Gottfried Leibniz, one of the fathers of calculus, could prove such elegant equations for pi as: \begin{equation*}\frac{\pi}{4}=1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\frac{1}{9}-\ldots\end{equation*} The right-hand side, just like pi, continues forever. If you add and subtract and add and subtract all those simple fractions, you’ll inch ever closer to pi’s true value. The problem is that you’ll inch very, very slowly. To get just 10 correct digits of pi, you’d have to add about 5 billion fractions together. But more efficient formulas were discovered. Take this one, from Leonhard Euler, probably the greatest mathematician ever, in the 18th century: \begin{equation*}\frac{\pi^2}{6}=\frac{1}{1^2}+\frac{1}{2^2}+\frac{1}{3^2}+\ldots\end{equation*} And Srinivasa Ramanujan, a self-taught mathematical genius from India, discovered the totally surprising and bizarre equation below in the early 1900s. 
Each additional term in this sum adds eight correct digits to an estimate of pi: \begin{equation*}\frac{1}{\pi}=\frac{2\sqrt{2}}{9801}\sum_{k=0}^{\infty}\frac{(4k)!(1103+26390k)}{(k!)^4 396^{4k}}\end{equation*} Much like with the search for large prime numbers, computers blasted this pi-digit search out of Earth orbit and into deep space starting in the mid-1900s. ENIAC, an early electronic computer and the only computer in the U.S. in 1949, calculated pi to over 2,000 places, nearly doubling the record. As computers got faster and memory became more available, digits of pi began falling like dominoes, racing down the number’s infinite line, impossibly far but also never closer to the end. Building off of Ramanujan’s formula, the mathematical brothers Gregory and David Chudnovsky calculated over 2 billion digits of pi in the early 1990s using a homemade supercomputer housed in a cramped and sweltering Manhattan apartment. They’d double their tally to 4 billion digits after a few years. The current record now stands at about 31.4 trillion digits — thousands of times more than the Chudnovskys’ home-brewed supercomputer managed. It was calculated by a Google employee over 121 days using a freely available program called y-cruncher and verified with another 48 hours of number-crunching sessions. The calculation took up about as much storage space as the entire digital database of the Library of Congress. Emma Haruka Iwao, the woman behind the record, has been calculating pi on computers since she was a child. Iwao’s feat of calculation increased humanity’s collective knowledge of the digits of pi by about 40 percent. The previous record stood at over 22 trillion digits, worked out after 105 days of computation on a Dell server, also using y-cruncher. 
That program, which uses both the Ramanujan and Chudnovsky formulas, has been used to find record numbers of digits of not only pi, but also of other endless, irrational numbers, including e, \(\sqrt{2}\), \(\log{2}\) and the golden ratio. But maybe 31 trillion digits is just a bit of overkill. NASA’s Jet Propulsion Laboratory uses only 15 digits of pi for its highest-accuracy calculations for interplanetary navigation. Heck, Isaac Newton knew that many digits 350 years ago. “A value of \(\pi\) to 40 digits would be more than enough to compute the circumference of the Milky Way galaxy to an error less than the size of a proton,” a group of researchers wrote in a useful history of the number. So why would we ever need 31 trillion digits? Sure, we’ve learned a bit of math theory while digging deep into pi: about fast Fourier transforms and that pi is probably a so-called normal number. But the more satisfying answer seems to me to have nothing to do with math. Maybe it has to do with what President John F. Kennedy said about building a space program. We do things like this “not because they are easy, but because they are hard; because that goal will serve to organize and measure the best of our energies and skills.” But there’s one major difference: The moon is not infinitely far away; we can actually get there. Maybe this famous quote about chess is more apt: “Life is not long enough for chess — but that is the fault of life, not of chess.” Pi is too long for humankind. But that is the fault of humankind, not of pi. Happy Pi Day.
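The slow convergence of the Leibniz series mentioned earlier is easy to see empirically: the error after N terms shrinks only like 1/N, so each extra correct digit costs roughly ten times as many terms — the same scaling behind the billions-of-terms figure for 10 digits. A short Python check:

```python
import math

def leibniz_pi(n_terms):
    """Partial sum of pi = 4 * (1 - 1/3 + 1/5 - 1/7 + ...)."""
    total = 0.0
    sign = 1.0
    for k in range(n_terms):
        total += sign / (2 * k + 1)
        sign = -sign
    return 4.0 * total

# the error decays roughly like 1/N: tenfold more terms buys one more digit
errors = {n: abs(leibniz_pi(n) - math.pi) for n in (10, 100, 1000)}
```

Formulas like Ramanujan's, by contrast, gain about eight digits per term, which is why y-cruncher builds on that family rather than on Leibniz.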
In this case we are replacing the random variables \(X_{ij}\), for the \(j^{th}\) sample from the \(i^{th}\) population, with random vectors \(\mathbf{X}_{ij}\), for the \(j^{th}\) sample from the \(i^{th}\) population. These vectors contain the observations from the p variables. In our notation, we will have our two populations: The data will be denoted in Population 1 as: \(X_{11}\), \(X_{12}\), ... , \(X_{1n_{1}}\) The data will be denoted in Population 2 as: \(X_{21}\), \(X_{22}\), ... , \(X_{2n_{2}}\) Here the vector \(X_{ij}\) represents all of the data for all of the variables for sample unit j, for population i. \(\mathbf{X}_{ij} = \left(\begin{array}{c}X_{ij1}\\X_{ij2}\\\vdots\\X_{ijp}\end{array}\right)\) This vector contains elements \(X_{ijk}\), where k runs from 1 to p, for p different observed variables. So \(X_{ijk}\) is the observation for variable k of subject j from population i. The assumptions here will be analogous to the assumptions in the univariate setting. Assumptions The data from population i is sampled from a population with mean vector \(\mu_{i}\). Again, this corresponds to the assumption that there are no sub-populations. Instead of assuming homoskedasticity, we now assume that the data from both populations have common variance-covariance matrix \(\Sigma\). Independence. The subjects from both populations are independently sampled. Normality. Both populations are normally distributed. Consider testing the null hypothesis that the two populations have identical population mean vectors. This is represented below, as well as the general alternative that the mean vectors are not equal.
\(H_0: \boldsymbol{\mu_1 = \mu_2}\) against \(H_a: \boldsymbol{\mu_1 \ne \mu_2}\) So here what we are testing is: \(H_0\colon \left(\begin{array}{c}\mu_{11}\\\mu_{12}\\\vdots\\\mu_{1p}\end{array}\right) = \left(\begin{array}{c}\mu_{21}\\\mu_{22}\\\vdots\\\mu_{2p}\end{array}\right)\) against \(H_a\colon \left(\begin{array}{c}\mu_{11}\\\mu_{12}\\\vdots\\\mu_{1p}\end{array}\right) \ne \left(\begin{array}{c}\mu_{21}\\\mu_{22}\\\vdots\\\mu_{2p}\end{array}\right)\) Or, in other words... \(H_0\colon \mu_{11}=\mu_{21}\) and \(\mu_{12}=\mu_{22}\) and \(\dots\) and \(\mu_{1p}=\mu_{2p}\) The null hypothesis is satisfied if and only if the population means are identical for all of the variables. The alternative is that at least one pair of these means is different. This is expressed below: \(H_a\colon \mu_{1k}\ne \mu_{2k}\) for at least one \( k \in \{1,2,\dots, p\}\) To carry out the test, for each population i, we will define the sample mean vectors, calculated the same way as before, using data only from the \(i^{th}\) population: \(\mathbf{\bar{x}}_i = \dfrac{1}{n_i}\sum_{j=1}^{n_i}\mathbf{X}_{ij}\) Similarly, using data only from the \(i^{th}\) population, we will define the sample variance-covariance matrices: \(\mathbf{S}_i = \dfrac{1}{n_i-1}\sum_{j=1}^{n_i}\mathbf{(X_{ij}-\bar{x}_i)(X_{ij}-\bar{x}_i)'}\) Under our assumption of homogeneous variance-covariance matrices, both \(S_{1}\) and \(S_{2}\) are estimators for the common variance-covariance matrix \(\Sigma\). A better estimate can be obtained by pooling the two estimates using the expression below: \(\mathbf{S}_p = \dfrac{(n_1-1)\mathbf{S}_1+(n_2-1)\mathbf{S}_2}{n_1+n_2-2}\) Again, each sample variance-covariance matrix is weighted by its sample size minus 1.
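The pooled estimator is just an elementwise weighted average of the two sample variance-covariance matrices. A numpy sketch with invented data (`np.cov` with `rowvar=False` uses the same n − 1 denominator as the \(S_i\) formula above):

```python
import numpy as np

rng = np.random.default_rng(0)

# two made-up samples with p = 3 variables each and different sizes
x1 = rng.normal(size=(12, 3))   # n1 = 12 observations from population 1
x2 = rng.normal(size=(20, 3))   # n2 = 20 observations from population 2
n1, n2 = len(x1), len(x2)

# per-population sample variance-covariance matrices (rows = observations)
s1 = np.cov(x1, rowvar=False)   # divides by n1 - 1
s2 = np.cov(x2, rowvar=False)

# pooled estimate: S_p = ((n1 - 1) S1 + (n2 - 1) S2) / (n1 + n2 - 2)
s_pooled = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)
```

Note that when the two sample matrices happen to coincide, the pooled estimate reduces to that common matrix, as the weights sum to 1.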
UPDATE - Answer edited to be consistent with the latest version of the question. The different definitions you mentioned are NOT definitions. In fact, what you are describing are different representations of the Lorentz algebra. Representation theory plays a very important role in physics. As far as the Lie algebra is concerned, the generators $L_{\mu\nu}$ are simply some operators with some defined commutation properties. The choices $L_{\mu\nu} = J_{\mu\nu}, S_{\mu\nu}$ and $M_{\mu\nu}$ are different realizations or representations of the same algebra. Here, I am defining \begin{align}\left( J_{\mu\nu} \right)_{ab} &= - i \left( \eta_{\mu a} \eta_{\nu b} - \eta_{\mu b} \eta_{\nu a} \right) \\\left( S_{\mu\nu}\right)_{ab} &= \frac{i}{4} [ \gamma_\mu , \gamma_\nu ]_{ab} \\M_{\mu\nu} &= i \left( x_\mu \partial_\nu - x_\nu \partial_\mu \right) \end{align} Another possible representation is the trivial one, where $L_{\mu\nu}=0$. Why is it important to have these different representations? In physics, one has several different fields (denoting particles). We know that these fields must transform in some way under the Lorentz group (among other things). The question then is: how do fields transform under the Lorentz group? The answer is simple. We pick different representations of the Lorentz algebra, and then define the fields to transform under that representation! For example: Objects transforming under the trivial representation are called scalars. Objects transforming under $S_{\mu\nu}$ are called spinors. Objects transforming under $J_{\mu\nu}$ are called vectors. One can come up with other representations as well, but these ones are the most common. What about $M_{\mu\nu}$, you ask? The objects I described above are actually how NON-fields transform (for lack of a better term; I am simply referring to objects with no space-time dependence). On the other hand, in physics, we care about FIELDS.
In order to describe these guys, one needs to define not only the transformation of their components but also the space-time dependence. This is done by including the $M_{\mu\nu}$ representation in all the definitions described above. We then have: Fields transforming under the trivial representation $L_{\mu\nu}= 0 + M_{\mu\nu}$ are called scalar fields. Fields transforming under $S_{\mu\nu} + M_{\mu\nu}$ are called spinor fields. Fields transforming under $J_{\mu\nu} + M_{\mu\nu}$ are called vector fields. Mathematically, nothing makes these representations any more fundamental than the others. However, most of the particles in nature can be grouped into scalars (Higgs, pion), spinors (quarks, leptons) and vectors (photon, W-boson, Z-boson). Thus, the above representations are often all that one talks about. As far as I know, Clifford algebras are used only in constructing spinor representations of the Lorentz algebra. There may be some obscure context in some other part of physics where this pops up, but I haven't seen it. Of course, I am no expert in all of physics, so don't take my word for it. Others might have a different perspective on this. Finally, just to be explicit about how fields transform (as requested), I mention it here. A general field $\Phi_a(x)$ transforms under a Lorentz transformation as$$\Phi_a(x) \to \sum_b \left[ \exp \left( \frac{i}{2} \omega^{\mu\nu} L_{\mu\nu} \right) \right]_{ab} \Phi_b(x)$$where $L_{\mu\nu}$ is the representation corresponding to the type of field $\Phi_a(x)$ and $\omega^{\mu\nu}$ is the parameter of the Lorentz transformation. For example, if $\Phi_a(x)$ is a spinor, then$$\Phi_a(x) \to \sum_b \left[ \exp \left( \frac{i}{2} \omega^{\mu\nu} \left( S_{\mu\nu} + M_{\mu\nu} \right) \right) \right]_{ab} \Phi_b(x)$$
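The claim that spinor representations come from a Clifford algebra can be checked concretely: the γ-matrices satisfy {γ_μ, γ_ν} = 2η_{μν}·1, and S_{μν} = (i/4)[γ_μ, γ_ν] then generates the spinor representation. A numpy sketch (the (+,−,−,−) metric signature and the Dirac basis for the γ-matrices are assumed conventions, not taken from the answer above):

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)

def block(a, b, c, d):
    return np.block([[a, b], [c, d]])

# Dirac representation: gamma^0 = diag(1, -1) blocks, gamma^i off-diagonal Paulis
gammas = [block(I2, Z, Z, -I2)] + [block(Z, s, -s, Z) for s in (sx, sy, sz)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def S(mu, nu):
    """Spinor generators S_{mu nu} = (i/4) [gamma_mu, gamma_nu]."""
    return 0.25j * (gammas[mu] @ gammas[nu] - gammas[nu] @ gammas[mu])

# Clifford relation: {gamma_mu, gamma_nu} = 2 eta_{mu nu} * identity
for mu in range(4):
    for nu in range(4):
        acom = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(acom, 2 * eta[mu, nu] * np.eye(4))
```

With these conventions one can also verify closure of the algebra; for instance [S_{01}, S_{12}] = −i S_{02} follows by moving γ-matrices past each other with the Clifford relation.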
These are just some rather handwaving comments related to why the spectrum of the fluctuations in the CMB is expected to be scale invariant. The variations of the CMB we observe today correspond to the expanded initial over-dense and under-dense regions created by vacuum fluctuations. The theoretical expectation of scale invariance of the spectrum of these fluctuations can, for example, be explained by looking at a scalar field with the Lagrangian density \(L = \frac{1}{2}\dot{\phi}^2 - \frac{1}{2} (\nabla\phi)^2 - V(\phi)\) Multiplying by the volume of space $a^3(t)$, assuming that any spatial gradients are stretched out by inflation, and making use of the Euler-Lagrange equations, it can be shown that the equation of motion corresponds to a damped harmonic oscillator \(\ddot{\phi} + 3 H \dot{\phi } = -\frac{\partial V}{\partial \phi}\) where the second term on the L.H.S. is the so-called Hubble friction. Together with the quantum fluctuations, a scale invariant CMB spectrum can then theoretically be expected, as quantum fluctuations are excited at small scales, subsequently enlarged by the expansion of the universe, and finally dissipated (frozen out) by the Hubble friction at large scales as soon as their wavelength gets larger than the light horizon (the separation over which information can be exchanged). I am not sure what processes there could be that lead to deviations of the CMB spectrum from scale invariance.
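The freeze-out mechanism can be illustrated numerically: integrating φ̈ + 3Hφ̇ = −∂V/∂φ with a quadratic potential V = ½m²φ² shows the field essentially frozen when H ≫ m and oscillating when H ≪ m. A rough sketch with a constant Hubble rate (all parameter values here are arbitrary):

```python
def evolve(H, m=1.0, phi0=1.0, dt=1e-3, t_max=10.0):
    """Integrate phi'' + 3 H phi' = -m^2 phi for V = (1/2) m^2 phi^2,
    with a semi-implicit Euler step and constant Hubble rate H."""
    phi, dphi = phi0, 0.0
    for _ in range(int(t_max / dt)):
        ddphi = -3.0 * H * dphi - m * m * phi
        dphi += ddphi * dt
        phi += dphi * dt
    return phi

# H >> m: Hubble friction (overdamping) keeps the field near its initial value
phi_frozen = evolve(H=10.0)
# H << m: the field behaves like a lightly damped oscillator and swings through zero
phi_oscillating = evolve(H=0.01)
```

In the real inflationary setting H is time dependent and each Fourier mode has its own effective frequency, but the overdamped-vs-underdamped contrast is the same.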
A particle moves along the x-axis so that at time t its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity for that, and we can get it by taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$. How could I find the intervals? Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ... So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$. Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$. Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow. Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$. Well, we do know what the eigenvalues are... The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$. Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker "a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
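For the velocity question at the top of this exchange: the particle moves left exactly where v(t) < 0, and since v(t) = 3t² − 12t + 9 = 3(t − 1)(t − 3) is an upward-opening parabola, that is the interval 1 < t < 3. A quick stdlib-Python check of that reasoning:

```python
import math

def v(t):
    """v(t) = x'(t) for x(t) = t^3 - 6 t^2 + 9 t + 11."""
    return 3 * t**2 - 12 * t + 9

# roots of v(t) = 0 via the quadratic formula
disc = math.sqrt((-12) ** 2 - 4 * 3 * 9)  # sqrt(36) = 6
t1 = (12 - disc) / 6                      # = 1.0
t2 = (12 + disc) / 6                      # = 3.0

# v opens upward, so it is negative strictly between its roots:
# the particle moves to the left for 1 < t < 3.
```

Checking one sample point per region (t = 0, 2, 4) confirms the sign pattern +, −, +.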
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD... Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work. @TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer. Writing it now) Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism @AlessandroCodenotti Actually, such an $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, the weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$. Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1) For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism
Characteristic of Integral Domain is Zero or Prime Theorem Let $\struct {D, +, \circ}$ be an integral domain. Let $\operatorname{Char} \left({D}\right)$ be the characteristic of $D$. Then $\operatorname{Char} \left({D}\right)$ is either $0$ or a prime number. Proof Let $n = \operatorname{Char} \left({D}\right)$. If $n = 0$ there is nothing to prove, so suppose $n > 0$. Since $1_D \ne 0_D$ in an integral domain, $n \ne 1$. Aiming for a contradiction, suppose $n = a b$ for some integers $a, b$ with $1 < a, b < n$. Then: $\left({a \cdot 1_D}\right) \circ \left({b \cdot 1_D}\right) = \left({a b}\right) \cdot 1_D = n \cdot 1_D = 0_D$ As $D$ has no proper zero divisors, either $a \cdot 1_D = 0_D$ or $b \cdot 1_D = 0_D$, contradicting the minimality of $n$ as the characteristic of $D$. Thus $n$ admits no such factorization, and so $n$ is prime. $\blacksquare$
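As an illustrative sanity check (my own, not part of the ProofWiki page), one can compute the characteristic of $\Z / n$ and test for zero divisors: $\Z / n$ has characteristic $n$, and it has no zero divisors exactly when $n$ is prime, consistent with the theorem.

```python
def char_of_zmod(n):
    """Smallest k > 0 with k * 1 == 0 in Z/n, i.e. the characteristic of Z/n."""
    k, acc = 1, 1 % n
    while acc != 0:
        k += 1
        acc = (acc + 1) % n
    return k

def has_zero_divisors(n):
    """True if some nonzero a, b in Z/n satisfy a * b == 0."""
    return any(a * b % n == 0 for a in range(1, n) for b in range(1, n))

# Z/5 and Z/7 are fields (hence integral domains) of prime characteristic;
# Z/6 has characteristic 6, and indeed 2 * 3 == 0 in Z/6.
```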
What motivates canonical correlation analysis? Section It is possible to create pairwise scatter plots with variables in the first set (e.g., exercise variables) and variables in the second set (e.g., health variables). But if the dimension of the first set is p and that of the second set is q, there will be pq such scatter plots, and it may be difficult, if not impossible, to look at all of these graphs together and interpret the results. Similarly, you could compute all correlations between variables from the first set (e.g., exercise variables) and variables in the second set (e.g., health variables); however, interpretation is difficult when pq is large. Canonical Correlation Analysis allows us to summarize the relationships into a smaller number of statistics while preserving the main facets of the relationships. In a way, the motivation for canonical correlation is very similar to principal component analysis. It is another dimension reduction technique. Canonical Variates Section Let's begin with the notation: We have two sets of variables \(X\) and \(Y\). Suppose we have p variables in set 1: \(\textbf{X} = \left(\begin{array}{c}X_1\\X_2\\\vdots\\ X_p\end{array}\right)\) and suppose we have q variables in set 2: \(\textbf{Y} = \left(\begin{array}{c}Y_1\\Y_2\\\vdots\\ Y_q\end{array}\right)\) We label the sets so that \(p ≤ q\); this is done for computational convenience. We look at linear combinations of the data, similar to principal components analysis. We define a set of linear combinations named U and V. U corresponds to the linear combinations from the first set of variables, X, and V corresponds to the second set of variables, Y. Each member of U is paired with a member of V. For example, \(U_{1}\) below is a linear combination of the p X variables and \(V_{1}\) is the corresponding linear combination of the q Y variables.
Similarly, \(U_{2}\) is a linear combination of the p X variables, and \(V_{2}\) is the corresponding linear combination of the q Y variables. And so on... \begin{align} U_1 & = a_{11}X_1 + a_{12}X_2 + \dots + a_{1p}X_p \\ U_2 & = a_{21}X_1 + a_{22}X_2 + \dots + a_{2p}X_p \\ & \vdots \\ U_p & = a_{p1}X_1 +a_{p2}X_2 + \dots +a_{pp}X_p\\ & \\ V_1 & = b_{11}Y_1 + b_{12}Y_2 + \dots + b_{1q}Y_q \\ V_2 & = b_{21}Y_1 + b_{22}Y_2 + \dots +b_{2q}Y_q \\ & \vdots \\ V_p & = b_{p1}Y_1 +b_{p2}Y_2 + \dots +b_{pq}Y_q\end{align} Thus we define \((U_i, V_i)\) as the \(i^{th}\) canonical variate pair. (\(U_{1}\), \(V_{1}\)) is the first canonical variate pair; similarly, (\(U_{2}\), \(V_{2}\)) would be the second canonical variate pair, and so on. With \(p ≤ q\) there are p canonical variate pairs. We hope to find linear combinations that maximize the correlations between the members of each canonical variate pair. We compute the variance of \(U_{i}\) with the following expression: \(\text{var}(U_i) = \sum\limits_{k=1}^{p}\sum\limits_{l=1}^{p}a_{ik}a_{il}\text{cov}(X_k, X_l)\) The coefficients \(a_{i1}\) through \(a_{ip}\) that appear in the double sum are the same coefficients that appear in the definition of \(U_{i}\). The covariances between the \(k^{th}\) and \(l^{th}\) X-variables are multiplied by the corresponding coefficients \(a_{ik}\) and \(a_{il}\) for the variate \(U_{i}\). A similar calculation gives the variance of \(V_{j}\): \(\text{var}(V_j) = \sum\limits_{k=1}^{q} \sum\limits_{l=1}^{q} b_{jk}b_{jl}\text{cov}(Y_k, Y_l)\) The covariance between \(U_{i}\) and \(V_{j}\) is: \(\text{cov}(U_i, V_j) = \sum\limits_{k=1}^{p} \sum\limits_{l=1}^{q}a_{ik}b_{jl}\text{cov}(X_k, Y_l)\) The correlation between \(U_{i}\) and \(V_{j}\) is calculated using the usual formula.
We take the covariance between the two variables and divide it by the square root of the product of the variances: \(\dfrac{\text{cov}(U_i, V_j)}{\sqrt{\text{var}(U_i) \text{var}(V_j)}}\) The canonical correlation is a specific type of correlation. The canonical correlation for the \(i^{th}\) canonical variate pair is simply the correlation between \(U_{i}\) and \(V_{i}\): \(\rho^*_i = \dfrac{\text{cov}(U_i, V_i)}{\sqrt{\text{var}(U_i) \text{var}(V_i)}} \) This is the quantity to maximize. We want to find linear combinations of the X's and linear combinations of the Y's that maximize the above correlation. Canonical Variates Defined Section Let us look at each of the p canonical variate pairs one by one. First canonical variate pair: \( \left( U _ { 1 } , V _ { 1 } \right)\): The coefficients \(a_{11}, a_{12}, \dots, a_{1p}\) and \(b_{11}, b_{12}, \dots, b_{1q}\) are selected to maximize the canonical correlation \(\rho^*_1\) of the first canonical variate pair. This is subject to the constraint that the variances of the two canonical variates in that pair are equal to one. \(\text{var}(U_1) = \text{var}(V_1) = 1\) This is required to obtain unique values for the coefficients. Second canonical variate pair: \( \left( U _ { 2 } , V _ { 2 } \right)\) Similarly, we want to find the coefficients \(a_{21}, a_{22}, \dots, a_{2p}\) and \(b_{21}, b_{22}, \dots, b_{2q}\) that maximize the canonical correlation \(\rho^*_2\) of the second canonical variate pair, \( \left( U _ { 2 } , V _ { 2 } \right)\). Again, we will maximize this canonical correlation subject to the constraints that the variances of the individual canonical variates are both equal to one. Furthermore, we require the additional constraints that \( \left( U _ { 1 } , U _ { 2 } \right)\), and \( \left( V_{1} , V_{2} \right)\) are uncorrelated. In addition, the combinations \( \left( U_{1} , V_{2} \right)\) and \( \left( U_{2} , V_{1} \right)\) must be uncorrelated.
In summary, our constraints are: \(\text{var}(U_2) = \text{var}(V_2) = 1\), \(\text{cov}(U_1, U_2) = \text{cov}(V_1, V_2) = 0\), \(\text{cov}(U_1, V_2) = \text{cov}(U_2, V_1) = 0\). Basically, we require that all of the remaining correlations equal zero. This procedure is repeated for each pair of canonical variates. In general, ... \( i^{th} \) canonical variate pair: \( \left( U _ { i } , V _ { i } \right)\) We want to find the coefficients \(a_{i1}, a_{i2}, \dots, a_{ip}\) and \(b_{i1}, b_{i2}, \dots, b_{iq}\) that maximize the canonical correlation \(\rho^*_i\) subject to the constraints that \(\text{var}(U_i) = \text{var}(V_i) = 1\), \(\text{cov}(U_1, U_i) = \text{cov}(V_1, V_i) = 0\), \(\text{cov}(U_2, U_i) = \text{cov}(V_2, V_i) = 0\), \(\vdots\) \(\text{cov}(U_{i-1}, U_i) = \text{cov}(V_{i-1}, V_i) = 0\), \(\text{cov}(U_1, V_i) = \text{cov}(U_i, V_1) = 0\), \(\text{cov}(U_2, V_i) = \text{cov}(U_i, V_2) = 0\), \(\vdots\) \(\text{cov}(U_{i-1}, V_i) = \text{cov}(U_i, V_{i-1}) = 0\). Again, requiring all of the remaining correlations to be equal to zero. Next, let's see how this is carried out in SAS...
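Before the SAS walkthrough, a small pure-Python sketch (my own illustration, not part of the lesson) makes the variance and covariance formulas above concrete: given coefficient vectors a and b, it forms $U = a'X$ and $V = b'Y$ from sample data and computes their correlation. It does not solve the maximization; it only evaluates the objective for chosen coefficients.

```python
def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    # sample covariance of two equal-length lists of observations
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def variate_correlation(a, b, X_cols, Y_cols):
    """Correlation between U = a'X and V = b'Y built from sample columns.

    X_cols: list of p columns (each a list of n observations); a: length-p coefficients.
    Y_cols: list of q columns; b: length-q coefficients.
    """
    n = len(X_cols[0])
    U = [sum(ai * col[i] for ai, col in zip(a, X_cols)) for i in range(n)]
    V = [sum(bj * col[i] for bj, col in zip(b, Y_cols)) for i in range(n)]
    return cov(U, V) / (cov(U, U) * cov(V, V)) ** 0.5
```

With p = q = 1 and a = b = (1,), this reduces to the ordinary Pearson correlation between the two variables.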
Your best bet for this will be to use the Clausius-Clapeyron relation. This is an exponential function that describes the vapor pressure of a substance as a function of temperature, and uses the enthalpy of vaporization (latent heat) and a constant as parameters. The constant is dependent on the particular substance, but there is a way to eliminate it from the equation if you can get one data point: the vapor pressure at a specific temperature. If you can do that, then you can use the two-point form to predict vapor pressure at another temperature: $\ln\frac{p_2}{p_1} = \frac{-\Delta H_{vap}}{R}(\frac{1}{T_2}-\frac{1}{T_1})$ This version only has one parameter - the enthalpy of vaporization ($\Delta H_{vap}$). $p_1$ and $T_1$ are the vapor pressure and temperature at point 1 (an arbitrary point), and $p_2$ and $T_2$ are the vapor pressure and temperature at a different arbitrary point. For a pure substance, if you know the enthalpy of vaporization and the boiling point, you can calculate the vapor pressure at any other temperature. That is because the boiling point is the temperature at which the vapor pressure equals one atmosphere. In other words, you would set: $p_1 = 1 \space \mathrm{atm}$ $T_1 = T_b$, Then solve for $p_2$. The equation would be: $p_2 = p_1\exp([\frac{-\Delta H_{vap}}{R}](\frac{1}{T_2}-\frac{1}{T_1})) \tag{1}$ When you use this equation, make sure you first convert the temperatures to Kelvin. This only works with an absolute temperature scale. Pressure can be in whatever units you want, as long as both pressures have the same units. In other words, convert 1 atm to whatever units you want to use for vapor pressure. The data you listed didn't have the enthalpy of vaporization for either component, but they are available from NIST: $\Delta H_{vap} = 91.7 \space \mathrm{\frac{kJ}{mol}}$ - glycerol $\Delta H_{vap} = 71.2 \space \mathrm{\frac{kJ}{mol}}$ - ppg Incidentally, NIST lists different values for the boiling point than you have.
I would double check those before plugging numbers in. Now, you don't have a pure substance; you have a mixture. Depending on how non-ideal it is (which basically means how much the molecules of one component will affect the other components) you might have to account for that if you want to get very accurate results. On the other hand, if you just want an estimate and are planning to experimentally adjust the power output from there, you can get a good starting point by assuming that the mixture and the vapor are both ideal. Under those conditions, the total vapor pressure of the mixture will be equal to the sum of the partial pressures... $p_t = p_{gly} + p_{ppg}$ ... and you can use the Clausius Clapeyron equation from above for each component individually to find the respective vapor pressures. Then you can add them together if you want the total vapor pressure. However, you may be more interested in the vapor pressure of each component individually, since (from what I have heard) glycerol is more responsible for the "cloud" that forms, while propylene glycol is the better organic solvent. It would be helpful to know how much vapor of each component a given mixture would produce at a given temperature. That's how you can predict vapor pressure from temperature. To relate this to the amount of heat you need, you will need to do another step. Temperature is a function of heat supplied and the specific heat of the substance: $q = mC\Delta T \tag{2}$ where $q$ is the amount of heat transferred, $m$ is mass, and $C$ is the specific heat. You know the specific heat of the mixture, and the mass is an independent variable since you will control the composition and amount of the mixture. If you assume that volumes and heat capacities are additive, that mixing is fast, and that heat transfer is fast (at least locally), you can estimate the temperature (or rather, the maximum temperature) by connecting this to the power supplied by your device. 
Power in watts is the same as Joules/second. Therefore, you can multiply the power output by the time to get a total (maximum) amount of heat transfer (in Joules): $q = Pt \tag{3}$ We can combine equations $(3)$ and $(2)$ and solve for the operating temperature: $T=\frac{Pt}{mC}+T_i \tag{4}$ where $T_i$ is the starting temperature of the mixture (before you apply power). Now we can substitute this back into $(1)$ to come up with an estimate of the vapor pressure of each component as a function of the mass of the liquid, the power applied, and the length of time that it is applied for: $p_{vap} = (1 \space \mathrm{atm}) \exp([\frac{-\Delta H_{vap}}{R}](\frac{1}{\frac{Pt}{mC}+T_i}-\frac{1}{T_b})) \tag{5}$ The real vapor pressure will likely be lower, but I think this would get you to a good starting point. A couple of things to remember: $m$ refers to the mass of each component that is in close proximity to the heating element. $\Delta H_{vap}$ is in kJ/mol - make sure that the $R$ you use has the right units. $T$ must be in Kelvin (both of them) The units of $P$, $m$, and $C$ have to cancel out to give K. If you want to get a (rough) upper estimate of the mass that will be vaporized, you could assume an ideal gas and use $m=\frac{pVM}{RT}$ where $V$ is the volume of the vaporizer chamber, and $M$ is the molar mass of the component.
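Equations (1) and (4) are straightforward to put into code. A minimal sketch (mine, not the answerer's; the glycerol-like numbers below are placeholders — check $\Delta H_{vap}$ and $T_b$ against NIST before relying on the output):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def vapor_pressure(T, T_b, dH_vap, p1=101325.0):
    """Two-point Clausius-Clapeyron, eq. (1): vapor pressure at T (K),
    given the boiling point T_b (K, where p = p1 = 1 atm) and dH_vap in J/mol."""
    return p1 * math.exp((-dH_vap / R) * (1.0 / T - 1.0 / T_b))

def operating_temperature(P, t, m, C, T_i):
    """Eq. (4): T = P*t/(m*C) + T_i, all SI units, T_i in K."""
    return P * t / (m * C) + T_i

# Placeholder example: dH_vap ~ 91.7 kJ/mol, T_b ~ 563 K.
# At the boiling point the model returns exactly 1 atm; below it, less.
```

As a sanity check, `vapor_pressure(T_b, T_b, dH_vap)` returns exactly 1 atm, since the exponent vanishes when $T = T_b$.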
So I've recently been studying types from David Marker's book and have some issues understanding them, in particular why Marker chose to present a certain proof of the following theorem. Theorem: Let $\mathcal{M}$ be an $\mathcal{L}$-structure, $A\subseteq M$, and $p$ an $n$-type over $A$. There is $\mathcal{N}$ an elementary extension of $\mathcal{M}$ such that $p$ is realized in $\mathcal{N}$. I start by defining types. If $\mathcal{M}$ is an $\mathcal{L}$-structure and $A\subseteq M$ then $\text{Th}_A(\mathcal{M})$ is the theory of $\mathcal{M}$ in an extended language $\mathcal{L}_A$ adding a constant symbol for each $a\in A$ and interpreting $\mathcal{M}$ in the obvious manner. $p$ is an $n$-type if it is a set of $\mathcal{L}_A$ formulas in free variables $v_1, \ldots, v_n$ and $p \cup \text{Th}_A(\mathcal{M})$ is satisfiable. The type is complete if $\phi\in p$ or $\neg\phi\in p$ for all $\mathcal{L}_A$ formulas in free variables $v_1, \ldots, v_n$. The way I understand the definition is as follows. In a language $\mathcal{L}^*$ that extends $\mathcal{L}_A$ by adding a constant symbol for each variable $v_i$ in the formulas in $p$, $p$ is a type if $p \cup \text{Th}_A(\mathcal{M})$ is satisfiable. The type is complete if $p \cup \text{Th}_A(\mathcal{M})$ is complete in said language. $p \cup \text{Th}_A(\mathcal{M})$ being satisfiable (equiv. finitely satisfiable) is equivalent to: for any finite subset $q$ of $p$, if $\psi$ is the conjunction of all formulas in $q$, then the sentence $\exists v_1,\ldots, v_n \;\psi$ belongs to $\text{Th}_A(\mathcal{M})$. I like to call this "$p$ is finitely realised by $\mathcal{M}$". So let's get back to the theorem. Note firstly that $\text{Th}_M(\mathcal{M})=ED(\mathcal{M})$ is the elementary diagram of $\mathcal{M}$. We must prove that $p\cup ED(\mathcal{M})$ is consistent. The proof in Marker's book is as follows.
Suppose not, and let without loss of generality $\phi(\bar{v},\bar{a}) \wedge \psi(\bar{a},\bar{b})$ be the formula that is inconsistent (with $\phi$ a conjunction of formulas in $p$ and $\psi$ a conjunction of formulas in $ED(\mathcal{M})$, and with $\bar{a}$ being an array of elements in $A$, $\bar{b}$ an array in $M\setminus A$ and $\bar{v}$ the $n$-array of type variables). Now clearly $\exists \bar{w} \psi(\bar{a},\bar{w})\in\text{Th}_A(\mathcal{M})$, and so given a model of $p \cup \text{Th}_A(\mathcal{M})$ we can interpret $\bar{b}$ to be the elements that witness $\exists \bar{w} \psi(\bar{a},\bar{w})$, proving that the above formula is in fact satisfiable. I wonder however if the theorem is not trivial given the understanding of a type as a set of formulas "finitely witnessed" by $\mathcal{M}$. Basically, $\mathcal{M}$ will satisfy any finite subset of $p\cup \text{Th}_A(\mathcal{M})$ as much as it will satisfy any finite subset of $p\cup ED(\mathcal{M})$. Am I making sense? Or is my take on types entirely or partially mistaken? Am I missing something in this proof?
Let $a_1,a_2,a_3,...$ and $b_1,b_2,b_3,...$ be Cauchy sequences in $[0,\infty)$, and let $c_n = a_n^2+\sqrt{b_n} + \sin(a_n+b_n)$. Prove that $c_1,c_2,c_3,...$ is also a Cauchy sequence by using the fact that a sequence of real numbers is a Cauchy sequence if and only if it converges. Since the sequences $(a_n),(b_n)$ are Cauchy sequences, they converge. Suppose $(a_n)$ converges to $A$, and $(b_n)$ converges to $B$. Since $b_n \ge 0$ for all $n$, it follows that $B \ge 0$. Since the functions $x\mapsto x^2$ and $x\mapsto \sin(x)$ are continuous, and the function $x\mapsto\sqrt{x}$ is continuous on the interval $[0,\infty)$, it follows that $(a_n^2)$ converges to $A^2$.$\\[4pt]$ $(\sqrt{b_n})$ converges to $\sqrt{B}$.$\\[4pt]$ $(a_n+b_n)$ converges to $A+B$, hence $\bigl(\sin(a_n+b_n)\bigr)$ converges to $\sin(A+B)$. It follows that $(c_n)$ converges to $A^2+\sqrt{B}+\sin(A+B)$. Hence, since the sequence $(c_n)$ converges, it is a Cauchy sequence.
9.1.2.1 - Normal Approximation Method Formulas 1. Check any necessary assumptions and write null and alternative hypotheses. Section To use the normal approximation method a minimum of 10 successes and 10 failures in each group are necessary (i.e., \(n p \geq 10\) and \(n (1-p) \geq 10\)). The two groups that are being compared must be unpaired and unrelated (i.e., independent). Below are the possible null and alternative hypothesis pairs:

Research Question: Are the proportions of group 1 and group 2 different? Null Hypothesis, \(H_{0}\): \(p_1 - p_2=0\). Alternative Hypothesis, \(H_{a}\): \(p_1 - p_2 \neq 0\). Type of Hypothesis Test: Two-tailed, non-directional.

Research Question: Is the proportion of group 1 greater than the proportion of group 2? Null Hypothesis, \(H_{0}\): \(p_1 - p_2=0\). Alternative Hypothesis, \(H_{a}\): \(p_1 - p_2 > 0\). Type of Hypothesis Test: Right-tailed, directional.

Research Question: Is the proportion of group 1 less than the proportion of group 2? Null Hypothesis, \(H_{0}\): \(p_1 - p_2=0\). Alternative Hypothesis, \(H_{a}\): \(p_1 - p_2 < 0\). Type of Hypothesis Test: Left-tailed, directional.

The null hypothesis is that there is not a difference between the two proportions (i.e., \(p_1 = p_2\)). If the null hypothesis is true then the population proportions are equal. When computing the standard error for the difference between the two proportions a pooled proportion is used as opposed to the two proportions separately (i.e., unpooled). This pooled estimate will be symbolized by \(\widehat{p}\). This is similar to a weighted mean, but with two proportions. Pooled Estimate of \(p\) \(\widehat{p}=\frac{\widehat{p}_1n_1+\widehat{p}_2n_2}{n_1+n_2}\) The standard error for the difference between two proportions is symbolized by \(SE_{0}\). The subscript 0 tells us that this standard error is computed under the null hypothesis (\(H_0: p_1-p_2=0\)).
Standard Error \(SE_0={\sqrt{\frac{\widehat{p} (1-\widehat{p})}{n_1}+\frac{\widehat{p}(1-\widehat{p})}{n_2}}}=\sqrt{\widehat{p}(1-\widehat{p})\left ( \frac{1}{n_1}+\frac{1}{n_2} \right )}\) Note that the default in many statistical programs, including Minitab Express, is to estimate the two proportions separately (i.e., unpooled). In order to obtain results using the pooled estimate of the proportion you will need to change the method. Also note that this standard error is different from the one that you used when constructing a confidence interval for \(p_1-p_2\). While the hypothesis testing procedure is based on the null hypothesis that \(p_1-p_2=0\), the confidence interval approach is not based on this premise. The hypothesis testing approach uses the pooled estimate of \(p\) while the confidence interval approach will use an unpooled method. Test Statistic for Two Independent Proportions \(z=\frac{\widehat{p}_1-\widehat{p}_2}{SE_0}\) The \(z\) test statistic found in Step 2 is used to determine the \(p\) value. The \(p\) value is the proportion of the \(z\) distribution (normal distribution with a mean of 0 and standard deviation of 1) that is more extreme than the test statistic in the direction of the alternative hypothesis. If \(p \leq \alpha\) reject the null hypothesis. If \(p>\alpha\) fail to reject the null hypothesis. Based on your decision in Step 4, write a conclusion in terms of the original research question.
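The computations above are easy to script. A minimal sketch of the pooled two-proportion z statistic (my own illustration; the lesson itself uses Minitab Express):

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic, following the formulas above.

    x1, x2: numbers of successes; n1, n2: sample sizes.
    Returns (pooled proportion, SE0, z)."""
    p1_hat, p2_hat = x1 / n1, x2 / n2
    # (p1_hat*n1 + p2_hat*n2) / (n1 + n2) simplifies to (x1 + x2) / (n1 + n2)
    p_pool = (x1 + x2) / (n1 + n2)
    se0 = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1_hat - p2_hat) / se0
    return p_pool, se0, z
```

For example, with 60/100 successes in group 1 and 40/100 in group 2, the pooled proportion is 0.5 and z comes out near 2.83; with identical sample proportions, z is exactly 0.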
Motivating the classical momentum $\mathbf{p} = m\mathbf{v}$ is quite easy: it is meant to represent the quantity of motion of the particle, and since the mass is one measure of quantity of matter it should be proportional to mass (how much thing is moving) and should be proportional to velocity (how fast and to where it is moving). Now, in Special Relativity the momentum changes. The new quantity of motion becomes $$\mathbf{p} = \dfrac{m\mathbf{v}}{\sqrt{1-\dfrac{v^2}{c^2}}}$$ Or, using $\gamma$ the Lorentz factor, $\mathbf{p} = \gamma(v) m\mathbf{v}$, where I write $\gamma(v)$ to indicate that the velocity is that of the particle relative to the frame in which the movement is being observed. The need for this new momentum is because the old one fails to be conserved and because using the old one in Newton's second law leads to a law which is not invariant under Lorentz transformations. So the need for a new momentum is perfectly well motivated. What I would like to know is how one can motivate that the correct choice for $\mathbf{p}$ is $\gamma(v)m\mathbf{v}$. There are some arguments using the mass: considering a collision, requiring momentum to be conserved, transform the velocity and then find how mass should transform. Although this works, it doesn't seem natural, and it is derived in one particular example. In my book there's even something that Einstein wrote saying that he didn't think it was a good idea to try transforming the mass from $m$ to $M = \gamma(v)m$, that it was better to simply keep $\gamma$ in the new momentum without trying to combine it with the mass. So I would like to know: without resorting to arguments based on transformation of the mass, how can one motivate the new form of momentum that works for special relativity?
Category:Path-Connected Sets Let $T = \left({S, \tau}\right)$ be a topological space. Let $U \subseteq S$ be a subset of $S$. Let $T' = \left({U, \tau_U}\right)$ be the subspace of $T$ induced by $U$. That is, $U$ is a path-connected set in $T$ if and only if: for every $x, y \in U$, there exists a continuous mapping $f: \left[{0 \,.\,.\, 1}\right] \to U$ such that $f \left({0}\right) = x$ and $f \left({1}\right) = y$. Pages in category "Path-Connected Sets" The following 5 pages are in this category, out of 5 total. C E Equivalence of Definitions of Path Component/Equivalence Class equals Union of Path-Connected Sets Equivalence of Definitions of Path Component/Union of Path-Connected Sets is Maximal Path-Connected Set
I am typesetting my PhD thesis, and I was hoping to use Palatino as the main font. For some of the headings I wanted to use a fancy TTF font, so I switched from pdfLaTeX to LuaLaTeX to compile my document. Using some trickery it seems I can still use the good old fonts from mathpazo, which look nice [*]. It seems however a good idea to avoid trickery and just use TeX Gyre Pagella, which is supposedly similar, and to include fonts properly using fontspec. However, with TG Pagella the text looks significantly different/worse. See the screenshot below: right: the version with mathpazo, top left: TeX Gyre Pagella, bottom left: a version with mathpazo + tgpagella. The version of the file typeset with TeX Gyre Pagella looks much darker than the other two, which I don't really like. It seems as if the letter spacing has changed. This seems quite a change for just 'a slightly different version of' a font. Is there a way to avoid this ``darkening''? That is, get the overall text to appear lighter with TeX Gyre Pagella, so that it looks similar to the result of using just mathpazo or tgpagella? [*] Of course one could say: if it works why not stick with it: of course I'm now trying to change more stuff, for which I need more of the LuaLaTeX features. This breaks this setup. The latex source used to produce the versions above: \documentclass{article}\usepackage{fontspec} % used only for top left\usepackage{unicode-math} % used only for top left% \usepackage[osf,sc]{mathpazo} % used for right + bottom left% \usepackage{tgpagella} % used only for bottom left\linespread{1.05}% \usepackage[T1]{fontenc} % for bottom left and right \setmainfont{TeX Gyre Pagella} \setmathfont{TG Pagella Math}\newcommand{\R}{\mathbb{R}}\begin{document}\section{General Approach}We study the problem of segmenting a trajectory with respect to a criterion.In the continuous case, this problem can be defined as follows.
We define a trajectory $T$ as a function from the interval $I=[0,1]$ to $\R^2$ (or $\R^d$) and a subtrajectory, also called \emph{segment}, $T[a,b]$ as the function restricted to the subinterval $[a,b] \subseteq I$. A criterion $C$ is a function $C \colon I \times I \rightarrow \{\textsc{True}, \textsc{False}\}$, which is defined on all possible segments of $T$. We say an interval $[a,b] \subseteq I$ \emph{satisfies} a criterion $C$ if $C(a,b) = \textsc{True}$; in this case we call the segment \emph{valid}. A partitioning of $I$ (or of $T$) into non-overlapping segments whose union covers $I$ is called a \emph{segmentation}. A segmentation of \emph{size} $k$ can be denoted by its segments $[\tau_0,\tau_1],[\tau_1,\tau_2],\ldots,[\tau_{k-1},\tau_k]$; $\tau_0 = 0$ and $\tau_k = 1$. A segmentation is \emph{valid} if and only if all of its segments are valid, and \emph{segmenting} a function refers to partitioning into valid segments. We say a valid segmentation is \emph{minimal} (optimal) with respect to $C$ if its size is minimum; the segmentation problem is to compute a valid minimal segmentation. We will often omit the word ``valid'' because only valid segmentations are useful.\end{document}
I wish to calculate the degree of $\mathbb{Q}(\zeta_{10})$ over $\mathbb{Q}$. Using the dimensions theorem: $[\mathbb{Q}(\zeta_{10}) : \mathbb{Q}]=[\mathbb{Q}(\zeta_{10}) : \mathbb{Q}(\zeta_5)] \cdot[\mathbb{Q}(\zeta_{5}) : \mathbb{Q}] = 2\cdot4=8$. Due to the fact that $x^4+x^3+x^2+x+1$ is irreducible and $x^2-\zeta_5$ is the minimal polynomial for $\zeta_{10}$ over $\mathbb{Q}(\zeta_5)$. However: $x^5+1|_{\zeta_{10}} = {(e^{\frac{2\pi i}{10}})}^5 + 1=e^{\pi i}+1=0$. I am confused because the extension should be $8$, yet it seems like $\zeta_{10}$ is a root of $x^5+1$.
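A quick numerical check (mine, not the asker's) hints at where the computation may go astray: $\zeta_{10} = -\zeta_5^3$ already lies in $\mathbb{Q}(\zeta_5)$, which would make the relative degree 1 rather than 2, and $\zeta_{10}$ is a root of the degree-4 polynomial $x^4 - x^3 + x^2 - x + 1$, which divides $x^5 + 1$:

```python
import cmath

zeta10 = cmath.exp(2j * cmath.pi / 10)
zeta5 = cmath.exp(2j * cmath.pi / 5)

# zeta_10 equals -zeta_5**3, so zeta_10 already lies in Q(zeta_5):
in_field = abs(zeta10 - (-zeta5**3))

# zeta_10 is a root of x^4 - x^3 + x^2 - x + 1 (a factor of x^5 + 1),
# consistent with [Q(zeta_10):Q] = 4:
phi10 = zeta10**4 - zeta10**3 + zeta10**2 - zeta10 + 1

# and of course zeta_10**5 = -1, so it is indeed a root of x^5 + 1 as well:
root_check = abs(zeta10**5 + 1)
```

Both residuals come out at machine precision, so being a root of the degree-5 polynomial $x^5+1$ is no contradiction: that polynomial is reducible.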
Because you are looking only at the so-called global part, i.e. the part of the gauge transformation which resembles a group action. Recall that the vector bosons transform as$$A_\mu \to g A_\mu g^{-1} - (\partial_\mu g) g^{-1}$$where the first part is the global part of the gauge transformation, which tells you that $A_\mu$ transforms in the adjoint representation (for non-abelian gauge symmetry), while the second part is the intrinsic local part. The second part, which is strictly speaking the gauge invariance, clearly forbids the mass term, as you would get non-homogeneous terms like$$ A^\mu (\partial_\mu g) g^{-1} $$ Now we can ask why the term with the Higgs is allowed. First recall it comes from the covariant derivative$$(D_\mu H)^\dagger D^\mu H$$and when jointly transforming both fields you can show that the action is invariant, i.e. there are no surplus non-homogeneous terms. It is for this same reason that one has to consider $F_{\mu\nu}$ as the kinetic term for $A_\mu$ instead of something like $\partial_\mu A_\nu \partial^\mu A^\nu$, which is also included in $F^2$ but in a way that the non-homogeneous term cancels (antisymmetric nature of the indices $\mu, \nu$, to be more exact). Bottom line: It is not sufficient to write down group invariants if the symmetry is local. There is a non-homogeneous part of the transformation when acting on the vector bosons. This intrinsic gauge invariance further constrains how one can write down Lagrangians. NOTE: My notation is simplistic, I am assuming that $A_\mu \simeq A^a_\mu T^a$ where $T^a$ are the Lie algebra generators, up to some normalisation and conventions. References: The vast majority of books on the Standard Model and Quantum Field Theory will use similar arguments and notation.
PCTeX Talk Discussions on TeX, LaTeX, fonts, and typesetting murray Joined: 07 Feb 2006 Posts: 47 Location: Amherst, MA, USA Posted: Wed Mar 15, 2006 10:53 pm Post subject: Y&Y TeX vs. MiKTeX with MathTimePro2 fonts I used updated fonts just prior to the posting of 0.98. Initially I had different page breaks with a test document of my own in the two TeX systems, but that seems to have disappeared once I updated geometry.sty in Y&Y to the same version I use with MiKTeX. There are two other peculiarities, indirectly related to the fonts: 1. With Y&Y, when mtpro2.sty is loaded, I get messages that \vec is already defined, and then ditto for \grave, \acute, \check, \breve, \bar, \hat, \dot, \tilde, \ddot. Perhaps this is due to different versions of amsmath.sty? 2. In Y&Y, I must include \usepackage[T1]{fontenc}, for otherwise I get the message: OT1/ptm/m/n/10.95=ptm7t at 10.95pt not loadable: Metric (TFM) file not found I am clearly using different psnfss package files with Y&Y than with MiKTeX. I tried updating the Y&Y versions to be the same as those for MiKTeX, but then all hell breaks loose over encodings -- with Y&Y expecting to find TeXnAnsi encodings and not finding them. (It may be that in Y&Y I have to update tfm's for Times, too. But I am loathe to mess further with Y&Y with respect to a working font configuration.) WaS Joined: 07 Feb 2006 Posts: 27 Location: Erlangen, Germany Posted: Thu Mar 16, 2006 3:47 am Post subject: Please, send me <w.a.schmidt@gmx.net> your test document and the log files that would result with and without \usepackage[T1]{fontenc} TIA Walter WaS Joined: 07 Feb 2006 Posts: 27 Location: Erlangen, Germany Posted: Fri Mar 17, 2006 9:27 am Post subject: Preliminary answers: 1) Using T1 encoding with Times cannot work on Y&Y-TeX. Y&Y-TeX supports Times and other fonts from the non-TeX world only with LY1 encoding. 2) Updating psnfss on Y&Y-TeX is pointless.
The psnfsss collection supports the Base35 fonts with OT1 and T1/TS1 encoding, which does not work on Y&Y-TeX; see above. 3) Loading fontenc should not be necessary at all, but I do not yet understand why you get the error re. OT1/ptm. Does it help to issue \usepackage[LY1]{fontenc} before loading mtpro2? 4) The errors re. \vec etc. may be due to an obsolete amsmath.sty, as compared with MikTeX. Please, run a minimal test document that does not use amsmath to check this. More info on Sunday. murray Joined: 07 Feb 2006 Posts: 47 Location: Amherst, MA, USA Posted: Fri Mar 17, 2006 7:49 pm Post subject: Your answers identified the problems & solutions! WaS wrote: Preliminary anwers: 1) Using T1 encoding with Times cannot work on Y&Y-TeX.... 2) Updating psnfss on Y&Y-TeX is pointless.... 3) Loading fontenc should not be necessary at all, but I do not yet understand why you get the error re. OT1/ptm. Does it help to issue \usepackage[LY1]{fontenc} before loading mtpro2? 4) The errors re. \vec etc. may be due to an obsolete amsmath.sty, as compared with MikTeX. Please, run a minimal test document that does not use amsmath to check this. Re 3): Yes, \usepackage[LY1]{fontenc} in my test documen avoides he error about OT1. 5) Yes, the error about \vec, etc., was due to an obsolete amsmath.sty. Refreshing the amsmath files fixed this. Thank you! All times are GMT - 7 Hours You can post new topics in this forum You can reply to topics in this forum You cannot edit your posts in this forum You cannot delete your posts in this forum You cannot vote in polls in this forum Powered by phpBB © 2001, 2005 phpBB Group
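Putting the thread's fixes together, a working Y&Y-TeX preamble would look something like the following. This is my reconstruction from the discussion above, not a configuration tested by either poster:

```latex
% Y&Y-TeX with Times + MathTime Pro 2:
% load the LY1 (TeXnAnsi) font encoding *before* mtpro2, since Y&Y-TeX
% supports Times and other non-TeX-world fonts only with LY1, not T1;
% also make sure amsmath.sty is up to date to avoid "\vec already defined".
\documentclass{article}
\usepackage[LY1]{fontenc}
\usepackage{times}
\usepackage{mtpro2}
\begin{document}
Test: $\vec{x} + \hat{y} = \tilde{z}$.
\end{document}
```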
Let $R$ be a principal ideal domain but not a field, and let $M$ be an $R$-module. Show the following: (i) Let $p \in R$ be an irreducible element and $r \in R \setminus \{0\}$. Then $(R/ \langle r \rangle)[p] \cong R/ \langle p^n \rangle$, where $n=\max\{k \in \mathbb N_0 : p^k\mid r\}$. (ii) $M$ is simple iff $\exists \space p \in R$ irreducible such that $M \cong R/ \langle p \rangle$. I am pretty stuck on both items. In (i) I've tried to prove it by induction on $\mathbb N_0$, but I could only prove it for the base case $n=0$: If $n=0$, then $R/ \langle p^n \rangle=R/ \langle 1 \rangle=0$. Now, $$(R/ \langle r \rangle)[p]=\{\overline{a} \in R/ \langle r \rangle : p^m\overline{a}=0 \space \text{for some } m \in \mathbb N\}=\{a \in R : p^ma \in \langle r \rangle \space \text{for some } m \in \mathbb N \}$$ (identifying the submodule with its preimage in $R$). If I call this set $S$ (which is also a submodule), then I would like to conclude $S=\langle r \rangle$. The inclusion $\langle r \rangle \subset S$ is immediate. Now take $s \in S$; then $p^ms=rq$. Since $n=0$ we have $p \nmid r$, so using the fact that $R$ is a UFD one can deduce that $p^m \mid q$, say $q=p^mq'$; then $p^ms=rp^mq'$, from which it follows that $s=rq' \in \langle r \rangle$. I couldn't prove the induction step, so maybe induction is not the best way to attack this problem. As for (ii), I could show that $M \cong R/ \langle p \rangle \implies M$ is simple: since $p$ is irreducible, it is also prime; as we are in a PID, this implies $\langle p \rangle$ is maximal, so $R/ \langle p \rangle$ is simple, and it immediately follows that $M$ is simple. I would appreciate suggestions to prove (i) and the other implication in (ii). Thanks in advance.
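As a concrete sanity check of (i) — my own illustration, not part of the question — take $R=\mathbb Z$, $r=12$, $p=2$, so $n=2$ (since $2^2\mid 12$ but $2^3\nmid 12$). The claim then says the $2$-power-torsion of $\mathbb Z/12\mathbb Z$ is cyclic of order $4$:

```python
# (Z/12Z)[2] should be isomorphic to Z/4Z, since 2^2 exactly divides 12
r, p = 12, 2

# elements a of Z/rZ killed by some power of p
torsion = sorted(a for a in range(r)
                 if any((p**m * a) % r == 0 for m in range(1, r)))
print(torsion)        # [0, 3, 6, 9]: the subgroup generated by 3, cyclic of order 4
print(len(torsion))   # 4 = p**2
```

Note that $2$ itself is *not* in the torsion subgroup: no power of $2$ times $2$ is divisible by $12$, because the factor $3$ never appears.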
Let $M$ be a compact manifold of dimension $m$ and of class at least $C^2$ embedded in $\mathbb R^n$. For $x\in M$ let $N_x(\varepsilon)$ be the intersection $T_xM^\bot\cap B(x,\varepsilon)$, where $T_xM^\bot$ is the orthogonal complement of $T_xM$ in $T_x\mathbb R^n$ and $B(x,\varepsilon)$ is the open ball. Under these assumptions the tubular neighbourhood theorem (the version I know) states that there exists $\varepsilon>0$ such that $$N(\varepsilon)=\bigcup_{x\in M} N_x(\varepsilon)$$ is an open subset of $\mathbb R^n$ and $N_x(\varepsilon)\cap N_y(\varepsilon)=\emptyset$ for $x\neq y$. Since we can identify $(-\varepsilon,\varepsilon)$ with $\mathbb R$, and hence each $\varepsilon$-ball $N_x(\varepsilon)$ radially with the fibre $T_xM^\bot$, we can deduce that there exists a diffeomorphism between the normal bundle and an open neighbourhood of $M$. Let $BM(\delta) = \bigcup_{x\in M} B(x,\delta)$. Using compactness we can find $\delta>0$ such that the $\delta$-neighbourhood $BM(\delta)$ of $M$ fits in $N(\varepsilon)$. This gives us the standard formulation from the web: the $\delta$-neighbourhood of $M$ is diffeomorphic to some open neighbourhood of the zero section of the normal bundle. My question is: Is it true that $BM(\varepsilon) \subseteq N(\varepsilon)$ ($\supseteq$ is obvious)? I have never seen such a geometric formulation, and it is obviously false for some noncompact manifolds even if the tubular neighbourhood theorem is true for them [for example by extending them to compact manifolds] (consider an open interval in $\mathbb R^2$). But it seems so true...
I have a question. Given a vector equation such as F = ma, how can we obtain a general expression for m, the mass? If the equation were scalar, this could easily be done by dividing F by a; however, we are dealing with vectors, and, to my knowledge, a vector divided by another vector is not defined in vector algebra. Therefore, how can we obtain a general expression for m? You could take vector norms: $$m=\frac{|F|}{|a|}$$ If the vector $\vec F$ is parallel to the vector $\vec a$ then $m$ is simply the ratio between their norms: $$m=\frac{|\vec F|}{|\vec a|}$$ If they are not parallel, then no scalar $m$ can satisfy this equation. You can better understand this by thinking of the equation $\vec F=m\vec a$ as a system of 3 equations (if we're talking about 3-dimensional space), i.e. $$\begin{pmatrix}F_x\\F_y\\F_z\end{pmatrix}= m\begin{pmatrix}a_x\\a_y\\a_z\end{pmatrix}$$ that you want to solve for the variable $m$. Since there are 3 equations and only 1 unknown, a solution is not guaranteed to exist. First, as you and others stated, there is no vector division. The first thing you need to see is this: if there exists an acceleration in some direction, then there exists a force in the same direction. This follows from Newton's laws. If the force and acceleration were not in the same direction, then we couldn't find an $m$ that satisfies the equation, as yohBS stated. If two vectors are equal, then their magnitudes are equal, right? Let $ma = F_1 $; then we have $F = F_1 $, which implies $|F| = |F_1|$. Then $$|F| = |ma| = m|a|$$ where you simply divide the norm of $F$ by the norm of $a$. I think what you are missing is this: think about a box of mass M. The box sits on the ground. Then you apply some force F. This force is not parallel to the ground. Refer to the figure below: Here, you can't divide |F| by |a|. This is because the acceleration is caused by the net force acting on the body. The net force is the tangential component of $F$, since the other component is balanced by the normal force from the ground.
Then you need to find the tangential component in the direction of the acceleration, and from it you find m by $$|F_t| = |ma| = m|a|$$
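One way to make "dividing by a vector" precise, which I'm adding as a sketch beyond the answers above, is least squares: the scalar $m$ minimising $|\vec F - m\vec a|^2$ is $m = \frac{\vec F\cdot\vec a}{\vec a\cdot\vec a}$, and it reduces to $|\vec F|/|\vec a|$ exactly when the vectors are parallel:

```python
import math

def mass_from_vectors(F, a):
    """Least-squares m for F = m*a; equals |F|/|a| when F is parallel to a."""
    dot = sum(f * x for f, x in zip(F, a))
    return dot / sum(x * x for x in a)

F = (2.0, 4.0, 6.0)
a = (1.0, 2.0, 3.0)                  # parallel to F, so m is exact
m = mass_from_vectors(F, a)
print(m)                             # 2.0

norm = lambda v: math.sqrt(sum(x * x for x in v))
print(math.isclose(m, norm(F) / norm(a)))   # True for parallel vectors
```

For non-parallel vectors the same formula still returns a number, but the residual $\vec F - m\vec a$ is then nonzero, which is exactly the "no scalar satisfies the system" situation described above.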
Colin Beveridge tweeted about the following Formula for Pi the other day: \(\pi = 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} - \frac{1}{5} + \frac{1}{6} + \ldots\) With the exact formula being: After the first two terms, the signs are determined as follows: If the denominator is a prime of the form 4m - 1, the sign is positive; if the denominator is a prime of the form 4m + 1, the sign is negative; for composite numbers, the sign is equal to the product of the signs of its factors. This is due to Euler. (If you just want to see a proof of this, skip to near the end of this post) I opined that it wasn't that surprising, because for any real number \(x\) you can get a sequence of alternating signs for this sum which converge to it as follows: Inductively define \(\epsilon_n\) as follows: \(\epsilon_{n+1}\) is \(1\) if \(\sum\limits_{i \leq n} \frac{\epsilon_i}{i} < x\), else \(\epsilon_{n+1} = -1\). After the first time the partial sums \(s_n = \sum\limits_{i \leq n} \frac{\epsilon_i}{i} \) cross \(x\), you must have \(|s_n - x| \leq \frac{1}{n}\), so the sum converges to \(x\). There are many inequivalent sums that lead to \(x\) too. e.g. You can make the convergence arbitrarily slow if you like: If you have some sequence of non-negative numbers \(b_n \to 0\) then you can construct a sequence \(\epsilon_n\) as above where instead of changing the sign every time you cross \(x\), you change the sign to negative when you cross \(x + b_n\), then back to positive when you cross \(x - b_{n+1}\), etc. There's nothing special about \(\frac{1}{n}\) here either - all that matters is that it's a non-negative sequence tending to zero whose sum does not converge. But there is something special about \(\frac{1}{n}\) here that means we can probably expect a particularly rich set of "natural" examples from it due to a slightly stronger condition it satisfies: It's in \(l^2\).
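Before going further, the greedy construction just described is easy to run in code (a quick sketch I'm adding; the target value 0.5 and the cutoff of 2000 terms are arbitrary):

```python
def greedy_signs(x, n_terms):
    """Pick eps_n = +1 while the partial sum is below x, else -1."""
    s, signs = 0.0, []
    for n in range(1, n_terms + 1):
        eps = 1 if s < x else -1
        signs.append(eps)
        s += eps / n
    return s, signs

s, signs = greedy_signs(0.5, 2000)
print(abs(s - 0.5) < 0.01)   # True: after the first crossing the error stays small
```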
That is, \(\sum\frac{1}{n^2}\) converges (specifically to \(\frac{\pi^2}{6}\), but that doesn't matter for now). Why does this matter? Well it turns out to be interesting for the following reason: If \(a_n\) is a sequence in \(l^2\), and \(B_1, \ldots B_n, \ldots\) are independent Bernoulli random variables, then \(\sum (-1)^{B_n} a_n\) converges to a finite value with probability 1. That is, if we just randomly pick each sign then the result will converge to something. This follows from the following stronger theorem: Let \(A_i\) be a sequence of independent random variables with \(E(A_i) = 0\) and \(\sum E(A_i^2) < \infty\). Then \(\sum A_i\) converges to a finite value with probability one. First a thing to note: Because of Kolmogorov's zero-one law, such a sequence will converge with probability \(0\) or \(1\) - no in-between values are possible. I think this makes the result significantly less surprising (though we won't actually use this fact in the proof). The key idea of the proof is this: We'll use the bound on the tail behaviour that convergence gives us to look at the \(\lim\inf\) and the \(\lim\sup\) (the limit inferior and limit superior) of the sums of the random variables and show that these are equal almost surely. This implies that the limit exists and is equal to their common value (which might be infinite). And the way we'll do that is we'll consider the possibility that they are well separated: Let \(D_{a, b} = \{ \lim\inf S_n < a < b < \lim\sup S_n\}\), where \(S_n = \sum\limits_{i \leq n} A_i\). Then \(\{\lim\inf S_n < \lim\sup S_n\} = \bigcup\limits_{a, b \in \mathbb{Q}} D_{a, b}\), which is a countable union, so if each \(D_{a, b}\) has probability zero then so does the whole set, and the limit exists. So let's now establish an upper bound on the probability of lying in \(D_{a, b}\). Pick any \(N\).
If \(D_{a, b}\) occurs then the sums lie below \(a\) and above \(b\) (not at the same time) infinitely often after \(N\) because of the definitions of limit inferior and superior. So we can find some \(N < m < n\) with \(S_m < a\) and \(S_n > b\). But then \(\sum\limits_{i=m}^n A_i > b - a\), and hence \(\sum\limits_{i=m}^n |A_i| > b - a\). By the Cauchy-Schwarz inequality, for any \(t_1, \ldots t_n \geq 0\) we have \(\sum t_i = (1, \ldots, 1) \cdot (t_1, \ldots, t_n) \leq ||(1, \ldots, 1)|| \, ||(t_1, \ldots, t_n)|| = \sqrt{n} (\sum t_i^2)^{\frac{1}{2}}\), so \(\sum t_i^2 \geq \frac{1}{n} (\sum t_i)^2\). Edit: This step is wrong. The result is still true because of deeper theory than I go into here (it comes from Martingale theory) and this step may be recoverable. I'll try to fix it later. Thanks to @sl2c for pointing this out. In particular applied to the above this means that \(\sum\limits_{i=m}^n |A_i|^2 \geq (n - m) (\sum |A_i|)^2 \geq (n - m)(b - a)^2 \geq (b - a)^2 \). But \(\sum\limits_{i=m}^n |A_i|^2 \leq \sum\limits_{i=N}^\infty |A_i|^2\). So we must have \(\sum\limits_{i=N}^\infty |A_i|^2 \geq (b - a)^2\). But for any non-negative random variable, \(P(X \geq x) \leq \frac{E(X)}{x}\). This means we must have \(P(D_{a, b}) \leq P\left(\sum\limits_{i=N}^\infty |A_i|^2 \geq (b - a)^2\right) \leq \frac{E(\sum\limits_{i=N}^\infty |A_i|^2)}{(b - a)^2} = \frac{\sum\limits_{i=N}^\infty E(|A_i|^2)}{(b - a)^2} \). But \(N\) was arbitrary, and we know \(\sum\limits_{i=N}^\infty E(|A_i|^2) < \infty\), so as \(N \to \infty\) the right hand side tends to zero. Therefore \(P(D_{a, b}) = 0\) as desired and the result is almost proved. Note that we've not used the independence of the \(A_i\) so far! It's only needed to prove that the sums converge to a finite value. To see that we need some sort of condition for that, consider the following: Let \(X\) be a random variable that takes the values \(1, -1\) with equal probability. Let \(A_i = \frac{X}{i}\).
Then \(E(A_i) = 0\) and \(\sum E(A_i^2) < \infty\), but \(\sum A_i\) takes the values \(\infty, -\infty\) with equal probability (and no other values). But with independence this can't happen for the following reason: For any random variable we have \(E(|X|) \leq E(X^2)^{\frac{1}{2}}\) (because \(0 \leq \mathrm{Var}(|X|) = E(X^2) - E(|X|)^2\), though it also follows from other less elementary results). So we have \(E(|S_n|) \leq E(S_n^2)^{\frac{1}{2}}\). But \(E(S_n^2) = \sum\limits_{i, j \leq n} E(A_i A_j) = \sum\limits_{i \leq n} E(A_i^2)\), because \(A_i, A_j\) are independent for \(i \neq j\), so \(E(A_i A_j) = E(A_i) E(A_j) = 0\). This means that \(E(|\sum A_i|) = \lim E(|S_n|) \leq \sqrt{\sum E(A_i^2)} < \infty\). Thus, necessarily, \(P(|\sum A_i| = \infty) = 0\). And now the result is proved. The proper setting for the above result is really the theory of Martingales, and the proof I gave above is mostly a modified proof of Doob's Second Martingale convergence theorem, where I've used a stronger hypothesis to bound the difference in the inferior and the superior limits. In fact, for the special case of our coin tossing a stronger statement is true: Suppose \(a_n\) is a sequence of positive numbers such that \(\sum a_n = \infty\) and \(\sum a_n^2 < \infty\). Let \(A_n\) be a sequence of independent random variables with \(A_n = \pm a_n\), taking each value with equal probability. Then for any interval \(a < b\), \(P(a < \sum A_n < b) > 0\). To see this, write \((a, b) = (x - \epsilon, x + \epsilon)\) by letting \(x = \frac{a + b}{2}\) and \(\epsilon = \frac{b - a}{2}\). Run the process above for constructing a sequence converging to \(x\) until we have \(N\) such that our choices so far have led us to \(|\sum_{i \leq N} A_i - x| < \frac{\epsilon}{2}\) and \(\sum_{i > N} a_i^2 < \frac{\epsilon^2}{8}\).
The initial sequence of choices that led us here happens with probability \(2^{-N}\), and the condition on the sum of the tail guarantees via Chebyshev's inequality that \(P(|\sum_{i > N} A_i| \leq \frac{\epsilon}{2}) \geq \frac{1}{2}\), so we have \(P(|\sum A_i - x| < \epsilon) \geq P\left(|\sum\limits_{i \leq N} A_i - x| < \frac{\epsilon}{2}\right) P\left(|\sum\limits_{i > N} A_i| \leq \frac{\epsilon}{2}\right) \geq 2^{-N-1} > 0\). So any value is not only possible but at least somewhat probable. But this is a true continuous distribution, in the sense that \(P(\sum A_i = u) = 0\) for any value \(u\). To see this, consider the values taken only on a subset of the indices. Let \(A(T) = \sum\limits_{i \in T} A_i\). Then \(A(T)\) and \(A(\mathbb{N} \setminus T)\) are independent, so if \(A\) has atoms then both of \(A(T)\) and \(A(\mathbb{N} \setminus T)\) must have atoms too. But if we pick \(T\) to be some subsequence \(t_n\) where \(a_{t_n} < 2^{-n}\) then distinct assignments of signs produce distinct results, so the probability of achieving any given value is zero. (Thanks to Robert Israel for this proof). But still, \(\pi\) is a bit of an outlier: The variance of this distribution is of course \(\sum \frac{1}{n^2} = \frac{\pi^2}{6}\), so \(\pi\) is \(\sqrt{6} \approx 2.45\) standard deviations out. I don't have an exact estimate of how probable that is for this distribution, but again by Chebyshev's inequality we know that we can't get a result at least this extreme more than a sixth of the time. So why do we get this particular value here? Well it turns out to be an unsurprising result for another reason, which is that it comes from a general technique for producing interesting infinite sums by starting from interesting products. Suppose we have \(\sum f(n) \frac{1}{n}\) where \(f(n)\) is some (completely) multiplicative function, i.e. \(f(mn) = f(m) f(n)\). Then we can write this as \(\prod (1 - f(p_i) \frac{1}{p_i})^{-1}\), with \(p_i\) being the i'th prime.
There are issues of convergence we should worry about here, but in the finest Eulerian style we won't. Both sides converging is probably a sufficient condition. A lot of this can probably be fixed by introducing a \(t\) parameter, assuming \(|t| < 1\) and taking the limit \(t \to 1\) of \(\sum t^n f(n) \frac{1}{n}\) and \(\prod (1 - t f(p_i) \frac{1}{p_i})^{-1}\), then because mumble mumble analytic continuation the two limits are equal. I haven't checked the details, so I won't do this here and will just follow Euler's convention of assuming infinity isn't a problem. To prove this, we'll consider the following sequence of infinite sums, \(P_n = \sum\limits_{k \in R_n} f(k) \frac{1}{k} \), where \(R_n\) is the set of numbers not divisible by any of the first \(n\) primes (so \(R_0 = \mathbb{N}\), and \(P_0\) is our original sum). If we split \(R_n\) up into the set of numbers which are a multiple of \(p_{n+1}\) and those that are not, we get the recurrence relationship \(P_n = \frac{f(p_{n+1})}{p_{n+1}} P_n + P_{n+1}\), by just taking out a single factor of \(\frac{f(p_{n+1})}{p_{n+1}} \) from the components of the sum over the values that are divisible by \(p_{n+1}\). So \((1 - \frac{f(p_{n+1})}{p_{n+1}} ) P_n = P_{n+1}\). Iterating this, we get \(P_0 \prod\limits_{n=0}^\infty \left(1 - \frac{f(p_{n+1})}{p_{n+1}} \right) = \lim\limits_{n \to \infty} P_n = 1\), i.e. \(\sum f(n) \frac{1}{n} = P_0 = \prod\limits_{n=0}^\infty \left(1 - \frac{f(p_{n+1})}{p_{n+1}}\right)^{-1} \) as desired. We can also go from products to sums: If we have \(\prod (1 - b_i )^{-1} \) then we can write it as \(\sum S_n \), where \(S_n\) is the sum of all products of sequences of \(n\) of the \(b_i\) (allowing repetitions). If we then have \(b_i = f(p_i) \frac{1}{p_i}\), this becomes \(\prod (1 - f(p_i) \frac{1}{p_i})^{-1}\), the products all become unique, and we have \(\sum f(n) \frac{1}{n}\) again, where we extend \(f\) to all numbers by multiplying its values on their prime factors.
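The sum-product identity is easy to check numerically for the simplest multiplicative choice, $f(n)=\frac1n$, i.e. for $\sum 1/n^2 = \prod(1-1/p^2)^{-1}$. A quick sketch (my addition; the truncation points are arbitrary):

```python
import math

def primes_below(limit):
    # simple trial-division sieve; fine at this scale
    ps = []
    for n in range(2, limit):
        if all(n % p for p in ps):
            ps.append(n)
    return ps

s = sum(1.0 / n**2 for n in range(1, 200000))   # truncated Euler sum
prod = 1.0
for p in primes_below(10000):
    prod /= (1 - 1.0 / p**2)                    # truncated Euler product

print(abs(s - math.pi**2 / 6) < 1e-3)           # True
print(abs(prod - math.pi**2 / 6) < 1e-3)        # True
```

Both truncations converge to $\pi^2/6$; the product converges much faster per factor, since the tail over primes $>10^4$ contributes only about $\sum_{p>10^4} 1/p^2$.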
Using these tools, proving the sequence result for \(\pi\) becomes a matter of (somewhat tedious) computation. We start with \(\frac{\pi}{4} = \arctan(1) = 1 - \frac{1}{3} + \frac{1}{5} - \ldots\) using the standard power series for arctan. This can be written in the above form with \(f(n) = 0\) if \(n\) is even, \(f(n) = -1\) if \(n \equiv 3 \pmod 4\) and \(f(n) = 1\) if \(n \equiv 1 \pmod 4\). So we have: \(\frac{\pi}{4} = \prod (1 - f(p_i) \frac{1}{p_i})^{-1}\). We can do the same with the standard sum \(\frac{\pi^2}{6} = \sum \frac{1}{n^2} = \prod (1 - \frac{1}{p_i^2})^{-1}\) (because \(n \to \frac{1}{n}\) is certainly multiplicative). Now, divide the second product by the first and we get: \(\frac{2}{3} \pi = \frac{1}{1 - \frac{1}{2^2}} \prod\limits_{p_i > 2} \left( \frac{1 - \frac{1}{p_i^2}}{1 - f(p_i) \frac{1}{p_i}} \right)^{-1}\) The term outside the product comes from the fact that there's no \(2\) term in our product for \(\frac{\pi}{4}\) and is equal to \(\frac{4}{3}\), so rearranging we get \(\frac{\pi}{2} = \prod\limits_{p_i > 2} \ldots\). The term inside the product actually splits up fairly nicely: Because we can factor \(1 - \frac{1}{p_i^2} = (1 - \frac{1}{p_i})(1 + \frac{1}{p_i})\), and the bottom is one of these two terms, the quotient is whichever of the two the bottom is not, i.e. \(1 + f(p_i) \frac{1}{p_i}\) in either case. So from this we conclude that \(\frac{\pi}{2} = \prod\limits_{p_i > 2} (1 + f(p_i) \frac{1}{p_i})^{-1}\), or \(\pi = \frac{1}{1 - \frac{1}{2}} \prod\limits_{p_i > 2} (1 + f(p_i) \frac{1}{p_i})^{-1}\). If we define \(g\) as \(g(2) = 1\), \(g(p) = -f(p)\) for \(p\) an odd prime, and extend to composite numbers multiplicatively, this then becomes \(\pi = \prod (1 - g(p_i) \frac{1}{p_i})^{-1} = \sum g(n) \frac{1}{n}\), which was our desired sum. This proof more or less follows Euler's, via the translation published by Springer (which I borrowed off a friend and would honestly recommend you do the same rather than paying more than £100 for it).
The details are a bit different, mostly because it was the only way I could follow them - in particular he tends to eschew the detailed algebra and just manipulate examples - and unlike him I feel guilty about the lack of precise analysis of convergence, but it's certainly derived from his. The product representation also allows us to strengthen our original result: Not only can any number be the limit of the sum by choosing the signs appropriately, we can insist that the signs are multiplicative in the sense that the sign assigned to \(mn\) is the product of the signs assigned to \(m\) and \(n\). The reason for this is that \(\sum \frac{1}{p_i}\) diverges. As a result, \(\prod\limits_{p_i > n} (1 + \frac{1}{p_i}) = \infty\) and \(\prod\limits_{p_i > n} (1 - \frac{1}{p_i}) = 0\). This means we can just repeat our original construction with the product instead of the sum: Assign positive signs as long as the partial product is less than the desired result, assign negative ones as long as it's greater. The result is a product converging to the desired value, which we can then turn into a sum converging to the desired value with multiplicative signs. So that more or less completes the journey down this particular rabbit hole: You can get any number by manipulating the signs, you should expect most choices of numbers to be at least reasonably plausible, and we have some useful machinery for picking good choices of signs for leading to specific numbers. Whether or not it's still surprising is up to you. (This post turned out to be a ridiculous amount of work to write. If you want more like it, or just want to say thanks, I do have a Patreon for this blog. Any donations you want to send my way would be warmly appreciated)
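As a postscript (my addition, not part of the original post): the multiplicative sign rule quoted at the top is easy to check in code, and it reproduces the quoted series term by term:

```python
def sign(n):
    # multiplicative sign: +1 at 2, +1 at primes p = 4m-1, -1 at primes p = 4m+1
    s, d = 1, 2
    while d * d <= n:
        while n % d == 0:
            s *= 1 if (d == 2 or d % 4 == 3) else -1
            n //= d
        d += 1
    if n > 1:  # leftover prime factor
        s *= 1 if (n == 2 or n % 4 == 3) else -1
    return s

print([sign(n) for n in range(1, 11)])
# [1, 1, 1, 1, -1, 1, 1, 1, 1, -1], matching 1 + 1/2 + 1/3 + 1/4 - 1/5 + ...

partial = sum(sign(n) / n for n in range(1, 10**4))
print(partial)   # drifts towards pi, though the convergence is very slow
```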
Both the Royal Society of Chemistry journals and the American Chemical Society journals use ScholarOne Manuscripts™ as their web submission platform. In both cases, the system is able to compile LaTeX documents into a PDF file. There are, however, strict requirements about the type of LaTeX document you can use. These are spelt out in a specific help section entitled “Preparing and Submitting Manuscripts Using TeX/LaTeX”. ACS Authors have two distinct options for submitting work authored in TeX: FAST Submission: Submit your own PDF file—and provide the native TeX and figures in a .zip file—and your own PDF file will be used for the review process. STANDARD Submission: Submit a complete and properly styled TeX file, figures, and references using the achemso style package. The TeX files will be converted to PDF by ACS Paragon Plus and used for the review process. The help further states: When you upload your TeX/LaTeX Manuscript File, the system will analyze the file, and identify additional resource files referenced within the file (such as image files and bibliographic files) that are necessary to complete the document. and Manuscript files prepared in TeX/LaTeX (Version 2.02 and earlier) will be used in journal production provided you adhere to the following guidelines: Use only LaTeX2e. Use of plain TeX and RevTeX is discouraged. Use generic style files whenever possible. Minimal formatting is all that is required in the document. Include all sections of the article in a single file. Include the list of references within the LaTeX file. Captions must be created in the TeX/LaTeX document. References should be cited in text using \cite{}, and the list of references should be itemized using \bibitem{}. Use \frac to build fractions. Do not use \over or \stackrel to build fractions in displayed equations. Use \sum_{}^{} for summations and \prod_{}^{} for products. Use the array environment only to build true matrices, not for aligning multiline equations. 
Use characters/symbols in the generic LaTeX character set only. Symbols from other sets may not translate correctly. Avoid extensive use of \newcommand and \def. Some style files (text and bibliographic) that are available in the public domain may be used for most ACS journals, e.g., jacs.sty and jacs.bst; jpc.sty and jpc.bst; achemso. The use of the achemso style package is strongly encouraged. Please note that the ACS does not provide support for using these files. A further requirement, which is not actually listed, is that all figures should be in the same directory (you cannot upload a directory structure). And, as noted, if you use BibTeX, you have to run it manually and include the content of the .bbl file in your .tex file before upload. So, in conclusion: ScholarOne Manuscripts™ has the capability to support LaTeX compilation. Whether this is enabled or not for your specific journal/publisher, I cannot know. In any case, check the documentation! (or ask the editorial office)
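For the STANDARD route, a minimal achemso skeleton along the lines of these guidelines might look like the following. This is a sketch based on the achemso package documentation; the journal code and all metadata are placeholders:

```latex
\documentclass[journal=jacsat,manuscript=article]{achemso}
% ScholarOne cannot run BibTeX for you: run it locally and paste the
% resulting .bbl content in place of the thebibliography block below.
\author{A. N. Author}
\affiliation{Some University}
\title{A Minimal Submission Skeleton}
\begin{document}
\begin{abstract}
One or two sentences.
\end{abstract}
Cite with \cite{key1}; build fractions with \(\frac{a}{b}\),
not \texttt{\textbackslash over}.
\begin{thebibliography}{1}
\bibitem{key1} A.~Researcher. \emph{Some Journal} \textbf{2005}, 1--2.
\end{thebibliography}
\end{document}
```

The single-file, inline-bibliography layout is exactly what lets the ScholarOne converter resolve everything from one upload plus a flat directory of figures.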
You could say IS-LM is too simplistic. The fundamental discrete time consumption-based asset pricing equation says that if $ r_t $ is the risk-free interest rate at time $ t $, we have $$ \beta (1+r_t) \frac{u'(C_{t+1})}{u'(C_t)} = 1 $$ (I'm dropping the expectation since I will be working with a perfect foresight model later on) where $ u $ is the utility function of any consumer, $ 0 < \beta < 1 $ is a discount factor which measures "impatience" of the same consumer, and $ C_t, C_{t+1} $ are the consumption of the same consumer at times $ t $ and $ t+1 $ respectively. The most elegant form of the relation is with the assumption of power utility $ u(c) = c^{1 - \gamma}/(1 - \gamma) $ with $ \gamma > 0 $, in which case we get (after log-linearization) $$ r_t = R + \gamma \Delta c_t $$ where $ \Delta c_t $ is log consumption growth between $ t $ and $ t+1 $ and $ R = -\log(\beta) $ is the real interest rate consistent with flat consumption. The intuition is simple: people want to smooth consumption over time. If they expect their consumption in the future to be much higher than their consumption today, they will try to borrow against the better future and push real interest rates up, and vice versa. Consider a baseline economy with no investment, perfect foresight, and a representative consumer first. (This is unrealistic, but IS-LM is even more unrealistic, so...) In this case, the resource constraint of the economy reads $ Y_t = C_t + G_t $ in every period, where $ G_t $ is government spending. If you assume consumers have utility that is additively separable over time (the asset pricing relation assumes this as well), then the no-investment assumption turns their decision problem into a set of one-period decision problems. (This is a technical condition to ensure what I am doing is well-defined.)
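To see how good the log-linearisation is, here is a quick check I'm adding (parameter values are arbitrary). With power utility the exact Euler equation gives $1+r_t = \frac{1}{\beta}(C_{t+1}/C_t)^{\gamma}$, so $\log(1+r_t) = R + \gamma\Delta c_t$ holds exactly, and $r_t \approx R + \gamma\Delta c_t$ for small rates:

```python
import math

beta, gamma = 0.96, 2.0
C_t, C_next = 1.00, 1.02                      # 2% consumption growth

# exact rate from beta*(1+r)*(C'/C)^-gamma = 1
r_exact = (C_next / C_t)**gamma / beta - 1
R = -math.log(beta)
dc = math.log(C_next) - math.log(C_t)
r_approx = R + gamma * dc                      # log-linearised rate

print(r_exact, r_approx)                       # about 0.084 vs 0.080
print(abs(math.log(1 + r_exact) - (R + gamma * dc)) < 1e-12)   # exact in logs: True
```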
Define $ D_t $ as the level of output consistent at time $ t $ with $ G_t = 0 $, and let $ m_t $ be the "fiscal multiplier", defined by $$ m_t = \frac{dY_t}{dG_t} $$ Most models won't give constant $ m $, but if necessary we may linearize the model around somewhere other than $ G_t = 0 $ and make this assumption locally. These assumptions imply $ Y_t = D_t + m_t G_t $ at any time $ t $, so we may express consumption as $ C_t = D_t + (m_t -1) G_t $. Plugging into the relation gives $$ r_t \approx R + \gamma \Delta d_t + \gamma (m_t -1) \Delta \left( \frac{g_t}{1 - m_t g_t} \right) $$ where $ g_t = G_t/Y_t $ is the ratio of government spending to GDP and $ \Delta d_t = \Delta \log(D_t) $. (Ignore the pole of the denominator - we're working around $ g_t = 0 $.) The fraction $ \frac{g_t}{1 - m_t g_t} $ is increasing in $ g_t $ for $ m_t \geq 0 $, so we see that the real interest rate at time $ t $ depends on two things: the "natural" growth rate of the economy given by $ \Delta d_t $, and the consumption growth induced by changes in government spending relative to GDP, $ g_t $. However, the sign of the effect of an anticipated increase in government spending depends on the value of the fiscal multiplier $ m $. If $ m < 1 $, then government spending crowds out consumption, and rising government spending is associated with low real interest rates, i.e. expectations of high spending in the future would push rates down today, and vice versa if $ m > 1 $. Most everyone would agree that the fiscal multiplier in the US economy today is less than $ 1 $, so news of higher spending in the future should be associated with lower real interest rates now, if with anything at all. It also matters, of course, if government spending affects $ \Delta d_t $ - if we're speaking of an infrastructure investment program, for instance, then we should expect that the change in government spending also leads to a change in the path of $ D_t $.
The two effects for such a program act in opposite directions, so the net effect of a government investment program tomorrow on real interest rates today could go either way, even in this simple economy. I won't work out the model with investment (allowing consumers to invest) here, but the effect of investment is to allow consumers to trade consumption now for consumption in the future in aggregate; so it dampens the impact of a change in government spending on real interest rates (since now consumers have a tool for smoothing consumption in the face of government expenditure shocks). It's also true that the announcement in the State of the Union address probably carried less information than an announcement in this model, since market participants were already aware of such an infrastructure investment plan for months (maybe a year?), but with a fiscal multiplier less than $ 1 $ it's not easy to say anything about the direction of the real interest rate response.
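A tiny numerical version of the crowding-out mechanism above (my own sketch; all parameter values are made up): with $m<1$, higher expected future spending lowers future consumption and hence today's real rate.

```python
beta, gamma, m = 0.96, 2.0, 0.5      # fiscal multiplier m < 1
D = 1.0                               # output with G = 0, held flat

def consumption(g):
    # Y = D/(1 - m*g) follows from Y = D + m*G with G = g*Y; then C = Y - G
    Y = D / (1 - m * g)
    return Y * (1 - g)

def real_rate(g_now, g_next):
    # exact Euler equation: beta*(1+r)*(C'/C)^-gamma = 1
    return (consumption(g_next) / consumption(g_now))**gamma / beta - 1

r_flat = real_rate(0.20, 0.20)        # spending share constant
r_rising = real_rate(0.20, 0.25)      # spending expected to rise
print(r_rising < r_flat)              # True: with m < 1, news of higher G lowers r
```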
Or, “Oh, Wikipedia, How I Love Thee. Let me count the ways: one, two, phi…” It has been noted disquisitively [link] that the number 1001 of Duchamp’s entry at the 1912 Indépendants catalogue also happens to represent an integer based number of the Golden ratio base, related to the golden section, something of much interest to the Duchamps and others of the Puteaux Group. Representing integers as golden ratio base numbers, one obtains the final result 1000.1001. This, of course, was by chance—and it is not known whether Duchamp was familiar enough with the mathematics of the golden ratio to have made such a connection—as it was by chance too the relation to Arabic Manuscript of The Thousand and One Nights dating back to the 1300s. φ Euhhhhh, non. As best I can tell, all this is saying is that the catalogue number of Duchamp’s painting contains only 0s and 1s. The idea behind the “golden ratio base” is that we can write integer numbers in terms of the golden ratio, $\phi$, if we add up different powers of $\phi$. For example, anything to the zeroth power is 1, so $1 = \phi^0$. Less obviously, we can say from the definition of $\phi$ that $\frac{1}{\phi} = \phi - 1.$ Squaring both sides of this equation, $\frac{1}{\phi^2} = (\phi - 1)^2 = \phi^2 - 2\phi + 1.$ So, $\frac{1}{\phi^2} + \phi = \phi^2 - \phi + 1 = \phi(\phi - 1) + 1.$ Referring back to our first equation, $\frac{1}{\phi^2} + \phi = \phi\left(\frac{1}{\phi}\right) + 1,$ which means that $\frac{1}{\phi^2} + \phi = 2.$ Another way of writing this would be to say $2 = \phi^{-2} + \phi^1.$ With more cleverness, we can write any positive integer as a sum of powers of $\phi$: $N = \phi^{k_1} + \phi^{k_2} + \cdots + \phi^{k_n},$ where the numbers $k_1$ through $k_n$ are distinct integers. Notice that we don’t have any coefficients in front of the terms—or, to say it more carefully, the coefficient of any term in the sum is either zero or one.
So, “1001” could be a representation of a number in the golden-ratio base, if we read it as $1001_\phi = 1\cdot\phi^3 + 0\cdot\phi^2 + 0\cdot\phi^1 + 1\cdot\phi^0.$ In the same way, “1000.1001” can stand for a number in base $\phi$. It’s the number we normally write as 5. It is not the “final result” of “representing integers as golden ratio base numbers.” I tried making sense of the disquisition to which Wikipedia credits this observation. The stuff about writing numbers in the golden-ratio base isn’t even there. What we do get is that the number 1001 is le nombre figuré pentagonal en relation avec le mythique « nombre d’or » que l’on retrouve dans toute forme pentagonale et dans l’étoile à cinq branches. [the pentagonal number in relation with the mythic “golden number” which one finds in all pentagonal forms and in the five-pointed star.] It’s true: 1001 is a pentagonal number (so are 1, 2, 5, 7, 12, 15, 22, 26, 35, …). The sense of the argument appears to be, “1001 is a pentagonal number [true], and because pentagon therefore GOLDEN RATIO!” The golden ratio occurs in a regular pentagon, as the ratio of the diagonal length to the side length. That doesn’t make the free word association of “pentagon” and “mystical golden number” a valid argument. But hey, when you feel the need for uninhibited babble slicked over with a superficial veneer of pseudoscholarship, there’s no better place to find it than an encyclopaedia article, right? “Painters who definitely did make use of GR include Paul Serusier, Juan Gris, and Giro Severini, all in the early 19th century, and Salvador Dali in the 20th, but all four seem to have been experimenting with GR for its own sake rather than for some intrinsic aesthetic reason. Also, the Cubists did organize an exhibition called “Section d’Or” in Paris in 1912, but the name was just that; none of the art shown involved the Golden Ratio.” —Keith Devlin EDIT TO ADD (12 August 2014): I might as well include the proof that 1000.1001 φ is 5. 
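Both factual claims here are quick to verify (my addition, not part of the post): 1001 is indeed a pentagonal number, and "1001" read in base $\phi$ is $\phi^3+\phi^0 \approx 5.236$, which is not an integer at all:

```python
# pentagonal numbers are n(3n-1)/2; 1001 is the n = 26 case
pentagonal = [n * (3 * n - 1) // 2 for n in range(1, 40)]
print(1001 in pentagonal)        # True

# "1001" in base phi is phi^3 + phi^0
phi = (1 + 5**0.5) / 2
print(phi**3 + 1)                # about 5.236, not an integer
```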
We figured out above that $2 = \phi^{-2} + \phi^1.$ So, we square this: $4 = (\phi^{-2} + \phi^1)^2.$ Using the binomial theorem, $4 = \phi^{-4} + 2 \phi^{-1} + \phi^2.$ We want all the coefficients to be 0 or 1, so we split up the middle term: $4 = \phi^{-4} + \phi^{-1} + \phi^{-1} + \phi^2.$ Next, we use the basic fact we know about the golden ratio, $\phi^{-1} = \phi - 1$: $4 = \phi^{-4} + \phi^{-1} + \phi - 1 + \phi^2.$ To find an expression for the integer five, we add one to our expression for the integer four: $5 = \phi^{-4} + \phi^{-1} + \phi + \phi^2.$ We can shorten this by the following move: $5 = \phi^{-4} + \phi^{-1} + \phi^2(\phi^{-1} + 1).$ Again using our basic fact about the golden ratio, we recognize the expression in parentheses as $\phi^{-1} + 1 = \phi$: $5 = \phi^{-4} + \phi^{-1} + \phi^2 \cdot \phi.$ Therefore, $5 = \phi^{-4} + \phi^{-1} + \phi^3.$ Q.E.D.

UPDATE (14 November 2015): The article was “fixed” in July, with the comment, “2 equals 10.01 in the golden ratio base, previous sentence made no sense” (true). Unfortunately, this just means that the article attributes to its source a statement which is not actually there. The claim is now,

It has been noted disquisitively[4] that the number 1001 of Duchamp’s entry at the 1912 Indépendants catalogue also happens to represent in the form 10.01 the integer 2 in the Golden ratio base, related to the golden section, something of much interest to the Duchamps and others of the Puteaux Group.

This looks more respectable: the mathematics is not complete bafflegab, and “10.01” is now related to the title of the piece. But the new and improved claim isn’t anywhere in reference [4]! It’s not quite classic citogenesis, but it’s similar.
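As a quick sanity check on the proof above (my addition, not part of the original argument), a few lines of floating-point arithmetic confirm both the final identity $5 = \phi^3 + \phi^{-1} + \phi^{-4}$ and the starting identity $2 = \phi^{-2} + \phi$:

```python
import math

phi = (1 + math.sqrt(5)) / 2        # the golden ratio

five = phi**3 + phi**-1 + phi**-4   # the digits 1000.1001 read in base phi
two = phi**-2 + phi                 # the identity the proof starts from
```

Both come out to the expected integers up to floating-point rounding.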
You didn't show your equations, so it's hard to say why you got that unphysical answer. Perhaps you tried to add speeds linearly instead of using the proper relativistic formula for composition of velocities. For collinear motion, the formula (which can easily be derived from the Lorentz transformations) is $$w = \frac{u+v}{1+\frac{uv}{c^2}}$$

Let's say we have a body $A$ moving in the $x$ direction with constant speed $u$ according to an inertial observer $O$. A body $B$ moving in the $x$ direction that has speed $v$ in $A$'s reference frame will not have a speed of $u+v$ in $O$'s frame. Instead, its speed will be $w$, computed by the above formula.

To calculate the correct speed for your total annihilation problem, we just need to use the laws of conservation of energy and of momentum, along with the relevant formulae from Special Relativity for energy and momentum. Of course, this is a highly idealised scenario, since the photons produced in the annihilation will be emitted in random directions and it's not possible to force them to go in one direction: you'd need a reaction chamber lined with some amazing material that can reflect gamma photons perfectly, without producing waste heat (which gets emitted in random directions). But anyway...

Here are the equations we need for our calculations. Firstly, the energy–momentum relation of Special Relativity: $$E^2 = (pc)^2 + (mc^2)^2 \tag{1}$$ where $E$ is the total energy, $p$ is the momentum, and $m$ is the (rest) mass of the object; $c$, of course, is the speed of light. We also need the relativistic equation for momentum: $$p = mv\gamma \tag{2}$$ where $\gamma$ is the Lorentz factor: $$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}} \tag{3}$$ Note that $$\gamma^2 = \frac{c^2}{c^2 - v^2} \tag{3a}$$ For a massless body (e.g. a photon), equation (1) simplifies to $$E = |pc| \tag{1a}$$ And by combining (1) and (2) we get $$E = mc^2\gamma \tag{1b}$$ for a body with non-zero mass.
For an object at rest, that simplifies to the famous$$E = mc^2 \tag{1c}$$ Let the initial mass of the body be $m$, and its final mass (after some of it's annihilated) be $km$, where $0 < k \le 1$. Let $E_i$ be the initial energy of the body, $E_f$ its final energy, and $E_l$ the energy of the emitted light. By conservation of energy, $$E_i = E_l + E_f \tag{4}$$ The initial momentum of the body (in the rest frame) is zero, let $v$ be its final speed and $p$ be its final momentum. By conservation of momentum, the momentum of the emitted light must be $-p$ and hence its energy is $pc$. The final mass of the object is $km$, so from (2) $p = kmv\gamma$ Thus $$\begin{align}\\E_i & = mc^2\\E_f & = kmc^2\gamma\\E_l & = kmvc\gamma\\\end{align}$$ Note that $E_l/E_f = v/c$ Putting it all together:$$\begin{align}\\E_i & = E_f + E_l\\ mc^2 & = kmc^2\gamma + kmvc\gamma\\mc^2 & = kmc(c+v)\gamma\\c & = k(c+v)\gamma\\c^2 & = k^2(c+v)^2\frac{c^2}{c^2 - v^2}\\k^2 & = \frac{c^2 - v^2}{(c+v)^2}\\k^2 & = \frac{c-v}{c+v}\\k^2c + k^2v & = c-v\\v + k^2v & = c - k^2c\\v(1 + k^2) & = c(1 - k^2)\\v & = \frac{1 - k^2}{1 + k^2}\,c \tag{5}\\\end{align}$$ Thus when $k=1/2$, $v=3c/5$ With a little more algebra, it can be shown that$$\begin{align}\\\gamma & = \frac{1+k^2}{2k}\\E_f/E_i & = \frac{1+k^2}{2}\\E_l/E_i & = \frac{1-k^2}{2}\\\end{align}$$ If the body is decaying exponentially, then it will experience constant acceleration, in the sense that if you were traveling on a spaceship powered by this process you'd experience an acceleration that feels just like a constant gravitational force. To show this, we need some calculus and the formula for composition of velocities given at the start of this answer. First, we'll rearrange equation (5) slightly. $$\begin{align}\\v & = \frac{1 - k^2}{1 + k^2}\,c\\v & = \frac{k^{-1} - k}{k^{-1} + k}\,c\\\end{align}$$ Now let $k = e^{-\lambda T}$, where $\lambda$ is the decay rate and $T$ is the proper time of the object. 
Thus the body is undergoing exponential decay, according to clocks that are traveling with it. $$\begin{align}\\v & = \frac{e^{\lambda T} - e^{-\lambda T}}{e^{\lambda T} + e^{-\lambda T}}\,c\\v & = c \tanh(\lambda T) \tag{5a}\\\end{align}$$ Now we need to show that this is the same formula that arises for an object undergoing constant acceleration. It's sometimes said that the formulas of Special Relativity only apply to constant velocity, but that's not strictly true: they can be used for accelerating objects in flat spacetime, you just need to be careful. ;) The trick is to use a sequence of inertial reference frames that at each step match velocities with the accelerating body. Let $a$ be the acceleration and $v$ the body's current velocity. We apply a small boost $a\Delta T$ to its velocity and use the velocity composition formula to see how much that numerically increases its velocity. $$\begin{align}\\v + \Delta v & = \frac{v + a \Delta T}{1 + va \Delta T / c^2}\\\Delta v & = \frac{v + a \Delta T - v - v^2a \Delta T / c^2}{1 + va \Delta T / c^2}\\\frac{\Delta v}{\Delta T} & = \frac{a (1 - v^2 / c^2)}{1 + va \Delta T / c^2}\\ \frac{\Delta T}{\Delta v} & = \left(\frac{1}{a}\right) \frac{1 + va \Delta T / c^2}{1 - v^2 / c^2}\\ \end{align}$$ Taking the limit as $\Delta T \to 0$, the $va \Delta T / c^2$ term vanishes,$$\frac{dT}{dv} = \left(\frac{1}{a}\right) \frac{1}{1 - v^2 / c^2}$$ Integrating,$$\begin{align}\\T & = \frac{1}{a} \int \frac{dv}{1 - v^2 / c^2}\\T & = \frac{c}{a} \tanh^{-1} \left(\frac{v}{c}\right)\\v & = c \tanh \left(\frac{aT}{c}\right) \tag{6}\\\end{align}$$ The constant of integration is zero because $v=0$ when $T=0$. We see that equation (6) has the same form as equation (5a), with $\lambda = a/c$, so the acceleration of the object is simply $c\lambda$. 
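These closed-form results are easy to check numerically. The sketch below (my addition, working in units where $c=1$) verifies equation (5) and the $\gamma$ ratio for $k=1/2$, then integrates $dv/dT = a(1-v^2/c^2)$ with a crude Euler step and compares against the closed form $v = c\tanh(\lambda T)$:

```python
import math

def final_speed(k):
    """Equation (5): v/c = (1 - k**2) / (1 + k**2)."""
    return (1 - k**2) / (1 + k**2)

v = final_speed(0.5)                 # k = 1/2 should give v = 3c/5
gamma = 1 / math.sqrt(1 - v**2)      # should equal (1 + k**2)/(2k) = 1.25

# Euler-integrate dv/dT = a*(1 - v**2) with a = lambda (c = 1) and compare
# with the closed form v = tanh(lambda*T) of equations (5a)/(6).
lam, dT = 0.5, 1e-5
u, T = 0.0, 0.0
while T < 2.0:
    u += lam * (1 - u**2) * dT       # one Euler step of the acceleration ODE
    T += dT
exact = math.tanh(lam * T)           # the closed form at the same proper time
```

The Euler result agrees with the hyperbolic-tangent law to within the step-size error, which is the claimed equivalence between exponential decay and constant proper acceleration.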
Addendum To answer your original (totally unphysical) question, where the annihilated mass is magically converted to kinetic energy of the object with nothing being emitted, we can use $mc^2 = kmc^2\gamma$, i.e., $k\gamma = 1$, and $v = c \sqrt{1-\frac{1}{\gamma^2}}$. So for $k = 1/2$ (half the original mass is annihilated), $\gamma = 2$ and $v = c \sqrt 3 / 2 \approx 0.866c$.
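For completeness, here is the composition formula quoted at the start of this answer together with the addendum's result, as a small illustrative script (units of $c=1$; not part of the original answer):

```python
import math

def compose_velocities(u, v, c=1.0):
    """Relativistic composition of collinear velocities:
    w = (u + v) / (1 + u*v/c**2)."""
    return (u + v) / (1 + u * v / c**2)

w = compose_velocities(0.8, 0.8)    # ~0.9756, not 1.6: speeds never pass c

# The addendum's unphysical case: k*gamma = 1 gives v = c*sqrt(1 - k**2).
k = 0.5
v_addendum = math.sqrt(1 - k**2)    # ~0.866 (in units of c)
```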
12.2 - Correlation

In this course we have been using Pearson's \(r\) as a measure of the correlation between two quantitative variables. In a sample, we use the symbol \(r\). In a population, we use the symbol \(\rho\) ("rho"). Pearson's \(r\) can easily be computed using Minitab Express; however, understanding the conceptual formula may help you to better understand the meaning of a correlation coefficient.

Pearson's \(r\): Conceptual Formula

\(r=\frac{\sum{z_x z_y}}{n-1}\) where \(z_x=\frac{x - \overline{x}}{s_x}\) and \(z_y=\frac{y - \overline{y}}{s_y}\)

When we replace \(z_x\) and \(z_y\) with the \(z\)-score formulas and move the \(n-1\) to a separate fraction we get the formula in your textbook: \(r=\frac{1}{n-1}\Sigma{\left(\frac{x-\overline x}{s_x}\right) \left( \frac{y-\overline y}{s_y}\right)}\)

If conducting a test by hand, a \(t\) test statistic with \(df=n-2\) is computed: \(t=\frac{r- \rho_{0}}{\sqrt{\frac{1-r^2}{n-2}}} \)

In this course you will never need to compute \(r\) or the test statistic by hand; we will always be using Minitab Express to perform these calculations.

Minitab Express – Computing Pearson's r

We previously created a scatterplot of quiz averages and final exam scores and observed a linear relationship. Here, we will compute the correlation between these two variables. Open the data set:

On a PC: Select STATISTICS > Correlation > Correlation
On a Mac: Select Statistics > Regression > Correlation
Double click Quiz_Average and Final in the box on the left to insert them into the Variables box
Click OK

This should result in the following output:

Pearson correlation of Quiz_Average and Final = 0.608630
P-Value = <0.0001

Select your operating system below to see a step-by-step guide for this example.
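If you ever want to check Minitab's output by hand, the conceptual formula above is straightforward to code. This is only an illustration (the data here are made up, not the quiz/final data from the example):

```python
import math

def pearson_r(x, y):
    """Conceptual formula: r = sum(z_x * z_y) / (n - 1),
    with z-scores built from the sample means and sample standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx)**2 for v in x) / (n - 1))
    sy = math.sqrt(sum((v - my)**2 for v in y) / (n - 1))
    return sum(((a - mx) / sx) * ((b - my) / sy)
               for a, b in zip(x, y)) / (n - 1)

r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])   # perfectly linear data: r = 1
```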
Properties of Pearson's r Section

- \(-1\leq r \leq +1\)
- For a positive association \(r>0\), for a negative association \(r<0\); if there is no relationship \(r=0\)
- The closer \(r\) is to 0 the weaker the relationship and the closer to +1 or -1 the stronger the relationship (e.g., \(r=-.88\) is a stronger relationship than \(r=+.60\)); the sign of the correlation provides direction only
- Correlation is unit free; the \(x\) and \(y\) variables do NOT need to be on the same scale (e.g., it is possible to compute the correlation between height in centimeters and weight in pounds)
- It does not matter which variable you label as \(x\) and which you label as \(y\). The correlation between \(x\) and \(y\) is equal to the correlation between \(y\) and \(x\).

The following table may serve as a guideline when evaluating correlation coefficients:

Absolute Value of \(r\) | Strength of the Relationship
0.0 - 0.2 | Very weak
0.2 - 0.4 | Weak
0.4 - 0.6 | Moderate
0.6 - 0.8 | Strong
0.8 - 1.0 | Very strong
I have a matrix $A$ with dimensions $M \times N$ and I want to compute $A'$ such that: $$ A'_{i,j} = \alpha A'_{i,j-1} + (1 - \alpha) A_{i,j} \\ 1 \leq i \leq M, 1 \leq j \leq N, \alpha \in [0, 1] $$ where $A'_{i,0}$ is some given constant. I want to perform this computation as part of a machine learning training process on the GPU (I am using TensorFlow), but the only way to do it I can think of is with a loop over $j$, which makes the training tremendously slower (even if it's a TensorFlow loop, not a regular Python loop). I know that, since each value depends on the previous one, this is not a parallel-friendly computation, but I was wondering if there is some trick or reformulation that I am missing to do this in a smarter way.
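Not an authoritative answer, but one standard reformulation: unrolling the recurrence gives $A'_{i,j} = \alpha^j A'_{i,0} + (1-\alpha)\sum_{k=1}^{j}\alpha^{j-k}A_{i,k}$, and the inner sum is a cumulative sum of $A_{i,k}\,\alpha^{-k}$, a parallel-friendly primitive (e.g. `tf.cumsum` or `np.cumsum`). A plain-Python sketch comparing the sequential loop with this closed form (caveat: the $\alpha^{-k}$ factors can overflow for small $\alpha$ and large $N$, so in practice you may need to work in chunks or in log space):

```python
def ema_loop(A, a0, alpha):
    """Reference: the sequential recurrence
    A'[i][j] = alpha * A'[i][j-1] + (1 - alpha) * A[i][j]."""
    out = []
    for row in A:
        prev, acc = a0, []
        for x in row:
            prev = alpha * prev + (1 - alpha) * x
            acc.append(prev)
        out.append(acc)
    return out

def ema_scan(A, a0, alpha):
    """Closed form: A'[i][j] = alpha**j * (a0 + (1 - alpha) * csum_j),
    where csum_j is the cumulative sum of A[i][k] * alpha**(-k).
    On the GPU the cumulative sum is a single tf.cumsum along axis 1."""
    out = []
    for row in A:
        csum, acc = 0.0, []
        for j, x in enumerate(row, start=1):
            csum += x * alpha ** (-j)
            acc.append(alpha ** j * (a0 + (1 - alpha) * csum))
        out.append(acc)
    return out
```

In TensorFlow the same computation is loop-free: build `j = tf.range(1, N + 1)` (broadcast over columns), take the cumulative sum of `A * alpha**(-j)` with `tf.cumsum(..., axis=1)`, and multiply by `alpha**j`.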
Univariate Normal Distributions Section

Before defining the multivariate normal distribution we will visit the univariate normal distribution. A random variable X is normally distributed with mean \(\mu\) and variance \(\sigma^{2}\) if it has the probability density function

\(\phi(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\{-\frac{1}{2\sigma^2}(x-\mu)^2\}\)

This is the usual bell-shaped curve that you see throughout statistics. In this expression, you see the squared difference between the variable x and its mean, \(\mu\). This value will be minimized when x is equal to \(\mu\). Consequently the quantity \(-\frac{1}{2\sigma^2}(x - \mu)^{2}\) takes its largest value when x is equal to \(\mu\), and, since the exponential function is a monotone function, the normal density takes its maximum value when x is equal to \(\mu\). The variance \(\sigma^{2}\) defines the spread of the distribution about that maximum. If \(\sigma^{2}\) is large, then the spread will be large; if \(\sigma^{2}\) is small, then the spread will be small.

As shorthand notation we may use the expression below:

\(X \sim N(\mu, \sigma^2)\)

indicating that X is distributed according to (denoted by the 'tilde' symbol) a normal distribution (denoted by N), with mean \(\mu\) and variance \(\sigma^{2}\).
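To make the peak-at-\(\mu\) and spread claims concrete, here is a small numerical illustration (an addition for this page, not part of the course materials):

```python
import math

def normal_pdf(x, mu, var):
    """Univariate normal density with mean mu and variance var."""
    return math.exp(-(x - mu)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

peak = normal_pdf(0.0, 0.0, 1.0)      # maximum value, 1/sqrt(2*pi)
off_peak = normal_pdf(1.0, 0.0, 1.0)  # smaller, since x is away from mu
wide = normal_pdf(0.0, 0.0, 4.0)      # larger variance: lower, wider peak
```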
Multivariate Normal Distributions Section

If we have a \(p \times 1\) random vector \(\mathbf{X}\) that is distributed according to a multivariate normal distribution with population mean vector \(\mathbf{\mu}\) and population variance-covariance matrix \(\Sigma\), then this random vector, \(\mathbf{X}\), will have the joint density function shown in the expression below:

\(\phi(\textbf{x})=\left(\frac{1}{2\pi}\right)^{p/2}|\Sigma|^{-1/2}\exp\{-\frac{1}{2}(\textbf{x}-\mathbf{\mu})'\Sigma^{-1}(\textbf{x}-\mathbf{\mu})\}\)

\(| \Sigma |\) denotes the determinant of the variance-covariance matrix \(\Sigma\), and \(\Sigma^{-1}\) is the inverse of the variance-covariance matrix \(\Sigma\). Again, this distribution takes its maximum value when the vector \(\mathbf{X}\) is equal to the mean vector \(\mathbf{\mu}\), and decreases around that maximum. If p is equal to 2, then we have a bivariate normal distribution and this will yield a bell-shaped surface in three dimensions.

The shorthand notation, similar to the univariate version above, is

\(\mathbf{X} \sim N(\mathbf{\mu},\Sigma)\)

We say that the vector \(\mathbf{X}\) 'is distributed as' multivariate normal with mean vector \(\mathbf{\mu}\) and variance-covariance matrix \(\Sigma\).

Some things to note about the multivariate normal distribution: The following term appearing inside the exponent of the multivariate normal distribution is a quadratic form:

\((\textbf{x}-\mathbf{\mu})'\Sigma^{-1}(\textbf{x}-\mathbf{\mu})\)

This particular quadratic form is also called the squared Mahalanobis distance between the random vector and the mean vector \(\mathbf{\mu}\).
If the variables are uncorrelated then the variance-covariance matrix will be a diagonal matrix with the variances of the individual variables appearing on the main diagonal of the matrix and zeros everywhere else:

\(\Sigma = \left(\begin{array}{cccc}\sigma^2_1 & 0 & \dots & 0\\ 0 & \sigma^2_2 & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & \sigma^2_p \end{array}\right)\)

Multivariate Normal Density Function

In this case the multivariate normal density function simplifies to the expression below:

\(\phi(\textbf{x}) = \prod_{j=1}^{p}\frac{1}{\sqrt{2\pi\sigma^2_j}}\exp\{-\frac{1}{2\sigma^2_j}(x_j-\mu_j)^2\}\)

Note! The product term, given by capital pi (\(\Pi\)), acts very much like the summation sign, but instead of adding we multiply over the elements ranging from j = 1 to j = p. Inside this product is the familiar univariate normal distribution where the random variables are subscripted by j. In this case, the elements of the random vector, \(X_1, X_2, \dots, X_p\), are independent random variables.

We could also consider linear combinations of the elements of a multivariate normal random variable, as shown in the expression below:

\(Y = \sum_{j=1}^{p}c_jX_j =\textbf{c}'\textbf{X}\)

Note! To define a linear combination, the random variables \(X_{j}\) need not be uncorrelated. The coefficients \(c_{j}\) may be any numbers: specific values are selected according to the problem of interest, a choice influenced very much by subject-matter knowledge. Looking back at the Women's Nutrition Survey Data, for example, we selected the coefficients to obtain the total intake of vitamins A and C.

Now suppose that the random vector is multivariate normal with mean \(\mathbf{\mu}\) and variance-covariance matrix \(\Sigma\).
\(\textbf{X} \sim N(\mathbf{\mu},\Sigma)\)

Then Y is normally distributed with mean:

\(\textbf{c}'\mathbf{\mu} = \sum_{j=1}^{p}c_j\mu_j\)

and variance:

\(\textbf{c}'\Sigma \textbf{c} =\sum_{j=1}^{p}\sum_{k=1}^{p}c_jc_k\sigma_{jk}\)

See the previous lesson to review the computation of the population mean of a linear combination of random variables. In summary, Y is normally distributed with mean \(\textbf{c}'\mathbf{\mu}\) and variance \(\textbf{c}'\Sigma\textbf{c}\):

\(Y \sim N(\textbf{c}'\mathbf{\mu},\textbf{c}'\Sigma\textbf{c})\)

As we have seen before, these quantities may be estimated using sample estimates of the population parameters.

Other Useful Results for the Multivariate Normal Section

For variables with a multivariate normal distribution with mean vector \(\mathbf{\mu}\) and covariance matrix \(\Sigma\), some useful facts are:

- Each single variable has a univariate normal distribution. Thus we can look at univariate tests of normality for each variable when assessing multivariate normality.
- Any subset of the variables also has a multivariate normal distribution.
- Any linear combination of the variables has a univariate normal distribution.
- Any conditional distribution for a subset of the variables conditional on known values for another subset of variables is a multivariate normal distribution. The full meaning of this statement will be clear after Lesson 6.

Example 4-1 - Linear Combination of the Cholesterol Measurements Section

Measurements were taken on n heart-attack patients on their cholesterol levels. For each patient, measurements were taken 0, 2, and 4 days following the attack. Treatment was given to reduce cholesterol level.

The sample mean vector is:

Variable | Mean
\(X_1\) = 0-Day | 259.5
\(X_2\) = 2-Day | 230.8
\(X_3\) = 4-Day | 221.5

The covariance matrix is:

      | 0-Day | 2-Day | 4-Day
0-Day | 2276 | 1508 | 813
2-Day | 1508 | 2206 | 1349
4-Day | 813 | 1349 | 1865

Suppose that we are interested in the difference \(X _ { 1 } - X _ { 2 }\), the difference between the 0-day and the 2-day measurements.
We can write the linear combination of interest as

\(\textbf{a}'\textbf{x}= \left(\begin{array}{ccc}1 & -1 & 0 \end{array}\right) \left(\begin{array}{c}x_1\\ x_2\\ x_3 \end{array}\right)\)

The mean value for the difference is

\begin{align} \textbf{a}'\bar{\textbf{x}} &=\left(\begin{array}{ccc}1 & -1 & 0 \end{array}\right)\left(\begin{array}{c}259.5\\230.8\\221.5 \end{array}\right)\\ & = 28.7 \end{align}

The variance is

\begin{align} \textbf{a}'\textbf{S}\textbf{a} &=\left(\begin{array}{ccc}1 & -1 & 0 \end{array}\right) \left(\begin{array}{ccc}2276 & 1508 & 813\\ 1508 & 2206 & 1349\\ 813 & 1349 & 1865 \end{array}\right) \left(\begin{array}{c}1\\-1\\0 \end{array}\right)\\&= \left(\begin{array}{ccc}768 & -698 & -536 \end{array}\right)\left(\begin{array}{c}1\\-1\\0 \end{array}\right) \\ &= 1466 \end{align}

If we assume the three measurements have a multivariate normal distribution, then the difference \(X _ { 1 } - X _ { 2 }\) has a univariate normal distribution.
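The matrix arithmetic in this example can be reproduced in a few lines (an illustration added here, not part of the lesson):

```python
# Mean vector and covariance matrix from the cholesterol example.
a = [1, -1, 0]                    # coefficients picking out X1 - X2
xbar = [259.5, 230.8, 221.5]
S = [[2276, 1508,  813],
     [1508, 2206, 1349],
     [ 813, 1349, 1865]]

mean_diff = sum(ai * xi for ai, xi in zip(a, xbar))        # a' xbar
var_diff = sum(a[i] * S[i][j] * a[j]
               for i in range(3) for j in range(3))        # a' S a
```

This recovers the mean 28.7 and variance 1466 computed above.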
closed as too broad by CuriousOne, ACuriousMind♦, Martin, Kyle Kanos, John Rennie May 13 '15 at 6:21

What are the main differences between a quantum and a classical system? How can one distinguish them?

An experimentalist's answer, in other words, why we need two different theories.

In a classical system, if we do experiments with massive bodies, they have in principle well-measured (x, y, z) positions at time t. From falling apples to the planetary system, an elegant mathematical theory was developed, called classical mechanics, modeling our everyday world. In studying the wave properties of light, the theory of classical electrodynamics was developed, complementing classical mechanics in describing and predicting new experimental situations. With thermodynamics and statistical mechanics, the toolbox of theories for describing physical reality was considered complete. There were even statements by prominent physicists of the nineteenth century that from then on only engineers would be needed; physics was complete.

Then the worm turned, because experimental observations appeared that were not predicted and could not be theoretically explained within the classical system, and the mathematical theory of quantum mechanics had to be invented. What were these observations that separate a classical from a quantum mechanical system?
1) Black-body radiation, which could not be explained classically; one had to assume that the electromagnetic radiation came in quanta, units carrying energy \(h\nu\).

2) The photoelectric effect, which also needed quanta (packages of energy) for the electrons emitted from materials.

3) The light spectra from atoms, both emission and absorption, which instead of showing the classical continuous behavior showed absorption and emission lines, again showing discrete packages, quanta, of energy. For hydrogen these lines even followed mathematical series.

In trying to understand a model of how electrons could revolve around nuclei, an incompatibility with classical EM theory was found. There was no reason that the electrons would always stay in fixed orbits around the nucleus. Fixed-orbit solutions with classical electromagnetism could be found, but there was no theoretical reason why, once disturbed, the electron would not fall into the nucleus, diminishing its charge by one. The constraint was imposed by Bohr: the orbits had to be fixed/quantized, giving the Bohr atom. It could explain the Balmer series, but it was an ad hoc model, not a theory.

The time came for the Schrödinger equation to enter the stage, which allowed the development of a mathematical theoretical model that explained not only simple atoms but set the stage for the further theoretical study of elementary particle interactions. It was complemented with the Born rule, the postulate that allows one to relate the theoretically predicted numbers to data.

We are now at the stage where just these two frameworks, the classical and the quantum mechanical, are sufficient to explain observations and predict new phenomena successfully: mainly for large dimensions classically, and for small dimensions quantum mechanically. The relevant size is determined, as stated in other answers, by the value of \(\hbar\) and the mathematical relations that constrain the solutions, which give rise to the Heisenberg uncertainty principle.
Sometimes, as in superconductivity, quantum mechanical effects can exist in large dimensions, but in general this framework is the underlying one for small dimensions. As dimensions grow large, the quantum mechanical mathematical formulations lead in the limit consistently to the classical formulations.

The main difference between classical and quantum physics is the fact that observables (Hermitian operators whose eigenvalues determine the possible values of physical quantities of the system) do not commute. For classical systems commutativity is trivially satisfied. A direct consequence of this is the uncertainty principle and the fact that, via Bell's theorem, there is no local realism. If it weren't for this simple but important fact, quantum mechanics could essentially be reduced to a classical (albeit stochastic) theory.

$\newcommand{\pdv}[2]{\frac{\partial #1}{\partial #2}}$ $\newcommand{\ket}[1]{\left| #1 \right>}$

Quanta vs. Continuous

Quantum, as the name suggests, deals with quanta, i.e. packages of something. For example, for a given particle and given potential the energy is quantised. E.g. for the quantum mechanical SHO the energy levels are $E_n=\hbar \omega (n+\frac{1}{2})$. Notice that the only allowed energies in this system are $E_0= \hbar \omega \frac{1}{2}$, $E_1= \hbar \omega \frac{3}{2}$, $E_2= \hbar \omega \frac{5}{2}$, etc. To avoid confusion: QM does not forbid an energy like $E=\hbar \omega \frac{1}{4}$ in general; it is simply not a possible energy eigenvalue for this specific potential and this given $\omega$.

Position and Momentum vs. Wave Function

In classical mechanics a system is completely described by a point in phase space, which usually consists of the momenta and positions of the objects involved.
In QM an isolated system of one or more particles is completely characterised by a complex-valued function called the wave function, usually denoted with the Greek capital letter psi, $\Psi$, or with lower-case psi, $\psi$. Note that the information about the position and the momentum of the particle is “encoded” in this wave function. Sometimes the system is described by a vector $\ket \psi$, called a ket, in a Hilbert space, which is of course mathematically equivalent to the wave function.

Deterministic vs. Probabilistic

Classical mechanics describes a deterministic system, which means that what will happen at the next instant of time is completely determined, whereas QM describes a probabilistic system. The norm squared of the wave function, $|\psi|^2$, gives the probability density. Notice that in QM a wave function can be in a superposition of more than one state, i.e. $\psi=\alpha \psi_1+\beta \psi_2$; then the probability that the particle is found in state $\psi_1$ is given by $|\alpha|^2$ and the probability that it is found in state $\psi_2$ is given by $|\beta|^2$. Since the total probability must add to one, the wave function is normalized in such a way that $|\alpha|^2+|\beta|^2=1$.

Mathematical Differences

There is also a mathematical difference between them. CM almost always deals with real numbers. You see here and there some complex numbers, but they are mainly computational tools with no physical meaning. On the other hand, QM is governed by complex numbers, which have actual physical meaning. In particular, the wave function is a complex-valued function, which can be interpreted as a probability amplitude.
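The normalization rule for a two-state superposition can be sketched in a few lines (the amplitudes here are arbitrary illustrative values):

```python
import math

# Hypothetical, unnormalized amplitudes for psi = alpha*psi_1 + beta*psi_2.
alpha, beta = 1 + 1j, 2 - 1j

# Normalize so that |alpha|**2 + |beta|**2 = 1.
norm = math.sqrt(abs(alpha)**2 + abs(beta)**2)
alpha, beta = alpha / norm, beta / norm

p1 = abs(alpha)**2   # probability of finding the system in psi_1
p2 = abs(beta)**2    # probability of finding the system in psi_2
```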
Equations of Motion

The equation of motion for classical physics is a nice second-order ordinary differential equation known as Newton's law: $$\vec F = m \ddot{\vec r}$$ Sometimes you also see it in the following form: $$m \ddot{\vec r} = - \nabla V$$ The equation of motion for QM is a partial differential equation, second order in space and first order in time, known as the Schrödinger equation: $$i\hbar\pdv{}{t}\Psi(\vec r,t)=\frac{\hat { \mathbf {p}}^2}{2m}\Psi(\vec r, t)+V(\vec r)\Psi(\vec r, t)=-\frac{\hbar^2}{2m} \nabla^2 \Psi(\vec r,t)+V(\vec r) \Psi(\vec r,t)$$ or in operator form as: $$i\hbar\pdv{}{t}\ket{\Psi(\vec r,t)}=H\ket{\Psi(\vec r,t)}$$

Orders of Magnitude

QM can be considered the description of the real world, since it describes the world of atoms and what you see around you is made of atoms. Classical mechanics is usually an approximation of QM. For example, you can take the mass of the particle to be large or the energy level of the system to be large and you usually arrive at the classical predictions. Notice that QM deals with actions of the order of $\hbar \approx 10^{-34}$ J·s and masses of the order of $10^{-30}$ to $10^{-27}$ kg (electrons and protons). This should give you a feeling of how small the scales are. In general you can make the assumption that if you are dealing with particles like electrons, protons, photons or whatever, you should consider the system to be a quantum mechanical system. However, in many applications quantum objects can be treated as classical objects for convenience.

There are probably more differences, but these are the major ones that come to my mind.

In a classical system (as we have studied), the behavior of a body can be exactly determined. For example, in projectile motion, the position, velocity, and acceleration of a system can be determined exactly by knowing only the time $t$ plus your initial conditions. In terms of size, a classical system usually deals with macroscopic systems.
In a quantum system, you are bound to uncertainties, and therefore to a range of values in determining the motion of a body, as stated by the Heisenberg uncertainty principle. In addition, we are dealing with particles and waves, and the energies that these waves exhibit. But quantum systems can also be treated classically. For example, in geometric optics we focus on the path that light travels; however, if we focused on the individual particles of light in motion, we would be dealing with a quantum system of an ensemble of particles.
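To make the discreteness discussed in these answers concrete, here is a small sketch of two of the spectra mentioned: the SHO levels $E_n=\hbar\omega(n+\frac{1}{2})$ and the hydrogen Balmer series from the first answer (an illustration added here; the Rydberg constant value is approximate):

```python
def sho_energy(n, hbar_omega=1.0):
    """Quantized SHO levels E_n = hbar*omega*(n + 1/2)."""
    return hbar_omega * (n + 0.5)

levels = [sho_energy(n) for n in range(4)]   # evenly spaced, starting at 1/2

# Hydrogen Balmer series: 1/lambda = R * (1/2**2 - 1/n**2) for n = 3, 4, ...
R = 1.097e7   # approximate Rydberg constant for hydrogen, in 1/m

def balmer_nm(n):
    return 1e9 / (R * (0.25 - 1 / n**2))

h_alpha = balmer_nm(3)   # the red H-alpha line, roughly 656 nm
```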
Liouville's Theorem states that every bounded entire function must be constant. Does it work in real analysis? Justify your answer! I asked it because Liouville's Theorem is proved by complex analysis.

Actually it does work in real analysis. The question is only which condition replaces "entire", because the statement is certainly not true for all real-valued functions (take $\sin(x)$, as Chandru states). However, if a real-valued function $f$ is harmonic, which means that $$\frac{\partial^2f}{\partial x_1^2} +\frac{\partial^2f}{\partial x_2^2} +\cdots +\frac{\partial^2f}{\partial x_n^2} = 0,$$ then it actually has the Liouville property: every bounded harmonic function on $\mathbb{R}^n$ is constant. Isn't that neat?

Take $f(x)=\sin{x}$. Clearly $|f| \leq 1$, so $f$ is bounded and real-analytic (the natural real analogue of "entire"), but it is not constant.
Venue: Seminar Room “-1” – Department of Mathematics – Via Sommarive, 14 Povo - Trento

Time: 10.30 a.m.

Pierpaola Santarsiero - PhD in Mathematics

Abstract: Given a non-degenerate irreducible projective variety $X$, its $2$-secant variety is defined as the Zariski closure of the union of the spans of any two points of the variety $X$. Given an $n$-dimensional smooth lattice polytope $P \subset \mathbb{R}^n $ not $AGL _n(\mathbb{Z})$-equivalent to $\Delta_n$, $2\Delta_n$, $(2\Delta_n)_k $ for $ 0\leq k \leq n-2$ or to $ \Delta_l \times \Delta_{n-l}$ for $ 1\leq l \leq n-1$, the generic point of its $2$-secant variety is identifiable, i.e. it lies on a unique secant line of $X_P$. This concludes the study of the dimension of the $2$-secant variety of a variety associated to a smooth $n$-dimensional lattice polytope.

The seminar corresponds to the final exam of Algebraic Geometry II, a planned course within Santarsiero's first-year PhD study programme.

Contact person: Eduardo Luis Sola Conde
I don't think this is possible for general $\epsilon$, and I doubt it's possible for remainder $0.0001$. Below are some solutions with remainder less than $0.01$. I produced them by randomized search from two different initial configurations. In the first one, I only placed the circle with curvature $2$ in the centre and tried placing the remaining circles randomly, beginning with curvature $12$; in the second one, I prepositioned pairs of circles that fit in the corners and did a deterministic search for the rest. The data structure I used was a list of interstices, each in turn consisting of a list of circles forming the interstice (where the lines forming the boundary of the square are treated as circles with zero curvature). I went through the circles in order of curvature and for each circle tried placing it snugly in each of the cusps where two circles touch in random order. If a circle didn't fit anywhere, I discarded it; if that decreased the remaining area below what was needed to get up to the target value (in this case $0.99$), I backtracked to the last decision. I also did this without using the circle with curvature $2$. For that case I did a complete search and found no configurations with remainder less than $0.01$. Thus, if there is a better solution in that case, it must involve placing the circles in a different order. (We can always transform any solution to one where each circle is placed snugly in a cusp formed by two other circles, so testing only such positions is not a restriction; however, circles with lower curvature might sit in the cusps of circles with higher curvature, and I wouldn't have found such solutions.) 
For the case including the circle with curvature $2$, the search wasn't complete (I don't think it can be done completely in this manner, without introducing further ideas), so I can't exclude that there are significantly better configurations (even ones with in-order placement), but I'll try to describe how I came to doubt that there's much room for improvement beyond $0.01$, and particularly that this can be done for arbitrary $\epsilon$. The reasons are both theoretical and numerical.

Numerically, I found that this seems to be a typical combinatorial optimization problem: There are many local minima, and the best ones are quite close to each other. It's easy to get to $0.02$; it's relatively easy to get to $0.011$; it takes quite a bit more optimization to get to $0.01$; and beyond that practically all the solutions I found were within $0.0002$ or so of $0.01$. So a solution with $0.0001$ would have to be of a completely different kind from everything that I found.

Now of course a priori there might be some systematic solution that's hard to find by this sort of search but can be proved to exist. That might conceivably be the case for $0.0001$, but I'm pretty sure it's not the case for general $\epsilon$. To prove that it's possible to leave a remainder less than $\epsilon$ for any $\epsilon\gt0$, one might try to argue that after some initial phase it will always be possible to fit the remaining circles into the remaining space. The problem is that such an argument can't work, because we're trying to fill the rational area $1$ by discarding rational multiples of $\pi$ from the total area $\pi^3/6$, so we can't do it by discarding a finite number of circles, since $\pi$ is transcendental.
Thus we can never reach a stage where we could prove that the remaining circles will exactly fit, and hence any proof that we can beat an arbitrary $\epsilon$ would have to somehow show that the remaining circles can be divided into two infinite subsets, with one of them exactly fitting into the remaining gaps. Of course this, too, is possible in principle, but it seems rather unlikely; the problem strikes me as a typical messy combinatorial optimization problem with little regularity. A related reason not to expect a clean solution is that in an Apollonian gasket with integer curvatures, some integers typically occur more than once. For instance, one might try to make use of the fact that the curvatures $0$, $2$, $18$ and $32$ form a quadruple that would allow us to fill an entire half-corner with a gasket of circles of integer curvature; however, in that gasket, many curvatures, for instance $98$, occur more than once, so we'd have to make exceptions for those since we're not allowed to reuse those circles. Also, if you look at the gaskets produced by $0$, $2$ and the numbers from $12$ to $23$ (which are the candidates to be placed in the corners), you'll find that the fourth number increases more rapidly than the third; that is, $0$, $2$ and $18$ lead to $32$, whereas $0$, $2$ and $19$ already lead to $(\sqrt2+\sqrt{19})^2\approx33.3$; so not only can you not place all the numbers from $12$ to $23$ into the corners (since only two of them fit together and there are only four corners), but then if you start over with $24$ (which is the next number in the gasket started by $12$), you can't even continue with the same progression, since the spacing has increased. The difference would have to be compensated by the remaining space in the corners that's not part of the gaskets with the big $2$-circle, but that's too small to pick up the slack, which makes it hard to avoid dropping several of the circles in the medium range around the thirties.
My impression from the optimization process is that we're forced to discard too much area quite early on; that is, we can't wait for some initial irregularities to settle down into some regular pattern that we can exploit. For instance, the first solution below uses all curvatures except for the following: 3 4 5 6 7 8 9 10 11 16 17 20 22 25 30 31 33 38 46 48 49 52 53 55 56 57 59 79 81 94 96 101 106 107 108 113 125 132. Already at 49 the remaining area becomes less than would be needed to fill the square. Other solutions I found differed in the details of which circles they managed to squeeze in where, but the total area always dropped below $1$ early on. Thus, it appears that it's the irregular constraints at the beginning that limit what can be achieved, and this can't be made up for by some nifty scheme extending to infinity. It might even be possible to prove by an exhaustive search that some initial set of circles can't be placed without discarding too much area. To be rigorous, this would have to take a lot more possibilities into account than my search did (since the circles could be placed in any order), but I don't see why allowing the bigger circles to be placed later on should make such a huge difference, since there's precious little wiggle room for their placement to begin with if we want to fit in most of the ones between $12$ and $23$. So here are the solutions I found with remainder less than $0.01$. The configurations shown are both filled up to an area $\gtrsim0.99$ and have a tail of tiny circles left worth about another $0.0002$. For the first one, I checked with integer arithmetic that none of the circles overlap. (In fact I placed the circles with integer arithmetic, using floating-point arithmetic to find an approximation of the position and a single iteration of Newton's method in integer arithmetic to correct it.) 
The first configuration has $10783$ circles and was found using repeated randomized search starting with only the circle of curvature $2$ placed; I think I ran something like $100$ separate trials to find this one, and something like $1$ in $50$ of them found a solution with remainder below $0.01$; each trial took a couple of seconds on a MacBook Pro. The second configuration has $17182$ circles and was found by initially placing pairs of circles with curvatures $(12,23)$, $(13,21)$, $(14,19)$ and $(15,18)$ touching each other in the corners and tweaking their exact positions by hand; the tweaking brought a gain of something like $0.0005$, which brought the remainder down below $0.01$. The search for the remaining circles was carried out deterministically, in that I always tried first to place a circle into the cusps formed by the smaller circles and the boundary lines; this was to keep as much contiguous space as possible available in the cusps between the big circle and the boundary lines. I also tried placing pairs of circles with curvatures $(13,21)$, $(14,19)$, $(15,18)$ and $(16,17)$ in the corners, but only got up to $0.9896$ with that. Here are high-resolution versions of the images; they're scaled down in this column, but you can open them in a new tab/window (where you might have to click on them to toggle the browser's autoscale feature) to get the full resolution. Randomized search: With pre-placed circles:
Definition:Isolated Singularity/Pole Definition Let $U$ be an open subset of a Riemann surface. Let $z_0 \in U$. Let $f: U \setminus \set {z_0} \to \C$ be a holomorphic function. Let $z_0$ be an isolated singularity of $f$. Then $z_0$ is a pole if and only if: $\displaystyle \lim_{z \mathop \to z_0} \cmod {\map f z} = \infty$ Sources 2014: Christopher Clapham and James Nicholson: The Concise Oxford Dictionary of Mathematics (5th ed.): Entry: pole
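A standard illustrating example (my addition, not part of the cited entry): the prototype pole at $z_0$ of order $n$.

```latex
% Illustration (not from the source page): for n >= 1, the function
% f(z) = 1/(z - z_0)^n is holomorphic on U \setminus {z_0} and
% satisfies the defining condition above:
\lim_{z \mathop \to z_0} \left\lvert \frac 1 {\paren {z - z_0}^n} \right\rvert
    = \lim_{z \mathop \to z_0} \frac 1 {\cmod {z - z_0}^n} = \infty
```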
[pstricks] Searching for a numeric solution for coupled differential equation systems of order 2 Alexander Grahn A.Grahn at hzdr.de Fri May 25 13:02:23 CEST 2012 On Fri, May 25, 2012 at 01:52:18PM +0300, Manfred Braun wrote: > On 25.05.2012, at 12:15, Alexander Grahn wrote: > >> On Thu, May 24, 2012 at 10:21:06PM +0200, Juergen Gilg wrote: >>> Dear PSTricks list, >>> >>> is there anybody out there, knowing about how to generate a PS code for >>> "Runge-Kutta 4" to solve a "coupled differential equation system of >>> order 2"? >>> >>> There is the need for such a code to animate a double-pendulum in real >>> time, following the coupled differential equations with the variables >>> (\varphi_1, \varphi_2) and their derivatives like: >>> >>> (1) >>> l_1\ddot{\varphi}_1+\frac{m_2}{m_1+m_2}l_2\ddot{\varphi}_2\cos(\varphi_1-\varphi_2)-\frac{m_2}{m_1+m_2}l_2\dot{\varphi}_2^2\sin(\varphi_1-\varphi_2)+g\sin\varphi_1=0 >>> >>> (2) >>> l_2\ddot{\varphi}_2+l_1\ddot{\varphi}_1\cos(\varphi_1-\varphi_2)-l_1\dot{\varphi}_1^2\sin(\varphi_1-\varphi_2)+g\sin\varphi_2=0 >> >> I'd first transform this set of DEs of second order into a set of four >> DEs of first order, > > Yes, this is the right way. >> >> However, it may still not be amenable to Runge-Kutta, because the right >> hand sides of the resulting four equation set of DEs, put into normal >> form, still contain derivatives: > > No! Take the following steps: > > 1. Solve the two equations for \ddot{\varphi_1} and \ddot{\varphi_2}: > \ddot{\varphi}_1 = f_1(\varphi_1, \dot{\varphi}_1, \varphi_2, \dot{\varphi}_2) > \ddot{\varphi}_2 = f_2(\varphi_1, \dot{\varphi}_1, \varphi_2, \dot{\varphi}_2) > > 2. Define x_1 = \varphi_1, x_2 = \dot{\varphi}_1, x_3 = \varphi_2, x_4 = \dot{\varphi}_2 > > 3. The new variables satisfy the system of four 1st-order equations > \dot{x}_1 = x_2 > \dot{x}_2 = f_1(x_1, x_2, x_3, x_4) > \dot{x}_3 = x_4 > \dot{x}_4 = f_2(x_1, x_2, x_3, x_4) > > This can be solved by Runge-Kutta or any other procedure. 
> > Manfred Thanks, Manfred! In the meantime I tried to write the set of equations down. Here is the result, without errors I hope: The equations can be solved further to eliminate derivatives on the right hand side. The resulting set of first order ODEs can be integrated by \odesolve from the animate package documentation, provided the ODEs are not stiff. Initial conditions for $\varphi_1$, $\dot\varphi_1$, $\varphi_2$, $\dot\varphi_2$ are required: \documentclass{article} \usepackage{amsmath} \parindent0pt \begin{document} \section{Original set of DEs of second order} \begin{gather*} l_1\ddot{\varphi}_1+\frac{m_2}{m_1+m_2}l_2\ddot{\varphi}_2\cos(\varphi_1-\varphi_2)-\frac{m_2}{m_1+m_2}l_2\dot{\varphi}_2^2\sin(\varphi_1-\varphi_2)+g\sin\varphi_1=0\\ l_2\ddot{\varphi}_2+l_1\ddot{\varphi}_1\cos(\varphi_1-\varphi_2)-l_1\dot{\varphi}_1^2\sin(\varphi_1-\varphi_2)+g\sin\varphi_2=0 \end{gather*} \section{Set of first order ODEs in normal form, amenable to Runge-Kutta} \begin{align*} \dot\varphi_{11}& = \varphi_{12}\\ \dot\varphi_{12}& = \frac{\frac{a_1-b_1}{l_1}-\frac{c_1}{l_1}\frac{a_2-b_2}{l_2}}{1-\frac{c_1 c_2}{l_1 l_2}}\\ \dot\varphi_{21}& = \varphi_{22}\\ \dot\varphi_{22}& = \frac{\frac{a_2-b_2}{l_2}-\frac{c_2}{l_2}\frac{a_1-b_1}{l_1}}{1-\frac{c_1 c_2}{l_1 l_2}} \end{align*} with \begin{align*} a_1& = \frac{m_2}{m_1+m_2} l_2 \varphi_{22}^2 \sin(\varphi_{11}-\varphi_{21})\\ b_1& = g \sin\varphi_{11}\\ c_1& = \frac{m_2}{m_1+m_2} l_2 \cos(\varphi_{11}-\varphi_{21})\\ a_2& = l_1 \varphi_{12}^2 \sin(\varphi_{11}-\varphi_{21})\\ b_2& = g \sin\varphi_{21}\\ c_2& = l_1 \cos(\varphi_{11}-\varphi_{21}) \end{align*} \end{document} Alexander
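As a cross-check of the normal form above, here is a minimal RK4 sketch in Python (rather than PostScript); the parameter values $m_1=m_2=l_1=l_2=1$, $g=9.81$ and the initial deflection are arbitrary illustration values, not anything from the thread:

```python
from math import sin, cos

def deriv(s, m1=1.0, m2=1.0, l1=1.0, l2=1.0, g=9.81):
    """Right-hand side of the four first-order ODEs in normal form."""
    p1, w1, p2, w2 = s          # phi_1, phi_1-dot, phi_2, phi_2-dot
    mu = m2 / (m1 + m2)
    a1 = mu * l2 * w2 ** 2 * sin(p1 - p2)
    b1 = g * sin(p1)
    c1 = mu * l2 * cos(p1 - p2)
    a2 = l1 * w1 ** 2 * sin(p1 - p2)   # note: phi_1-dot squared
    b2 = g * sin(p2)
    c2 = l1 * cos(p1 - p2)
    det = 1.0 - c1 * c2 / (l1 * l2)
    dw1 = ((a1 - b1) / l1 - (c1 / l1) * (a2 - b2) / l2) / det
    dw2 = ((a2 - b2) / l2 - (c2 / l2) * (a1 - b1) / l1) / det
    return (w1, dw1, w2, dw2)

def rk4_step(s, h):
    """One classical Runge-Kutta 4 step of size h."""
    k1 = deriv(s)
    k2 = deriv(tuple(x + 0.5 * h * k for x, k in zip(s, k1)))
    k3 = deriv(tuple(x + 0.5 * h * k for x, k in zip(s, k2)))
    k4 = deriv(tuple(x + h * k for x, k in zip(s, k3)))
    return tuple(x + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(s, k1, k2, k3, k4))

state = (0.1, 0.0, 0.0, 0.0)    # small initial deflection of the upper arm
for _ in range(1000):
    state = rk4_step(state, 0.001)
```

The same stepping scheme, transcribed into PostScript, is what a PSTricks animation would iterate frame by frame.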
The next thing that we would like to be able to do is to describe the shape of this ellipse mathematically so that we can understand how the data are distributed in multiple dimensions under a multivariate normal. To do this we first must define the eigenvalues and the eigenvectors of a matrix. In particular we will consider the computation of the eigenvalues and eigenvectors of a symmetric matrix \(\textbf{A}\) as shown below: \(\textbf{A} = \left(\begin{array}{cccc}a_{11} & a_{12} & \dots & a_{1p}\\ a_{21} & a_{22} & \dots & a_{2p}\\ \vdots & \vdots & \ddots & \vdots\\ a_{p1} & a_{p2} & \dots & a_{pp} \end{array}\right)\) Note: we call the matrix symmetric if the elements \(a_{ij}\) are equal to \(a_{ji}\) for each i and j. Usually \(\textbf{A}\) is taken to be either the variance-covariance matrix \(Σ\), or the correlation matrix, or their estimates S and R, respectively. Eigenvalues and eigenvectors are used for: Computing prediction and confidence ellipses Principal Components Analysis (later in the course) Factor Analysis (also later in this course) For the present we will be primarily concerned with eigenvalues and eigenvectors of the variance-covariance matrix. First of all let's define what these terms are... Eigenvalues If we have a \(p \times p\) matrix \(\textbf{A}\) we are going to have p eigenvalues, \(\lambda _ { 1 }, \lambda _ { 2 }, \dots, \lambda _ { p }\). They are obtained by solving the equation given in the expression below: \(|\textbf{A}-\lambda\textbf{I}|=0\) On the left-hand side, we have the matrix \(\textbf{A}\) minus \(λ\) times the Identity matrix. When we calculate the determinant of the resulting matrix, we end up with a polynomial of order p. Setting this polynomial equal to zero and solving for \(λ\), we obtain the desired eigenvalues. In general, we will have p solutions, and so there are p eigenvalues, not necessarily all unique.
Eigenvectors The corresponding eigenvectors \(\mathbf { e } _ { 1 } , \mathbf { e } _ { 2 } , \ldots , \mathbf { e } _ { p }\) are obtained by solving the expression below: \((\textbf{A}-\lambda_j\textbf{I})\textbf{e}_j = \mathbf{0}\) Here, we take the matrix \(\textbf{A}\) minus the \(j^{th}\) eigenvalue times the Identity matrix, multiply this quantity by the \(j^{th}\) eigenvector, and set it all equal to zero. This yields the eigenvector \(\textbf{e}_{j}\) associated with eigenvalue \(\lambda_{j}\). This does not generally have a unique solution. So, to obtain a unique solution we will often require that \(\textbf{e}_{j}\) transposed times \(\textbf{e}_{j}\) is equal to 1. Or, if you like, the sum of the squared elements of \(\textbf{e}_{j}\) is equal to 1. \(\textbf{e}'_j\textbf{e}_j = 1\) Note! Eigenvectors corresponding to different eigenvalues are orthogonal. In situations where two (or more) eigenvalues are equal, the corresponding eigenvectors may still be chosen to be orthogonal. Example 4-3: Consider the 2 x 2 matrix To illustrate these calculations consider the correlation matrix R as shown below: \(\textbf{R} = \left(\begin{array}{cc} 1 & \rho \\ \rho & 1 \end{array}\right)\) Then, using the definition of the eigenvalues, we must calculate the determinant of \(R - λ\) times the Identity matrix. \(\left|\bf{R} - \lambda\bf{I}\bf\right| = \left|\color{blue}{\begin{pmatrix} 1 & \rho \\ \rho & 1\\ \end{pmatrix}} -\lambda \color{red}{\begin{pmatrix} 1 & 0 \\ 0 & 1\\ \end{pmatrix}}\right|\) So, \(\textbf{R}\) in the expression above is given in blue, and the Identity matrix follows in red, and \(λ\) here is the eigenvalue that we wish to solve for. Carrying out the math we end up with the matrix with \(1 - λ\) on the diagonal and \(ρ\) on the off-diagonal. Then calculating this determinant we obtain \((1 - λ)^{2} - \rho^{2}\).
\(\left|\begin{array}{cc}1-\lambda & \rho \\ \rho & 1-\lambda \end{array}\right| = (1-\lambda)^2-\rho^2 = \lambda^2-2\lambda+1-\rho^2\) Setting this expression equal to zero we end up with the following... \( \lambda^2-2\lambda+1-\rho^2=0\) To solve for \(λ\) we use the general result that any solution to the second-order polynomial equation below: \(ay^2+by+c = 0\) is given by the following expression: \(y = \dfrac{-b\pm \sqrt{b^2-4ac}}{2a}\) Here, \(a = 1\), \(b = -2\) (the coefficient of \(λ\)) and \(c = 1 - ρ^{2}\). Substituting these terms into the equation above, we obtain that \(λ\) must be equal to 1 plus or minus the correlation \(ρ\). \begin{align} \lambda &= \dfrac{2 \pm \sqrt{2^2-4(1-\rho^2)}}{2}\\ & = 1\pm\sqrt{1-(1-\rho^2)}\\& = 1 \pm \rho \end{align} Here we will take the following solutions: \( \begin{array}{ccc}\lambda_1 & = & 1+\rho \\ \lambda_2 & = & 1-\rho \end{array}\) Next, to obtain the corresponding eigenvectors, we must solve the system of equations below: \((\textbf{R}-\lambda\textbf{I})\textbf{e} = \mathbf{0}\) This is the product of \(R - λ\) times I and the eigenvector e, set equal to 0. In other words, this translates for this specific problem into the expression below: \(\left\{\left(\begin{array}{cc}1 & \rho \\ \rho & 1 \end{array}\right)-\lambda\left(\begin{array}{cc}1 &0\\0 & 1 \end{array}\right)\right \}\left(\begin{array}{c} e_1 \\ e_2 \end{array}\right) = \left(\begin{array}{c} 0 \\ 0 \end{array}\right)\) This simplifies as follows: \(\left(\begin{array}{cc}1-\lambda & \rho \\ \rho & 1-\lambda \end{array}\right) \left(\begin{array}{c} e_1 \\ e_2 \end{array}\right) = \left(\begin{array}{c} 0 \\ 0 \end{array}\right)\) Yielding a system of two equations with two unknowns: \(\begin{array}{lcc}(1-\lambda)e_1 + \rho e_2 & = & 0\\ \rho e_1+(1-\lambda)e_2 & = & 0 \end{array}\) Note! This does not have a unique solution.
If \((e_{1}, e_{2})\) is one solution, then a second solution can be obtained by multiplying the first solution by any non-zero constant c, i.e., \((ce_{1}, ce_{2})\). Therefore, we will require the additional condition that the sum of the squared values of \(e_{1}\) and \(e_{2}\) be equal to 1 (i.e., \(e^2_1+e^2_2 = 1\)). Consider the first equation: \((1-\lambda)e_1 + \rho e_2 = 0\) Solving this equation for \(e_{2}\) we obtain the following: \(e_2 = -\dfrac{(1-\lambda)}{\rho}e_1\) Substituting this into \(e^2_1+e^2_2 = 1\) we get the following: \(e^2_1 + \dfrac{(1-\lambda)^2}{\rho^2}e^2_1 = 1\) Recall that \(\lambda = 1 \pm \rho\). In either case we find that \((1-\lambda)^2 = \rho^2\), so that the expression above simplifies to: \(2e^2_1 = 1\) Or, in other words: \(e_1 = \dfrac{1}{\sqrt{2}}\) Using the expression for \(e_{2}\) which we obtained above, \(e_2 = -\dfrac{1-\lambda}{\rho}e_1\) we get \(e_2 = \dfrac{1}{\sqrt{2}}\) for \(\lambda = 1 + \rho\) and \(e_2 = -\dfrac{1}{\sqrt{2}}\) for \(\lambda = 1-\rho\) Therefore, the two eigenvectors are given by the two vectors as shown below: \(\left(\begin{array}{c}\frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}} \end{array}\right)\) for \(\lambda_1 = 1+ \rho\) and \(\left(\begin{array}{c}\frac{1}{\sqrt{2}}\\ -\frac{1}{\sqrt{2}} \end{array}\right)\) for \(\lambda_2 = 1- \rho\) Some properties of the eigenvalues of the variance-covariance matrix are to be considered at this point. Suppose that \(\lambda_{1}\) through \(\lambda_{p}\) are the eigenvalues of the variance-covariance matrix \(Σ\). By definition, the total variation is given by the sum of the variances. It turns out that this is also equal to the sum of the eigenvalues of the variance-covariance matrix.
Thus, the total variation is: \(\sum_{j=1}^{p}s^2_j = s^2_1 + s^2_2 +\dots + s^2_p = \lambda_1 + \lambda_2 + \dots + \lambda_p = \sum_{j=1}^{p}\lambda_j\) The generalized variance is equal to the product of the eigenvalues: \(|\Sigma| = \prod_{j=1}^{p}\lambda_j = \lambda_1 \times \lambda_2 \times \dots \times \lambda_p\)
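A quick numerical sanity check of the 2 x 2 example and the two properties above (an illustration of mine, not part of the lesson): the eigenpairs of the correlation matrix are \(1 \pm ρ\) with eigenvectors \((1, \pm 1)/\sqrt{2}\), their sum is the trace, and their product is the determinant.

```python
from math import sqrt

def eig_2x2_correlation(rho):
    """Closed-form eigenpairs of R = [[1, rho], [rho, 1]] as derived above."""
    e1 = (1 / sqrt(2), 1 / sqrt(2))
    e2 = (1 / sqrt(2), -1 / sqrt(2))
    return (1 + rho, e1), (1 - rho, e2)

def check(rho):
    R = [[1.0, rho], [rho, 1.0]]
    for lam, e in eig_2x2_correlation(rho):
        # verify R e = lambda e componentwise
        for i in range(2):
            Re_i = R[i][0] * e[0] + R[i][1] * e[1]
            assert abs(Re_i - lam * e[i]) < 1e-12
    (l1, _), (l2, _) = eig_2x2_correlation(rho)
    # total variation = sum of eigenvalues = trace(R) = 2
    assert abs(l1 + l2 - 2.0) < 1e-12
    # generalized variance = product of eigenvalues = det(R) = 1 - rho^2
    assert abs(l1 * l2 - (1 - rho ** 2)) < 1e-12

check(0.7)
```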
But if you don't want to have a Google account: Chrome is really good. Much faster than FF (I can't run FF on either of the laptops here) and more reliable (it restores your previous session if it crashes with 100% certainty). And Chrome has a Personal Blocklist extension which does what you want. : ) Of course you already have a Google account but Chrome is cool : ) Guys, I feel a little defeated in trying to understand infinitesimals. I'm sure you all think this is hilarious. But if I can't understand this, then I'm yet again stalled. How did you guys come to terms with them, later in your studies? Do you know the history? Calculus was invented based on the notion of infinitesimals. There were serious logical difficulties found in it, and a new theory developed based on limits. In modern times, using some quite deep ideas from logic, a new rigorous theory of infinitesimals was created. @QED No. This is my question as best as I can put it: I understand that lim_{x->a} f(x) = f(a), but then to say that the gradient of the tangent curve is some value, is like saying that when x=a, then f(x) = f(a). The whole point of the limit, I thought, was to say, instead, that we don't know what f(a) is, but we can say that it approaches some value. I have a problem with showing that the limit of the following function $$\frac{\sqrt{\frac{3 \pi}{2n}} -\int_0^{\sqrt 6}(1-\frac{x^2}{6}+\frac{x^4}{120})^ndx}{\frac{3}{20}\frac 1n \sqrt{\frac{3 \pi}{2n}}}$$ is equal to $1$ as $n \to \infty$. @QED When I said, "So if I'm working with function f, and f is continuous, my derivative dy/dx is by definition not continuous, since it is undefined at dx=0." I guess what I'm saying is that (f(x+h)-f(x))/h is not continuous since it's not defined at h=0. @KorganRivera There are lots of things wrong with that: dx=0 is wrong. dy/dx - what's y? "dy/dx is by definition not continuous" it's not a function how can you ask whether or not it's continuous, ... etc.
In general this stuff with 'dy/dx' is supposed to help as some kind of memory aid, but since there's no rigorous mathematics behind it - all it's going to do is confuse people in fact there was a big controversy about it since using it in obvious ways suggested by the notation leads to wrong results @QED I'll work on trying to understand that the gradient of the tangent is the limit, rather than the gradient of the tangent approaches the limit. I'll read your proof. Thanks for your help. I think I just need some sleep. O_O @NikhilBellarykar Either way, don't highlight everyone and ask them to check out some link. If you have a specific user which you think can say something in particular feel free to highlight them; you may also address "to all", but don't highlight several people like that. @NikhilBellarykar No. I know what the link is. I have no idea why I am looking at it, what should I do about it, and frankly I have enough as it is. I use this chat to vent, not to exercise my better judgment. @QED So now it makes sense to me that the derivative is the limit. What I think I was doing in my head was saying to myself that g(x) isn't continuous at x=h so how can I evaluate g(h)? But that's not what's happening. The derivative is the limit, not g(h). @KorganRivera, in that case you'll need to be proving $\forall \varepsilon > 0,\,\,\,\, \exists \delta,\,\,\,\, \forall x,\,\,\,\, 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$ by picking some correct L (somehow) Hey guys, I have a short question a friend of mine asked me which I cannot answer because I have not learnt about measure theory (or whatever is needed to answer the question) yet. He asks what is wrong with \int_0^{2 \pi} \frac{d}{dn} e^{inx} dx when he applies Lebesgue's dominated convergence theorem, because apparently, if he first integrates and then derives, the result is 0 but if he first derives and then integrates it's not 0. Does anyone know?
@Mathphile I found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet and neither a reason why the expression cannot be prime for odd n, although there are far more even cases without a known factor than odd cases. @TheSimpliFire That´s what I´m thinking about, I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it, it is really "too elementary", but I like surprises, if they´re good. It is in fact difficult, I did not understand all the details either. But the ECM method is analogous to the p-1 method, which works well when there is a factor p such that p-1 is smooth (has only small prime factors). Brocard's problem is a problem in mathematics that asks to find integer values of n and m for which $n!+1=m^2$, where n! is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan. Brown numbers: Pairs of the numbers (n, m) that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers: (4,5), (5,11... $\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur when $n$ satisfies either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474. Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, $m\ne 10^k\pm1$ for $n>10$.
If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function. The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation} Thus $n!$ has $k$ zeroes for some $n\in(4k,\infty)$. Since $m=2\cdot5^{2^k}-1\pmod{10^k}$ and $m=2\cdot16^{5^k}-1\pmod{10^k}$ have at most $k$ digits, $m^2-1$ has at most $2k$ digits under the conditions in the Corollary. The Corollary therefore follows if $n!$ has more than $2k$ digits for $n>10$. From equation $(4)$, $n!$ has at least the same number of digits as $(4k)!$.
Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\end{equation} Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$ where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation} Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)^2}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation} Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Then after considerable algebra it is found that \begin{align*}f'(x)&=2x(8\pi x)^{\frac1{4x}}+\frac14(8\pi x)^{\frac1{4x}}(1-\ln(8\pi x))\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x+\frac18-\frac18\ln(8\pi x)\right)>0\end{align*} for $x>0$ as $\min\{x+\frac18-\frac18\ln(8\pi x)\}>0$ in the domain. Thus $f$ is monotonically increasing in $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the inequality in equation $(8)$ holds. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$ We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction - which is better) @TheSimpliFire Hey! with $4\pmod {10}$ and $0\pmod 4$ then this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have that $5m_1=2(m_2-m_1)$ which means $m_1$ is even.
We get $4\pmod {20}$ now :P Yet again a conjecture! Motivated by Catalan's conjecture and a recent question of mine, I conjecture that For distinct, positive integers $a,b$, the only solution to this equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5).$ It is of anticipation that there will be much fewer solutions for incr...
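A quick brute-force sanity check of the conjectured equation (my own, not from the chat): over a small search window of distinct positive integers, $(2,5)$ is indeed the only solution, since $2^5 - 5^2 = 32 - 25 = 7 = 2 + 5$.

```python
# Exhaustive check of a^b - b^a = a + b over distinct positive
# integers a, b up to a small bound (illustrative only; this of
# course proves nothing about the conjecture for all a, b).
def solutions(bound):
    found = []
    for a in range(1, bound + 1):
        for b in range(1, bound + 1):
            if a != b and a ** b - b ** a == a + b:
                found.append((a, b))
    return found
```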
The equation in the title won't make sense until halfway down. Let us suppose we are looking at some nonflat spacetime equipped with a metric $g_{\mu\nu}$. Furthermore our manifold admits a covering set of tetrads (vielbeins/framefields, whatever you want to call them) such that we can define a local “lab frame”. $$\eta_{ab}=e_{a}^{\mu}g_{\mu\nu}e_{b}^{\nu}$$ I'm curious here about two different points on a manifold. Since each local frame is Minkowskian, it makes sense to say that any two tetrad frames (say at points $A$ and $B$) can be related by a Lorentz transformation: $$\eta(B)_{cd}=\Lambda_{c}^{a}e_{a}^{\mu}(A)g_{\mu\nu}(A)e_{b}^{\nu}(A)\Lambda_{d}^{b}$$ The fact that the Minkowski metric is unchanged is hardly surprising; however, our $\Lambda$s now have to be functions of spacetime position, since different positions will be related to point $A$'s frame by different Lorentz transformations (whereas normally one uses a “global” Lorentz transformation). We could do the same thing with some vector $V^{\mu}$. $$V(B)^{b}=\left(\Lambda(x^{i})\right)_{a}^{b}e_{\mu}^{a}(A)V^{\mu}$$ $$e_{\mu}^{a}(x_{B})=\Lambda_{b}^{a}(x_{B})e_{\mu}^{b}$$ Suppose there is one finite region of our spacetime that appears Minkowskian. Then the tetrad would simply be the identity matrix within this region. However, we can still represent the tetrad at any other point by the position-dependent Lorentz transformation of this frame. Therefore it would appear that: $$e_{\mu}^{a}=\Lambda(x)_{b}^{a}\mathbf{1}_{\mu}^{b}=\Lambda(x)_{\mu}^{a}$$ $$g_{\mu\nu}=\Lambda(x)_{\mu}^{a}\eta_{ab}\Lambda(x)_{\nu}^{b}$$ Which implies we can represent the tetrad, and thus the gravitational field, by a position-dependent Lorentz transformation! It's probably obvious to the reader that there is no way to uniquely define such a transformation between points, which gives the whole setup a large degree of arbitrariness.
In particular one could follow two different paths between the points and one should very well expect to obtain two differing $\Lambda$'s. So the whole setup is then defined only up to an overall global Lorentz transformation. Rather than condemning our construction, this should be of great comfort! It means that we can locally annihilate $\Lambda(x)$ at any point, (and thus the gravitational field) by a choice of global Lorentz transformation. (very much in keeping with the equivalence principle) Note that since we are using the Lorentz group (a Lie group), we know we can apply Lorentz transformations in the form of rotations (usually denoted $\overrightarrow{J}$) and boosts (denoted $\overrightarrow{K}$). This allows us to write our general transformations as (I will omit constant factors just to unclutter notation) : $$\Lambda(x)=e^{i\overrightarrow{J}\cdot\overrightarrow{\theta(x)}+i\overrightarrow{K}\cdot\overrightarrow{\phi(x)}}$$ I thought it was pretty interesting. If you wanted to extend this to properly moving fields and whatnot, I imagine you'd want to extend the transformation to the whole of Poincare group. Note I've swept any talk about conformal changes to the metric under the rug, but I was curious to get another opinion about this setup before I take it to conformal changes. Did I screw this up? EDIT: I realize this setup might only apply to a family of geodesic observers (defined everywhere) ADDENDUM: I think I need to give an example: Suppose we push a test person out of the international space station and watch them fall to earth (supposing perfect vacuum all the way down). We could write their local frame at any position as a Lorentz transformation of our own frame on the ISS. mapping them all we could write the general Lorentz transformation as a function of position of the "test person" $\Lambda_{c}^{a}(\overrightarrow{r})$. 
$$\Lambda_{c}^{a}(\overrightarrow{r})\left\{ e_{a}^{\mu}\right\} _{ISS}=\left\{ e_{c}^{\mu}\right\} _{TestPerson}$$ Since we could take the ISS to be arbitrarily far from the earth, we could let $\left\{ e_{a}^{\mu}\right\} _{ISS}\longrightarrow\mathbf{1}_{a}^{\mu}$ very far away (where $\mathbf{1}_{a}^{\mu}$ is just the identity), which leaves us with simply: $$\Lambda_{c}^{a}(\overrightarrow{r})\left\{ \mathbf{1}_{a}^{\mu}\right\} _{ISS}=\Lambda_{c}^{\mu}(\overrightarrow{r})=\left\{ e_{c}^{\mu}\right\} _{TestPerson}$$ Written as a Lie group element (as above), the respective rotations and boosts would be a function of position. I think then this contains all of the information of our original tetrad (though it won't in general cases). I think this is just the equivalent of going from a global gauge transformation to a local one, if my mind's working right. Here we're going from a global Lorentz transformation to a local one. The former applying to special relativistic cases and the latter to more "general" relativistic cases.
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional... no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to work with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only) Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$, right? For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b, c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right. Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this on a classification of groups of order $p^2qr$. There order of $H$ should be $qr$ and it is present as $G = C_{p}^2 \rtimes H$. I want to know whether we can know the structure of $H$ that can be present? Like can we think $H=C_q \times C_r$ or something like that from the given data? When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities? When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$.
Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$. And when $|F|=pr$: in this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$. In this case how can I write $G$ using notation/symbols? Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$? First question: Then it is $G= F \rtimes (C_p \times C_q)$. But how do we write $F$? Do we have to think of all the possibilities for $F$ of order $pr$ and write $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$, etc.? As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how do we write $G$ using notation? There it is also mentioned that we can distinguish between 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q\mid(p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$. 
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all non-empty words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$ If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword There is good motivation for such a definition here So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$ It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$. Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straight-line homotopy, with the straight lines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative I don't know how to interpret this coarsely in $\pi_1(S)$ @anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and an offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, it's all economy of scale. @ParasKhosla Yes, I am Indian, and trying to get into some good masters program in math. 
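To make the rewriting procedure concrete, here is a short Python sketch of Dehn's algorithm. The example presentation is my own choice, not from the chat: for the free group $F(a,b)$ (which is hyperbolic), free reduction is the degenerate Dehn algorithm, with the $u_i$ ranging over the cancelling pairs and every $v_i$ empty.

```python
def dehn_word_problem(w, rules):
    """Repeatedly replace a longer subword u by the shorter v.

    `rules` is a list of (u, v) pairs with len(u) > len(v); for a genuine
    Dehn presentation this terminates, reaching "" exactly when w
    represents the trivial element.
    """
    changed = True
    while changed:
        changed = False
        for u, v in rules:
            i = w.find(u)
            if i != -1:
                w = w[:i] + v + w[i + len(u):]
                changed = True
                break
    return w == ""

# Toy example: the free group F(a, b), where free reduction is the
# (degenerate) Dehn algorithm.  Capital letters denote inverses.
FREE_RULES = [("aA", ""), ("Aa", ""), ("bB", ""), ("Bb", "")]

print(dehn_word_problem("abBA", FREE_RULES))   # trivial word -> True
print(dehn_word_problem("abab", FREE_RULES))   # non-trivial  -> False
```

Each replacement strictly shortens the word, which is why the loop terminates.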
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap... I can probably guess that they are using symmetries and permutation groups on graphs in this course. For example, orbits and studying the automorphism groups of graphs. @anakhro I have heard really good things about Palka. Also, if you do not worry about a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix! Got a simple question: I gotta find the kernel of the linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give the zero polynomial in this case @chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + c \implies G = C(1+x)e^{-x}$, which is not a polynomial for $C \neq 0$, so $G = 0$ and thus $P = ax + b$ could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
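The kernel claim from the chat is easy to verify symbolically; this is a sketch assuming SymPy is available, applying $F$ to a general cubic and forcing every coefficient of the result to vanish.

```python
import sympy as sp

x, a, b, c, d = sp.symbols('x a b c d')

# A general element of R_3[x] (polynomials of degree <= 3).
P = a*x**3 + b*x**2 + c*x + d

# The map from the chat: F(P) = x P''(x) + (x+1) P'''(x).
F = sp.expand(x*sp.diff(P, x, 2) + (x + 1)*sp.diff(P, x, 3))

# F(P) = 0 as a polynomial identity: every coefficient must vanish.
sol = sp.solve([sp.Eq(co, 0) for co in sp.Poly(F, x).all_coeffs()],
               [a, b, c, d])
print(sol)   # forces a = b = 0, leaving c, d free: ker F = {cx + d}
```

Only the cubic and quadratic coefficients are constrained, so the kernel is exactly the degree-at-most-1 polynomials, matching the answer $\ker(F)=\{ax+b\}$ above.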
Yeah, this software cannot be too easy to install; my installer is very professional looking, currently not tied into that code, but directs the user how to search for their MiKTeX and/or install it, and does a test LaTeX rendering. Somebody like Zeta (on codereview) might be able to help a lot... I'm not sure if he does a lot of category theory, but he does a lot of Haskell (not that I'm trying to conflate the two)... so he would probably be one of the better bets for asking for revision of code. He is usually on the 2nd monitor chat room. There are a lot of people on those chat rooms that help each other with projects. I'm not sure how many of them are adept at category theory though... still, this chat tends to emphasize a lot of small problems and occasionally goes off tangent. Your project is probably too large for an actual question on codereview, but there is a lot of github activity in the chat rooms. gl. In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966). It was also written down as the Fourier transform of $\hat f(z)=\prod_{m=1}^{\infty}(\cos\ldots$ Defined as the probability that $\sum_{n=1}^\infty2^{-n}\zeta_n$ will be less than $x$, where the $\zeta_n$ are chosen randomly and independently from the unit interval @AkivaWeinberger are you familiar with the theory behind Fourier series? anyway here's food for thought: for $f : S^1 \to \Bbb C$ square-integrable, let $c_n := \displaystyle \int_{S^1} f(\theta) \exp(-i n \theta) (\mathrm d\theta/2\pi)$, and $f^\ast := \displaystyle \sum_{n \in \Bbb Z} c_n \exp(in\theta)$. It is known that $f^\ast = f$ almost everywhere. (a) is $-^\ast$ idempotent? i.e. is it true that $f^{\ast \ast} = f^\ast$? @AkivaWeinberger You need to use the definition of $F$ as the cumulative function of the random variables. $C^\infty$ was a simple step, but I don't have access to the paper right now so I don't recall it. 
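The probabilistic description of the Fabius function above is easy to explore numerically. This is a Monte-Carlo sketch of my own (the truncation depth and sample size are arbitrary choices); it only checks two sanity points that follow directly from the definition.

```python
import random

def fabius_cdf_estimate(x, trials=40_000, depth=30, seed=1):
    """Monte-Carlo estimate of F(x) = P(sum_{n>=1} 2^{-n} zeta_n < x),
    zeta_n i.i.d. uniform on [0, 1].  The infinite sum is truncated at
    `depth` terms (truncation error < 2**-depth)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(rng.random() / 2**n for n in range(1, depth + 1))
        hits += (s < x)
    return hits / trials

# The random sum is symmetric about its mean 1/2 and always lies in [0, 1],
# so F(1/2) should be ~0.5 and F(1) should be 1.
est_half = fabius_cdf_estimate(0.5)
est_one = fabius_cdf_estimate(1.0)
print(est_half, est_one)
```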
> In mathematics, a square-integrable function, also called a quadratically integrable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite. I am having some difficulties understanding the difference between simplicial and singular homology. I am aware of the fact that they are isomorphic, i.e. the homology groups are in fact the same (and maybe this doesn't help my intuition), but I am having trouble seeing where in the setup they d... Usually it is a great advantage to consult the notes, as they tell you exactly what has been done. A book will teach you the field, but not necessarily help you understand the style in which the prof. (who creates the exam) creates questions. @AkivaWeinberger having thought about it a little, I think the best way to approach the geometry problem is to argue that the relevant condition (centroid is on the incircle) is preserved by similarity transformations, hence you're free to rescale the sides, and therefore the (semi)perimeter as well, so one may (for instance) choose $s=(a+b+c)/2=1$ without loss of generality. That makes a lot of the formulas simpler; e.g. the inradius is identical to the area. It is asking how many terms of the Euler–Maclaurin formula we need in order to compute the Riemann zeta function in the complex plane. $q$ is the upper summation index in the sum with the Bernoulli numbers. This appears to answer it in the positive: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right half-plane σ > 1 − k, for all k = 1, 2, 3, . . .."
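A sketch of that Euler–Maclaurin computation, using the standard expansion with $q$ Bernoulli-number correction terms (the choices $N=10$, $q=4$ below are arbitrary). The same finite sum also gives the analytic continuation for $\operatorname{Re}(s) > 1-2q$, which is the point of the quoted passage.

```python
from math import pi

# Bernoulli numbers B_2, B_4, B_6, B_8 (enough for q <= 4).
B2K = {1: 1/6, 2: -1/30, 3: 1/42, 4: -1/30}

def zeta_em(s, N=10, q=4):
    """Euler-Maclaurin approximation of the Riemann zeta function:
    zeta(s) ~ sum_{n<=N} n^-s + N^(1-s)/(s-1) - N^-s/2
              + sum_{k=1}^q B_2k/(2k)! * s(s+1)...(s+2k-2) * N^(-s-2k+1),
    valid (as an approximation) for Re(s) > 1 - 2q, s != 1."""
    total = sum(n**-s for n in range(1, N + 1))
    total += N**(1 - s)/(s - 1) - 0.5*N**-s
    for k in range(1, q + 1):
        num, fact = 1, 1
        for j in range(2*k - 1):           # rising product s(s+1)...(s+2k-2)
            num *= s + j
        for j in range(1, 2*k + 1):        # (2k)!
            fact *= j
        total += B2K[k]/fact * num * N**(-s - 2*k + 1)
    return total

print(abs(zeta_em(2) - pi**2/6))   # tiny: agrees with zeta(2) = pi^2/6
print(zeta_em(-1))                 # ~ -1/12, the analytically continued value
```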
it's claimed that the running time of QuickSort is $∼1.39\,n\log_2n$ I don't have the book handy, but it most certainly does not claim this. That figure is the result of a close analysis of a very specific cost measure (which "time" is not): the expected number of comparisons, if I remember correctly. The proof is given in the book, if I recall correctly. Quicksort's average case is about $2n\log_e n$. This is basically the same figure since $1.39 \approx \frac{2}{\log_2 e}$, the difference probably being a rounding artifact. From your comment, quoting Sedgewick/Wayne: In summary, you can be sure that the running time of algorithm 2.5 (quicksort) will be within a constant factor of $1.39n \lg n$ whenever it is used to sort n items. This only states that the (expected) running time is in $\Theta(n \lg n)$; this follows from the (expected) number of comparisons being what it is, plus the fact that comparison is a dominant operation in Quicksort. For what it's worth, the formulation "you can be sure" without context is misleading since it seems to ignore the worst case and the asymptotic nature of the result.
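The comparison-counting claim can be checked empirically. This is my own sketch, not from the book; note that $1.39\,n\lg n = 2\ln 2 \cdot n\lg n$ is only the leading term, so at modest $n$ the right yardstick is the exact expectation $2(n+1)H_n - 4n$ for a random permutation (first-element pivot, Lomuto partition).

```python
import math
import random

def quicksort_comparisons(a):
    """Sort a copy of `a` (first-element pivot, Lomuto partition),
    returning the number of element comparisons performed."""
    count = 0
    def sort(lo, hi):
        nonlocal count
        if hi - lo < 2:
            return
        pivot, i = a[lo], lo + 1
        for j in range(lo + 1, hi):        # hi - lo - 1 comparisons
            count += 1
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[lo], a[i - 1] = a[i - 1], a[lo]  # pivot into final place
        sort(lo, i - 1)
        sort(i, hi)
    a = list(a)
    sort(0, len(a))
    return count

random.seed(0)
n, trials = 500, 20
avg = sum(quicksort_comparisons(random.sample(range(n), n))
          for _ in range(trials)) / trials

# Exact expectation for a random permutation: 2(n+1)H_n - 4n ~ 2 n ln n,
# whose leading coefficient 2 ln 2 ~ 1.386 is where "1.39 n lg n" comes from.
exact = 2*(n + 1)*sum(1/k for k in range(1, n + 1)) - 4*n
print(avg, exact)   # the measured average tracks the expectation
```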
This kind of language use has often confused me. It is clearer to say the following: First Definition of Differentiability: Suppose we are given a function $g:\mathbb{R} \to \mathbb{R}$ and a number $t \in \mathbb{R}$. We say $g$ is differentiable at the point $t$ if the limit \begin{equation} \lim_{x \to t} \dfrac{g(x) - g(t)}{x-t} \end{equation} exists. In this case, we denote $g'(t)$ to be this limit, and we call $g'(t)$ "the derivative of $g$ at $t$." "Second" Definition of Differentiability: Suppose we are given a function $g:\mathbb{R} \to \mathbb{R}$ and a number $t \in \mathbb{R}$. We say $g$ is differentiable at the point $t$ if the limit \begin{equation} \lim_{h \to 0} \dfrac{g(t+h) - g(t)}{h} \end{equation} exists. In this case, we denote $g'(t)$ to be this limit, and we call $g'(t)$ "the derivative of $g$ at $t$." As mentioned in the image from Khan Academy, both these limit expressions are equivalent, which means these "two" definitions are really just different ways of saying the same thing. This is why I put the word 'second' in quotation marks above. Now, pay close attention to my choice of words in the definition. To define the concept of "differentiability of $g$ at $t$", we only need two pieces of information: the function $g$ and a number $t$. In the definition, there is no significance to the symbols "$x$" and "$h$"; these are just "dummy variables" for the limit. I could have just as well said $g$ is differentiable at $t$ if the limit\begin{equation}\lim_{\xi \to t} \dfrac{g(\xi) - g(t)}{\xi -t}\end{equation}exists. Or equivalently, if the limit\begin{equation}\lim_{\eta \to 0} \dfrac{g(t + \eta) - g(t)}{\eta}\end{equation}exists. Just to make sure the letter "$x$" or any other letters don't cause any confusion, consider the following question and answers: Question: Define what it means for a function $g$ to be differentiable at the point $x$. 
Answer 1: $g$ is differentiable at $x$ if $\lim\limits_{\mu \to x}\dfrac{g(\mu) - g(x)}{\mu - x}$ exists. Answer 2: $g$ is differentiable at $x$ if $\lim\limits_{@ \to 0}\dfrac{g(x + @) - g(x)}{@}$ exists. Both these answers are correct. Now, onto the second question. I hope you understand that the phrase $f(x) = e^x$ is a short way of saying "$f : \mathbb{R} \to \mathbb{R}$ the function, which takes a number $x$ as input and gives the number $e^x$ as output." We can express this as $f(\xi) = e^{\xi}$ or as $f(@) = e^@$, so once again, there is no significance attached to the symbol appearing inside the brackets, because it can be anything you like. You're being asked to write $f'(e)$ as a limit. So, we just apply the ("second") definition:\begin{align}f'(e) &= \lim_{h \to 0}\dfrac{f(e+h) - f(e)}{h} \\&= \lim_{h \to 0}\dfrac{e^{e+h} - e^e}{h}\end{align} Conclusion: The phrase "the derivative of the function $g$ at the point where $x=t$ ..." is just sloppy and misleading, it is more accurate to say something like "the derivative of the function $g$ at the point $t$ ...", which is what I said in the definitions above. I hope I've provided enough "weird" examples of notation with the use of $\xi$, $\eta$, @ that you understand the difference between a "dummy variable", and the actual point of interest.
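A quick numerical illustration of the limit just computed (my own sketch): the difference quotient of the "second" definition, evaluated at $t=e$ for small $h$, should approach $f'(e)=e^e$.

```python
from math import exp

def diff_quotient(f, t, h):
    """The quotient (f(t+h) - f(t))/h from the 'second' definition;
    as h -> 0 it tends to f'(t) when the derivative exists."""
    return (f(t + h) - f(t)) / h

f = exp                      # f(x) = e^x, so f'(t) = e^t for every t
e = exp(1)
approx = diff_quotient(f, e, 1e-6)
print(approx, exp(e))        # the quotient approaches f'(e) = e^e ~ 15.154
```

Note that `h` here plays exactly the role of the dummy variable $\eta$ (or `@`) in the definitions above: any symbol would do.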
Example 4-4: Wechsler Adult Intelligence Scale Here we have data on n = 37 subjects taking the Wechsler Adult Intelligence Test. This test is broken up into four different components: Information (Info) Similarities (Sim) Arithmetic (Arith) Picture Completion (Pic) The data are stored in five different columns. The first column is the ID number of the subjects, followed by the four component tasks in the remaining four columns. Download the txt file: wechslet.txt Using SAS These data may be analyzed using the SAS program shown below. Download the SAS file: wechsler.sas Walk through the procedures of the program by clicking on the "View Video Explanation" button. Just as in previous lessons, marking up a print out of the SAS program is also a good strategy for learning how this program is put together. The SAS output, (download below), gives the results of the data analyses. Because the SAS output is usually a relatively long document, printing these pages of output out and marking them with notes is highly recommended if not required!! Download the SAS output here: wechsler.lst Using Minitab View the video below to walk through how to produce the covariance matrix for the Wechsler Adult Intelligence Test data using Minitab. Analysis We obtain the following sample means. Variable Mean Information 12.568 Similarities 9.568 Arithmetic 11.486 Picture Completion 7.973 Variance-Covariance Matrix \(\textbf{S} = \left(\begin{array}{rrrr}11.474 & 9.086 & 6.383 & 2.071\\ 9.086 & 12.086 & 5.938 & 0.544\\ 6.383 & 5.938 & 11.090 & 1.791\\ 2.071 & 0.544 & 1.791 & 3.694 \end{array}\right)\) Here, for example, the variance for Information was 11.474. For Similarities it was 12.086. The covariance between Similarities and Information is 9.086. The total variance, which is the sum of the variances, comes out to be 38.344. 
The eigenvalues are given below: \(\lambda_1 = 26.245\), \(\lambda_2 = 6.255\), \(\lambda_3 = 3.932\), \(\lambda_4 = 1.912\) and finally at the bottom of the table we have the corresponding eigenvectors. They have been listed here below: \(\mathbf{e_1}=\left(\begin{array}{r}0.606\\0.605\\0.505\\0.110 \end{array}\right)\), \(\mathbf{e_2}=\left(\begin{array}{r}-0.218\\-0.496\\0.795\\0.274 \end{array}\right)\), \(\mathbf{e_3}=\left(\begin{array}{r}0.461\\-0.320\\-0.335\\0.757 \end{array}\right)\), \(\mathbf{e_4}=\left(\begin{array}{r}-0.611\\0.535\\-0.035\\0.582 \end{array}\right)\) For example, for the eigenvector corresponding to the eigenvalue 26.245, the elements are 0.606, 0.605, 0.505, and 0.110. Now, let's consider the shape of the 95% prediction ellipse formed by the multivariate normal distribution whose variance-covariance matrix is equal to the sample variance-covariance matrix we just obtained. Recall the formula for the half-lengths of the axes of this ellipse: each is equal to the square root of the eigenvalue times the critical value from a chi-square table. In this case, we need the chi-square with four degrees of freedom because we have four variables. For a 95% prediction ellipse, the chi-square critical value with four degrees of freedom is 9.49. For the first and longest axis of the 95% prediction ellipse, we substitute 26.245 for the largest eigenvalue, multiply by 9.49, and take the square root. We end up with a half-length of 15.782 as shown below: \begin{align} l_1 &= \sqrt{\lambda_1\chi^2_{4,0.05}}\\ &= \sqrt{26.245 \times 9.49}\\ &= 15.782 \end{align} The direction of the axis is given by the first eigenvector. Looking at this first eigenvector we can see large positive elements corresponding to the first three variables. In other words, large elements for Information, Similarities, and Arithmetic. 
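As a sanity check (a sketch of my own, not part of the original SAS/Minitab materials), the eigenvalues, total variance, and axis half-lengths above can be reproduced in a few lines of NumPy:

```python
import numpy as np

# Sample covariance matrix of the four Wechsler subtests (from above).
S = np.array([[11.474,  9.086,  6.383, 2.071],
              [ 9.086, 12.086,  5.938, 0.544],
              [ 6.383,  5.938, 11.090, 1.791],
              [ 2.071,  0.544,  1.791, 3.694]])

evals = np.linalg.eigh(S)[0][::-1]   # eigh: ascending; reverse to largest-first
print(evals.round(3))                # ~ [26.245  6.255  3.932  1.912]
print(S.trace().round(3))            # total variance 38.344 = sum of eigenvalues

chi2_4_05 = 9.49                     # chi-square(4) critical value, 95%
half_lengths = np.sqrt(evals * chi2_4_05)
print(half_lengths.round(3))         # ~ [15.78  7.70  6.11  4.26]
```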
This suggests that this particular axis points in the direction specified by \(e_{1}\); that is, increasing values of Information, Similarities, and Arithmetic. The half-length of the second-longest axis can be obtained by substituting 6.255 for the second eigenvalue, multiplying this by 9.49, and taking the square root. We obtain a half-length of about 7.7, or about half the length of the first axis. \begin{align} l_2 &= \sqrt{\lambda_2\chi^2_{4,0.05}}\\ &= \sqrt{6.255 \times 9.49}\\ &= 7.705 \end{align} So, if you were to picture this particular ellipse you would see that the second axis is about half the length of the first and longest axis. Looking at the corresponding eigenvector, \(e_{2}\), we can see that this particular axis points in the direction of increasing values of the third variable, Arithmetic, and decreasing values of the second variable, Similarities. Similar calculations can then be carried out for the third-longest axis of the ellipse as shown below: \begin{align} l_3 &= \sqrt{\lambda_3\chi^2_{4,0.05}}\\ &= \sqrt{3.932 \times 9.49}\\ &= 6.108 \end{align} This third axis has a half-length of 6.108, which is not much shorter than the second axis. It points in the direction of \(e_{3}\); that is, increasing values of Picture Completion and Information, and decreasing values of Similarities and Arithmetic. The shortest axis has a half-length of about 4.260 as shown below: \begin{align} l_4 &= \sqrt{\lambda_4\chi^2_{4,0.05}}\\ &= \sqrt{1.912 \times 9.49}\\ &= 4.260 \end{align} It points in the direction of \(e_{4}\); that is, increasing values of Similarities and Picture Completion, and decreasing values of Information. The overall shape of the ellipse can be obtained by comparing the lengths of the various axes. What we have here is basically an ellipse that is the shape of a slightly squashed football. We can also obtain the volume of the hyper-ellipse using the formula that was given earlier. 
Again, our critical value from the chi-square, if we are looking at a 95% prediction ellipse, with four degrees of freedom is given at 9.49. Substituting into our expression we have the product of the eigenvalues in the square root. The gamma function is evaluated at 2, and gamma of 2 is simply equal to 1. Carrying out the math we end up with a volume of 15,613.132 as shown below: \begin{align} \frac{2\pi^{p/2}}{p\Gamma\left(\frac{p}{2}\right)}(\chi^2_{p,\alpha})^{p/2}|\Sigma|^{1/2} &= \frac{2\pi^{p/2}}{p\Gamma\left(\frac{p}{2}\right)}(\chi^2_{p,\alpha})^{p/2}\sqrt{\prod_{j=1}^{p}\lambda_j} \\[10pt] &= \frac{2\pi^2}{4\Gamma(2)}(9.49)^2\sqrt{26.245 \times 6.255 \times 3.932 \times 1.912}\\[10pt] &= 444.429 \sqrt{1234.17086}\\[10pt] &= 15613.132\end{align}
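The volume computation can likewise be checked numerically (again a sketch of my own, not part of the course materials), plugging the four eigenvalues into the formula above with $p=4$ and $\chi^2_{4,0.05}=9.49$:

```python
from math import pi, gamma, sqrt

p = 4
chi2 = 9.49
eigenvalues = [26.245, 6.255, 3.932, 1.912]

prod = 1.0
for lam in eigenvalues:
    prod *= lam   # product of eigenvalues = |S|

# Volume of the 95% prediction hyper-ellipsoid (formula above):
# 2 pi^(p/2) / (p Gamma(p/2)) * (chi2)^(p/2) * sqrt(prod of eigenvalues).
# Note Gamma(p/2) = Gamma(2) = 1, matching the text.
volume = 2*pi**(p/2) / (p*gamma(p/2)) * chi2**(p/2) * sqrt(prod)
print(round(volume, 3))   # ~ 15613, agreeing with the hand computation
```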
Zero dimensional points do not take up space, so then wouldn't everything in the universe be literally empty? Or is there something that I'm missing? Although it's commonly said that fundamental particles are point particles, you need to be clear what this means. To measure the size of the particle to within some experimental error $d$ requires the use of a probe with a wavelength of $\lambda=d$ or less, i.e. with an energy of greater than around $hc/\lambda$. When we say particles are pointlike we mean that no matter how high the energy of your probe, or how small its wavelength, you will never measure a particle radius greater than your experimental limit $d$. That is, the particle will always appear pointlike no matter how precise your experiment is. But this does not mean that the particles are actually zero dimensional, infinite density, dots whizzing around. An elementary particle does not have a position in the way we think of a macroscopic object as having a position. It is always delocalised to some extent, i.e., it exists across a region of some non-zero volume. More precisely the probability of finding the particle is non-zero anywhere within that region. So an atom is not empty space. The usual analogy is that it is a fuzzy blob, and actually that's not a bad metaphor. If we take any small volume $\mathrm dV$ within the atom then the probability of finding the electron in that region is given by: $$ P = \int \psi^*\psi\,\mathrm dV $$ where $\psi$ is the wavefunction describing the electron in the atom. And since the electron's charge is distributed according to this probability density, the charge density varies smoothly throughout the atom. It is important to be clear that this is not just some time average due to the electron whizzing about the atom very fast. It is not the case that the electron has a precise position in the atom and our probability is some time average. The electron genuinely has no position in the usual macroscopic sense. 
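To make the probability integral $P=\int\psi^*\psi\,\mathrm dV$ concrete, here is a small sketch; the hydrogen 1s state is my assumption for illustration (the answer above does not fix a particular atom or state). It integrates the radial probability density numerically and compares with the closed form.

```python
from math import exp, pi

def prob_within(R, a=1.0, steps=20_000):
    """P = integral of psi* psi dV over the ball r < R for the hydrogen
    1s state psi = exp(-r/a) / sqrt(pi a^3), via the trapezoid rule."""
    dr = R / steps
    total = 0.0
    for i in range(steps + 1):
        r = i * dr
        dens = exp(-2*r/a) / (pi*a**3) * 4*pi*r**2   # |psi|^2 * 4 pi r^2
        w = 0.5 if i in (0, steps) else 1.0
        total += w * dens * dr
    return total

# Closed form for comparison: P(r < R) = 1 - e^{-2x}(1 + 2x + 2x^2), x = R/a.
x = 1.0
print(prob_within(1.0))                    # ~ 0.3233: only ~32% of the
print(1 - exp(-2*x)*(1 + 2*x + 2*x**2))    # probability lies inside r = a
```

So even inside one Bohr radius the "fuzzy blob" holds only about a third of the electron's probability, which is the sense in which the atom is smoothly filled rather than empty.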
Because of the Pauli exclusion principle, it's extremely difficult to compress atomic matter beyond a certain density. It's not impossible, because there are always higher-energy electron states available, but there's a very strong force opposing it (called electron degeneracy pressure). This is what it means for space to be full. If you define "empty space" in such a way that atoms are empty space, then atoms are empty space, but also the notion of "empty space" becomes useless because all space is empty. The idea of empty and occupied space long predates the modern understanding of fundamental particles. The job of science is to explain why the world is the way it is—for example, why you can't walk through walls—not to give new meanings to existing words. Yes, elementary particles such as electrons and quarks (inside protons) are point-like or at least, their internal structure is incomparably smaller than the size of the atom. So the atom is mostly empty space. However, that doesn't mean that atoms may penetrate each other. Matter is impenetrable because of a combination of: the uncertainty principle, which says that the electrons can't be simultaneously sitting at/near the nucleus and have a small velocity (and kinetic energy), so the typical distance of an electron from a nucleus is finite (comparable to what is then interpreted as the size of the atom); and the Pauli exclusion principle, which says that electrons can't occupy the same state. For this reason, even though the space in the atom is "empty", it is not possible for many electrons to occupy the same space. For those reasons, atoms, although they're empty space, don't allow other atoms to overlap with them – and that's why atoms always "push" on other nearby atoms. A charged particle like the electron may be point-like (of radius zero), but it is "long-handed", as it is "felt" far away. In this sense it is not so "point-like". 
What we intuitively think of as "solid objects" are actually electromagnetic force-fields repelling each other. So you are correct; atoms are 'empty' in that they contain no solid objects or things. On the other hand, they are 'full' of a basic force field which, in the aggregate, on a macro-scale, creates the illusion of 'solidity' that is what we perceive to be solid objects. Another issue is that the forces between elementary particles in an atom introduce some characteristic length scales. For example, although quarks are possibly point particles, the protons and neutrons in which they reside are about 2 femtometres wide because the forces between quarks prevent further compression. This gives nuclei a femtometre-scale size, but an atom is picometres wide due to the orbital radius of electrons. An atom therefore functions like a sphere with a certain chemical valency. In each case, the occupied space is filled with a force field of a certain mass-energy density.
To use a simple example, assume that consumer $i$ maximizes $$U(x_i,y_i,I_i) = \alpha\ln x_i +(1-\alpha)\ln y_i + (\bar I_i-p_xx_i -p_yy_i)\\s.t. \bar I_i \geq p_xx_i +p_yy_i$$ In other words, he may spend all his income on the two goods (but not more), or he may keep some for other goods, not modeled here. Due to the inequality in the constraint we need to use a Karush-Kuhn-Tucker non-negative multiplier, rather than the usual Lagrange multiplier. Assuming a fixed income, the Lagrangian function is $$\Lambda = \alpha\ln x_i +(1-\alpha)\ln y_i + (\bar I_i-p_xx_i -p_yy_i) +\lambda_i(\bar I_i-p_xx_i -p_yy_i)$$ The first-order conditions are $$\frac {\alpha}{x_i} - (1+\lambda_i)p_x \leq 0,\;\; \frac {1-\alpha}{y_i} - (1+\lambda_i)p_y \leq 0$$ $$\lambda_i(\bar I_i-p_xx_i -p_yy_i) = 0 ,\; x_i\cdot \left(\frac {\alpha}{x_i} - (1+\lambda_i)p_x\right)=0, \; y_i\cdot \left(\frac {1-\alpha}{y_i} - (1+\lambda_i)p_y\right) =0$$ It follows that at the optimum both goods will be demanded in positive quantities, which in turn requires that the first derivatives be set equal to zero. This gives us $$\frac {\alpha}{p_xx_i} = \frac {1-\alpha}{p_yy_i} \implies x_i^D=\frac {\alpha}{1-\alpha}\frac {p_y}{p_x}y_i^D \tag{1}$$ from which we can obtain the optimal relation for the expenditure $E_i$ on the two goods $$E_i^* = \frac {1}{1+\lambda_i^*} , \;\; E_i^* \leq \bar I_i$$ Since $\lambda_i \geq 0 \implies \max E_i^* \leq 1$, if it so happens that $\bar I_i >1$ then not all income will be spent on the two goods, the constraint won't be binding, and so $\lambda_i^* =0$. (Is $\max E_i^* \leq 1$ a strange result?) On the other hand, if $\bar I_i \leq 1$ then we will necessarily have $E_i^* = \bar I_i$, and $\lambda_i^* = (1-\bar I_i)/\bar I_i$. In either case, optimal relation $(1)$ remains valid. 
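The two KKT cases collapse into the closed form "spend $E^*=\min(\bar I,1)$, split in proportions $\alpha:(1-\alpha)$". A brute-force numerical check of that solution (my own sketch; the parameter values are arbitrary):

```python
from math import log

def demands(alpha, px, py, I):
    """Closed-form KKT solution derived above: total spending
    E* = min(I, 1), split across the goods as alpha : (1 - alpha)."""
    E = min(I, 1.0)
    return alpha*E/px, (1 - alpha)*E/py

def utility(xv, yv, alpha, px, py, I):
    return alpha*log(xv) + (1 - alpha)*log(yv) + (I - px*xv - py*yv)

# Grid search: no feasible bundle should beat the KKT bundle.
alpha, px, py, I = 0.3, 2.0, 1.5, 2.0       # here I > 1: slack budget case
xs, ys = demands(alpha, px, py, I)
best = utility(xs, ys, alpha, px, py, I)
N = 400
for i in range(1, N):
    for j in range(1, N):
        xv, yv = i/N, j/N
        if px*xv + py*yv <= I:
            assert utility(xv, yv, alpha, px, py, I) <= best + 1e-9
print(xs, ys, px*xs + py*ys)   # spending E* = 1 < I, as predicted
```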
So, assuming identical consumers with respect to preferences (not necessarily with respect to income), aggregating $(1)$ we obtain the market-level relation $$ X^D=\frac {\alpha}{1-\alpha}\frac {p_y}{p_x}Y^D \tag {2}$$ At equilibrium, we have $X^D = X^S,\; Y^D = Y^S \tag{3}$. Equations $(2)$ and $(3)$ hold for any supply. We now change the supply of $X$ but leave the supply of $Y$ unchanged. Indexing the initial situation by $0$ and the second situation by $1$, they are described by $$ X^D_0=\frac {\alpha}{1-\alpha}\frac {p_{y0}}{p_{x0}}Y^D_0,\;\;X^D_0 = X^S_0,\;\; Y^D_0 = Y^S_0 \tag {4}$$ $$ X^D_1=\frac {\alpha}{1-\alpha}\frac {p_{y1}}{p_{x1}}Y^D_1,\;\;X^D_1 = X^S_1>X^S_0,\;\; Y^D_1 = Y^S_1= Y^S_0\tag {5}$$ Then we have $$X^D_1 - X^D_0 = \frac {\alpha}{1-\alpha}\frac {p_{y1}}{p_{x1}}Y^D_1 - \frac {\alpha}{1-\alpha}\frac {p_{y0}}{p_{x0}}Y^D_0 $$ and using the various relations $$\implies X^S_1 - X^S_0 = \frac {\alpha}{1-\alpha}\frac {p_{y1}}{p_{x1}}Y^S_1 - \frac {\alpha}{1-\alpha}\frac {p_{y0}}{p_{x0}}Y^S_0 >0 $$ $$\implies \frac {\alpha}{1-\alpha}Y^S_0\cdot \left[\frac {p_{y1}}{p_{x1}}-\frac {p_{y0}}{p_{x0}}\right] >0$$ $$\implies \frac {p_{y1}}{p_{x1}}-\frac {p_{y0}}{p_{x0}} >0 \implies \frac {p_{y1}}{p_{y0}}>\frac {p_{x1}}{p_{x0}} \tag{6}$$ $(6)$ tells us that if the price of $Y$ falls, it will certainly fall less than the price of $X$ — but only in proportional terms. It appears that in this benchmark example, we cannot say anything about the price changes in terms of levels, as the OP asks about. If the levels of the two prices are very different we can easily have a smaller proportional fall for $p_y$, while at the same time a higher fall in monetary units. Set for example $p_{y0} = 100, p_{x0} = 1$, and assume that the price of $Y$ falls only by $10$% while the price of $X$ falls by $20$%. In turn, the levels of the prices depend also on the magnitudes of the supplies of the two goods, which are treated as exogenous here.
Your confusion really just comes down to understanding the notation that is widely used for partial derivatives. For simplicity, I'll restrict the discussion to a system with one coordinate degree of freedom $x$. In this case, the Lagrangian is a real-valued function of two real variables which we suggestively label by the symbols $x$ and $\dot x$. Mathematically, we would write $L:U\to\mathbb R$ where $U\subset \mathbb R^2$. Let's consider the simple example $$ L(x, \dot x) = ax^2+b\dot x^2$$ When we write the expression $$ \frac{\partial L}{\partial \dot x}(x, \dot x)$$ this is an instruction to differentiate the function $L$ with respect to its second argument (because we labeled the second argument $\dot x$) and then to evaluate the resulting function on the pair $(x, \dot x)$. But we just as well could have written $$ \partial_2L(x, \dot x)$$ to represent the same expression. Both of these expressions simply mean that we imagine holding the first argument of the function constant, and we take the derivative of the resulting function with respect to what remains. In the case above, this therefore means that $$ \frac{\partial L}{\partial\dot x}(x, \dot x) = 2b\dot x$$ because $x$ labels the first argument, and taking a partial derivative with respect to the second argument means that we treat $x$ like a constant whose derivative is therefore $0$. It is in this sense that the partial of $x^2$ with respect to $\dot x$ is zero. So to recap, when we are taking these derivatives, we just keep in mind that the symbols $x$ and $\dot x$ are just labels for the different arguments of the Lagrangian. You might ask, however, "if $x$ and $\dot x$ are just labels, then what relation do they have to position and velocity?" 
The answer is that after we have treated them as labels for the arguments of $L$ in order to take the appropriate derivatives, we then evaluate the resulting expressions at $(x(t), \dot x(t))$, the position and velocity of a curve at time $t$, to obtain equations of motion. For example, if you take the example of $L$ that I started with, we get$$ \frac{\partial L}{\partial x}(x, \dot x) = 2 ax, \qquad \frac{\partial L}{\partial \dot x}(x, \dot x) = 2b\dot x$$now we evaluate these expressions at $(x(t), \dot x(t))$ to obtain$$ \frac{\partial L}{\partial x}(x(t), \dot x(t)) = 2 ax(t), \qquad \frac{\partial L}{\partial \dot x}(x(t), \dot x(t)) = 2b\dot x(t)$$so that the Euler-Lagrange equations become$$ 0=\frac{d}{dt}\left[\frac{\partial L}{\partial \dot x}(x(t), \dot x(t))\right] - \frac{\partial L}{\partial x}(x(t), \dot x(t))=\frac{d}{dt}(2b\dot x(t)) - 2ax(t)$$which gives$$ b\ddot x(t) = a x(t)$$Once you understand all of this, you can (and should) dispense with the long-winded notation I used here for illustrative purposes, and you should make no error in using the abbreviated notation in your original post.
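The same computation can be reproduced symbolically (a sketch of my own, assuming SymPy is available). SymPy treats $x(t)$ and $\dot x(t)$ as independent when differentiating, which is exactly the "labels for arguments" point made above.

```python
import sympy as sp

t, a, b = sp.symbols('t a b')
x = sp.Function('x')(t)

# L(x, xdot) = a x^2 + b xdot^2, written directly on the curve (x(t), x'(t)).
L = a*x**2 + b*sp.diff(x, t)**2

# Euler-Lagrange: d/dt (dL/d xdot) - dL/dx = 0.  Differentiating with
# respect to Derivative(x(t), t) holds x(t) fixed, and vice versa.
dL_dxdot = sp.diff(L, sp.diff(x, t))
dL_dx = sp.diff(L, x)
EL = sp.diff(dL_dxdot, t) - dL_dx

print(sp.simplify(EL))   # 2 b x''(t) - 2 a x(t), i.e. b x'' = a x
```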
Title: Ejs Open Source Charge Particle in Magnetic Field B Java Applet in 3D Post by: lookang on October 04, 2010, 02:04:38 pm Ejs Open Source Charge Particle in Magnetic Field B Java Applet in 3D reference: http://www.phy.ntnu.edu.tw/ntnujava/index.php?topic=1800.msg7327#msg7327 Created by prof Hwang Modified by Ahmed http://www.phy.ntnu.edu.tw/ntnujava/index.php?topic=1450msg5484;topicseen#msg5484 Created by prof Hwang Charge In B-Field This 3D Ejs Charge particle In B-Field model allows the user to simulate a moving charged particle in two identical uniform magnetic fields separated by a zero magnetic field gap. A charge moving in a magnetic field experiences a magnetic force given by the Lorentz force law $\vec{F}=q\,\vec{v}\times\vec{B}$, with magnitude $F = qvB\sin\theta$, where theta specifies the angle between the velocity vector v and the magnetic field B. In this simulation, the velocity and B-field are perpendicular (theta = 90 degrees) and the force is maximum and perpendicular to both v and B as predicted by $\vec{v}\times\vec{B}$. You can adjust the magnitude of the magnetic field B, the mass m and charge q of the charged particles. The slider at the top controls the width of the field free region (it is a percentage of half the window width). The magnetic field is assumed to be uniform $Bz\hat{z}$ inside the magnet region, and the field is zero outside the boundary. You can change the location and velocity of the charged particle with mouse drag and drop or with sliders. Enjoy! [ejsapplet] activities adapted from http://www.opensourcephysics.org/items/detail.cfm?ID=8984 Charge in Magnetic Field Model written by Fu-Kwun Hwang edited by Robert Mohr and Wolfgang Christian 1 When $Bz\hat{z}$ is positive and the charge particle is completely inside the $Bz\hat{z}$ field region, which way do positively charged particles circle (clockwise or counter-clockwise as viewed from the top looking down)? 
Use Fleming's left-hand rule (thumb: force, first finger: B field, second finger: current i) or the right-hand cross-product rule $\vec{F}=q\,\vec{v}\times\vec{B}$ to determine whether the field points into or out of the screen. 2 Explain why particles travelling in the region without a B field travel in a straight path. 3 Charged particles that remain in the uniform magnetic field (orange field vectors) undergo uniform circular motion. Why? What provides the centripetal force? 4 Design an experiment to investigate systematically, tabulating the data, the effects of varying the particle mass m and charge q. 5 Do they experience the same force F? 6 What accounts for particles moving in circles of different radii (for the ones that stay in the uniform magnetic field)? 7 How can you change the different parameters to decrease the radius? Explain why each change results in a smaller radius. 8 Challenging (optional): If you have EJS installed, modify this model to simulate a cyclotron. http://hyperphysics.phy-astr.gsu.edu/hbase/magnetic/cyclot.html It may be useful to look at the Model -> Initialization page to see how the initial position and velocity of the particle(s) are set, and at the Model -> Custom page to see the equations of motion for a particle in the magnetic field and in the gap region. Title: Re: Ejs Open Source Charge Particle in Magnetic Field B Java Applet in 3D Post by: lookang on October 04, 2010, 02:07:56 pm changes made 1. color scheme 2 magnet NS text and position 3 B field visuals start at the edge of the magnet for ease of associating the influence of F = q.v^B, where ^ is the cross product.
4 rearrange the bottom panel 5 add z and vz into the evolution page and values display for http://link.aip.org/link/?AJP/65/726/1 the journal article has this student learning challenge (Bagno & Eylon, 1997). The question in the paper is: The velocity of a charged particle moving in a magnetic field is always perpendicular to the direction of the field. The responses: 37% think it is true. The reasons and interviews analyzed indicated that the causes are: a. Recitation of a formula: v, B and F are always perpendicular according to the left-hand or right-screw law (81%) b. No reason (19%) reference: Bagno, E., & Eylon, B.-S. (1997). From problem solving to a knowledge structure: An example from the domain of electromagnetism. American Journal of Physics, 65(8), 726-736. doi: 10.1119/1.18642 My thoughts: the answer is false. The motion could be a superposition of uniform velocity and circular motion, so there is an angle ≠ 90° between v and B, much like a helical path. 6 add Force display value F = q.v^B 7 activate the 3-axes coordinate system for ease of communicating and associating motion with the x, y, z directions other good resources: http://www.phy.ntnu.edu.tw/ntnujava/index.php?topic=1431.0 Charged particle motion in static Electric/Magnetic field by Fu-Kwun Hwang http://www.phy.ntnu.edu.tw/ntnujava/index.php?topic=36 Charged particle motion in E/B Field JDK version by Fu-Kwun Hwang http://www.opensourcephysics.org/items/detail.cfm?ID=8984 Charge in Magnetic Field Model written by Fu-Kwun Hwang edited by Robert Mohr and Wolfgang Christian http://www.opensourcephysics.org/items/detail.cfm?ID=9997 Charge Trajectories in 3D Electrostatic Fields Model written by Andrew Duffy http://www.opensourcephysics.org/items/detail.cfm?ID=8996 E x B Trajectory Model written by Anne Cox http://www.compadre.org/osp/items/detail.cfm?ID=8984 Charge in Magnetic Field Model written by Fu-Kwun Hwang edited by Robert Mohr and Wolfgang Christian Title: Re: Ejs Open Source Charge Particle in Magnetic Field B Java
Applet in 3D Post by: Fu-Kwun Hwang on October 05, 2010, 10:08:36 am What is the trajectory in your mind if there is an electron inside a vacuum with a magnetic field? Assume the energy of the electron is 1 keV: try to calculate its velocity. Assume the magnetic field is 1 kG (kilogauss) = 0.1 T (tesla) and the velocity is perpendicular to the magnetic field: try to calculate its radius and frequency. With the above calculated data: try to think of the motion of the electron. If the velocity of the electron is not perpendicular: assume 1% of the velocity is parallel to the magnetic field. What do you think the average motion of the electron will look like? Title: Re: Ejs Open Source Charge Particle in Magnetic Field B Java Applet in 3D Post by: lookang on October 05, 2010, 10:38:01 am What is the trajectory in your mind if there is an electron inside a vacuum with a magnetic field? a beam of electrons in a helix? Assume the energy of the electron is 1 keV: try to calculate its velocity. 0.5*m*v^2 = E 0.5*9.1*10^-31*v^2 = 1*10^3*1.6*10^-19 v = 1.88*10^7 m/s assume F = m*v^2/r is valid since B and v are perpendicular B*v*q = m*v^2/r, so r = m*v/(B*q) = (9.1*10^-31*1.88*10^7)/(0.1*1.6*10^-19) r = 1.07*10^-3 m assume a circular path 2*pi*r / T = v T = 3.57*10^-10 s With the above calculated data: try to think of the motion of the electron. a beam of electrons in a helix moving away from the circular axis? If the velocity of the electron is not perpendicular: assume 1% of the velocity is parallel to the magnetic field. like a beam in the vz direction? a beam of electrons in helical motion moving away from the circular axes x and y? What do you think the average motion of the electron will look like? Am I visualizing this correctly? (http://www.mdahlem.net/img/astro/elgyro.jpg) http://www.thunderbolts.info/forum/phpBB3/viewtopic.php?p=27760&sid=9a745d78c23d14d82acc194a8389c7a2 not sure whether this is what you mean ;D Title: Re: Ejs Open Source Charge Particle in Magnetic Field B Java Applet in 3D Post by: Fu-Kwun Hwang on October 05, 2010, 10:17:50 pm Thinking about the magnitudes of physical quantities is very important when we study physics. Did you notice that the radius is very small: about $10^{-3}$ m. And the period of one revolution is very small, too: $T=3.57\times 10^{-10}$ s. So the electron will make almost $3\times 10^9$ turns in one second. There is no way you can observe such a trajectory with your eye or an ordinary device. And the velocity is very large: $v=1.88\times 10^7$ m/s. If only 1% of the velocity component is along the field line, the electron will move along the field line with speed $u\approx 2\times 10^5$ m/s $\approx 200$ km/s. So on average, the electron will move along the field line. That is also how electrons move between the south pole and the north pole -- following the field lines of the earth's magnetic field. When the magnetic field is very strong, it will force the electron to move back -- that needs more physics analysis. (or check out Helmholtz coil / particle trapped in magnetic mirror field (http://www.phy.ntnu.edu.tw/ntnujava/index.php?topic=182.0))
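The back-of-envelope numbers in this exchange can be checked with a short stdlib-Python sketch (constants rounded as in the thread; note the speed works out to roughly $1.9\times10^{7}$ m/s and the gyroradius to roughly a millimetre):

```python
from math import pi, sqrt

# Electron in a uniform B field (SI units; rounded constants)
m_e = 9.1e-31           # electron mass, kg
q   = 1.6e-19           # elementary charge, C
E_k = 1e3 * 1.6e-19     # 1 keV converted to joules
B   = 0.1               # 1 kG = 0.1 T

v = sqrt(2 * E_k / m_e)      # from (1/2) m v^2 = E_k   -> about 1.9e7 m/s
r = m_e * v / (q * B)        # gyroradius, from q v B = m v^2 / r -> about 1.1e-3 m
T = 2 * pi * m_e / (q * B)   # cyclotron period, independent of v -> about 3.6e-10 s
```

Since $T = 2\pi m/(qB)$ does not depend on the speed, the period quoted in the thread is robust even if the velocity is mistranscribed by a power of ten.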
that the electron and the proton are a particle pair This may appear inconsistent with current theory, as it is widely assumed that particle pairs created in a single event must have opposite charge and equal mass (matter/antimatter). This assumption turns out to be wrong. We shall see that the difference in mass between the electron and the proton is due to mass defect, the same kind of mass defect as seen in heavy nuclei, which is well understood. The electron is trapped in its own negative electric potential and, despite having the same mass as the proton, appears to the observer as a smaller particle. It was obvious from these assumptions that the observer's potential (ground potential) had to lie somewhere between the electrical potential of the proton and the potential of the electron, and the reason is simply that we know of no charge more positive than the proton and likewise no charge more negative than the electron; ergo, everything else must lie in between. This inspired me to go after the equation describing the relationship between electron potential, ground potential and proton potential; it was tricky, but the solution presented itself to me within a couple of days of working on it. \[ a (\gamma) = \frac{1}{2} (c-b) \] where 'a' is the electron potential, 'b' is the ground potential and 'c' is the proton potential. The problem here is \(\gamma\) being the same kind of \(\gamma\) as in Einstein's relativity, being velocity dependent, so how does one define gamma? After some further thinking it became apparent that four-velocity was a function of potential, and since the proton represented the absolute maximum potential, it could be considered a physical constant in the same way as the speed of light, so the full equation now became: \[ a = \frac{(c-b)}{2} \sqrt{1-\frac{b^2}{c^2}} \] So here we have the first law of ground potential, a very simple and elegant statement defining ground potential for the first time.
Solving the above equation with known values for electron and proton potential gives a ground potential of 930 million volts. I have chosen volts as the ideal unit to work in, because it is the SI unit which moves one elementary charge 1 meter in 1 second, so it is the perfect unit to define the potentials of electrons and protons. A proton which has a mass of \(\frac{938 MeV}{c^2}\) contains 938 MeV of energy which divided by one elementary charge gives a potential of 938 million volts. Likewise the electron can be said to have 0.511 million volts potential. This first law of GP shall be the law upon which all other laws shall rest. Steven
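Taking the stated equation at face value, the quoted numerical solution can be checked by simple bisection (a sketch; the bracket is my own choice, and units are millions of volts as in the text):

```python
from math import sqrt

# Solve the author's equation  a = (c - b)/2 * sqrt(1 - b^2/c^2)  for b,
# with c = 938 (proton) and a = 0.511 (electron), in millions of volts.
c, a = 938.0, 0.511

def f(b):
    return (c - b) / 2 * sqrt(1 - b**2 / c**2) - a

lo, hi = 900.0, 937.0   # bracket chosen by inspection: f(900) > 0 > f(937)
for _ in range(80):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:   # root lies in the lower half
        hi = mid
    else:                     # root lies in the upper half
        lo = mid
# mid converges to roughly 930 (million volts), matching the figure in the text
```

This only verifies the arithmetic of the stated equation, not the physical claims around it.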
ä is in the extended latin block and n is in the basic latin block so there is a transition there, but you would have hoped \setTransitionsForLatin would have not inserted any code at that point as both those blocks are listed as part of the latin block, but apparently not.... — David Carlisle 12 secs ago @egreg you are credited in the file, so you inherit the blame :-) @UlrikeFischer I was leaving it for @egreg to trace but I suspect the package makes some assumptions about what is safe; it offers the user "enter" and "exit" code for each block but xetex only has a single insert, the interchartoken at a boundary. The package isn't clear about what happens at a boundary if the exit of the left class and the entry of the right are both specified, nor if anything is inserted at boundaries between blocks that are contained within one of the meta blocks like latin. Why do we downvote to a total vote of -3 or even lower? Weren't we a welcoming and forgiving community with the convention to only downvote to -1 (except for some extreme cases, like e.g., worsening the site design in every possible aspect)? @Skillmon people will downvote if they wish, and given that the rest of the network regularly downvotes, lots of new users will not know of or agree with a "-1" policy. I don't think it was ever really that regularly enforced, just that a few regulars regularly voted for bad questions to top them up if they got a very negative score. I still do that occasionally if I notice one. @DavidCarlisle well, when I was new there was like never a question downvoted to more than (or less?) -1. And I liked it that way. My first question on SO got downvoted to -infty before I deleted it and fixed my issues on my own. @DavidCarlisle I meant the total. Still the general principle applies: when you're new and your question gets downvoted too much this might give the wrong impression.
@DavidCarlisle oh, subjectively I'd downvote that answer 10 times, but objectively it is not a good answer and might get a downvote from me, as you don't provide any reasoning for that, and I think that there should be a bit of reasoning with the opinion based answers, some objective arguments why this is good. See for example the other Emacs answer (still subjectively a bad answer), that one is objectively good. @DavidCarlisle and that other one got no downvotes. @Skillmon yes but many people just join for a while and come from other sites where downvoting is more common so I think it is impossible to expect there is no multiple downvoting, the only way to have a -1 policy is to get people to upvote bad answers more. @UlrikeFischer even harder to get than a gold tikz-pgf badge. @cis I'm not in the US but.... "Describe" while it does have a technical meaning close to what you want is almost always used more casually to mean "talk about", I think I would say "Let k be a circle with centre M and radius r" @AlanMunn definitions.net/definition/describe gives a Webster's definition of to represent by drawing; to draw a plan of; to delineate; to trace or mark out; as, to describe a circle by the compasses; a torch waved about the head in such a way as to describe a circle If you are really looking for alternatives to "draw" in "draw a circle" I strongly suggest you hop over to english.stackexchange.com, create an account there and ask ... at least the number of native speakers of English will be bigger there and the gamification aspect of the site will ensure someone will rush to help you out. Of course there is also a chance that they will repeat the advice you got here: to use "draw". @0xC0000022L @cis You've got identical responses here from a mathematician and a linguist. And you seem to have an idea that because a word is informal in German, its translation in English is also informal. This is simply wrong.
And formality shouldn't be an aim in and of itself in any kind of writing. @0xC0000022L @DavidCarlisle Do you know the book "The Bronstein" in English? I think that's a good example of archaic mathematician language. But it is possibly still harder. Probably depends heavily on the translation. @AlanMunn I am very well aware of the differences of word use between languages (and my limitations in regard to my knowledge and use of English as a non-native speaker). In fact, words in different (related) languages sharing the same origin are kind of a hobby of mine. Needless to say that more than once the contemporary meaning didn't match 100%. However, your point about formality is well made. A book - in my opinion - is first and foremost a vehicle to transfer knowledge. No need to complicate matters by trying to sound ... well, overly sophisticated (?) ... The following MWE with showidx and imakeidx: \documentclass{book} \usepackage{showidx} \usepackage{imakeidx} \makeindex \begin{document} Test\index{xxxx} \printindex \end{document} generates the error: ! Undefined control sequence. <argument> \ifdefequal{\imki@jobname }{\@idxfile }{}{... @EmilioPisanty ok I see. That could be worth writing to the arXiv webmasters as this is indeed strange. However, it's also possible that the publishing of the paper got delayed; AFAIK the timestamp is only added later to the final PDF. @EmilioPisanty I would imagine they have frozen the epoch settings to get reproducible pdfs, not necessarily that helpful here but..., anyway it is better not to use \today in a submission as you want the authoring date not the date it was last run through tex and yeah, it's better not to use \today in a submission, but that's beside the point - a whole lot of arXiv eprints use the syntax and they're starting to get wrong dates @yo' it's not that the publishing got delayed.
arXiv caches the pdfs for several years but at some point they get deleted, and when that happens they only get recompiled when somebody asks for them again and, when that happens, they get imprinted with the date at which the pdf was requested, which then gets cached. Do any of you on linux have issues running for foo in *.pdf ; do pdfinfo $foo ; done in a folder with suitable pdf files? My box says pdfinfo does not exist, but it clearly works when I run it on a single pdf file. @EmilioPisanty that's a relatively new feature, but I think they have a new enough tex, but not everyone will be happy if they submit a paper with \today and it comes out with some arbitrary date like 1st Jan 1970 @DavidCarlisle add \def\today{24th May 2019} in INITEX phase and recompile the format daily? I agree, too much overhead. They should simply add "do not use \today" in these guidelines: arxiv.org/help/submit_tex @yo' I think you're vastly over-estimating the effectiveness of that solution (and it would not solve the problem with 20+ years of accumulated files that do use it) @DavidCarlisle sure. I don't know what the environment looks like on their side so I won't speculate. I just want to know whether the solution needs to be on the side of the environment variables, or whether there is a tex-specific solution @yo' that's unlikely to help with prints where the class itself pulls the date from the system time. @EmilioPisanty well the environment vars do more than tex (they affect the internal id in the generated pdf or dvi and so produce reproducible output), but you could, as @yo' showed, redefine \today or the \year, \month, \day primitives on the command line @EmilioPisanty you can redefine \year \month and \day which catches a few more things, but same basic idea @DavidCarlisle could be difficult with inputted TeX files. It really depends on at which phase they recognize which TeX file is the main one to proceed.
And as their workflow is pretty unique, it's hard to tell which way is even compatible with it. "beschreiben" (engl. "describe") comes from the mathematical technical language of the 16th century, that is, from Middle High German, and means roughly "construct"; this derives from the original meaning of describing as "making a curved movement". This usage appears in the literary style of the 19th and 20th centuries and in the GDR. You have that in English too: scribe (verb): score a line on with a pointed instrument, as in metalworking https://www.definitions.net/definition/scribe @cis Yes, as @DavidCarlisle pointed out, there is a very technical mathematical use of 'describe', which is what the German version means too, but we both agreed that people would not know this use, so 'draw' would be the most appropriate term. This is not about trendiness, just about making language understandable to your audience. Plan figure. The barrel circle over the median $s_b = |M_b B|$, which subtends the angle $\alpha$, also contains an isosceles triangle $M_b P B$ with base $|M_b B|$ and angle $\alpha$ at the point $P$. The perpendicular from the apex to the base of the isosceles triangle bisects both $|M_b B|$ at $M_{s_b}$ and the angle $\alpha$ at the apex. \par The centroid $S$ divides the medians in the ratio $2:1$, with the longer part lying on the vertex side. The point $A$ lies on the barrel circle and on a circle $\bigodot(S,\frac23 s_a)$ described about $S$ of radius…
A frustum of a right circular cone is the portion of a right circular cone included between the base and a section parallel to the base that does not pass through the vertex. Properties of a Frustum of a Right Circular Cone The altitude of a frustum of a right circular cone is the perpendicular distance between the two bases. It is denoted by h. All elements (slant segments) of a frustum of a right circular cone are equal; their common length, the slant height, is denoted by L. Formulas for a Frustum of a Right Circular Cone (with R and r the radii of the lower and upper bases) \[\large A=\pi (R+r)L\] \[\large V=\frac{\pi h}{3}\left(R^{2}+Rr+r^{2}\right)\] Solved Example Question: Find the volume of a frustum of a right circular cone with height 20, lower base radius 34 and top radius 19. Solution: Given h = 20, R = 34, r = 19. V = $\frac{\pi h}{3} \left ( R^{2}+ Rr + r^{2} \right)$ V = $\frac{\pi \times 20}{3} \left (34^{2} + 34 \times 19 + 19^{2} \right)$ V = 14420 π
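A quick Python sketch reproduces the solved example; the slant-height relation $L=\sqrt{(R-r)^2+h^2}$ is the standard one, added here for completeness (it is not stated above):

```python
from math import pi, sqrt

def frustum(R, r, h):
    """Volume, slant height and lateral area of a frustum of a right
    circular cone with lower radius R, upper radius r and altitude h."""
    L = sqrt((R - r)**2 + h**2)          # slant height (element length)
    V = pi * h / 3 * (R**2 + R*r + r**2) # volume
    A = pi * (R + r) * L                 # lateral surface area
    return V, L, A

V, L, A = frustum(34, 19, 20)   # the solved example above
# V equals 14420*pi, and the slant height L comes out to exactly 25
```

Note that A here is the lateral area only; the two circular bases would add $\pi R^2 + \pi r^2$.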
Sometimes, especially in introductory courses, the instructor will try to keep things "focused" in order to promote learning. Still, it's unfortunate that the instructor couldn't respond in a more positive and stimulating way to your question. These reactions do occur at $\ce{sp^2}$ hybridized carbon atoms; they are often just energetically more costly, and therefore somewhat less common. Consider what happens when a nucleophile reacts with a carbonyl compound: the nucleophile attacks the carbonyl carbon atom in an $\ce{S_{N}2}$ manner. The electrons in the C-O $\pi$–bond can be considered as the leaving group and a tetrahedral intermediate is formed with a negative charge on oxygen. It is harder to do this with a carbon-carbon double bond (energetically more costly) because you would wind up with a negative charge on carbon (instead of oxygen), which is energetically less desirable (because of the relative electronegativities of carbon and oxygen). If you look at the Michael addition reaction, the 1,4-addition of a nucleophile to the carbon-carbon double bond in an $\alpha,\beta$-unsaturated carbonyl system, this could be viewed as an $\ce{S_{N}2}$ attack on a carbon-carbon double bond, but again, it is favored (lower in energy) because you create an intermediate with a negative charge on oxygen. $\ce{S_{N}1}$ reactions at $\ce{sp^2}$ carbon are well documented. Solvolysis of vinyl halides in very acidic media is an example. The resultant vinylic carbocations are actually stable enough to be observed using NMR spectroscopy. The picture below helps explain why this reaction is so much more difficult (energetically more costly) than the more common solvolysis of an alkyl halide. In the solvolysis of the alkyl halide we produce a traditional carbocation with an empty p orbital. In the solvolysis of the vinyl halide we produce a carbocation with the positive charge residing in an $\ce{sp^2}$ orbital.
Placing positive charge in an $\ce{sp^2}$ orbital is a higher-energy situation than placing it in a p orbital. Electrons prefer to be in orbitals with higher s density, because the more s character an orbital has, the lower its energy; conversely, in the absence of electrons, an orbital prefers high p character, mixing its remaining s character into other bonding orbitals that do contain electrons in order to lower their energy.
Suppose $f$ is continuous and bounded on $[a,b]\subset \mathbb{R}$. Let $\alpha :=\inf\left\{f(x):x\in [a,b]\right\}$. This exists because $f$ is bounded below on $[a,b]$. Since $\alpha$ is the greatest lower bound of $f$ on $[a,b]$, $\alpha+\frac 1n$ is not a lower bound for any $n\in\mathbb{N}$. Thus for each $n$ there exists some $x_n\in[a,b]$ such that $f(x_n)$ lies between the infimum $\alpha$ and $\alpha+\frac 1n$; that is, $\alpha \leq f(x_n) \leq \alpha+\frac 1n$. The sequence $\left\{x_n\right\}_{n=1}^{\infty}$ lies in $[a,b]$ and is therefore bounded. Thus by the Bolzano-Weierstrass theorem it contains at least one convergent subsequence $\left\{x_{n_i}\right\}$, whose limit $c:=\lim_{i\to\infty} x_{n_i}$ lies in $[a,b]$ because the interval is closed. Because $f$ is continuous, the limit of the function values is the function of the limit, so the following results: $$\lim_{i\to\infty}{x_{n_i}}=c\Rightarrow \lim_{i\to\infty} f(x_{n_i})=f\Big(\lim_{i\to\infty}{x_{n_i}}\Big)=f(c) \ \text{for} \ c\in [a,b]$$ Going back to our definition of $\alpha$, and applying the squeeze theorem, we note: $$\alpha \leq f(x_n) \leq \alpha+\frac 1n \Rightarrow \lim_{n\to\infty}{(\alpha)} = \lim_{n\to\infty}{f(x_n)}=\lim_{n\to\infty}{\Big(\alpha+\frac 1n\Big)}=\alpha$$ Noting that if a sequence is convergent, all of its subsequences converge to that same limit value, we thus finally have: $$\lim_{i\to\infty}{f(x_{n_i})}=\lim_{n\to\infty}f(x_n)=\alpha$$ This implies $\alpha :=\inf\left\{f(x):x\in [a,b]\right\}=f(c)$. Thus the function $f$ achieves a minimum value $f(c)$ for $c\in [a,b]$. To prove the maximum version, apply the same argument to $\beta :=\sup\left\{f(x):x\in [a,b]\right\}$, obtaining $\beta=f(d)$ for some $d\in [a,b]$.
I'd be interested in knowing if there are any problems that are easier to solve in a higher dimension, i.e. using solutions in a higher dimension that don't have an equally optimal counterpart in a lower dimension, particularly common (or uncommon) geometry and discrete math problems. The kissing number problem asks how many unit spheres can simultaneously touch a certain other unit sphere, in $n$ dimensions. The $n=2$ case is easy; the $n=3$ case was a famous open problem for 300 years; the $n=4$ case was only resolved a few years ago, and the problem is still open for $n>4$… except for $n=8$ and $n=24$. The $n=24$ case is (relatively) simple because of the existence of the 24-dimensional Leech lattice, which owes its existence to the miraculous fact that $$\sum_{i=1}^{\color{red}{24}} i^2 = 70^2 .$$ The Leech lattice has a particularly symmetrical 8-dimensional sublattice, the $E_8$ lattice and this accounts for the problem being solved for $n=8$. There are a lot of similar kinds of packing problems that are unsolved except in 8 and 24 dimensions, for similar reasons. One such example in PDEs is Kirchhoff's formula for the solution to the initial value problem for the wave equation:\begin{equation}\begin{cases}\partial^2_t u - \Delta u = 0 & x \in \mathbb{R}^n,\ t \in \mathbb{R} \\u(0,x) = g(x) \\\partial_t u (0, x) = h(x)\end{cases}\end{equation}In space dimension $n=3$ it is relatively easy to derive the formula \begin{equation}\tag{1}u(t,x)=\frac{1}{4\pi t^2} \int_{\partial B(x;t)} \big[ t\cdot h(y) + g(y) + \nabla g(y)\cdot (y-x) \big]\, dS(y)\end{equation}which expresses the solution in terms of the initial data. $^{[1]}$ The same cannot be said for dimension $n=2$, though. Indeed, the usual method to recover a formula analogous to (1) in the two-dimensional case is called method of descent and it works by embedding the two dimensional equation into a three dimensional space and then using formula (1). 
$^{[1]}$ One can either exploit symmetries or use the Fourier transform. For the first method one can consult, among others, Evans, Partial differential equations, chapter 2. For the latter method one can consult, among others, Folland's Real Analysis, chapter "Topics in Fourier analysis".
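The "miraculous fact" behind the Leech lattice is a one-liner to verify (this is the cannonball problem: $n=24$ is the only $n>1$ for which the sum of the first $n$ squares is itself a perfect square):

```python
# Check the identity  1^2 + 2^2 + ... + 24^2 = 70^2  quoted above.
total = sum(i * i for i in range(1, 25))
assert total == 70**2        # both sides equal 4900
# the closed form n(n+1)(2n+1)/6 with n = 24 agrees
assert total == 24 * 25 * 49 // 6
```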
Yeah, this software cannot be too easy to install; my installer is very professional looking, currently not tied into that code, but directs the user how to search for their MikTeX and/or install it and does a test LaTeX rendering Somebody like Zeta (on codereview) might be able to help a lot... I'm not sure if he does a lot of category theory, but he does a lot of Haskell (not that I'm trying to conflate the two)... so he would probably be one of the better bets for asking for revision of code. He is usually on the 2nd monitor chat room. There are a lot of people on those chat rooms that help each other with projects. I'm not sure how many of them are adept at category theory though... still, this chat tends to emphasize a lot of small problems and occasionally goes off on a tangent. Your project is probably too large for an actual question on codereview, but there is a lot of github activity in the chat rooms. gl. In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966). It can also be written down as the Fourier transform of $\hat{f}(z)=\prod_{m=1}^{\infty}(\cos\dots$ Defined as the probability that $\sum_{n=1}^\infty2^{-n}\zeta_n$ will be less than $x$, where the $\zeta_n$ are chosen randomly and independently from the unit interval @AkivaWeinberger are you familiar with the theory behind Fourier series? anyway, here's food for thought: for $f : S^1 \to \Bbb C$ square-integrable, let $c_n := \displaystyle \int_{S^1} f(\theta) \exp(-i n \theta) (\mathrm d\theta/2\pi)$, and $f^\ast := \displaystyle \sum_{n \in \Bbb Z} c_n \exp(in\theta)$. It is known that $f^\ast = f$ almost everywhere. (a) is $-^\ast$ idempotent? i.e. is it true that $f^{\ast \ast} = f^\ast$? @AkivaWeinberger You need to use the definition of $F$ as the cumulative distribution function of the random variables. $C^\infty$ was a simple step, but I don't have access to the paper right now so I don't recall it.
> In mathematics, a square-integrable function, also called a quadratically integrable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite. I am having some difficulties understanding the difference between simplicial and singular homology. I am aware of the fact that they are isomorphic, i.e. the homology groups are in fact the same (and maybe this doesn't help my intuition), but I am having trouble seeing where in the setup they d... Usually it is a great advantage to consult the notes, as they tell you exactly what has been done. A book will teach you the field, but not necessarily help you understand the style in which the prof. (who creates the exam) creates questions. @AkivaWeinberger having thought about it a little, I think the best way to approach the geometry problem is to argue that the relevant condition (centroid is on the incircle) is preserved by similarity transformations, hence you're free to rescale the sides, and therefore the (semi)perimeter as well, so one may (for instance) choose $s=(a+b+c)/2=1$ without loss of generality; that makes a lot of the formulas simpler, e.g. the inradius is identical to the area. It is asking how many terms of the Euler-Maclaurin formula we need in order to compute the Riemann zeta function in the complex plane. $q$ is the upper summation index in the sum with the Bernoulli numbers. This appears to answer it in the positive: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right-half plane σ > 1 − k, for all k = 1, 2, 3, . . .."
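As a rough illustration of the question (my own sketch, not taken from the quoted paper), here is the truncated Euler-Maclaurin sum for $\zeta(s)$ with $N$ partial-sum terms and $q$ Bernoulli-number corrections:

```python
from math import pi, factorial

# Bernoulli numbers B_2, B_4, B_6, B_8 (enough for q <= 4)
BERNOULLI = {2: 1/6, 4: -1/30, 6: 1/42, 8: -1/30}

def zeta_em(s, N=10, q=4):
    """Riemann zeta via the Euler-Maclaurin formula, truncated after the
    q-th Bernoulli-number term (s != 1)."""
    total = sum(n**-s for n in range(1, N + 1))    # partial sum
    total += N**(1 - s) / (s - 1) - 0.5 * N**-s    # integral tail and half-term
    for k in range(1, q + 1):
        # rising product s (s+1) ... (s + 2k - 2)
        prod = 1.0
        for j in range(2 * k - 1):
            prod *= s + j
        total += BERNOULLI[2 * k] / factorial(2 * k) * prod * N**(-s - 2*k + 1)
    return total

err = abs(zeta_em(2.0) - pi**2 / 6)   # already far below 1e-8 with N=10, q=4
```

Even modest N and q give many correct digits for small $|s|$; larger $q$ (with more Bernoulli numbers) is what pushes the region of validity further left in the complex plane, which is the point of the quoted passage.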
a) Correct. The new equilibrium position occurs at distance $x=mg/k$ below the initial position (where the collision occurred). Alternatively it is $X=(M+m)g/k$ below the top of the unloaded spring (with no cymbal attached). b) The new angular frequency of oscillations can be written down without any calculation: $\omega=\sqrt{\frac{k}{m+M}}$. Correct. However, your calculation of amplitude is not correct. Peak-to-trough is twice the amplitude - i.e. $2A$. Amplitude is the distance of a peak or trough from the new equilibrium position. Peaks and troughs occur when kinetic energy is zero - i.e. "when it is not oscillating" as you put it. See Energy Method below. You had the right idea, but your calculation neglects the fact that the cymbal is not released from rest - it has some initial kinetic energy due to the collision. Also your solution does not deal with elastic energy correctly. CALCULATION OF AMPLITUDE OF OSCILLATIONS 1. Equation of Motion Method First work out the speed of the cymbal immediately after the inelastic collision. From conservation of momentum this is $v=\frac{m}{m+M}u$ where $u=\sqrt{2gh}$ is the speed of $m$ immediately before the collision. Next, the oscillation about the new equilibrium position can be described by the equation of motion $$\xi=A\sin(\omega t+\phi), \dot \xi=\omega A\cos(\omega t+\phi)$$ where $\phi$ is some unknown phase angle. Suppose the collision occurs at $t=0, \xi=x$ with $\dot \xi=v$. (It does not matter what time it occurs, we will only get a different phase angle $\phi$.) Then $$x=A\sin\phi, v=\omega A\cos\phi$$ $$A^2=x^2+(\frac{v}{\omega})^2=(\frac{mg}{k})^2+\frac{2ghm^2}{k(M+m)}$$ 2. Energy Method Let point U be the top of the spring when it has no load. I shall measure all potential and elastic energies from this point. Let P be the starting equilibrium position where the load is $M$, let O be the new equilibrium position when the load is $M+m$, and Q be the lowest point of the subsequent oscillations.
The amplitude of oscillations is $OQ=A$. Distance $OP=x$ as already calculated. Other distances are $UP=UO-PO=X-x$ and $UQ=UO+OQ=X+A$. At P there is kinetic energy of $\frac12 (m+M)v^2$. The spring is compressed by distance $UP=X-x$ so the elastic energy stored in the spring is $\frac12 k(X-x)^2$ where $X=(M+m)g/k$ as above. Gravitational PE at P relative to U is $-(m+M)g(X-x)=-kX(X-x)$. At Q there is no KE, and gravitational PE is $-(M+m)g(X+A)=-kX(X+A)$. The elastic energy stored here is $\frac12 k(X+A)^2$. By the conservation of energy, the total energy at P is the same as that at Q. Therefore $$\frac12 (m+M)v^2-kX(X-x)+\frac12 k(X^2-2xX+x^2)=\frac12 k(X^2+2AX+A^2)-kX(X+A)$$ $$(m+M)v^2=k(A^2-x^2)$$ $$A^2=x^2+\frac{(m+M)v^2}{k}=x^2+(\frac{v}{\omega})^2$$ as found using Method 1. (c) This part of the question is not clear. I assume that the block moves down with the cymbal as in (a) and (b). However, because it is not fixed to the cymbal it can separate from the cymbal when it rises from Q back above O. The block and cymbal do not 'stick together' because of the collision, they only 'move together' after it. Separation occurs when the downward acceleration of the SHM becomes greater than $g$. Gravity is the only force holding the block in contact with the cymbal, so when gravity is no longer able to supply the required restoring force on the block, it leaves contact with the cymbal. Suppose separation occurs at point R which has displacement $\xi$ above equilibrium position O. The downward acceleration at point R is $\omega^2 \xi=g$. So $$\xi=\frac{g}{\omega^2}=\frac{(m+M)g}{k}=X$$ This means that the lift-off point is always at the top of the spring when it has no load (R=U), whatever the values of $m, M, h, k$. This is a surprising result. The explanation is that at this instant the only force acting on the cymbal and block is gravity, because the spring is no longer compressed or stretched so it exerts no force. 
Both cymbal and block are in 'free fall' so the force between them is zero. Just before this instant both cymbal and block are moving upwards but accelerating downwards at just less than $g$ because of a small upward push from the spring. After this instant the cymbal is being accelerated downwards at slightly more than $g$ because of a small pull from the spring. But the spring does not pull down on the block so the block is still accelerating downwards at $g$. There is relative acceleration, so the block and cymbal separate. Note that if $A\lt X$ then the cymbal does not rebound to the relaxed position of the spring at U, and so there is no lift-off. The condition for lift-off is that $A \gt X$. Substituting from the equations for $A^2$ and $X$ given above we get $$A^2 \gt X^2$$ $$\frac{m^2g^2}{k^2}+\frac{2ghm^2}{k(M+m)} \gt \frac{(m+M)^2g^2}{k^2}$$ $$1+\frac{2kh}{g(M+m)} \gt \frac{(m+M)^2}{m^2}$$ $$\frac{2kh}{g(M+m)} \gt \frac{(m+M)^2}{m^2}-1=\frac{m^2+2mM+M^2-m^2}{m^2}=\frac{M(2m+M)}{m^2}$$ $$h \gt \frac{M(M+m)(M+2m)g}{2km^2}$$ We can also find the maximum height $H$ above R=U reached by the mass $m$ after lift-off. The total energy at P is the same as at U, so from above ( Energy Method) we have $$\frac12 (m+M)v^2+\frac12 k(X-x)^2-kX(X-x)=\frac12 (m+M)V^2$$ $$\frac12 (m+M)(v^2-V^2)=k(X^2-xX)-\frac12 k(X^2-2xX+x^2)=\frac12 k(X^2-x^2)$$ $$v^2-V^2=\frac{k}{(m+M)}(X-x)(X+x)$$ From earlier results we have $$v=\frac{m}{m+M}u=\mu u=\mu \sqrt{2gh}$$ $$v^2=2gh \mu^2$$ $$X-x=\frac{(m+M)g}{k}-\frac{mg}{k}=\frac{Mg}{k}$$ $$X+x=\frac{(m+M)g}{k}+\frac{mg}{k}=\frac{(m+M+m)g}{k}$$ The block rises to a height $H$ above U given by $2gH=V^2$. Making the above substitutions into the equation for $v^2-V^2$ we get $$2g(h\mu^2-H)=\frac{k}{m+M}\frac{Mg}{k} \frac{(m+M+m)g}{k}=\frac{Mg^2}{k}(1+\mu)$$ $$H=h\mu^2-\frac{Mg}{2k}(1+\mu)$$
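The closed-form results above can be sanity-checked numerically. The sketch below uses arbitrary sample values (my own, not from the problem statement) and confirms that the two expressions for $A^2$ agree and that the lift-off condition $A>X$ matches the inequality for $h$:

```python
from math import sqrt

# Hypothetical sample values (not from the original problem statement)
m, M = 0.5, 2.0     # falling block and cymbal masses, kg
k    = 100.0        # spring constant, N/m
h    = 1.0          # drop height, m
g    = 9.81         # m/s^2

u     = sqrt(2 * g * h)              # speed of m just before impact
v     = m / (m + M) * u              # speed just after the inelastic collision
omega = sqrt(k / (m + M))            # new angular frequency
x     = m * g / k                    # distance of new equilibrium O below P
X     = (m + M) * g / k              # spring compression at O (below top U)
A     = sqrt(x**2 + (v / omega)**2)  # amplitude (equation-of-motion method)

mu = m / (m + M)
H  = h * mu**2 - M * g / (2 * k) * (1 + mu)  # rise above U, meaningful only if A > X
```

With these numbers $A < X$, so the block never leaves the cymbal; raising $h$ above $M(M+m)(M+2m)g/(2km^2) \approx 2.94$ m flips the condition.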