It is. Prior to continuity, which is a property of the preference relation, the preference relation $\succsim$ itself has been defined as a binary relation characterized by transitivity and, to begin with, by completeness. Then if $S_1\cup S_2 \neq [0,1]$, there exist some values of $\alpha$ in $[0,1]$, call them $\tilde \alpha$, for which neither $$\tilde \alpha L+(1-\tilde \alpha)L'\succsim L''$$ nor $$L''\succsim \tilde \alpha L+(1-\tilde \alpha)L'$$ holds. In words, for these $\tilde \alpha$'s the pair cannot be ordered at all. But this contradicts completeness, the very foundation needed to even obtain a preference relation (as of course used in our theory; psychologists, I guess, would disagree). Also, note that completeness is defined over all conceivable pairs of lotteries, even if, in a specific situation, we choose to restrict the space of lotteries to something smaller. Whether the lotteries under consideration belong to the specified lottery space is really irrelevant: the person holding the preferences has to be able to order them in any case, even as a "hypothetical" scenario. (Strictly speaking, for a specific problem we have the "luxury" of imposing completeness only on the lotteries available, while "remaining agnostic" about completeness on an expanded lottery space. Still, this weakening of the completeness axiom does not really bring any gain.)
Liouville's Theorem (Complex Analysis) Theorem Let $f: \C \to \C$ be an entire function which is bounded. Then $f$ is constant. Proof By assumption, there is $M \ge 0$ such that $\cmod {\map f z} \le M$ for all $z \in \C$. For any $R \in \R: R > 0$, consider the function: $\map {f_R} z := \map f {R z}$ Using the Cauchy Integral Formula, we see that: $\displaystyle \cmod {\map {f_R'} z} = \frac 1 {2 \pi} \cmod {\int_{\map {C_1} z} \frac {\map {f_R} w} {\paren {w - z}^2} \rd w} \le \frac 1 {2 \pi} \int_{\map {C_1} z} M \rd w = M$ Hence, since $\map {f_R'} z = R \, \map {f'} {R z}$: $\displaystyle \cmod {\map {f'} {R z} } = \cmod {\map {f_R'} z} / R \le M / R$ Since $R$ was arbitrary, it follows that $\cmod {\map {f'} z} = 0$ for all $z \in \C$. Thus $f$ is constant. $\blacksquare$ Source of Name This entry was named for Joseph Liouville.
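The Cauchy Integral Formula step can be sanity-checked numerically. A minimal Python sketch (not part of the proof; the function name is mine) that approximates $f'(z)$ by discretizing the contour integral over the unit circle $C_1(z)$:

```python
import cmath
import math

def deriv_via_cauchy(f, z, n=4096):
    """Approximate f'(z) = (1/(2*pi*i)) * integral over |w-z|=1 of f(w)/(w-z)^2 dw,
    using a uniform discretization of the circle (very accurate for periodic integrands)."""
    total = 0j
    for k in range(n):
        t = 2 * math.pi * k / n
        w = z + cmath.exp(1j * t)    # point on the circle C_1(z)
        dw = 1j * cmath.exp(1j * t)  # dw/dt along the circle
        total += f(w) / (w - z) ** 2 * dw
    return total * (2 * math.pi / n) / (2j * math.pi)

# For f = exp, the formula should recover f'(0) = 1.
print(abs(deriv_via_cauchy(cmath.exp, 0) - 1))  # tiny
```

If $\cmod{\map f w} \le M$ on the circle, the same discretized sum immediately gives the bound $\cmod{\map {f'} z} \le M$ used in the proof.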
I built a precision full-wave rectifier circuit, like the one in the following figure (with better amplifiers), to rectify a 1 kHz sine wave. As expected I got a rectified wave at 2 kHz, but doing an FFT analysis I noticed that the biggest frequency component is at DC (0 Hz). Can anyone explain why? Also, I'm sampling this signal with an ADC at a much higher frequency, around 40 kHz, in order to measure its RMS. Do you find it appropriate to build an anti-aliasing filter for the rectified wave, and which frequencies should I reject: everything above 2 kHz? The Fourier series of the full-wave rectified sine wave is (from here): The DC component has magnitude 2A/π, while the first AC component has magnitude 4A/3π. So that's why, mathematically, the DC component is largest. You'd expect a large DC component because rectification makes the whole signal positive. You don't want to filter out everything above 2 kHz, because then you'll just end up with a sine wave (plus a DC offset) again. All the frequency components are important to the shape of the curve, so you should keep as many as possible. Because of the Nyquist limit, you'll want an anti-aliasing filter to remove everything above half your sampling frequency (and a bit more for safety). I noticed that the biggest frequency component is at DC (0 Hz) That is what rectifiers do - they rectify AC into DC. What would happen if you full-wave rectified an AC square wave? You would get pure DC out with no ripple. I know that is an extreme example but don't forget what rectifiers do best. Think about it another way: the average value is the peak value x \$\dfrac{2}{\pi}\$, or 0.6366 x pk. The RMS is 0.7071 x pk, so the whole harmonic content of the waveform accounts for \$\sqrt{0.7071^2-0.6366^2}\$ = 0.308, i.e. significantly lower than the DC content. I'm sampling this signal with an ADC at a much higher frequency, around 40 kHz, in order to measure its RMS.
Do you find it appropriate to build an anti-aliasing filter for the rectified wave, and which frequencies should I reject: everything above 2 kHz? If you are only calculating the RMS then you don't need an anti-aliasing filter. With an anti-aliasing filter you are discarding harmonic content, so you would never measure the RMS perfectly. By not using an anti-aliasing filter you are "aliasing" spectral content above the 20 kHz Nyquist frequency down into the base band and, ironically, you want to measure this content, so don't use an anti-aliasing filter. Just take each sample, square it, take an average (over many samples), then take the square root of that average. Below is a picture of a sine wave sampled at 2.7 times per cycle, with the sample values shown: - Take all those values, square them, take the average, then take the square root: - \$\sqrt{\dfrac{(+0.7071)^2 + (0)^2 + (-0.7071)^2 + (1)^2 + (-0.7071)^2 + (0)^2 + (+0.7071)^2 + (-1)^2}{8}}\$ = \$\sqrt{\dfrac{4}{8}}\$ = 0.7071, i.e. the same as a sine wave of peak amplitude 1. It doesn't matter whether this is a sine wave or a rectified sine wave - the RMS value will be the same, because the negative sine samples get converted to positive ones in the squaring process: - The full-wave rectified signal has exactly the same RMS value as the sine wave, yet has only been slightly oversampled relative to the fundamental frequency. Clearly there is harmonic content at much higher frequencies than the sampling frequency, yet the proper RMS value has been derived correctly. If you are careful in choosing the sampling frequency and the signal is periodic, you can even undersample and get the correct value for RMS: - You should be able to see that the undersampled waveform output has the same RMS value as the oversampled output. You have to avoid sampling continually at the same position on the waveform or you will get an error, but this can be avoided with good design.
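The squaring argument above is easy to reproduce. A small Python sketch (assuming an ideal 1 kHz sine sampled at 40 kHz, as in the question) showing that the rectified wave has the same RMS as the sine, a DC component of about 2/π, and AC (harmonic) content of about 0.31:

```python
import math

FS = 40_000                      # sampling rate (Hz), as in the question
F0 = 1_000                       # fundamental of the sine (Hz)
N = FS                           # one second of samples = 1000 full cycles

sine = [math.sin(2 * math.pi * F0 * k / FS) for k in range(N)]
rect = [abs(v) for v in sine]    # ideal full-wave rectification

def rms(sig):
    # square every sample, average, then square-root
    return math.sqrt(sum(v * v for v in sig) / len(sig))

dc = sum(rect) / len(rect)                # mean of the rectified wave, ~2/pi = 0.6366
ac = math.sqrt(rms(rect) ** 2 - dc ** 2)  # harmonic content, ~0.31
print(rms(sine), rms(rect), dc, ac)       # both RMS values come out ~0.7071
```

No anti-aliasing filter appears anywhere: the sample-square-average-root procedure recovers the correct RMS regardless of where the harmonics of the rectified wave fall.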
In order to capture the RMS value you need to measure the highest frequency components with significant power, as defined by your requirements. If you know in advance that the input will always be a pure sine wave, you can just measure the DC component and correct for the ~11% error in software. If the input can be much less sine-like, for example DC or a waveform with a large crest factor, then you will need to measure a wider bandwidth. How wide depends on the worst-case acceptable error and the nastiest waveform you need to accept. I can't add anything to the wonderful answer by Andy aka, but if you want to explore the Fourier series math Ken Shirriff mentioned, I can recommend an interactive tool for exploring the Fourier coefficients of different waveforms. It's written by Paul Falstad and available here: http://www.falstad.com/fourier/index.html He also provides an example for full-wave rectification: http://www.falstad.com/fourier/e-fullrect.html You can play with brick-wall low-pass filtering of ideal signals by reducing the number of terms represented. However, you won't be able to simulate sampling or aliasing issues with this tool.
Snell's Law, Frequency, Speed and Wavelength \[n= \frac{\sin \theta_1}{\sin \theta_2}\] In fact \[n=\frac{v_1}{v_2}= \frac{\lambda_1}{\lambda_2}\] These relationships mean that as waves pass from one medium to another, the sine of the angle between the ray and the normal, the speed, and the wavelength all change in proportion. The frequency is not changed. Not all waves behave in the same way on passing from air to glass or water: light slows down, but sound speeds up dramatically. The speed of sound in air is about 330 m/s, but the speed of sound in water is about 1500 m/s.
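These proportionality relations can be sketched in a few lines of Python (the 5° incidence angle and 1 kHz frequency are arbitrary choices for illustration):

```python
import math

v1, v2 = 330.0, 1500.0           # speed of sound in air and in water (m/s)
n = v1 / v2                      # n = sin(theta1)/sin(theta2) = v1/v2, here ~0.22

theta1 = math.radians(5.0)       # angle of incidence in air (arbitrary example)
sin_theta2 = math.sin(theta1) / n
theta2 = math.degrees(math.asin(sin_theta2))   # ~23 deg: sound bends away from the normal

f = 1000.0                       # frequency is unchanged across the boundary (Hz)
lam1, lam2 = v1 / f, v2 / f      # wavelength scales with speed: 0.33 m -> 1.5 m
print(theta2, lam1, lam2)
```

Note that since sound speeds up in water, the refracted angle is larger than the incident one, and above a critical incidence angle of asin(330/1500) ≈ 12.7° no refracted ray exists.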
Definition of a Vector Just being able to put numbers on physical quantities is not sufficient for describing nature. Very often physical quantities have directions. For example, a description of something’s motion is incomplete if you merely state how fast it is going. [Okay, so an asteroid is moving at 35,000 miles per hour, but is it headed for Earth?!] We therefore have the following definition for physical quantities that exhibit both these properties: Definition: Vector A vector is a quantity with both magnitude and direction. We will frequently represent a vector quantity with an arrow, where the direction of the vector is the direction that the arrow points, and the magnitude of the vector is represented by the length of the arrow. This is not to say that vectors are arrows – arrows just make a handy geometric representation. So while an arrow representing a vector might be 6 cm long, that doesn’t mean that the vector has a magnitude of 6 cm. The vector might represent the speed and direction of a moving object, for example, and then the vector’s magnitude isn’t even in units of cm. However, if we draw two arrow representations of the same sort of quantity, and one is twice as long as the other, the implication is that the longer arrow represents a vector with twice the magnitude of the vector represented by the shorter arrow. Alert There is no way to compare magnitudes of different physical quantities. If a distance vector is drawn as an arrow on the same page as a velocity vector’s arrow, the relative sizes of the two arrows are meaningless. There are a few other things that we should say about vectors and the arrows that represent them: Where the arrow representing a vector is positioned is not a distinct feature of the vector. That is, an arrow representing a vector can be moved at will, and so long as it isn’t stretched, shrunk, or rotated, it will represent the same vector.
Just changing an arrow's location does not change its magnitude or its direction if it is moved carefully. Vector directions (and therefore the directions of their representative arrows) can be reversed mathematically through multiplication by –1. Vector lengths can be expanded or shrunk (scaled) through multiplication by a regular number (called a scalar). If the scalar is greater than 1, the vector expands in length, and if it is between 0 and 1, it contracts. One other thing... When we write a symbol for a vector quantity, we will do so with a small arrow above the letter, like this: \(\overrightarrow A\). Variables with the same letter as a defined vector that do not include an arrow are assumed to represent the magnitude of that vector. So, for example, when used in the same context, the variable A represents the magnitude of \(\overrightarrow A\). Vector Addition/Subtraction For these mathematical quantities that we call vectors to be of any value to us, they have to allow for simple mathematical operations, such as addition. The directional nature of vectors makes addition much trickier than simply summing the two magnitudes. It turns out that a well-defined vector addition involves simple geometry. It goes like this: Transport one of the vectors (in a parallel fashion, so as not to change its direction) so that its tail is in contact with the head of the other vector. Then fashion a new vector such that its tail is at the open tail and its head is at the open head. Figure 1.1.1 – Graphical Vector Addition Figure 1.1.2 – Vector Addition is Commutative What about subtracting two vectors? Well, we can do this by following the same method as for regular numbers: whichever vector we wish to subtract we multiply by –1, and then add the result to the other vector, in the manner described above. We already know that multiplying a vector by –1 reverses its direction (and leaves its magnitude unchanged), so this is a well-defined operation for us.
Vector Components The graphical method of adding vectors is not always convenient. For example, we shouldn’t have to actually measure the length of the new vector; we should be able to calculate it. Well, of course we can do this using some sophisticated knowledge of triangles. For example, given that we know the lengths and directions of the two vectors we are adding, we can determine the length of the third leg of the triangle using the Law of Cosines: \[ C^2 = A^2 + B^2 -2AB \cos \theta\] With all of the lengths of the triangle legs and one of the angles (the one between \(A\) and \(B\)), we can get the other angles using the Law of Sines. Example \(\PageIndex{1}\) The magnitudes of the two vectors shown in the diagram below are: \(A=132\) and \(B=145\). Find the magnitude and direction (angle made with the \(x\)-axis) of the vector that is the difference of these two vectors. Solution Using the fact that the negative of a vector is the same vector pointing in the opposite direction, along with tail-to-head vector addition, we get the following diagram for the three vectors: The angle between \(\overrightarrow A\) and \(\overrightarrow B\) is obviously 65º – 30º = 35º, so for this triangle we have the lengths of two sides and the angle between them.
We can therefore find the length of the third side (\(\overrightarrow C\)) from the law of cosines: \[{C^2} = {A^2} + {B^2} - 2AB\cos \theta \;\;\; \Rightarrow \;\;\;C = \sqrt {{{\left( {132} \right)}^2} + {{\left( {145} \right)}^2} - 2\left( {132} \right)\left( {145} \right)\cos \left( {{{35}^o}} \right)} = 84.2\nonumber\] Next we can determine the angle between \(\overrightarrow A\) and \(\overrightarrow C\) using the law of sines: \[\frac{\sin {35}^o}{C} = \frac{\sin \theta _{AC}}{B}\;\;\; \Rightarrow \;\;\;\theta _{AC} = \sin ^{ - 1}\left[ \frac{145}{84.2}\sin {35}^o \right] = 81^o\nonumber\] If we rotate \(\overrightarrow C\) counterclockwise through this angle, it becomes parallel to \(\overrightarrow A\); if we then rotate it back clockwise by 30º (the angle \(\overrightarrow A\) makes with the \(x\)-axis), it becomes parallel to the \(x\)-axis. Therefore the angle \(\overrightarrow C\) makes with the \(x\)-axis is: –81º + 30º = –51º (below the \(x\)-axis). This answer certainly conforms with the diagram above, which shows \(\overrightarrow C\) with a smaller magnitude than \(\overrightarrow A\) and \(\overrightarrow B\) and pointing down and to the right. While we can use these tools to mathematically solve for the sum of two vectors, it turns out that there is another way to do it that doesn’t require quite as much geometrical reasoning. This method exploits three simple facts: We can replace any single vector with a sum of two (or more) vectors. It is easy to add two vectors that are parallel. If we use right triangles, the trigonometry is easier to work with than the law of cosines/sines for general triangles. The trick is to select two (or three, if necessary) perpendicular axes (they do not have to be horizontal and vertical, they only need to be perpendicular to each other), and break up every vector involved into a sum of two perpendicular vectors parallel to these axes.
The lengths of these perpendicular vectors are called the components of the vector along those axes. Going back to the list of advantages above, remember that we can add similar components like numbers, and we can determine these components easily using trigonometry. Figure 1.1.3 – Vector Components Figure 1.1.4 – Summing Vectors Using Components The sums of components work like sums of numbers, but only components along the same axis can be added. The results are then more components, which then have to be reconstructed into a vector. Unit Vectors So we can use perpendicular coordinate systems to describe vectors in terms of their components. Essentially this means that to describe a vector in terms of a set of three axes, we need to know three numbers. But it might be useful to actually express these vectors as a single mathematical entity, and that’s where the notion of the unit vector comes in. Vectors have magnitude and direction, and with unit vectors we can mathematically break up the vector into those two parts. The magnitude is just a number (with physical units) without direction, and a unit vector is a vector (without units) that has a length of 1, so that it can be scaled to any length without contributing anything to the magnitude. Therefore we can write a vector as a simple product: \[\overrightarrow A = A\,\widehat A\] where \(\widehat A\) is the unit vector (usually pronounced “\(A\)-hat”). It is a unitless vector of length 1 that points in the direction of the vector \( \overrightarrow A\). The value \(A\) is a number with physical units that equals the magnitude. The diagram below gives a graphic description of how this construction works for a few common physical vectors. The unit vectors provide a very basic template by defining the direction, and the magnitude fills in the template by contributing the girth and 'flavor' (physical units) of the vector.
Figure 1.1.5 – Unit Vectors and Magnitudes If we combine this notion with components, we can write any vector as a sum of components multiplying unit vectors in the directions of the three spatial dimensions. By convention, we give these unit vectors the names \(\widehat i\), \(\widehat j\), and \(\widehat k\) for the axes \(x\), \(y\), and \(z\), respectively. So specifically, we have: \[\overrightarrow A = A_x\widehat i + A_y\widehat j + A_z\widehat k \] Now we can just use this as a mathematical representation of vectors, and we do not have to appeal to geometry at all. For example, \[ \begin{align} \overrightarrow C &= \overrightarrow A + \overrightarrow B \nonumber\\[5pt] &= (A_x\widehat i + A_y\widehat j + A_z\widehat k) + (B_x\widehat i + B_y\widehat j + B_z\widehat k) \nonumber\\[5pt] &= (A_x+ B_x) \widehat i + (A_y + B_y) \widehat j + (A_z + B_z) \widehat k \nonumber\\[5pt] &= C_x\widehat i + C_y\widehat j + C_z\widehat k \end{align}\] This gives us the same result as we got before for the components of the sum of two vectors. Example \(\PageIndex{2}\) Repeat the calculation of example 1.1.1, this time using components.
Solution Breaking the two vectors into their \(x\) and \(y\) components gives: \[\begin{array}{c} \overrightarrow A = {A_x}\widehat i + {A_y}\widehat j = A\cos \theta_A \;\widehat i + A\sin \theta_A \;\widehat j = 132\cos {30^o}\widehat i + 132\sin {30^o}\widehat j = 114.3\;\widehat i + 66.0\;\widehat j\\ \overrightarrow B = {B_x}\widehat i + {B_y}\widehat j = B\cos \theta_B \;\widehat i + B\sin \theta_B \;\widehat j = 145\cos {65^o}\widehat i + 145\sin {65^o}\widehat j = 61.3\;\widehat i + 131.4\;\widehat j \end{array}\nonumber\] Next we subtract \(\overrightarrow B\) from \(\overrightarrow A\) to get \(\overrightarrow C\), then compute its magnitude (using the Pythagorean theorem) and direction (using trigonometry): \[\begin{array}{l} \overrightarrow C = \overrightarrow A - \overrightarrow B = 53.0\;\widehat i - 65.4\;\widehat j\\ \left| {\overrightarrow C } \right| = \sqrt {{53.0}^2 + {65.4}^2} = 84.2\\ \theta = \tan ^{ - 1}\left( \dfrac{ - 65.4}{53.0} \right) = - {51^o} \end{array}\nonumber\] This matches the answer found in example 1.1.1.
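The component method in this example is easy to verify numerically. A short Python sketch (not part of the text) of the same computation:

```python
import math

# Magnitudes and angles (with the x-axis) from Example 1.1.1
A, theta_A = 132.0, math.radians(30.0)
B, theta_B = 145.0, math.radians(65.0)

# Break each vector into x and y components
Ax, Ay = A * math.cos(theta_A), A * math.sin(theta_A)
Bx, By = B * math.cos(theta_B), B * math.sin(theta_B)

# Subtract component-wise, then recover magnitude and direction
Cx, Cy = Ax - Bx, Ay - By
magnitude = math.hypot(Cx, Cy)            # ~84.2
angle = math.degrees(math.atan2(Cy, Cx))  # ~-51 degrees
print(magnitude, angle)
```

Using `atan2` rather than `atan` keeps the quadrant correct automatically, something the law-of-cosines route required the diagram for.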
Last week we saw that the best strategy for Wythoff’s game is to always move to one of the blue squares in the diagram below (if you can). We found the locations of the blue squares with an iterative algorithm that builds outward from (0,0), but we haven’t yet discovered the underlying pattern in how these blue cells are arranged. Let’s go pattern hunting! As we noticed last week, the blue cells are arranged in a symmetric “V” shape, formed from two seemingly straight lines. What are the slopes of these lines? This is the question du jour. Looking more carefully, the upper branch of the “V” has only two types of “gaps”: consecutive blue cells are separated by either a “short” (1,2) step (a.k.a. a knight’s move) or a “long” (2,3) step. So, if we believe that this “V” branch is a perfect line, then that line’s slope should be somewhere between the slopes of these two steps, namely \(3/2=1.5\) and \(2/1=2\). Knowing more about how these steps are arranged will tell us more about the “V”. Specifically, label these “short” and “long” steps as a and b respectively. What we care about is the ratio of bs to as. Why? For example, if there were an equal number of as and bs on average, then the slope of the line would be the same as the vector \(a+b=(3,5)\), i.e., the slope would be 5/3. If there were, say, two bs for each a in the long run, then the line would fall in the direction of \(a+2b=(5,8)\), with slope 8/5. Unfortunately, the ratio we seek is neither 1 nor 2; what is this magic number? Let’s write down the sequence of jumps along our line. 
From the diagram above we see that it starts a b a b b a …, and using last week’s iterative algorithm, we can compute farther into this infinite sequence: a b a b b a b a b b a b b a b a b b a b a b b a b b a b a b b a b b … This sequence seems to be composed of “blocks” of (a b) and (a b b): (a b)(a b b)(a b)(a b b)(a b b)(a b)(a b b)(a b)(a b b)(a b b)(a b)(a b b)(a b b)… So for every a there are either one or two bs, meaning the ratio of bs to as is somewhere between 1 and 2. So the slope should be between 5/3 and 8/5. But what is it exactly? To answer this, we need to know how these “short” and “long” blocks are arranged. Write A = (a b) and B = (a b b), so we can rewrite the sequence of blocks as A B A B B A B A B B A B B … Hold on; this sequence looks familiar. It’s exactly the same pattern as our original jump sequence! So it seems that the pattern of blocks is exactly the same as the pattern of the letters themselves! This is a “self-generating” sequence! In particular, if the ratio of bs to as is some number r, then the ratio of (a b b) blocks to (a b) blocks is also r. But if we have one block of (a b) and r blocks of (a b b) then we have a total of \(1+2r\) bs and \(1+r\) as, so the ratio of bs to as is \(\frac{1+2r}{1+r}=r\). This simplifies to \(1+r=r^2\), and the solution is the shiny number \(r=\frac{1+\sqrt{5}}{2}\), known as the golden ratio and denoted \(\phi\) (Greek letter phi). Now we know that our purported line should be in the direction of \(a+b\cdot\phi = (1+2\phi,2+3\phi)\), so the slope is \((2+3\phi)/(1+2\phi)=\phi\). And since our “V” shape is symmetric, the other line should have slope \(1/\phi = \frac{\sqrt{5}-1}{2}\). Done and done! Well, yes and no. We found the slopes of the lines in the “V”, but why do they form lines at all? And what does all this have to do with the Fibonacci numbers?
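The self-generating claim can be checked by brute force. A Python sketch, assuming the block structure corresponds to the substitution a → ab, b → abb (which is exactly the statement that the block pattern repeats the letter pattern):

```python
# Substitution encoding the self-similarity: each a becomes the block (a b),
# each b becomes the block (a b b).
def expand(s):
    return "".join("ab" if c == "a" else "abb" for c in s)

s = "a"
for _ in range(10):
    s = expand(s)

ratio = s.count("b") / s.count("a")   # ratio of bs to as along the line
phi = (1 + 5 ** 0.5) / 2              # golden ratio, the fixed point of r = (1+2r)/(1+r)
print(s[:13], ratio)                  # sequence starts a b a b b a b a b b a b b

# The slope claim: the direction a + phi*b = (1+2*phi, 2+3*phi) has slope phi
slope = (2 + 3 * phi) / (1 + 2 * phi)
```

The letter counts under this substitution evolve as (a, b) → (a+b, a+2b), which is why consecutive counts are Fibonacci numbers and the ratio converges to φ, a first hint at the Fibonacci connection teased at the end.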
Quadratic Formula – Solution of a quadratic equation using factorisation A quadratic equation is of the form \( ax^2~+~bx~+~c~=~0\), where \(a~≠~0\) and \(a, b,~and~ c\) are real numbers. Consider the quadratic equation \( x^2~-~3x~+~2\) = 0; substituting x = 1 in the LHS of the equation gives \(1^2~-~3~+~2\) = 0, which is equal to the RHS of the equation. Similarly, substituting x = 2 in the LHS of the equation also gives \( 2^2~-~6~+~2\) = 0, which is equal to the RHS of the equation. Here, 1 and 2 satisfy the equation \(x^2~-~3x~+~2\) = 0. Therefore, they are known as the solutions or roots of the quadratic equation. This also means that the numbers 1 and 2 are the zeros of the polynomial \( x^2~-~3x~+~2\). We know that a second-degree polynomial has at most two zeros. Therefore a quadratic equation has at most two roots. In general, if α is a root of the quadratic equation \( ax^2~+~bx~+~c\) = 0, a ≠ 0, then \(aα^2~+~bα~+~c\) = 0. We can also say that x = α is a solution of the quadratic equation, or that α satisfies the equation \( ax^2~+~bx~+~c\) = 0. The roots of the quadratic equation \( ax^2~+~bx~+~c\) = 0 are the same as the zeros of the polynomial \( ax^2~+~bx~+~c\). By splitting the middle term, we can factorise a quadratic polynomial. For example, \( x^2~+~5x~+~6\) can be written as \(x^2~+~2x~+~3x~+~6\): \( x^2~+~2x~+~3x~+~6\) = \( x(x~ +~ 2)~ +~ 3(x~ +~ 2)\) = \((x ~+~ 2)~(x~ +~ 3)\) The roots of a quadratic equation can be found by factorising the polynomial and equating it to zero.
Consider the quadratic equation \( 2x^2~-~5x~+~2\) = 0, which is of the form \(ax^{2}+ bx +c = 0\). Here, -5x can be broken down into the two parts \(-4x\) and \(-x\), as the product of these parts is \(4x^{2}\), which equals the product of the first and last terms: \(2x^{2} \times 2\) = \(4x^{2}\). Thus the equation becomes: \(2x^{2} – 5x + 2 = 2x^{2} – 4x – x + 2\) \(= 2x(x – 2) -1 (x – 2)\) \(= (2x – 1)(x – 2)\) Therefore, \(2x^2~-~5x~+~2\) = 0 is the same as \((2x~ -~ 1) (x~ -~ 2)\) = 0. The values of x for which \( 2x^2~-~5x~+~2\) = 0 are the same as the values of x for which \((2x ~- ~1)~(x~ -~ 2)\) = 0. If \( (2x ~-~ 1)~(x~ -~ 2)\) = \(0\), then either \((2x ~-~ 1)\) = 0 or \((x ~- ~2)\) = 0. \(2x ~-~ 1\) = 0 gives \(2x\) = 1, i.e. \(x\) = \( \frac 12 \); \(x ~-~ 2\) = 0 gives \(x\) = 2. Therefore, \( \frac 12 \) and 2 are the roots of the equation \( 2x^2~-~5x~+~2\) = 0. Solution of the quadratic equation using the quadratic formula Consider the equation \( ax^2~+~bx~+~c\) = 0, a ≠ 0. Dividing the equation by a gives \( x^2~+~ \frac ba x~+~ \frac ca \) = 0. By the method of completing the square, we get \( \left( x~+~\frac{b}{2a}\right)^2~-~\left( \frac {b}{2a}\right)^2 ~+~\frac ca \) = 0 \( \left( x~+~\frac{b}{2a}\right)^2 ~-~ \frac {b^2~-~4ac}{4a^2}\) = 0 \( \left( x~+~\frac{b}{2a}\right)^2 \) = \( \frac {b^2~-~4ac}{4a^2}\) The roots of the equation are found by taking the square root of the RHS; for that, \( b^2~-~4ac\) should be greater than or equal to zero. When \( b^2~-~4ac\) ≥ 0, \( \left( x~+~\frac{b}{2a}\right) \) = ± \( \frac {\sqrt{b^2~-~4ac}}{2a}\) \( x \) = \( -~\frac{b}{2a}~±~\frac {\sqrt{b^2~-~4ac}}{2a}\) \( x \) = \(\frac {-b~±~\sqrt{b^2~-~4ac}}{2a}\) —-(1) Therefore the roots of the equation are \(\frac {-b~+~\sqrt{b^2~-~4ac}}{2a}\) and \(\frac {-b~-~\sqrt{b^2~-~4ac}}{2a}\). The equation will not have real roots if \( b^2~-~4ac\) < 0, because the square root is not defined for negative numbers in the real number system.
Equation (1) is the formula for finding the roots of the quadratic equation \(ax^2~+~bx~+~c\) = 0, and it is known as the quadratic formula. Example: Find the roots of the equation \( x^2~-~5x~+~6\) = 0 using the quadratic formula. Comparing the equation with \( ax^2~+~bx~+~c\) = 0 gives a = 1, b = -5 and c = 6. \(b^2~-~4ac\) = \((-5)^2~-~4~×~1~×~6\) = 1 The roots of the equation are \(\frac {-b~+~\sqrt{b^2~-~4ac}}{2a}\) = \( \frac {5~+~1}{2} \) = \( \frac {6}{2}\) = 3 and \(\frac {-b~-~\sqrt{b^2~-~4ac}}{2a}\) = \( \frac {5~-~1}{2} \) = \( \frac {4}{2}\) = 2
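The derivation above translates directly into code. A small Python sketch of the quadratic formula (the function name is my own), checked against both worked examples:

```python
import math

def quadratic_roots(a, b, c):
    """Real roots of a*x**2 + b*x + c = 0 via the quadratic formula.
    Returns None when b**2 - 4*a*c < 0 (no real roots)."""
    if a == 0:
        raise ValueError("not a quadratic equation: a must be non-zero")
    disc = b * b - 4 * a * c          # the discriminant b^2 - 4ac
    if disc < 0:
        return None
    root = math.sqrt(disc)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

print(quadratic_roots(1, -5, 6))   # roots of x^2 - 5x + 6 = 0 -> (3.0, 2.0)
print(quadratic_roots(2, -5, 2))   # roots of 2x^2 - 5x + 2 = 0 -> (2.0, 0.5)
```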
What is the Curie-Weiss temperature? What is the difference between the Curie-Weiss temperature and the Curie temperature? The Curie temperature or Curie point is the temperature at which a ferromagnetic or a ferrimagnetic material becomes paramagnetic when heated; the effect is reversible. The Curie-Weiss temperature, on the other hand, is the temperature at which a plot of the reciprocal molar magnetic susceptibility against the absolute temperature T intersects the T-axis. The Curie-Weiss temperature can take positive as well as negative values. I hope the difference is now clear. Naively, both temperatures are equal and they are the constant temperature $T_c$ entering the Curie-Weiss law: $$ \chi = \frac{C}{T-T_c}. $$ However, the behavior is often more complicated and the formula above doesn't describe the susceptibility $\chi$ well for all temperatures. When that is so, the Curie temperature $T_c$ is the temperature at which the susceptibility actually blows up, so $\chi=C/(T-T_c)$ holds for $T\sim T_c$, while the Curie-Weiss temperature is the temperature $T_0$ for which the law $\chi=C/(T-T_0)$ holds for $T\gg T_0$, i.e. the one reconstructed from the "shape of the hyperbola far away". The temperatures are close, $T_0\sim T_c$, for materials for which the transition is first-order; the temperatures are very different if the transition is second-order.
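The intercept definition is easy to illustrate. A Python sketch with synthetic data (the values of the Curie constant C and of T0 are arbitrary assumptions): generate χ from the Curie-Weiss law, fit a straight line to 1/χ versus T, and read off where it crosses the T-axis:

```python
# Synthetic paramagnetic susceptibility chi = C/(T - T0); C and T0 are arbitrary
C, T0 = 1.5, 100.0
T = [float(t) for t in range(150, 400, 10)]
inv_chi = [(t - T0) / C for t in T]        # 1/chi is linear in T

# Least-squares fit of the line 1/chi = m*T + q
n = len(T)
T_mean = sum(T) / n
y_mean = sum(inv_chi) / n
m = sum((t - T_mean) * (y - y_mean) for t, y in zip(T, inv_chi)) \
    / sum((t - T_mean) ** 2 for t in T)
q = y_mean - m * T_mean

# Curie-Weiss temperature: where the fitted line crosses the T-axis (1/chi = 0)
T_cw = -q / m
print(T_cw)   # recovers T0, ~100
```

With real data, the fit would be restricted to the high-temperature region where the Curie-Weiss law actually holds, which is precisely why the extracted $T_0$ can differ from the true $T_c$.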
Revision as of 12:44, 15 December 2012 A mapping $\varphi:D\to D'$ possesses Luzin's $\mathcal N$-property if the image of every set of measure zero is a set of measure zero. A mapping $\varphi$ possesses Luzin's $\mathcal N{}^{-1}$-property if the preimage of every set of measure zero is a set of measure zero.
Briefly:\begin{equation*}\mathcal N\text{-property:}\quad \Sigma\subset D, |\Sigma| = 0 \Rightarrow |\varphi(\Sigma)|=0,\end{equation*}\begin{equation*}\mathcal N{}^{-1}\text{-property:} \quad M \subset D, |M| = 0 \Rightarrow |\varphi^{-1}(M)|=0.\end{equation*} $\mathcal N$-property of a function $f$, continuous on an interval $[a,b]$ For any set $E\subset[a,b]$ of measure zero ($|E|=0$), the image of this set, $f(E)$, also has measure zero. It was introduced by N.N. Luzin in 1915 (see [1]). The following assertions hold. 1) A function $f\not\equiv \operatorname{const}$ on $[a,b]$ such that $f'(x)=0$ almost-everywhere on $[a,b]$ (see for example the Cantor ternary function) does not have the Luzin $\mathcal N$-property. 2) If $f$ does not have the Luzin $\mathcal N$-property, then on $[a,b]$ there is a perfect set $P$ of measure zero such that $|f(P)|>0$. 3) An absolutely continuous function has the Luzin $\mathcal N$-property. 4) If $f$ has the Luzin $\mathcal N$-property and has bounded variation on $[a,b]$ (as well as being continuous on $[a,b]$), then $f$ is absolutely continuous on $[a,b]$ (the Banach–Zaretskii theorem). 5) If $f$ does not decrease on $[a,b]$ and $f'$ is finite on $[a,b]$, then $f$ has the Luzin $\mathcal N$-property. 6) In order that $f(E)$ be measurable for every measurable set $E\subset[a,b]$ it is necessary and sufficient that $f$ have the Luzin $\mathcal N$-property on $[a,b]$. 7) A function $f$ that has the Luzin $\mathcal N$-property has a derivative on a set such that any non-empty portion of that set has positive measure. 8) For any perfect nowhere-dense set $P\subset[a,b]$ there is a function $f$ having the Luzin $\mathcal N$-property on $[a,b]$ and such that $f'$ does not exist at any point of $P$. The concept of Luzin's $\mathcal N$-property can be generalized to functions of several variables and functions of a more general nature, defined on measure spaces. References [1] N.N. Luzin, "The integral and trigonometric series", Moscow-Leningrad (1915) (In Russian) (Thesis; also: Collected Works, Vol. 1, Moscow, 1953, pp. 48–212) Comments There is another property intimately related to the Luzin $\mathcal N$-property.
A function $f$ continuous on an interval $[a,b]$ has the Banach $\mathcal S$-property if for all $\epsilon>0$ there is a $\delta>0$ such that, for all Lebesgue-measurable sets $E\subset[a,b]$ with $|E|<\delta$, one has $|f(E)|<\epsilon$. This is clearly stronger than the $\mathcal N$-property. S. Banach proved that a function $f$ has the $\mathcal S$-property (respectively, the $\mathcal N$-property) if and only if (respectively, only if — see below for the missing "if") the inverse image $f^{-1}(y)$ is finite (respectively, is at most countable) for almost-all $y$ in $f([a,b])$. For classical results on the $\mathcal N$- and $\mathcal S$-properties, see [a3]. Recently a powerful extension of these results has been given by G. Mokobodzki (cf. [a1], [a2]), allowing one to prove deep results in potential theory. Let $X$ and $Y$ be two compact metrizable spaces, $Y$ being equipped with a probability measure $\lambda$. Let $A$ be a Borel subset of $X\times Y$ and, for any Borel subset $B$ of $X$, define the subset $A(B)$ of $Y$ by $A(B)=\{y\in Y : (x,y)\in A \text{ for some } x\in B\}$ (if $A$ is the graph of a mapping $\varphi:Y\to X$, then $A(B)=\varphi^{-1}(B)$). The set $A$ is said to have the property (N) (respectively, the property (S)) if there exists a measure $\mu$ on $X$ (here $\mu$ depending on $A$) such that for all $B$, $\lambda(A(B))\le\mu(B)$ (respectively, for all $\epsilon>0$ there is a $\delta>0$ such that for all $B$ with $\mu(B)<\delta$ one has $\lambda(A(B))<\epsilon$). Now $A$ has the property (N) (respectively, the property (S)) if and only if the section $A(\{x\})$ of $A$ is at most countable (respectively, is finite) for almost-all $x$ in $X$. References [a1] C. Dellacherie, D. Feyel, G. Mokobodzki, "Intégrales de capacités fortement sous-additives", Sem. Probab. Strasbourg XVI, Lect. Notes in Math., 920, Springer (1982) pp. 8–28 MR0658670 Zbl 0496.60076 [a2] A. Louveau, "Minceur et continuité séquentielle des sous-mesures analytiques fortement sous-additives", Sem. Initiation à l'Analyse, 66, Univ. P. et M. Curie (1983–1984) Zbl 0587.28003 [a3] S. Saks, "Theory of the integral", Hafner (1952) (Translated from French) MR0167578 Zbl 1196.28001 Zbl 0017.30004 Zbl 63.0183.05 [a4] E. Hewitt, K.R. Stromberg, "Real and abstract analysis", Springer (1965) MR0188387 Zbl 0137.03202
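To make assertion 1) of the entry above concrete, here is the standard Cantor-function computation (a well-known example added for illustration; the symbols $\varphi$ and $C$ are not used in the entry itself):

```latex
% \varphi denotes the Cantor ternary function on [0,1]: continuous,
% non-decreasing, with \varphi'(x) = 0 almost everywhere.
% It maps the Cantor set C, a null set, onto the whole interval:
\[
  |C| = 0, \qquad \varphi(C) = [0,1], \qquad |\varphi(C)| = 1 > 0,
\]
% so a set of measure zero has an image of positive measure, and the
% Luzin \mathcal N-property fails, consistent with assertion 1).
```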
I have a function $g(x)$ that looks like this:

ListPlot[Table[{x, g[0.5, x, 3, 1]}, {x, -15, 15, 0.1}], Joined -> True, ImageSize -> Large, InterpolationOrder -> 2, PlotRange -> All] // AbsoluteTiming

I am trying to calculate the following integral: $$\int_{-\infty}^{\infty} \cos (5.5x) g(x) \,dx \tag{1}$$ Naturally, to speed up convergence, I would like to limit the integration to a smaller domain than $(-\infty,\infty)$, which should be justified by the fact that $g(x)$ quickly dies off past $x=5$. So I calculated (1) with the following limited domain: $$\int_{-L}^{L} \cos (5.5x) g(x) \,dx \tag{2}$$ for increasing values of $L$. Once again, just by looking at the plot of $g(x)$ above, the majority of the integral should come from $-5<x<5$, so I would expect some reasonable convergence. But this is not what I found. In the plot below:

testlength = Table[{L, NIntegrate[Cos[5.5*y]*g[0.5, y, 3., 1.], {y, -L, L}]}, {L, 5., 15., 0.5}];
(*checking convergence in interval size, t=5.5*)
ListPlot[{#[[1]], #[[2]]/2.1571002369282504`*^-6} & /@ testlength, Joined -> True, ImageSize -> Large, PlotRange -> All, AxesLabel -> {"L", ""}]

The integral is oscillating symmetrically about $0$ (I checked), and quickly dying off. What's going on here? How is this possible?

Supporting Code:

ClearAll[g];
g[x_?NumericQ, En_?NumericQ, pz_?NumericQ, \[Alpha]_?NumericQ] := 1./(En^2 + pz^2 + 0.24^2)*NIntegrate[((Sqrt[0.316/(1. + 1.2*((k4 + 0.5*En)^2 + kp + (x*pz)^2))^\[Alpha]*0.316/(1. + 1.2*((k4 - 0.5*En)^2 + kp + ((1. - x)*pz)^2))^\[Alpha]])*((1. - x)*0.316/(1. + 1.2*((k4 + 0.5*En)^2 + kp + (x*pz)^2))^\[Alpha] + x*0.316/(1. + 1.2*((k4 - 0.5*En)^2 + kp + ((1. - x)*pz)^2))^\[Alpha]))/(((k4 + 0.5*En)^2 + kp + (x*pz)^2 + (0.316/(1. + 1.2 ((k4 + 0.5*En)^2 + kp + (x*pz)^2))^\[Alpha])^2)*((k4 - 0.5*En)^2 + kp + ((1. - x)*pz)^2 + (0.316/(1. + 1.2*((k4 - 0.5*En)^2 + kp + ((1. - x)*pz)^2))^\[Alpha])^2)), {k4, -\[Infinity], \[Infinity]}, {kp, 0, \[Infinity]}, Method -> "LocalAdaptive"];
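The oscillation in $L$ is the generic behaviour when an oscillatory integral is truncated: integration by parts gives the tail $\int_L^\infty \cos(\omega x)\,g(x)\,dx \approx -g(L)\sin(\omega L)/\omega$ plus smaller terms, so the truncated value oscillates around the true value with amplitude of order $g(L)/\omega$. A minimal Python sketch (using a toy decaying $g$, not the $g$ defined above) reproduces the effect:

```python
import math

def g(x):
    # toy stand-in for a smooth, slowly decaying even amplitude
    # (assumption: this is NOT the Mathematica g above)
    return 1.0 / (1.0 + x * x)

def truncated(L, omega=5.5, n=20000):
    # composite Simpson rule for I(L) = integral_{-L}^{L} cos(omega x) g(x) dx
    h = 2.0 * L / n
    s = 0.0
    for i in range(n + 1):
        x = -L + i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * math.cos(omega * x) * g(x)
    return s * h / 3.0

# sample the truncation point over roughly one period 2*pi/omega of the
# tail oscillation: the values rise and fall instead of settling monotonically
vals = [truncated(L) for L in (5.0, 5.2, 5.4, 5.6, 5.8, 6.0)]
diffs = [b - a for a, b in zip(vals, vals[1:])]
oscillates = any(d1 * d2 < 0 for d1, d2 in zip(diffs, diffs[1:]))
```

For this toy $g$ the exact value is $\pi e^{-5.5}$, and the truncated values wobble around it with amplitude $\approx g(L)/\omega$, which is exactly the symmetric oscillation described in the question.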
Volume 55, № 3, 2003 Algorithms for the Best Simultaneous Uniform Approximation of a Family of Functions Continuous on a Compact Set by a Chebyshev Subspace Ukr. Mat. Zh. - 2003. - 55, № 3. - pp. 291-306 We generalize the cutting-plane method and the Remez method to the case of the problem of the best simultaneous uniform approximation of a family of functions continuous on a compact set. Ukr. Mat. Zh. - 2003. - 55, № 3. - pp. 307-359 We give a survey of the method of generalized moment representations introduced by Dzyadyk in 1981 and its applications to Padé approximations. In particular, some properties of biorthogonal polynomials are investigated and numerous important examples are given. We also consider applications of this method to joint Padé approximations, Padé–Chebyshev approximations, Hermite–Padé approximations, and two-point Padé approximations. Ukr. Mat. Zh. - 2003. - 55, № 3. - pp. 360-372 We propose a general method for obtaining Tauberian theorems with remainder for one class of Voronoi summation methods for double sequences of elements of a locally convex, linear topological space. This method is a generalization of the Davydov method of $C$-points. Ukr. Mat. Zh. - 2003. - 55, № 3. - pp. 373-378 We consider the problem of the extendability of solutions of differential equations to a singular set that consists of points at which the right-hand side of the equation considered is undefined. On the Asymptotic Behavior of the Remainder of a Dirichlet Series Absolutely Convergent in a Half-Plane Ukr. Mat. Zh. - 2003. - 55, № 3. - pp. 379-388 For a Dirichlet series \(\sum\nolimits_{n = 1}^\infty {a_n \exp \{ s{\lambda}_n \} } \) with nonnegative exponents and zero abscissa of absolute convergence, we study the asymptotic behavior of the remainder \(\sum\nolimits_{k = n}^\infty {\left| {a_k } \right|\exp \{ {\delta \lambda}_k \} } \), δ < 0, as n → ∞. Ukr. Mat. Zh. - 2003. - 55, № 3. - pp.
389-399 We construct a measure that corresponds to the correlation functions of equilibrium states of infinite systems of classical statistical mechanics. The correlation functions satisfy the Bogolyubov compatibility conditions. We also construct measures that correspond to the correlation functions of nonequilibrium states of infinite systems for the Boltzmann hierarchy and the Bogolyubov–Strel'tsova diffusion hierarchy. Ukr. Mat. Zh. - 2003. - 55, № 3. - pp. 400-413 We establish conditions for the existence and uniqueness of a solution of a problem with multipoint conditions with respect to a selected variable t (in the case of multiple nodes) and periodic conditions with respect to x_1, ..., x_p for a nonisotropic partial differential equation with constant complex coefficients. We prove metric theorems on lower bounds for small denominators appearing in the course of the solution of this problem. Ukr. Mat. Zh. - 2003. - 55, № 3. - pp. 414-424 For the upper bounds of the deviations of a function defined on the entire real line from the corresponding values of the de la Vallée-Poussin operators, we find asymptotic equalities that give a solution of the well-known Kolmogorov–Nikol'skii problem. Ukr. Mat. Zh. - 2003. - 55, № 3. - pp. 425-430 We establish a criterion for the existence and uniqueness of solutions of a linear difference equation with an unbounded operator coefficient belonging to the space $l_p(B)$ of sequences of elements of a Banach space $B$.
In mathematics, mixing is an abstract concept originating from physics: the attempt to describe the irreversible thermodynamic process of mixing in the everyday world: mixing paint, mixing drinks, etc. The concept appears in ergodic theory—the study of stochastic processes and measure-preserving dynamical systems. Several different definitions for mixing exist, including strong mixing, weak mixing and topological mixing, with the last not requiring a measure to be defined. Some of the different definitions of mixing can be arranged in a hierarchical order; thus, strong mixing implies weak mixing. Furthermore, weak mixing (and thus also strong mixing) implies ergodicity: that is, every system that is weakly mixing is also ergodic (and so one says that mixing is a "stronger" notion than ergodicity).

Mixing in stochastic processes

Let $\{X_t\}$ be a sequence of random variables. Such a sequence is naturally endowed with a topology, the product topology. The open sets of this topology are called cylinder sets. These cylinder sets generate a sigma algebra, the Borel sigma algebra; it is the smallest (coarsest) sigma algebra that contains the topology. Define a function $\alpha$, called the strong mixing coefficient, as

$$\alpha(s) = \sup \big\{\, |P(A \cap B) - P(A)P(B)| : -\infty < t < \infty,\ A\in X_{-\infty}^{t},\ B\in X_{t+s}^{\infty} \,\big\}.$$

In this definition, $P$ is the probability measure on the sigma algebra. The symbol $X_a^b$, with $-\infty \le a \le b \le \infty$, denotes a subalgebra of the sigma algebra; it is the set of cylinder sets that are specified between times $a$ and $b$, i.e. the sigma-algebra generated by $\{X_a, X_{a+1}, \dots, X_b\}$.

The process is strong mixing if $\alpha(s)\to 0$ as $s\to\infty$. One way to describe this is that strong mixing implies that for any two possible states of the system (realizations of the random variable), when given a sufficient amount of time between the two states, the occurrence of the states is independent.

Types of mixing

Suppose $\{x_t\}$ is a stationary Markov process, with stationary distribution $Q$.
Denote by $L^2(Q)$ the space of Borel-measurable functions that are square-integrable with respect to the measure $Q$. Also let $\mathcal{E}_t\phi(x) = E[\phi(X_t) \mid X_0 = x]$ denote the conditional expectation operator on $L^2(Q)$. Finally, let $Z = \{\phi\in L^2(Q):\ \int \phi\,dQ = 0\}$ denote the space of square-integrable functions with mean zero.

The ρ-mixing coefficients of the process $\{x_t\}$ are

$$\rho_t = \sup_{\phi\in Z:\,\|\phi\|_2=1} \| \mathcal{E}_t\phi \|_2.$$

The process is called ρ-mixing if these coefficients converge to zero as $t \to \infty$, and "ρ-mixing with exponential decay rate" if $\rho_t < e^{-\delta t}$ for some $\delta > 0$. For a stationary Markov process, the coefficients $\rho_t$ may either decay at an exponential rate, or be always equal to one. [1]

The α-mixing coefficients of the process $\{x_t\}$ are

$$\alpha_t = \sup_{\phi\in Z:\,\|\phi\|_\infty=1} \| \mathcal{E}_t\phi \|_1.$$

The process is called α-mixing if these coefficients converge to zero as $t \to \infty$, it is "α-mixing with exponential decay rate" if $\alpha_t < \gamma e^{-\delta t}$ for some $\delta > 0$, and it is "α-mixing with sub-exponential decay rate" if $\alpha_t < \xi(t)$ for some non-increasing function $\xi(t)$ satisfying $t^{-1}\ln \xi(t) \to 0$ as $t \to \infty$. [1]

The α-mixing coefficients are always smaller than the ρ-mixing ones: $\alpha_t \le \rho_t$; therefore if the process is ρ-mixing, it will necessarily be α-mixing too. However, when $\rho_t = 1$, the process may still be α-mixing, with sub-exponential decay rate.

The β-mixing coefficients are given by

$$\beta_t = \int \sup_{0\leq\phi\leq1} \Big| \mathcal{E}_t\phi(x) - \int \phi \,dQ \Big| \,dQ.$$

The process is called β-mixing if these coefficients converge to zero as $t \to \infty$, it is "β-mixing with exponential decay rate" if $\beta_t < \gamma e^{-\delta t}$ for some $\delta > 0$, and it is "β-mixing with sub-exponential decay rate" if $\beta_t \xi(t) \to 0$ as $t \to \infty$ for some non-increasing function $\xi(t)$ satisfying $t^{-1}\ln \xi(t) \to 0$ as $t \to \infty$. [1]

A strictly stationary Markov process is β-mixing if and only if it is an aperiodic recurrent Harris chain.
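For intuition about these coefficients, consider a stationary Gaussian AR(1) process $X_{t+1} = aX_t + \varepsilon_t$ with $|a|<1$: its ρ-mixing coefficients decay exponentially, $\rho_t = |a|^t$, and this value is attained by the linear test function $\phi(x)=x$, so $\mathrm{corr}(X_0, X_t)$ equals $\rho_t$ (a classical fact, used here as an assumption for the check). A Monte Carlo sketch in Python, added as an illustration and not part of the article:

```python
import math
import random

random.seed(0)

a = 0.5        # AR(1) coefficient (assumption: any |a| < 1 works)
lag = 4        # time separation t between the two observations
n = 100_000    # number of (X_0, X_t) pairs drawn along one trajectory

# start the chain in (approximate) stationarity: Var X = 1/(1 - a^2)
x = random.gauss(0.0, math.sqrt(1.0 / (1.0 - a * a)))
pairs = []
for _ in range(n):
    x0 = x
    for _ in range(lag):
        x = a * x + random.gauss(0.0, 1.0)  # X_{t+1} = a X_t + N(0,1) noise
    pairs.append((x0, x))

m0 = sum(p for p, _ in pairs) / n
m1 = sum(q for _, q in pairs) / n
cov = sum((p - m0) * (q - m1) for p, q in pairs) / n
v0 = sum((p - m0) ** 2 for p, _ in pairs) / n
v1 = sum((q - m1) ** 2 for _, q in pairs) / n
corr = cov / math.sqrt(v0 * v1)
# for a stationary Gaussian AR(1), rho_lag = |a|**lag = 0.0625,
# and corr(X_0, X_lag) attains it for the linear test function
```

The estimated correlation should sit near $0.5^4 = 0.0625$, illustrating "ρ-mixing with exponential decay rate".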
The β-mixing coefficients are always bigger than the α-mixing ones, so if a process is β-mixing it will also be α-mixing. There is no direct relationship between β-mixing and ρ-mixing: neither of them implies the other.

Mixing in dynamical systems

A similar definition can be given using the vocabulary of measure-preserving dynamical systems. Let $(X,\mathcal{A},\mu,T)$ be a dynamical system, with $T$ being the time-evolution or shift operator. The system is said to be strong mixing if, for any $A,B\in\mathcal{A}$, one has

$$\lim_{n\to\infty} \mu(A \cap T^{-n}B) = \mu(A)\,\mu(B).$$

For shifts parametrized by a continuous variable instead of a discrete integer $n$, the same definition applies, with $T^{-n}$ replaced by $T_g^{-1}$ with $g$ being the continuous-time parameter.

To understand the above definition physically, consider a shaker full of an incompressible liquid, which consists of 20% wine and 80% water. If $A$ is the region originally occupied by the wine, then, for any part $B$ of the shaker, the percentage of wine in $B$ after $n$ repetitions of the act of stirring is

$$\frac{\mu(T^{n}A \cap B)}{\mu(B)}.$$

In such a situation, one would expect that after the liquid is sufficiently stirred ($n\to\infty$), every part $B$ of the shaker will contain approximately 20% wine. This leads to

$$\lim_{n\to\infty} \frac{\mu(T^{n}A \cap B)}{\mu(B)} = \mu(A),$$

which implies the above definition of strong mixing.

A dynamical system $(X,\mathcal{A},\mu,T)$ is said to be weak mixing if one has

$$\lim_{n\to\infty} \frac{1}{n}\sum_{k=0}^{n-1} |\mu(A \cap T^{-k}B) - \mu(A)\mu(B)| = 0.$$

In other words, $T$ is strong mixing if $\mu(A\cap T^{-n}B)$ converges towards $\mu(A)\mu(B)$ in the usual sense, weak mixing if $|\mu(A\cap T^{-n}B) - \mu(A)\mu(B)|$ converges towards $0$ in the Cesàro sense, and ergodic if $\mu(A\cap T^{-n}B)$ converges towards $\mu(A)\mu(B)$ in the Cesàro sense. Hence, strong mixing implies weak mixing, which implies ergodicity. However, the converse is not true: there exist ergodic dynamical systems which are not weakly mixing, and weakly mixing dynamical systems which are not strongly mixing. For a system that is weak mixing, the shift operator $T$ will have no (non-constant) square-integrable eigenfunctions with associated eigenvalue of one. In general, a shift operator will have a continuous spectrum, and thus will always have eigenfunctions that are generalized functions.
However, for the system to be (at least) weak mixing, none of the eigenfunctions with associated eigenvalue of one can be square integrable.

$L^2$ formulation

The properties of ergodicity, weak mixing and strong mixing of a measure-preserving dynamical system can also be characterized by the average of observables. By von Neumann's ergodic theorem, ergodicity of a dynamical system $(X,\mathcal{A},\mu,T)$ is equivalent to the property that, for any function $f\in L^2(X,\mu)$, the sequence $(f\circ T^{n})_{n\ge0}$ converges strongly in the sense of Cesàro to $\int_X f\,d\mu$, i.e.,

$$\lim_{N\to\infty} \Big\| \frac{1}{N}\sum_{n=0}^{N-1} f\circ T^{n} - \int_X f\,d\mu \Big\|_{L^2(X,\mu)} = 0.$$

A dynamical system $(X,\mathcal{A},\mu,T)$ is weakly mixing if, for any functions $f$ and $g$ in $L^2(X,\mu)$,

$$\lim_{N\to\infty} \frac{1}{N}\sum_{n=0}^{N-1} \Big| \int_X (f\circ T^{n})\,g\,d\mu - \int_X f\,d\mu \int_X g\,d\mu \Big| = 0.$$

A dynamical system $(X,\mathcal{A},\mu,T)$ is strongly mixing if, for any function $f\in L^2(X,\mu)$, the sequence $(f\circ T^{n})_{n\ge0}$ converges weakly to $\int_X f\,d\mu$, i.e., for any function $g\in L^2(X,\mu)$,

$$\lim_{n\to\infty} \int_X (f\circ T^{n})\,g\,d\mu = \int_X f\,d\mu \int_X g\,d\mu.$$

Since the system is assumed to be measure preserving, this last line is equivalent to saying that the covariance of $f\circ T^{n}$ and $g$ tends to zero, so that the random variables $f\circ T^{n}$ and $g$ become orthogonal as $n$ grows. Actually, since this works for any function $g$, one can informally see mixing as the property that the random variables $f\circ T^{n}$ and $g$ become independent as $n$ grows.

Products of dynamical systems

Given two measured dynamical systems $(X,\mu,T)$ and $(Y,\nu,S)$, one can construct a dynamical system $(X\times Y,\mu\otimes\nu,T\times S)$ on the Cartesian product by defining $(T\times S)(x,y) = (Tx,Sy)$. We then have the following characterizations of weak mixing:

Proposition: A dynamical system $(X,\mu,T)$ is weakly mixing if and only if, for any ergodic dynamical system $(Y,\nu,S)$, the system $(X\times Y,\mu\otimes\nu,T\times S)$ is also ergodic.

Proposition: A dynamical system $(X,\mu,T)$ is weakly mixing if and only if $(X\times X,\mu\otimes\mu,T\times T)$ is also ergodic. If this is the case, then $(X\times X,\mu\otimes\mu,T\times T)$ is also weakly mixing.

Generalizations

The definition given above is sometimes called strong 2-mixing, to distinguish it from higher orders of mixing. A strong 3-mixing system may be defined as a system for which

$$\lim_{m,n\to\infty} \mu(A \cap T^{-m}B \cap T^{-m-n}C) = \mu(A)\mu(B)\mu(C)$$

holds for all measurable sets $A$, $B$, $C$. We can define strong k-mixing similarly. A system which is strong k-mixing for all $k=2,3,4,\dots$ is called mixing of all orders. It is unknown whether strong 2-mixing implies strong 3-mixing. It is known that strong m-mixing implies ergodicity.
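A strongly mixing example that is easy to check by hand is the dyadic (doubling) map $T(x) = 2x \bmod 1$ with Lebesgue measure: for the dyadic sets $A = B = [0, 1/2)$, one computes directly that $\mu(A \cap T^{-n}B) = \mu(A)\mu(B) = 1/4$ for every $n \ge 1$. The Python sketch below (my illustration, not from the article) verifies this by Monte Carlo:

```python
import random

random.seed(1)

def doubling_iter_in_B(x, n):
    # does T^n(x) land in B = [0, 1/2)?  T(x) = 2x mod 1 is the dyadic map
    return (x * 2 ** n) % 1.0 < 0.5

N = 100_000
estimates = []
for n in (1, 5, 10):
    hits = 0
    for _ in range(N):
        x = random.random()  # x drawn from Lebesgue measure on [0, 1)
        if x < 0.5 and doubling_iter_in_B(x, n):  # x in A and T^n(x) in B
            hits += 1
    estimates.append(hits / N)
# each estimate approximates mu(A ∩ T^{-n}B), which equals mu(A)mu(B) = 0.25
```

All three estimates should sit near 0.25, as the strong-mixing limit predicts (here the limit is attained exactly for every $n \ge 1$ because the sets are dyadic intervals).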
Examples

Irrational rotations of the circle, and more generally irreducible translations on a torus, are ergodic but neither strongly nor weakly mixing with respect to the Lebesgue measure. Many maps considered as chaotic are strongly mixing for some well-chosen invariant measure, including: the dyadic map, Arnold's cat map, horseshoe maps, Kolmogorov automorphisms, and the geodesic flow on the unit tangent bundle of compact surfaces of negative curvature.

Topological mixing

A form of mixing may be defined without appeal to a measure, only using the topology of the system. A continuous map $f:X\to X$ is said to be topologically transitive if, for every pair of non-empty open sets $A,B\subset X$, there exists an integer $n$ such that $f^{n}(A)\cap B \neq \varnothing$, where $f^{n}$ is the $n$th iterate of $f$. In operator theory, a topologically transitive bounded linear operator (a continuous linear map on a topological vector space) is usually called a hypercyclic operator. A related idea is expressed by the wandering set.

Lemma: If $X$ is a compact metric space with no isolated point, then $f$ is topologically transitive if and only if there exists a hypercyclic point $x\in X$, that is, a point $x$ such that its orbit $\{f^{n}(x) : n\in\mathbb{N}\}$ is dense in $X$.

A system is said to be topologically mixing if, given open sets $A$ and $B$, there exists an integer $N$ such that, for all $n>N$, one has $f^{n}(A)\cap B\neq\varnothing$. For a continuous-time system, $f^{n}$ is replaced by the flow $\varphi_g$, with $g$ being the continuous parameter, with the requirement that a non-empty intersection hold for all $g>N$. A weak topological mixing is one that has no non-constant continuous (with respect to the topology) eigenfunctions of the shift operator. Topological mixing neither implies, nor is implied by, either weak or strong mixing: there are examples of systems that are weak mixing but not topologically mixing, and examples that are topologically mixing but not strong mixing.

References

Achim Klenke, Probability Theory, (2006) Springer ISBN 978-1-84800-047-6
V. I. Arnold and A. Avez, Ergodic Problems of Classical Mechanics, (1968) W. A.
Benjamin, Inc.
ISSN: 1078-0947 eISSN: 1553-5231 Discrete & Continuous Dynamical Systems - A, January 2011, Volume 29, Issue 1 Abstract: Non-integrability criteria, based on differential Galois theory and requiring the use of higher order variational equations (VE_k), are applied to prove the non-integrability of the Swinging Atwood's Machine for values of the parameter which cannot be decided using first order variational equations (VE_1). Abstract: The initial boundary-value problem for a Boussinesq system is studied on the half line and on a finite interval. Global existence of weak solutions satisfying the boundary conditions is proven and uniqueness for solutions in a suitable class is studied. A proof of the persistence of finite regularity for solutions in the whole space is also presented. Abstract: We study non-proper uniformly elliptic fully nonlinear equations involving extremal operators of Pucci type. We prove the existence of the full radial spectrum for this type of operators and establish multiplicity existence results through global bifurcation. Abstract: We provide a detailed description of long time dynamics in $l^1$ of the semigroup associated with constant coefficient infinite birth-and-death systems with proliferation. In particular, we discuss and slightly extend earlier stability results of [8] and also identify a range of parameters for which the semigroup is both stable in the sense of op. cit. and topologically chaotic. Moreover, for a range of parameters, we provide an explicit description of subspaces of $l^1$ which cannot generate chaotic orbits. Abstract: Consider a Riemannian metric on the two-torus. We prove that the question of existence of polynomial first integrals leads naturally to a remarkable system of quasi-linear equations which turns out to be a Rich system of conservation laws.
This reduces the question of integrability to the question of existence of smooth (quasi-) periodic solutions for this Rich quasi-linear system. Abstract: The purpose of this paper is to introduce a category whose objects are discrete dynamical systems $(X,P,H,\theta)$ in the sense of [6] and whose arrows will be defined starting from the notion of groupoid morphism given in [10]. We shall also construct a contravariant functor $(X,P,H,\theta) \rightarrow C^*(X,P,H,\theta)$ from the subcategory of discrete dynamical systems for which $PP^{-1}$ is amenable to the category of C*-algebras, where $C^*(X,P,H,\theta)$ is the C*-algebra associated to the groupoid $G(X,P,H,\theta)$. Abstract: This paper deals with the study of limit cycles that appear in a class of planar slow-fast systems, near a "canard" limit periodic set of FSTS-type. Limit periodic sets of FSTS-type are closed orbits, composed of a Fast branch, an attracting Slow branch, a Turning point, and a repelling Slow branch. Techniques to bound the number of limit cycles near a FSTS-l.p.s. are based on the study of the so-called slow divergence integral, calculated along the slow branches. In this paper, we extend the technique to the case where the slow dynamics has singularities of any (finite) order that accumulate to the turning point, and in which case the slow divergence integral becomes unbounded. Bounds on the number of limit cycles near the FSTS-l.p.s. are derived by examining appropriate derivatives of the slow divergence integral. Abstract: We investigate the asymptotic behavior of the nonautonomous evolution problem generated by the Oscillon equation $\partial_{tt} u(x,t) + H\,\partial_{t} u(x,t) - e^{-2Ht}\,\partial_{xx} u(x,t) + V'(u(x,t)) = 0, \quad (x,t)\in (0,1)\times\mathbb{R},$ with periodic boundary conditions, where $H>0$ is the Hubble constant and $V$ is a nonlinear potential of arbitrary polynomial growth.
After constructing a suitable dynamical framework to deal with the explicit time dependence of the energy of the solution, we establish the existence of a regular global attractor $\mathcal{A}=\mathcal{A}(t)$. The kernel sections $\mathcal{A}(t)$ have finite fractal dimension. Abstract: We consider planar systems driven by a central force which depends periodically on time. If the force is sublinear and attractive, then there is a connected set of subharmonic and quasi-periodic solutions rotating around the origin at different speeds; moreover, this connected set stretches from zero to infinity. The result still holds allowing the force to be attractive only in average provided that a uniformity condition is satisfied and there are no periodic oscillations with zero angular momentum. We provide examples showing that these assumptions cannot be skipped. Abstract: We consider the question of computing invariant measures from an abstract point of view. Here, computing a measure means finding an algorithm which can output descriptions of the measure up to any precision. We work in a general framework (computable metric spaces) where this problem can be posed precisely. We will find invariant measures as fixed points of the transfer operator. In this case, a general result ensures the computability of isolated fixed points of a computable map. We give general conditions under which the transfer operator is computable on a suitable set. This implies the computability of many "regular enough" invariant measures and among them many physical measures. On the other hand, not all computable dynamical systems have a computable invariant measure. We exhibit two examples of computable dynamics, one having a physical measure which is not computable and one for which no invariant measure is computable, showing some subtlety in this kind of problem. Abstract: This article tackles the problem of the classification of expansive homeomorphisms of the plane.
Necessary and sufficient conditions for a homeomorphism to be conjugate to a linear hyperbolic automorphism will be presented. The techniques involve topological and metric aspects of the plane. The use of a Lyapunov metric function which defines the same topology as the one induced by the usual metric but that, in general, is not equivalent to it is an example of such techniques. The discovery of a hypothesis about the behavior of Lyapunov functions at infinity allows us to generalize some results that are valid in the compact context. Additional local properties allow us to obtain another classification theorem. Abstract: In this paper we discuss the existence of α-Hölder classical solutions for non-autonomous abstract partial neutral functional differential equations. An application is considered. Abstract: We develop a renormalization method that applies to the problem of the local reducibility of analytic skew-product flows on $\mathbb{T}^d \times SL(2,\mathbb{R})$. We apply the method to give a proof of a reducibility theorem for these flows with Brjuno base frequency vectors. Abstract: We address the problem of analyticity up to the boundary of solutions to the Euler equations in the half space. We characterize the rate of decay of the real-analyticity radius of the solution $u(t)$ in terms of $\exp\int_{0}^{t} \|\nabla u(s)\|_{L^\infty}\,ds$, improving the previously known results. We also prove the persistence of the sub-analytic Gevrey-class regularity for the Euler equations in a half space, and obtain an explicit rate of decay of the radius of Gevrey-class regularity. Abstract: In this paper we discuss the large time behavior of the solution to the Cauchy problem governed by a transport equation with Maxwell boundary conditions arising in growing cell population in $L^1$-spaces. Our result completes previous ones established in [3] in $L^p$-spaces with $1 < p < \infty$.
Abstract: We show that every continuous map from one translationally finite tiling space to another can be approximated by a local map. If two local maps are homotopic, then the homotopy can be chosen so that every interpolating map is also local. Abstract: The global well-posedness, the existence of globally absorbing sets and the existence of inertial manifolds are investigated in a class of diffusive (viscous) Burgers equations. The class includes the diffusive Burgers equation with nontrivial forcing, the Burgers-Sivashinsky equation and the Quasi-Steady equation of cellular flames. Global dissipativity is proven in two space dimensions for periodic boundary conditions. For the proof of the existence of inertial manifolds, the spectral-gap condition, which Burgers-type equations do not satisfy in their original form, is circumvented by employing the Cole-Hopf transform. The procedure is valid in both one and two space dimensions. Abstract: We derive an age-structured population model with nonlocal effects and time delay in a periodic habitat. The spatial dynamics of the model, including the comparison principle, the global attractivity of the spatially periodic equilibrium, spreading speeds, and spatially periodic traveling wavefronts, is investigated. It turns out that the spreading speed coincides with the minimal wave speed for spatially periodic traveling waves. Abstract: This paper is concerned with the existence of large positive spiky steady states for S-K-T competition systems with cross-diffusion. Firstly, by detailed integral and perturbation estimates, the existence and detailed fast-slow structure of a class of spiky steady states are obtained for the corresponding shadow system, which also verify and extend some existence results on spiky steady states obtained in [10] by a different method of proof.
Further, by applying a special perturbation method, we prove the existence of large positive spiky steady states for the original competition systems with large cross-diffusion rate. Abstract: We consider two-degree-of-freedom Hamiltonian systems with saddle-centers, and develop a Melnikov-type technique for detecting creation of transverse homoclinic orbits by higher-order terms. We apply the technique to the generalized Hénon-Heiles system and give a positive answer to a remaining question of whether chaotic dynamics occurs for some parameter values, although it is known to be nonintegrable in the complex analytic sense.
If we solve the heat equation $u_t=u_{xx}$ by separation of variables, we assume that $u(x,t)=f(x)g(t)$, and solving 2 ordinary differential equations we can derive that $u(x,t)=e^{-\omega^2t}(b\cdot \sin(\omega x)+ a\cdot \cos(\omega x))$ for some $a,b$, is a solution. Now, if we assume the boundary conditions $u(0,t)=u(1,t)=0$, we know that $a=0$, and $\omega=k\pi, k\in \mathbb N$. This gives us: $u(x,t)= be^{-k^2\pi^2t}\sin(k\pi x)$ We then know that, since the heat equation is linear, any $u(x,t)=\sum_{k=1}^nb_ke^{-k^2\pi^2t}\sin(k\pi x)$ is also a solution. Under some assumptions we can let $n$ go to infinity. My question is: How do we know that the set of such infinite sums of sines will contain all solutions to the heat equation (with those boundary conditions)?
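The key step behind completeness is that the sine family $\{\sin(k\pi x)\}_{k\ge1}$ is an orthogonal basis of $L^2(0,1)$, so any admissible initial profile $u(x,0)$ is captured by its sine coefficients $b_k = 2\int_0^1 u(x,0)\sin(k\pi x)\,dx$, and the solution is then determined by its initial data. A small Python sketch (my own illustration, with an arbitrary test profile) shows the sine series reproducing $f(x) = x(1-x)$:

```python
import math

def f(x):
    # test initial profile (assumption: any continuous f with f(0) = f(1) = 0)
    return x * (1.0 - x)

def sine_coeff(k, n=2000):
    # b_k = 2 * integral_0^1 f(x) sin(k pi x) dx, via the composite Simpson rule
    h = 1.0 / n
    s = 0.0
    for i in range(n + 1):
        x = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * f(x) * math.sin(k * math.pi * x)
    return 2.0 * s * h / 3.0

def partial_sum(x, terms=50):
    # truncated sine series: the t = 0 slice of the separated-variables solution
    return sum(sine_coeff(k) * math.sin(k * math.pi * x)
               for k in range(1, terms + 1))

approx = partial_sum(0.5)  # should reproduce f(0.5) = 0.25
```

Once the initial profile is matched, each mode evolves by its own factor $e^{-k^2\pi^2 t}$, which is where a uniqueness theorem for the heat equation finishes the completeness argument.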
@egreg It does this "I just need to make use of the standard hyphenation function of LaTeX, except "behind the scenes", without actually typesetting anything." (if not typesetting includes typesetting in a hidden box) it doesn't address the use case that he said he wanted that for @JosephWright ah yes, unlike the hyphenation near box question, I guess that makes sense, basically can't just rely on lccode anymore. I suppose you don't want the hyphenation code in my last answer by default? @JosephWright anyway if we rip out all the auto-testing (since mac/windows/linux come out the same anyway) but leave in the .cfg possibility, there is no actual loss of functionality if someone is still using a vms tex or whatever I want to change the tracking (space between the characters) for a sans serif font. I found that I can use the microtype package to change the tracking of the smallcaps font (\textsc{foo}), but I can't figure out how to make \textsc{} a sans serif font. @DavidCarlisle -- if you write it as "4 May 2016" you don't need a comma (or, in the u.s., want a comma). @egreg (even if you're not here at the moment) -- tomorrow is international archaeology day: twitter.com/ArchaeologyDay, so there must be someplace near you that you could visit to demonstrate your firsthand knowledge. @barbarabeeton I prefer May 4, 2016, for some reason (don't know why actually) @barbarabeeton but I have another question maybe better suited for you please: If a member of a conference scientific committee writes a preface for the special issue, can the signature say John Doe \\ for the scientific committee or is there a better wording? @barbarabeeton \overrightarrow answer will have to wait, need time to debug \ialign :-) (it's not the \smash that did it) on the other hand if we mention \ialign enough it may interest @egreg enough to debug it for us. @DavidCarlisle -- okay. are you sure the \smash isn't involved?
i thought it might also be the reason that the arrow is too close to the "M". (\smash[t] might have been more appropriate.) i haven't yet had a chance to try it out at "normal" size; after all, \Huge is magnified from a larger base for the alphabet, but always from 10pt for symbols, and that's bound to have an effect, not necessarily positive. (and yes, that is the sort of thing that seems to fascinate @egreg.) @barbarabeeton yes I edited the arrow macros not to have relbar (ie just omit the extender entirely and just have a single arrowhead) but it still overprinted when in the \ialign construct but I'd already spent too long on it at work so stopped, may try to look this weekend (but it's uktug tomorrow) if the expression is put into an \fbox, it is clear all around. even with the \smash. so something else is going on. put it into a text block, with \newline after the preceding text, and directly following before another text line. i think the intention is to treat the "M" as a large operator (like \sum or \prod, but the submitter wasn't very specific about the intent.) @egreg -- okay. i'll double check that with plain tex. but that doesn't explain why there's also an overlap of the arrow with the "M", at least in the output i got. personally, i think that that arrow is horrendously too large in that context, which is why i'd like to know what is intended. @barbarabeeton the overlap below is much smaller, see the righthand box with the arrow in egreg's image, it just extends below and catches the serifs on the M, but the overlap above is pretty bad really @DavidCarlisle -- i think other possible/probable contexts for the \over*arrows have to be looked at also. this example is way outside the contexts i would expect. and any change should work without adverse effect in the "normal" contexts. @DavidCarlisle -- maybe better take a look at the latin modern math arrowheads ... @DavidCarlisle I see no real way out.
The CM arrows extend above the x-height, but the advertised height is 1ex (actually a bit less). If you add the strut, you end up with too big a space when using other fonts. MagSafe is a series of proprietary magnetically attached power connectors, originally introduced by Apple Inc. on January 10, 2006, in conjunction with the MacBook Pro at the Macworld Expo in San Francisco, California. The connector is held in place magnetically so that if it is tugged — for example, by someone tripping over the cord — it will pull out of the socket without damaging the connector or the computer power socket, and without pulling the computer off the surface on which it is located. The concept of MagSafe is copied from the magnetic power connectors that are part of many deep fryers... has anyone converted from LaTeX -> Word before? I have seen questions on the site but I'm wondering what the result is like... and whether the document is still completely editable etc after the conversion? I mean, if the doc is written in LaTeX, then converted to Word, is the word editable? I'm not familiar with word, so I'm not sure if there are things there that would just get goofed up or something. @baxx never use word (have a copy just because but I don't use it;-) but have helped enough people with things over the years, these days I'd probably convert to html latexml or tex4ht then import the html into word and see what comes out You should be able to cut and paste mathematics from your web browser to Word (or any of the Microsoft Office suite). Unfortunately at present you have to make a small edit but any text editor will do for that. Given x=\frac{-b\pm\sqrt{b^2-4ac}}{2a} Make a small html file that looks like <!... @baxx all the converters that I mention can deal with document \newcommand to a certain extent. if it is just \newcommand\z{\mathbb{Z}} that is no problem in any of them, if it's half a million lines of tex commands implementing tikz then it gets trickier.
@baxx yes but they are extremes but the thing is you just never know, you may see a simple article class document that uses no hard looking packages then get half way through and find \makeatletter and several hundred lines of tricky tex macros copied from this site that are over-writing latex format internals.
Deep Learning: Regularization Notes In the previous article (long ago, now I am back!!) I talked about overfitting and the problems faced due to overfitting. Many regularization approaches are based on limiting the capacity of models, such as neural networks, linear regression, or logistic regression, by adding a parameter norm penalty Ω(θ) to the objective function J: J̃(θ; X, y) = J(θ; X, y) + αΩ(θ) — {1} where α ∈ [0, ∞) is a hyperparameter that weights the relative contribution of the norm penalty term, Ω, relative to the standard objective function J. We note that for neural networks, we typically choose to use a parameter norm penalty Ω that penalizes only the weights of the affine transformation at each layer and leaves the biases unregularized. We therefore use the vector w to indicate all of the weights that should be affected by a norm penalty, while the vector θ denotes all of the parameters, including both w and the unregularized parameters.
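As a minimal sketch of equation {1} (the function names and the choice of an L2 penalty Ω(w) = ½‖w‖² are mine, for illustration; the notes leave Ω abstract):

```python
import numpy as np

def l2_penalty(weights, alpha):
    """alpha * Omega(theta) with Omega(w) = 0.5 * sum of squared weights.
    Only the weight matrices are penalized; biases are left unregularized,
    as the notes recommend."""
    return alpha * 0.5 * sum(np.sum(W ** 2) for W in weights)

def regularized_objective(data_loss, weights, alpha):
    # Equation {1}: J~(theta; X, y) = J(theta; X, y) + alpha * Omega(theta)
    return data_loss + l2_penalty(weights, alpha)
```

Setting alpha to zero recovers the unregularized objective; larger values penalize large weights more heavily.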
It involves subtracting the mean across every individual feature in the data, and has the geometric interpretation of centering the cloud of data around the origin along every dimension. It only makes sense to apply this preprocessing if you have a reason to believe that different input features have different scales (or units), but they should be of approximately equal importance to the learning algorithm. In the case of images, the relative scales of pixels are already approximately equal (and in the range from 0 to 255), so it is not strictly necessary to perform this additional preprocessing step. Then, we can compute the covariance matrix that tells us about the correlation structure in the data: The (i,j) element of the data covariance matrix contains the covariance between the i-th and j-th dimension of the data. To decorrelate the data, we project the original (but zero-centered) data into the eigenbasis: Notice that the columns of U are a set of orthonormal vectors (norm of 1, and orthogonal to each other), so they can be regarded as basis vectors. This is also sometimes referred to as Principal Component Analysis (PCA) dimensionality reduction: After this operation, we would have reduced the original dataset of size [N x D] to one of size [N x 100], keeping the 100 dimensions of the data that contain the most variance. The geometric interpretation of this transformation is that if the input data is a multivariable gaussian, then the whitened data will be a gaussian with zero mean and identity covariance matrix. One weakness of this transformation is that it can greatly exaggerate the noise in the data, since it stretches all dimensions (including the irrelevant dimensions of tiny variance that are mostly noise) to be of equal size in the input.
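The zero-centering, decorrelation, and whitening steps described above can be sketched in NumPy as follows (the data matrix is synthetic and the shapes are my own choices, not from the notes):

```python
import numpy as np

# Hypothetical data matrix X of shape [N x D]; each row is one example.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3)) @ np.array([[2.0, 0.0, 0.0],
                                          [0.0, 0.5, 0.0],
                                          [0.3, 0.0, 1.0]])

X -= X.mean(axis=0)                  # zero-center every feature
cov = X.T @ X / X.shape[0]           # [D x D] covariance matrix
U, S, _ = np.linalg.svd(cov)         # columns of U are the eigenbasis
Xrot = X @ U                         # decorrelate: project onto the eigenbasis
Xpca = X @ U[:, :2]                  # PCA: keep only the top-variance components
Xwhite = Xrot / np.sqrt(S + 1e-5)    # whiten: equalize the variance of every dimension
```

The small constant 1e-5 prevents division by zero for near-zero-variance directions; it is also what damps the noise amplification mentioned at the end of the paragraph.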
Note that we do not know what the final value of every weight should be in the trained network, but with proper data normalization it is reasonable to assume that approximately half of the weights will be positive and half of them will be negative. The idea is that the neurons are all random and unique in the beginning, so they will compute distinct updates and integrate themselves as diverse parts of the full network. The implementation for one weight matrix might look like W = 0.01* np.random.randn(D,H), where randn samples from a zero mean, unit standard deviation gaussian. With this formulation, every neuron’s weight vector is initialized as a random vector sampled from a multi-dimensional gaussian, so the neurons point in random directions in the input space. That is, the recommended heuristic is to initialize each neuron’s weight vector as: w = np.random.randn(n) / sqrt(n), where n is the number of its inputs. The sketch of the derivation is as follows: Consider the inner product \(s = \sum_i^n w_i x_i\) between the weights \(w\) and input \(x\), which gives the raw activation of a neuron before the non-linearity. And since \(\text{Var}(aX) = a^2\text{Var}(X)\) for a random variable \(X\) and a scalar \(a\), this implies that we should draw from a unit gaussian and then scale it by \(a = \sqrt{1/n}\), to make its variance \(1/n\). In the paper Understanding the difficulty of training deep feedforward neural networks by Glorot et al., the authors end up recommending an initialization of the form \( \text{Var}(w) = 2/(n_{in} + n_{out}) \) where \(n_{in}, n_{out}\) are the number of units in the previous layer and the next layer. A more recent paper on this topic, Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification by He et al., derives an initialization specifically for ReLU neurons, reaching the conclusion that the variance of neurons in the network should be \(2.0/n\).
This gives the initialization w = np.random.randn(n) * sqrt(2.0/n), and is the current recommendation for use in practice in the specific case of neural networks with ReLU neurons. Another way to address the uncalibrated variances problem is to set all weight matrices to zero, but to break symmetry every neuron is randomly connected (with weights sampled from a small gaussian as above) to a fixed number of neurons below it. For ReLU non-linearities, some people like to use a small constant value such as 0.01 for all biases because this ensures that all ReLU units fire in the beginning and therefore obtain and propagate some gradient. However, it is not clear if this provides a consistent improvement (in fact some results seem to indicate that this performs worse) and it is more common to simply use 0 bias initialization. A recently developed technique by Ioffe and Szegedy called Batch Normalization alleviates a lot of headaches with properly initializing neural networks by explicitly forcing the activations throughout a network to take on a unit gaussian distribution at the beginning of the training. In the implementation, applying this technique usually amounts to inserting the BatchNorm layer immediately after fully connected layers (or convolutional layers, as we’ll soon see), and before non-linearities. It is common to see the factor of \(\frac{1}{2}\) in front because then the gradient of this term with respect to the parameter \(w\) is simply \(\lambda w\) instead of \(2 \lambda w\). Lastly, notice that during gradient descent parameter update, using the L2 regularization ultimately means that every weight is decayed linearly: W += -lambda * W towards zero. L1 regularization is another relatively common form of regularization, where for each weight \(w\) we add the term \(\lambda \mid w \mid\) to the objective.
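A small sketch of the ReLU-specific initialization just described (the layer sizes and function name are arbitrary examples of mine):

```python
import numpy as np

def he_init(n_in, n_out, rng=None):
    """He et al. initialization for ReLU units: draw from a unit gaussian
    and scale so that Var(w) = 2 / n_in, where n_in is the fan-in."""
    if rng is None:
        rng = np.random.default_rng(0)
    return rng.normal(size=(n_in, n_out)) * np.sqrt(2.0 / n_in)

# One hidden layer's weight matrix; its empirical variance should be
# close to 2 / 4096.
W = he_init(4096, 256)
```

With a million-plus samples the measured variance lands within a fraction of a percent of the target, which is the point of the calibration.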
Another form of regularization is to enforce an absolute upper bound on the magnitude of the weight vector for every neuron and use projected gradient descent to enforce the constraint. In practice, this corresponds to performing the parameter update as normal, and then enforcing the constraint by clamping the weight vector \(\vec{w}\) of every neuron to satisfy \(\Vert \vec{w} \Vert_2 < c\) for some constant \(c\). Vanilla dropout in an example 3-layer Neural Network would perform dropout twice inside the train step: on the first hidden layer and on the second hidden layer. It can also be shown that performing this attenuation at test time can be related to the process of iterating over all the possible binary masks (and therefore all the exponentially many sub-networks) and computing their ensemble prediction. Since test-time performance is so critical, it is always preferable to use inverted dropout, which performs the scaling at train time, leaving the forward pass at test time untouched. There has been a large amount of research after the first introduction of dropout that tries to understand the source of its power in practice, and its relation to the other regularization techniques. As we already mentioned in the Linear Classification section, it is not common to regularize the bias parameters because they do not interact with the data through multiplicative interactions, and therefore do not have the interpretation of controlling the influence of a data dimension on the final objective. For example, a binary classifier for each category independently would take the form: where the sum is over all categories \(j\), and \(y_{ij}\) is either +1 or -1 depending on whether the i-th example is labeled with the j-th attribute, and the score vector \(f_j\) will be positive when the class is predicted to be present and negative otherwise.
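Since the dropout code itself is not reproduced above, here is a minimal sketch of inverted dropout for a single hidden activation (the names and the keep-probability value are my own, not the notes' exact code):

```python
import numpy as np

p = 0.5  # probability of keeping a unit (an arbitrary example value)

def train_step_layer(x, rng):
    """Inverted dropout on one hidden activation: the binary mask is
    divided by p at train time, so in expectation the activation is
    unchanged and the test-time forward pass needs no rescaling."""
    mask = (rng.random(x.shape) < p) / p  # drop and rescale in one step
    return x * mask

def predict_layer(x):
    return x  # untouched at test time; this is the whole point
```

In expectation each unit's output is preserved: a unit survives with probability p and is scaled by 1/p, so the mean activation matches the test-time forward pass.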
A binary logistic regression classifier has only two classes (0,1), and calculates the probability of class 1 as: Since the probabilities of class 1 and 0 sum to one, the probability for class 0 is \(P(y = 0 \mid x; w) = 1 - P(y = 1 \mid x; w)\). The expression above can look scary but the gradient on \(f\) is in fact extremely simple and intuitive: \(\partial{L_i} / \partial{f_j} = y_{ij} - \sigma(f_j)\) (as you can double check yourself by taking the derivatives). The L2 norm squared would compute the loss for a single example of the form: The reason the L2 norm is squared in the objective is that the gradient becomes much simpler, without changing the optimal parameters since squaring is a monotonic operation. For example, if you are predicting the star rating for a product, it might work much better to use 5 independent classifiers for ratings of 1-5 stars instead of a regression loss. If you’re certain that classification is not appropriate, use the L2 but be careful: for example, the L2 is more fragile, and applying dropout in the network (especially in the layer right before the L2 loss) is not a great idea. Regularization (mathematics) In mathematics, statistics, and computer science, particularly in the fields of machine learning and inverse problems, regularization is a process of introducing additional information in order to solve an ill-posed problem or to prevent overfitting.[1] Empirical learning of classifiers (learning from a finite data set) is always an underdetermined problem, because in general we are trying to infer a function of any \(x\) given only examples \(x_1, x_2, \ldots, x_n\). Concrete notions of complexity used include restrictions for smoothness and bounds on the vector space norm.[2][page needed] From a Bayesian point of view, many regularization techniques correspond to imposing certain prior distributions on model parameters. Regularization can be used to learn simpler models, induce models to be sparse, introduce group structure into the learning problem, and more.
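The claimed simplicity of the gradient can be checked numerically. The sketch below uses the \(y \in \{0, 1\}\) label convention (slightly different from the \(\pm 1\) convention of the multi-attribute formula above), for which the analytic gradient of the negative log-likelihood with respect to the score is \(\sigma(f) - y\):

```python
import numpy as np

def sigmoid(f):
    return 1.0 / (1.0 + np.exp(-f))

def nll(f, y):
    """Negative log-likelihood of a binary label y in {0, 1} given score f."""
    p = sigmoid(f)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

# Analytic gradient dL/df = sigmoid(f) - y, checked by central differences
# at an arbitrary point:
f, y, eps = 0.7, 1.0, 1e-6
num_grad = (nll(f + eps, y) - nll(f - eps, y)) / (2.0 * eps)
```

The numerical and analytic gradients agree to several decimal places, which is the "double check yourself" the text suggests.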
A simple form of regularization applied to integral equations, generally termed Tikhonov regularization after Andrey Nikolayevich Tikhonov, is essentially a trade-off between fitting the data and reducing a norm of the solution. More recently, non-linear regularization methods, including total variation regularization, have become popular. The goal of this learning problem is to find a function that fits or predicts the outcome (label) that minimizes the expected error over all possible inputs and labels. Without bounds on the complexity of the function space (formally, the reproducing kernel Hilbert space) available, a model will be learned that incurs zero loss on the surrogate empirical error. Regularization introduces a penalty for exploring certain regions of the function space used to build the model, which can improve generalization. Intuitively, a training procedure like gradient descent will tend to learn more and more complex functions as the number of iterations increases. In practice, early stopping is implemented by training on a training set and measuring accuracy on a statistically independent validation set. The exact solution to the unregularized least squares learning problem will minimize the empirical error, but may fail to generalize and minimize the expected error. The algorithm above is equivalent to restricting the number of gradient descent iterations for the empirical risk. An example is developing a simple predictive test for a disease in order to minimize the cost of performing medical tests while maximizing predictive power. A simple example is provided in the figure when the space of possible solutions lies on a 45 degree line. Elastic net regularization tends to have a grouping effect, where correlated input features are assigned equal weights.
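The Tikhonov trade-off between fitting the data and reducing the norm of the solution can be illustrated with ridge regression, whose closed-form solution is \(w = (X^T X + \lambda I)^{-1} X^T y\) (the data below is synthetic and \(\lambda = 1\) is an arbitrary choice of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=50)

lam = 1.0  # trade-off strength: larger lam -> smaller-norm solution
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)
w_ls = np.linalg.lstsq(X, y, rcond=None)[0]   # unregularized least squares
```

The regularized solution always has a smaller norm than the plain least-squares fit, since every singular-value component of the solution is shrunk.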
The proximal method iteratively performs gradient descent and then projects the result back into the space permitted by the regularizer. Groups of features can be regularized by a sparsity constraint, which can be useful for expressing certain prior knowledge into an optimization problem. The algorithm described for group sparsity without overlaps can be applied to the case where groups do overlap, in certain situations. The proximal operator cannot be computed in closed form, but can be effectively solved iteratively, inducing an inner iteration within the proximal method iteration. Regularizers have been designed to guide learning algorithms to learn models that respect the structure of unsupervised training samples. This regularizer constrains the functions learned for each task to be similar to the overall average of the functions across all tasks. An example is predicting blood iron levels measured at different times of the day, where each task represents a different person. Well-known model selection techniques include the Akaike information criterion (AIC), minimum description length (MDL), and the Bayesian information criterion (BIC).
Can anyone state the difference between frequency response and impulse response in simple English? The impulse response and frequency response are two attributes that are useful for characterizing linear time-invariant (LTI) systems. They provide two different ways of calculating what an LTI system's output will be for a given input signal. A continuous-time LTI system is usually illustrated like this: In general, the system $H$ maps its input signal $x(t)$ to a corresponding output signal $y(t)$. There are many types of LTI systems that can apply very different transformations to the signals that pass through them. But, they all share two key characteristics: The system is linear, so it obeys the principle of superposition. Stated simply, if you linearly combine two signals and input them to the system, the output is the same linear combination of what the outputs would have been had the signals been passed through individually. That is, if $x_1(t)$ maps to an output of $y_1(t)$ and $x_2(t)$ maps to an output of $y_2(t)$, then for all values of $a_1$ and $a_2$, $$ H\{a_1 x_1(t) + a_2 x_2(t)\} = a_1 y_1(t) + a_2 y_2(t) $$ The system is time-invariant, so its characteristics do not change with time. If you add a delay to the input signal, then you simply add the same delay to the output. For an input signal $x(t)$ that maps to an output signal $y(t)$, then for all values of $\tau$, $$ H\{x(t - \tau)\} = y(t - \tau) $$ Discrete-time LTI systems have the same properties; the notation is different because of the discrete-versus-continuous difference, but they are a lot alike. These characteristics allow the operation of the system to be straightforwardly characterized using its impulse and frequency responses. They provide two perspectives on the system that can be used in different contexts. Impulse Response: The impulse that is referred to in the term impulse response is generally a short-duration time-domain signal.
For continuous-time systems, this is the Dirac delta function $\delta(t)$, while for discrete-time systems, the Kronecker delta function $\delta[n]$ is typically used. A system's impulse response (often annotated as $h(t)$ for continuous-time systems or $h[n]$ for discrete-time systems) is defined as the output signal that results when an impulse is applied to the system input. Why is this useful? It allows us to predict what the system's output will look like in the time domain. Remember the linearity and time-invariance properties mentioned above? If we can decompose the system's input signal into a sum of a bunch of components, then the output is equal to the sum of the system outputs for each of those components. What if we could decompose our input signal into a sum of scaled and time-shifted impulses? Then, the output would be equal to the sum of copies of the impulse response, scaled and time-shifted in the same way. For discrete-time systems, this is possible, because you can write any signal $x[n]$ as a sum of scaled and time-shifted Kronecker delta functions: $$ x[n] = \sum_{k=0}^{\infty} x[k] \delta[n - k] $$ Each term in the sum is an impulse scaled by the value of $x[n]$ at that time instant. What would we get if we passed $x[n]$ through an LTI system to yield $y[n]$? Simple: each scaled and time-delayed impulse that we put in yields a scaled and time-delayed copy of the impulse response at the output. That is: $$ y[n] = \sum_{k=0}^{\infty} x[k] h[n-k] $$ where $h[n]$ is the system's impulse response. The above equation is the convolution theorem for discrete-time LTI systems. That is, for any signal $x[n]$ that is input to an LTI system, the system's output $y[n]$ is equal to the discrete convolution of the input signal and the system's impulse response. 
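As a small numerical illustration of the convolution sum (the three-point moving-average filter and the input signal are my own choices, not from the answer):

```python
import numpy as np

# Hypothetical LTI system: h[n] is the impulse response of a
# 3-point moving-average filter.
h = np.array([1/3, 1/3, 1/3])
x = np.array([0.0, 1.0, 2.0, 3.0, 0.0, 0.0])

# y[n] = sum_k x[k] * h[n - k]: the output is the input convolved
# with the impulse response.
y = np.convolve(x, h)
```

For example, y[2] is the average of the first three input samples and y[3] the average of the next three, exactly as a moving average should behave.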
For continuous-time systems, the above straightforward decomposition isn't possible in a strict mathematical sense (the Dirac delta has zero width and infinite height), but at an engineering level, it's an approximate, intuitive way of looking at the problem. A similar convolution theorem holds for these systems: $$ y(t) = \int_{-\infty}^{\infty} x(\tau) h(t - \tau) d\tau $$ where, again, $h(t)$ is the system's impulse response. There are a number of ways of deriving this relationship (I think you could make a similar argument as above by claiming that Dirac delta functions at all time shifts make up an orthogonal basis for the $L^2$ Hilbert space, noting that you can use the delta function's sifting property to project any function in $L^2$ onto that basis, therefore allowing you to express system outputs in terms of the outputs associated with the basis (i.e. time-shifted impulse responses), but I'm not a licensed mathematician, so I'll leave that aside). One method that relies only upon the aforementioned LTI system properties is shown here. In summary: For both discrete- and continuous-time systems, the impulse response is useful because it allows us to calculate the output of these systems for any input signal; the output is simply the input signal convolved with the impulse response function. Frequency response: An LTI system's frequency response provides a similar function: it allows you to calculate the effect that a system will have on an input signal, except those effects are illustrated in the frequency domain. 
Recall the definition of the Fourier transform: $$ X(f) = \int_{-\infty}^{\infty} x(t) e^{-j 2 \pi ft} dt $$ More importantly for the sake of this illustration, look at its inverse: $$ x(t) = \int_{-\infty}^{\infty} X(f) e^{j 2 \pi ft} df $$ In essence, this relation tells us that any time-domain signal $x(t)$ can be broken up into a linear combination of many complex exponential functions at varying frequencies (there is an analogous relationship for discrete-time signals called the discrete-time Fourier transform; I only treat the continuous-time case below for simplicity). For a time-domain signal $x(t)$, the Fourier transform yields a corresponding function $X(f)$ that specifies, for each frequency $f$, the scaling factor to apply to the complex exponential at frequency $f$ in the aforementioned linear combination. These scaling factors are, in general, complex numbers. One way of looking at complex numbers is in amplitude/phase format, that is: $$ X(f) = A(f) e^{j \phi(f)} $$ Looking at it this way, then, $x(t)$ can be written as a linear combination of many complex exponential functions, each scaled in amplitude by the function $A(f)$ and shifted in phase by the function $\phi(f)$. This lines up well with the LTI system properties that we discussed previously; if we can decompose our input signal $x(t)$ into a linear combination of a bunch of complex exponential functions, then we can write the output of the system as the same linear combination of the system response to those complex exponential functions. Here's where it gets better: exponential functions are the eigenfunctions of linear time-invariant systems. The idea is, similar to eigenvectors in linear algebra, if you put an exponential function into an LTI system, you get the same exponential function out, scaled by a (generally complex) value. This has the effect of changing the amplitude and phase of the exponential function that you put in. 
This is immensely useful when combined with the Fourier-transform-based decomposition discussed above. As we said before, we can write any signal $x(t)$ as a linear combination of many complex exponential functions at varying frequencies. If we pass $x(t)$ into an LTI system, then (because those exponentials are eigenfunctions of the system), the output contains complex exponentials at the same frequencies, only scaled in amplitude and shifted in phase. These effects on the exponentials' amplitudes and phases, as a function of frequency, constitute the system's frequency response. That is, for an input signal with Fourier transform $X(f)$ passed into system $H$ to yield an output with a Fourier transform $Y(f)$, $$ Y(f) = H(f) X(f) = A(f) e^{j \phi(f)} X(f) $$ In summary: So, if we know a system's frequency response $H(f)$ and the Fourier transform of the signal that we put into it $X(f)$, then it is straightforward to calculate the Fourier transform of the system's output; it is merely the product of the frequency response and the input signal's transform. For each complex exponential frequency that is present in the spectrum $X(f)$, the system has the effect of scaling that exponential in amplitude by $A(f)$ and shifting the exponential in phase by $\phi(f)$ radians. Bringing them together: An LTI system's impulse response and frequency response are intimately related. The frequency response is simply the Fourier transform of the system's impulse response (to see why this relation holds, see the answers to this other question). So, for a continuous-time system: $$ H(f) = \int_{-\infty}^{\infty} h(t) e^{-j 2 \pi ft} dt $$ So, given either a system's impulse response or its frequency response, you can calculate the other. Either one is sufficient to fully characterize the behavior of the system; the impulse response is useful when operating in the time domain and the frequency response is useful when analyzing behavior in the frequency domain.
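A discrete-time sketch of this relationship: the DFT of the impulse response gives the frequency response, and multiplying it by the input's transform reproduces the convolution output. The moving-average filter here is an arbitrary example of mine; zero-padding the FFT beyond the convolution length makes the circular convolution equal the linear one.

```python
import numpy as np

h = np.array([1/3, 1/3, 1/3])     # impulse response of a moving-average filter
x = np.random.default_rng(0).normal(size=32)

n_fft = 64                        # >= len(x) + len(h) - 1, so no circular wrap
H = np.fft.fft(h, n_fft)          # frequency response = DFT of impulse response
X = np.fft.fft(x, n_fft)

# Multiplication in the frequency domain equals convolution in time:
y_freq = np.real(np.fft.ifft(H * X))[: len(x) + len(h) - 1]
y_time = np.convolve(x, h)
```

The two outputs agree to floating-point precision, which is the Y(f) = H(f) X(f) relation in discrete form; np.abs(H) and np.angle(H) give the amplitude A(f) and phase φ(f) of the response.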
Bang on something sharply once and plot how it responds in the time domain (as with an oscilloscope or pen plotter). That will be close to the impulse response. Get a tone generator and vibrate something with different frequencies. Some resonant frequencies it will amplify. Others it may not respond to at all. Plot the response size and phase versus the input frequency. That will be close to the frequency response. For certain common classes of systems (where the system doesn't much change over time, and any non-linearity is small enough to ignore for the purpose at hand), the two responses are related, and a Laplace or Fourier transform might be applicable to approximate the relationship. The impulse response is the response of a system to a single pulse of infinitely small duration and unit energy (a Dirac pulse). The frequency response shows how much each frequency is attenuated or amplified by the system. The frequency response of a system is the impulse response transformed to the frequency domain. If you have an impulse response, you can use the FFT to find the frequency response, and you can use the inverse FFT to go from a frequency response to an impulse response. In short, we have two kinds of basic responses: time responses and frequency responses. Time responses test how the system handles a momentary disturbance, while frequency responses test it with a continuous disturbance. Time responses include the step response, ramp response, and impulse response. Frequency responses include sinusoidal responses. Aalto University offers some material for its course Mat-2.4129 freely online; the Matlab files are probably the most relevant, since most of the rest is in Finnish. If you are more interested, you could check the videos below for an introduction. I found them helpful myself. I have only very elementary knowledge about LTI problems, so I will cover them below -- but there are surely many more different kinds of problems!
Responses with Linear time-invariant problems With LTI (linear time-invariant) problems, the input and output have the same form: a sinusoidal input has a sinusoidal output, and similarly a step input results in a step output. If you don't have an LTI system -- let's say you have feedback, or your control/noise and input correlate -- then all the above assertions may be wrong. With LTI, you will get two types of changes: phase shift and amplitude changes, but the frequency stays the same. If you break some assumptions, let's say the non-correlation assumption, then the input and output may have very different forms. If you need to investigate whether a system is LTI or not, you could use a tool such as the Wiener-Hopf equation and correlation analysis. The Wiener-Hopf equation is used with noisy systems. It is essential to validate results and verify premises; otherwise it is easy to make mistakes with the different responses. More about determining the impulse response with a noisy system here. References Wikipedia article about LTI here
A thermal process of pasteurization is an operation that aims to reduce the initial number $N_0$ of microbial pathogens that are in a state of vegetative cells. The reduction of the number of microorganisms can be represented by the following equation:$$ N_0 \to N $$ Where $N_0$ is the initial number of microorganisms and $N$ is the number of microorganisms at time $t$. From this general expression it is possible to derive a rate law of the form:$$ \frac{dN}{dt} = - k \cdot N^{\alpha} $$ Where $\alpha$ is the reaction order and $k$ is the rate constant. This is a differential rate law: it states that the rate of change of $N$ with time equals the product of the rate constant $k$ and $N$ raised to the power $\alpha$. The thermal inactivation of microorganisms generally follows a first order reaction. Thus, for $\alpha = 1$, the integration of this rate law gives the expression of $N$ as a function of time:$$ \ln(N) - \ln(N_0) = - k \cdot t $$ It should be noted that the units of measurement of the rate constant are $s^{-1}$. In addition, the symbol $\ln$ is the natural logarithm: the natural logarithm of a number "x" is the exponent to which the base (Euler's number, 2.71828...) must be raised to produce that number "x". The representation of the above rate law with Bigelow's law requires the following modifications. Since $\ln(10) \approx 2.303$ and $\log(e) \approx 0.4343$, we have:$$ \ln(10) \cdot \log(e) = 2.303 \cdot 0.4343 = 1$$ By substituting $\ln$ with $\log$:$$ \log(N) - \log(N_0) = - \frac{k}{2.3} \cdot t$$ Now, it is possible to substitute $k/2.3$ with $1/D$, where $D$ is the decimal reduction time:$$ \log(N) - \log(N_0) = - \frac{t}{D}$$ The D-value is the time required to reduce the initial content $N_0$ by 1 log-cycle (or 90%). This expression can be referred to as the first Bigelow's law.
It can be found in logarithmic form:$$ log\frac{N_0}{N} = \frac{t}{D}$$ or in its exponential form:$$ N = N_0 \cdot 10^{-\frac{t}{D}}$$ The decimal reduction times of some relevant pathogens follow:

Microorganism | Temperature (℃) | D (min)
Bacillus cereus | 100 | 5.5
Clostridium botulinum | 121 | 0.2
Escherichia coli | 70 | 0.03
Salmonella typhimurium | 70 | 0.03
Clostridium perfringens | 100 | 1
Listeria monocytogenes | 70 | 0.3

A graphical representation of the meaning of the D-value is depicted below. The decimal reduction time (D-value) varies as a function of several parameters, such as pH, presence of sugars, water activity, etc. However, the most important parameter affecting the value of D is certainly the temperature. The rate constant of a reaction varies with temperature following the Arrhenius relationship:$$ k = A \cdot exp\left(-\frac{E_a}{R \cdot T}\right) $$ where $E_a$ is the activation energy and $R$ the gas constant. Over the temperature ranges of interest, an analogous exponential dependence on temperature can be assumed for the D-value:$$ D = a \cdot exp(- b \cdot T) $$ This expression can be turned into logarithmic form:$$ ln(D) = ln(a) - b \cdot T $$ Without any attempt to give a physical meaning to the parameters, from the mathematical point of view we can write the equation above at two distinct temperatures:$$ ln(D_1) = ln(a) - b \cdot T_1 $$ $$ ln(D_2) = ln(a) - b \cdot T_2 $$ Since the parameters $a$ and $b$ are constants, we can express $a$ in the general form:$$ ln(a) = ln(D) + b \cdot T $$ By substituting this expression for $ln(a)$ from one equation into the other:$$ ln(D_1) - ln(D_2) = - b \cdot T_1 + b \cdot T_2 = b \cdot (T_2 - T_1) $$ As for the first Bigelow law, it is more convenient to transform the natural logarithms into decimal logarithms.
Since $ln(x) = 2.3 \cdot log(x)$, we obtain:$$ log\frac{D_1}{D_2} = \frac{b}{2.3} \cdot (T_2 - T_1) $$ If we now define a new variable $z$ equal to:$$ z = \frac{2.3}{b} $$ then we can derive the second Bigelow law, in its logarithmic form:$$ log\frac{D_1}{D_2} = \frac{(T_2 - T_1)}{z} $$ The same law can be expressed in its exponential form:$$ D_1 = D_2 \cdot 10^{\frac{(T_2 - T_1)}{z}} $$ In this expression, $z$ is the temperature difference that changes the D-value by 90% (a factor of ten). Thus, $z$ is a temperature, and its unit of measurement is the degree Celsius (or the kelvin). If we increase the temperature of a process by exactly $z$ degrees Celsius, the D-value of that process becomes ten times lower. Often, the value of $D_2$ refers to a reference value of D at a specific temperature. For instance, the D-value of the spores of Clostridium botulinum is often reported at 121℃ and equals 0.2 min. In this case, assuming a z-value of 10℃, it is straightforward to see that a process performed at 131℃ needs only 0.02 min to assure the same effect on the reduction of the spores. As a rule of thumb, the z-values of the following reactions should always be kept in mind: These values of $z$ mean that the higher the z-value, the lower the effect of temperature differences on the D-value. Thus, the thermal destruction of vegetative cells of microorganisms is very sensitive to temperature changes: an increase of just 5℃ reduces the D-value of the process tenfold. Spore-forming bacteria, instead, are more heat resistant: to obtain a 90% reduction of the D-value, a temperature increase of 10℃ is required. Even more stable are the chemical degradation reactions of vitamins, which have z-values of about 30℃. A graphical representation of the meaning of the z-value is depicted below. During the transformation of fruits into products, several heat treatments are often necessary.
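Before turning to the specific operations, the two Bigelow laws derived above can be sketched numerically. The values below are the ones quoted in the text (Cl. botulinum spores: D = 0.2 min at 121℃, z = 10℃):

```python
def survivors(n0, t, d_value):
    """First Bigelow law: N = N0 * 10**(-t/D)."""
    return n0 * 10 ** (-t / d_value)

def d_at_temperature(d_ref, t_ref, t, z):
    """Second Bigelow law: D(T) = D_ref * 10**((T_ref - T)/z)."""
    return d_ref * 10 ** ((t_ref - t) / z)

# One D-value of treatment time removes one log-cycle (90%) of the population:
print(survivors(1e6, 0.2, 0.2))             # ~1e5 survivors

# Raising the temperature by z degrees makes the D-value ten times lower:
print(d_at_temperature(0.2, 121, 131, 10))  # ~0.02 min
```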
Some of the most common thermal operations are the following: Blanching aims to heat the product (sometimes only its surface) to high temperatures for a short period of time. This is often accomplished by using steam, either indirectly, as for fruit juices, or directly, as for vegetables. The effect of this operation is to inactivate enzyme activity, which could otherwise cause rapid degradation of quality. In addition, blanching may also remove air from the product, reduce the product volume and soften the tissue. In practice, blanching can be performed by immersion in hot water (80 to 100℃) or by exposing the product to steam (as depicted below). The operation typically lasts about 1 minute. Blanching units installed in the fruit processing industry have a working capacity of 5,000 up to 30,000 kg/h. The length of these units ranges from 10 to 30 m, and the width of the conveyor belt is approximately 2 m. Steam consumption can go from 500 to 2,000 kg/h; water consumption goes from 10 to 40 m$^3$/h. The hot break process is a thermal treatment that completely inactivates pectic enzyme activity and prevents syneresis in the fruit juice or puree. Syneresis causes the separation of liquid from the product. The hot break process heats the product up to 85-95℃; the fresh fruits are generally chopped during heating. As an alternative to hot break, cold break is a thermal treatment in which the fruit is chopped at lower temperatures, ranging from 65 to 75℃. The difference between the two processes lies in the apparent viscosity, measured in Bostwick centimeters. The hot break product is more viscous and therefore denser: an average Bostwick value for tomato sauces treated with the hot break process ranges between 3.5 and 6 centimeters. Conversely, the cold break product is less viscous, therefore less dense: tomato sauce treated with a cold break process normally measures from 9 to 16 Bostwick centimeters.
The hot break product is usually desired for fruit sauces concentrated to a level of 30°Brix. The cold break process is more useful when higher Brix values (i.e. 40°Brix) and higher fluidity of the product are desired. It has been demonstrated that if pectolytic enzymes, naturally present in tomatoes and fruit, are exposed to oxygen during the chopping process, they are reactivated and begin to destroy pectin. Pectin is the substance which gives consistency to the tomato paste. It has also been observed that the pectolytic enzymes can be deactivated at temperatures exceeding 85℃. Therefore, all enzyme deactivation systems, known as Hot Break Units, raise the product's temperature to 85℃ and over so as to deactivate the enzymes as quickly as possible and therefore preserve the product's natural viscosity. This operation can be done, as a first step, in the evaporator. Pasteurization is a thermal process typically performed at temperatures below 100℃. The goal of pasteurization is to kill microbial pathogens in the state of vegetative cells; examples of pathogens in this state appear in the table below. These microorganisms are non-spore-forming, so their thermal stability is weak, and a simple treatment for a few seconds at temperatures above 60℃ will generally suffice to reduce the level of contamination. The D-values of some relevant pathogens at specific reference temperatures follow. You might notice that spore-forming bacteria (i.e. Cl. botulinum) use reference temperatures above 100℃, whereas vegetative cells always use reference temperatures below 100℃.

Microorganism | Temperature (℃) | D (min)
Bacillus cereus | 100 | 5.5
Clostridium botulinum | 121 | 0.2
Escherichia coli | 70 | 0.05
Salmonella typhimurium | 70 | 0.1
Clostridium perfringens | 100 | 1
Listeria monocytogenes | 70 | 0.1

Pasteurization also has the goal of inactivating microorganisms and enzymes that can alter the product quality.
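The treatment time for a target log-reduction of vegetative cells follows directly from the first Bigelow law, $t = D \cdot log(N_0/N)$. A minimal sketch, using D-values at 70℃ from the table above and an illustrative 5-log target (the target is my assumption, not from the text):

```python
# D-values at 70 C taken from the table above (minutes).
d_values_70c = {"Escherichia coli": 0.05, "Listeria monocytogenes": 0.1}

def treatment_time(d_value, log_reduction):
    """Time for a given log-reduction: t = D * log10(N0/N)."""
    return d_value * log_reduction

for name, d in d_values_70c.items():
    print(name, treatment_time(d, 5), "min at 70 C")  # 5-log reduction
```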
Unfortunately, many fruits and vegetables are contaminated by Alicyclobacillus acidoterrestris spores. Alicyclobacillus acidoterrestris is a thermoacidophilic, non-pathogenic, spore-forming and aerobic microorganism, which has been isolated and identified in several spoiled commercial pasteurized fruit juices, such as orange and apple juices. A table with the main growth parameters of Alicyclobacillus acidoterrestris follows:

Parameter | Value
Growth temperature (℃) | 30 - 60
Growth pH | 2 - 6
Minimum Aw | 0.97
$D$ (95℃) (min) | 1 - 5 (mean = 3)
$z$ (95℃) (℃) | 10

During growth, the metabolism of Alicyclobacillus acidoterrestris produces an unpleasant smell evocative of disinfectants (2,6-dibromophenol and 2-methoxyphenol, i.e. guaiacol). Also, changes in pH, color and texture can contribute to the emergence of a white precipitate. It has been reported that degassing and/or reducing the oxygen content to 0.1% (apple and white grape juice), or creating anaerobic conditions (orange juice), inhibits the growth of A. acidoterrestris. The most effective way to eliminate Alicyclobacillus spp. is ultrafiltration. However, due to the high cost of this operation, thermal pasteurization remains the most widespread operation used to control such microorganisms. Biochemical processes are one of the principal causes of fruit and vegetable deterioration. The alteration of fruit products driven by enzyme activity is generally controlled by using severe thermal treatments, typically applied during the so-called hot break process. During such an operation, the operative conditions can reach 90-95℃ for 1-3 minutes. Other processes, such as high pressure or pulsed electric fields, have also been proposed, although their effectiveness on enzyme inactivation is controversial. Unfortunately, it is rarely possible to generalize the D and z values of enzyme activity.
The type of vegetable used, the initial content of enzymes, the presence of natural inhibitory substances (e.g. polyphenols) and the pH of the medium all introduce some uncertainty. Furthermore, the same enzyme might be present in the product in so-called isoforms. An enzyme isoform is a protein that is highly similar to the original one: it is expressed from the same gene family, but its expression carries some genetic differences. In fruits, such differences are typically observed with enzymes like POD, PME or PPO, which have very high thermal resistance. To design the right thermal treatment, and account also for isoforms, the fruit industry relies on peroxidase (POD) activity. This enzyme is a good indicator of the suitability of a thermal process: it is generally present in fruit in high concentrations, it shows a very high thermal stability, and POD assays are generally simple and quick to perform. The very high thermal stability of POD is useful because it provides a safety margin: if peroxidase is inactivated, it is reasonable to assume that other quality-related enzymes have also been inactivated. On the other hand, a thermal process designed in this way may result in an overestimated heat treatment, which may cause other quality problems.

Medium | Enzyme | D-value | z-value
Orange juice | PME (heat-sensitive fraction) | 0.1 min (at 85℃) | 18℃
Orange juice | PME (heat-resistant fraction) | 5.5 min (at 85℃) | 31℃
Tomato juice | PME | 10 min (at 70℃) | 5℃
Tomato juice | POD | 1.2 min (at 70℃) | 4℃
Carrot juice | POD | 3 min (at 80℃) | 4℃
Tomato juice | PG (heat-sensitive fraction) | 3 min (at 70℃) | 11℃
Tomato juice | PG (heat-resistant fraction) | 16 min (at 70℃) | 8℃
Pineapple puree | PPO | 90 min (at 75℃) | 22℃

where PME is pectin methyl esterase, POD is peroxidase, PPO is polyphenol oxidase, and PG is polygalacturonase. Sterilization is a severe thermal process, typically performed at temperatures above 100℃.
The aim of sterilization is to kill all microorganisms, both vegetative cells and spore-forming bacteria. When applied to fruits, we refer implicitly to commercial sterilization. This means that the concentration of spore-forming pathogens, such as Cl. botulinum, is reduced by at least 12D (where D is the decimal reduction time). Sterilized products generally have a shelf-life of two years or more. Sterilization aims at the complete destruction of microorganisms. Because of the resistance of certain bacterial spores to heat, this frequently means a treatment of at least 121℃ with indirect steam for 15 min, or equivalent. It also means that every particle of the food must receive this heat treatment. If a can of food is to be sterilized, immersing it in a 121℃ pressure cooker or retort for 15 minutes will not be sufficient, because of the relatively slow rate of heat transfer through the food in the can to its most distant point. When should sterilization be applied? Sterilization is expensive. In addition, high temperatures may generate off-odors or off-flavors, which are not desired by consumers. Nowadays, there are direct heat exchangers that allow thermal treatments of 140℃ for a few seconds, giving a final product of excellent quality, very stable and safe. However, when the product has a pH below 4.5, its shelf-life may need to be assured for only a few months, and then the investment in a sterilization process is economically unsustainable. Most acidic fruit juices or purees can be processed by a simple pasteurization process following a hot break process. Sterilization becomes important, instead, when the product has a pH of 4.5 or higher and its shelf-life must be assured for 2 or more years. Tomato sauces in cans are a typical example. The pH of tomato is around 5.0.
Thus, a simple pasteurization process does not allow its storage at room temperature for prolonged times, since spores of pathogenic bacteria can germinate (even in the absence of oxygen) and produce toxins. As a general rule of thumb, consider the following decision tree to decide which thermal treatment should be applied to your product: According to the above decision tree, it is possible to define the following products: A simple classification of thermal treatments follows:

Method | Temperature (℃) | Hold time
Batch | 63 | 30 min
HTST | 72 | 15 s
UHT | > 121 | 0.1 s
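The 12D criterion for commercial sterility can be combined with the second Bigelow law to translate the reference process to other temperatures. A sketch, assuming D = 0.2 min at 121℃ and z = 10℃ for Cl. botulinum spores (values quoted earlier in the text):

```python
d_ref, t_ref, z = 0.2, 121.0, 10.0   # min, C, C (values from the text)
t_12d = 12 * d_ref                   # time for a 12-log reduction at 121 C
print(t_12d)                         # ~2.4 min

def equivalent_time(t_process, t_ref, t, z):
    """Second Bigelow law applied to a whole process time."""
    return t_process * 10 ** ((t_ref - t) / z)

print(equivalent_time(t_12d, t_ref, 131.0, z))  # ten times shorter at 131 C
```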
Circuit requirements: DC voltage gain: 50 dB; unity-gain bandwidth: 50 MHz; phase margin: 45° (60° is recommended). The circuit is completely symmetric, so M1=M2, M3=M4, and M5=M6. Therefore the DC gain of the first stage without the buffer is \$A_{V_o}=-g_{m_1}R_{out}, \ R_{out}=\frac{1}{1/r_{o_1}+1/r_{o_3}+1/r_{o_5}+g_{m_5}-g_{m_3}}\$. Since we need a relatively high DC gain I can choose \$g_{m_5}=g_{m_3}\$ to make \$R_{out}\$ maximum, so \$R_{out}\$ becomes \$R_{out}=r_{o_1}||r_{o_3}||r_{o_5}\$. From now on let's follow two different approaches in order to satisfy the above requirements. Approach 1: the circuit without the source follower M7 and the compensation network. In this case the load capacitor, CL (=10 pF), is directly connected to the drain of M2. Let's first try to find an equation for the unity-gain frequency, \$f_u\$. The circuit has two poles, with the dominant pole situated at the output node. If we assume that the next dominant pole is located far from the dominant pole, the transfer function can be approximated as \$H(s)=\frac{A_{V_o}}{1+s/\omega_{p_1}}, \ \omega_{p_1}=\frac{1}{C_LR_{out}}\$. Since \$|H(j2\pi f_u)|=1\$, the equation for \$f_u\$ becomes roughly \$f_u= \frac{g_{m_1}}{2\pi C_L}\$. With \$f_u\$ = 50 MHz and \$C_L\$ = 10 pF, the above equation gives 3.14 mA/V for \$g_{m_1}\$ (neglecting parasitic capacitances). So that's \$g_{m_1}\$ determined. Now, with \$g_{m_1}\$ fixed and the DC gain 50 dB (or equivalently ~316 V/V), it only remains to figure out \$R_{out}\$, which, per the DC-gain equation, should be ~100 kΩ. So I can easily modify \$r_{o_1}\$, \$r_{o_3}\$, and \$r_{o_5}\$ in order to make their parallel combination 100 kΩ. That's \$R_{out}\$ done. The phase margin is 90° since the other, high-frequency pole doesn't affect it. A 90° PM gives no ringing or overshoot but trades off speed. But I think that's fine.
So it seems that the first approach works well without the need for the source follower and the compensation network. Approach 2: the complete circuit. Here comes my confusion: I do not understand why we need those added parts when approach 1 worked out fine. Could anyone please explain how the second approach can be better than the first (if it's better at all)? I think compensation is only required if the phase margin has dropped below 45° as a result of subsequent stages, or if we want good unity-gain bandwidth (?). But I do not see any good reason to invoke compensation in the above circuit.
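For reference, the hand calculation in Approach 1 can be reproduced numerically (a sketch of the question's own equations, not a verdict on either approach):

```python
import math

f_u = 50e6      # unity-gain bandwidth, Hz
C_L = 10e-12    # load capacitance, F

# Dominant-pole approximation from the question: f_u = gm1 / (2*pi*CL).
g_m1 = 2 * math.pi * f_u * C_L
print(g_m1)     # ~3.14e-3 A/V, matching the question

# 50 dB of DC gain (~316 V/V) then fixes Rout = Av / gm1:
A_v = 10 ** (50 / 20)
R_out = A_v / g_m1
print(R_out)    # ~100 kohm
```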
I am trying to figure out how to implement the 3D structure tensor in C/C++ in an easy but efficient way and need some advice! For a discrete function $ I(x_i,y_j,z_k)$ the 3D structure tensor is given by: $$ S=\begin{pmatrix} W \ast I_x^2 & W \ast (I_xI_y) & W \ast (I_xI_z)\\ W \ast (I_xI_y) & W \ast I_y^2 & W \ast (I_yI_z) \\ W \ast (I_xI_z) & W \ast (I_yI_z) & W \ast I_z^2 \\ \end{pmatrix}$$ where $W$ is a smoothing kernel, $\ast$ denotes convolution, and a subscript denotes the partial derivative with respect to that variable. The calculation of the structure tensor has two main steps: calculate the partial derivatives of the function in a way that is robust to noise over a window; smooth products of the partial derivatives over another, larger window. I start by looking at step 2. I want to use a Gaussian as the smoothing kernel. The normal distribution in 3 dimensions is separable: $$ g(x,y,z) = g(x)g(y)g(z) $$ where $$ g(x) = \frac{1}{\sqrt{2\pi}\sigma}exp(-\frac{1}{2}(x/\sigma)^2) $$ etc. The Fourier transform of the normal distribution in 3 dimensions is also separable: $$ G(k_x,k_y,k_z) = G(k_x)G(k_y)G(k_z) $$ where, with the convention $G(k)=\int g(x)e^{-ikx}dx$, $$ G(k_x) = exp(-\frac{1}{2}(k_x\sigma)^2) $$ etc. How do I implement the Gaussian smoothing? The simplest way would be to loop over a 3D Gaussian kernel for each point in $I$. Since the Gaussian is separable, however, it should be more efficient to perform a convolution with a 1D Gaussian in the x direction followed by the y direction and the z direction. Another, possibly even more efficient, approach would be to do the convolution in Fourier space (where it becomes a multiplication): $$ g \ast a=\mathcal{F}^{-1}[GA] $$ Next I look at step 1. The partial derivative of the 3D Gaussian with respect to x is given by: $$ g_x(x,y,z) = -\frac{x}{\sigma^2}\,g(x)g(y)g(z)$$ etc.
The Gaussian derivative can be used to estimate the partial derivatives of $I$ in a way that is robust to noise: $$ I_x=\left(-\frac{x}{\sigma^2}\,g(x)\right) \ast g(y) \ast g(z) \ast I $$ i.e. a separable convolution with the derivative filter along x and plain Gaussians along y and z. Here I am faced with the same decision: implement it without using separability, implement it using separability, or implement it in Fourier space? Any advice or comments?
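For step 2, the separable route can be sketched as follows (a prototype in Python rather than C/C++; the 3σ kernel radius and zero-padded borders are my own assumptions, not part of the question):

```python
import numpy as np

def gaussian_kernel(sigma):
    """Truncated, normalized 1D Gaussian kernel of radius 3*sigma."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def smooth3d(vol, sigma):
    """Separable 3D Gaussian smoothing: one 1D convolution per axis."""
    k = gaussian_kernel(sigma)
    for axis in range(3):
        vol = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode="same"), axis, vol)
    return vol

vol = np.random.rand(16, 16, 16)
out = smooth3d(vol, 1.0)
print(out.shape)  # same shape as the input volume
```

The same machinery handles step 1 if the kernel along one axis is replaced by the Gaussian-derivative filter -x/σ²·g(x).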
Permanent link: https://www.ias.ac.in/article/fulltext/pram/073/06/0961-0968 We propose to replace Newton's constant $G_{N}$ with another constant $G_{2}$, as if the gravitational force fell off with a $1/r$ law instead of $1/r^{2}$; so we describe a system of natural units with $G_{2}$, $c$ and $\hbar$. We adjust the value of $G_{2}$ so that the fundamental length $L = L_{\text{Pl}}$ is still the Planck length, and so $G_{N} = L \times G_{2}$. We argue for this system as (1) it would express length, time and mass without square roots; (2) $G_{2}$ is in principle disentangled from gravitation, as in (2 + 1) dimensions there is no field outside the sources, so $G_{2}$ would be truly universal; (3) modern physics is not necessarily tied to $(3 + 1)$-dimensional scenarios; and (4) extended objects with $p = 2$ (membranes) play an important role both in M-theory and in F-theory, which distinguishes three $(2, 1)$ dimensions. As an alternative we consider also the clash between gravitation and quantum theory; the suggestion is that a non-commutative geometry $[x_{i} , x_{j}] = \Lambda^{2} \theta_{ij}$ would cure some infinities and improve black hole evaporation. Then the new length $\Lambda$ shall determine, among other things, the gravitational constant $G_{N}$.
Consider the following series: $$\sigma(x):=\sqrt{\sum _{k=0}^{x-1} \frac{(x!\, x!)\, (-1)^{x-k}}{(x-k)\, k!\, (2 x-k)!}+\frac{\pi ^2}{12}}$$ From the following code, it seems that the series tends to $0$ with a very low slope as $n$ approaches $+\infty$; this is the case in the article that I'm studying.

σ[x_] := Sqrt[Pi^2/12 + Sum[(x!*x!)/(k!*(2 x - k)!)*(-1)^(x - k)/(x - k), {k, 0, x - 1}]]
Y = Table[σ[n], {n, 0, 1000}] // N;
ListLinePlot[Y, PlotRange -> {0, 1}]

but when I compute this limit, these are the results:

Limit[σ[n], n -> Infinity]
(* Limit[Sqrt[π^2/12 + (-1)^(2 + 2 n) (HarmonicNumber[n] - HarmonicNumber[2 n])], n -> ∞] *)
Limit[σ[n], n -> Infinity] // N
(* Sqrt[0.822467 - 0.693147 2.71828^((0. + 2. I) Interval[{-6.67522*10^-308, 3.14159}])] *)
Limit[σ[n], n -> Infinity] // N // FullSimplify
(* Sqrt[(0. - 0.693147 I) Interval[{-1, 1}] + Interval[{0.12932, 1.51561}]] *)

How can I obtain $0$ as the result of this limit?
Consider the log likelihood of a mixture of Gaussians: $$l(S_n; \theta) = \sum^n_{t=1}\log f(x^{(t)}|\theta) = \sum^n_{t=1}\log\left\{\sum^k_{i=1}p_i f(x^{(t)}|\mu^{(i)}, \sigma^2_i)\right\}$$ I was wondering why it is computationally hard to maximize that equation directly. I was looking for either a clear, solid intuition on why it should be obvious that it's hard, or a more rigorous explanation of why it's hard. Is this problem NP-complete, or do we just not know how to solve it yet? Is this the reason we resort to the EM (expectation-maximization) algorithm? Notation: $S_n$ = training data. $x^{(t)}$ = data point. $\theta$ = the set of parameters specifying the Gaussians: their means, standard deviations, and the probability of generating a point from each cluster/class/Gaussian. $p_i$ = the probability of generating a point from cluster/class/Gaussian $i$.
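Part of the intuition is that merely evaluating the log-likelihood is cheap; the difficulty lies in maximizing it, because the log of a sum does not decouple across components. A numerically stable evaluator of the expression above, as a sketch with hypothetical 1-D parameters and the log-sum-exp trick:

```python
import numpy as np

def log_likelihood(x, p, mu, sigma):
    """Mixture log-likelihood: sum_t log sum_i p_i N(x_t | mu_i, sigma_i^2)."""
    x = x[:, None]                            # shape (n, 1) vs (k,) components
    log_comp = (np.log(p) - 0.5 * np.log(2 * np.pi * sigma ** 2)
                - (x - mu) ** 2 / (2 * sigma ** 2))
    m = log_comp.max(axis=1, keepdims=True)   # log-sum-exp for stability
    return np.sum(m.ravel() + np.log(np.exp(log_comp - m).sum(axis=1)))

# Hypothetical two-component mixture evaluated on three points:
xs = np.array([0.0, 1.0, 5.0])
print(log_likelihood(xs, np.array([0.5, 0.5]),
                     np.array([0.0, 5.0]), np.array([1.0, 1.0])))
```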
My question is about 6.C.4 of Mas-Colell et al.'s Microeconomic Theory book. We have $N$ risky assets with returns $z_n, n = 1,...,N$ per dollar invested, distributed according to $F(z_1,...,z_N)$. All returns are non-negative with probability one. For an individual with a continuous, increasing, and concave Bernoulli utility function $u(\cdot)$ over $\mathbb{R_+}$, we define the utility function $U(\cdot)$ over $\mathbb{R}_+^N$, the set of nonnegative portfolios, as: $$U(\alpha_1,...,\alpha_N) = \int u(\alpha_1 z_1 + \cdots + \alpha_N z_N)dF(z_1,...,z_N)$$ We are asked to show for part c that $U(\cdot)$ is continuous. The book's solution is a little difficult (particularly since I haven't really worked with measure theory before), but my teacher has said there is a more straightforward way of doing the question. I have questions about both of these points. First, looking at the book's argument. We take a sequence $(\alpha^m)_{m \in \mathbb{N}} \to \alpha \in \mathbb{R}_+^N$. Then if $U$ is continuous, we should get $$(\alpha^m)_{m \in \mathbb{N}} \to \alpha \implies U(\alpha^m)_{m \in \mathbb{N}} \to U(\alpha)$$ There exists $\delta > 0$ s.t. $\alpha^m \leq (\delta,...,\delta) \ \forall \ m$. Since $U(\delta,...,\delta)$ is finite, $z \to u(\sum_n \delta z_n)$ is integrable. Since $u(\cdot)$ is monotone and returns are nonnegative with probability one: $$u(\sum_n \alpha_n^m z_n) \leq u(\sum_n \delta z_n) \ \forall \ m, (z_1,...,z_N)$$ Since $u(\cdot)$ is continuous, $$u(\sum_n \alpha_n^m z_n) \to u(\sum_n \alpha_n z_n)$$ for almost all $(z_1,...,z_N)$. Here the book applies Lebesgue's dominated convergence theorem.
We are in some measure space $(S, \Sigma, \mu)$. The theorem states that for a sequence of functions $f_n$, if it converges pointwise to some function $f$ and $$\mid f_n(x) \mid \leq g(x) \ \forall n \ \text{and} \ \forall x \in S$$ (recall we constructed $g(x)$ earlier), then $f$ is integrable and $$\lim_{n \to \infty} \int_S \mid f_n - f \mid d \mu = 0$$ $$\implies \int_S f_n \, d \mu \to \int_S f \, d \mu$$ So $$\int u(\sum_n \alpha_n^m z_n) dF(z_1,...,z_N) \to \int u(\sum_n \alpha_n z_n) dF(z_1,...,z_N)$$ $$\implies U(\alpha^m)_{m \in \mathbb{N}} \to U(\alpha)$$ I added in the direct statement of Lebesgue's dominated convergence theorem for my own clarity, and I think I understand the proof, but my question for this part is: what is the measure space? I know $S = \mathbb{R}_+^N$ and $\mu = F$, but what is $\Sigma$ supposed to be? I think it's supposed to be a $\sigma$-algebra, but I have no idea how to construct/find it here. It is also very well possible that I am completely mistaken about which space I am working on, so there's that. So then I tried to set up a proof for this question on my own. I also set up a sequence $(\alpha^m)_{m \in \mathbb{N}} \to \alpha \in \mathbb{R}_+^N$. Then I suppose that there is some sort of jump discontinuity in $U$ and show that it leads to a contradiction. So $(\alpha^m)_{m \in \mathbb{N}} \to \alpha$ and $U(\alpha^m)_{m \in \mathbb{N}} \not \to U(\alpha)$. My attempts so far to find a satisfactory proof have been fruitless. Any help would be appreciated, whether it's the proof itself or just an outline or some hints on what information from the original question to use.
1) Let's imagine we have a string of length $L$ that is fixed at both ends. When we refer to harmonics we are talking about standing waves. Since we are talking about standing waves, we are limited in the allowed wavelengths our standing waves can have. (See picture below.) As you can see, the first harmonic has wavelength $\lambda = 2L$, the second has wavelength $\lambda=L$, and the third has $\lambda=\frac{2L}{3}$. In general, our allowed harmonics have $\lambda_n=\frac{2L}{n}$, where $n$ is the number of the harmonic. So what about other properties like wave speed and frequency? Well, the speed of a wave in a medium is completely dependent on the properties of the medium and the "tension" the medium is under (which you could argue is a property of the specific medium you are using). For waves on a string, the speed is given by $$v=\sqrt {\frac T\rho}$$ where $T$ is the tension the string is under (which is something that is easily controlled), and $\rho$ is the linear mass density of the string (something that is harder to control if you already have the string made). So as you can see, once you have your string system set up, your speed and allowed wavelengths are pretty much set already. With all of that in mind, we can now determine what frequencies we need to oscillate one end of the rope at in order to get these harmonics. The speed, wavelength, and frequency of waves are related by $v=f\lambda$, so the frequencies we want are $$f_n=\frac{v}{\lambda_n}=\frac{n}{2L}\sqrt {\frac T\rho}$$ So this is how you "get to your harmonics": you have to oscillate an end of your string at one of these allowed frequencies. Of course, you can change what these frequency values are if you modify things like the length or tension of the string, but once you have the string physically set up, you just need to oscillate the string at the right frequency. (Arguments like these hold for other media and waves, but the forms of the equations will look somewhat different.)
2) If you just use the equation from before, $v=f\lambda$, you can see that if we double $f$ and halve $\lambda$, then $v$ stays the same. You can also argue that the speed will not change since we are not altering any physical properties of the string. Other things will differ from the above since the rope is not fixed at both ends, but you did not ask for details on that, so I will not discuss them either.
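The relations in part 1 are easy to check numerically; the string parameters below are illustrative, not from the question:

```python
import math

def harmonic_frequency(n, length, tension, rho):
    """f_n = (n / 2L) * sqrt(T / rho) for a string fixed at both ends."""
    return n / (2 * length) * math.sqrt(tension / rho)

L, T, rho = 0.65, 60.0, 0.004   # m, N, kg/m (illustrative values)
f1 = harmonic_frequency(1, L, T, rho)
f2 = harmonic_frequency(2, L, T, rho)
print(f1)        # fundamental, ~94 Hz for these parameters
print(f2 / f1)   # doubling n doubles f, while v = f*lambda is unchanged
```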
I use the poweRlaw package in R to fit a power law to my data. I am trying to figure out what the value of the Pareto exponent is. Assume the probability density function is defined by: $$ p(x) = \frac{\alpha-1}{x_{min}} \left(\frac{x}{x_{min}} \right)^{-\alpha} $$ and the complementary cumulative density function is defined by: $$ P(x) = \int_x ^\infty p(x') dx' = \left(\frac{x}{x_{min}}\right)^{-\alpha + 1} $$ Is the Pareto exponent $\alpha$ or $- \alpha + 1$ or $\alpha - 1$? In most literature, the CCDF is used to describe the income/wealth distribution, and $-(\alpha-1)$ is the slope of the CCDF on a log-log plot, so $\alpha-1$ seems the most intuitive. I'm pretty sure the R library poweRlaw returns $\alpha$ as defined above. I am using Newman as a reference.
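One way to settle which exponent is "the slope" is to evaluate the CCDF defined above at two points and measure the log-log slope directly; a sketch with illustrative values:

```python
import math

alpha, xmin = 2.5, 1.0
ccdf = lambda x: (x / xmin) ** (-(alpha - 1))   # P(x) as defined above

x1, x2 = 2.0, 20.0
slope = ((math.log(ccdf(x2)) - math.log(ccdf(x1)))
         / (math.log(x2) - math.log(x1)))
print(slope)   # the log-log slope is -(alpha - 1), i.e. -1.5 here
```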
and I need to determine the inductance of the filtering chokes. I have deduced the following formula $$L=\frac{V_{dc}\cdot(1-d)}{f_s\cdot\Delta i_L}$$ (based on \$v=L\frac{di}{dt}\$), where \$V_{dc}\$ is the DC link voltage, \$f_s\$ is the switching frequency (=12 kHz), \$\Delta i_L\$ is the desired current ripple and \$d\$ is the duty ratio (\$0\leq d\leq1\$). My problem is that I would like to use sinewave pulse width modulation, so the duty ratio changes sinusoidally, and I don't know what value to substitute. The solution could be to choose the worst case, but I don't know what value of \$d\$ corresponds to the worst case. Can anybody give me advice on how to solve this? Thanks in advance. The solution could be to choose the worst case but I don't know what value of d corresponds to the worst case You have derived a formula for L, and the worst-case scenario is when L is maximum. With inductors, a smaller value is usually preferred for technical reasons, so choose a value of d that makes L large. However, I doubt that your formula is correct, because if I choose d to be zero then the value of L isn't infinite as I would expect it to be. Alternatively, go and download LTspice (free from Linear Technology) and simulate solutions to double-check your formulas.
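Taking the question's formula at face value, a quick sweep over a sinusoidally modulated duty ratio shows where L is maximized (Vdc, the modulation depth and the ripple target below are placeholder values, not from the question):

```python
import numpy as np

Vdc, fs, di = 400.0, 12e3, 2.0   # placeholder: V, Hz, A
# SPWM duty ratio over one modulation cycle (modulation depth 0.45):
d = 0.5 + 0.45 * np.sin(np.linspace(0, 2 * np.pi, 1000))

# The question's formula L = Vdc*(1-d) / (fs*di), evaluated over the cycle:
L = Vdc * (1 - d) / (fs * di)
print(d[np.argmax(L)])   # L is largest where the duty ratio is smallest
```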
Let's say I have a content type article with a field "taxonomy". The vocabulary is "fruits". The terms are "apple", "orange", "bananas". The article belongs to "orange". I can easily print the term of the article in my twig template: {{ content.field_taxo_fruit }} The result is "orange". But I don't find an easy way to print the vocabulary: "fruits". On another project, I used views fields and "rewrite results". But here, I use a twig template like node--article--teaser.html.twig I need to create a template file just for the parents of taxonomy terms. First of all, I don't know whether, as in Drupal 7, I should write this code in the template.php file or not. In the second place, I don't know what loadParents($tid) returns if the term does not have any parent. In the third place, I don't know if I must create page_taxonomy_term_cat.html.twig to use.

function THEMENAME_theme_suggestions_page_alter(array &$suggestions, array $variables) {
  $tid = \Drupal::routeMatch()->getRawParameter('taxonomy_term');
  $parent = \Drupal::entityManager()->getStorage('taxonomy_term')->loadAllParents($tid);
  if (is_null($parent)) {
    $suggestions[] = 'page_taxonomy_term_cat';
  }
}

Please help me with these three questions. I want to set the value of a field of a term based on a custom field set on the vocabulary. Basically, if the vocabulary's promote_to_index field is checked, I want the field_index_page of the term checked.
I understand I have to write a hook like the one below, but I don't understand how to get the value of the vocabulary's custom field.

function polaris_drupal_taxonomy_form_alter(&$form, FormStateInterface &$form_state, $form_id) {
  switch ($form_id) {
    case 'taxonomy_term_category_form':
    case 'taxonomy_term_article_type_form':
    case 'taxonomy_term_main_purpose_form':
    case 'taxonomy_term_tags_form':
    case 'taxonomy_term_content_partnership_form':
    case 'taxonomy_term_traffic_source_form':
    case 'taxonomy_term_client_form':
      $form['field_index_page']['widget']['value']['#default_value'] = TRUE;
  }
}

How do I programmatically get all the terms within a vocabulary for the current language? This is a similar question to "Get translated term name", but not for a single term, rather for the entire tree. One of the guidelines around building ubiquitous languages is that there should be one per bounded context. In a domain that has more than one bounded context, and therefore more than one ubiquitous language, how should you deal with vocabulary that is different in each context but related with respect to the entire domain? For example, objects in our domain contain a property that dictates the object's identity, but constraints within each bounded context restrict ubiquity in both implementations, and these could not easily be circumvented...

// Implemented in one bounded context
object Customer { id: guid }

// Implemented in another bounded context
object Customer implements Identity { identity: Identifier }

Note that id and identity in each bounded context refer to the same thing, but their names and data types differ. How do I achieve ubiquity across these implementations when referring to this from the perspective of the domain as a whole, and is there a suitable method for translating between the two? I have a products catalog based on a taxonomy vocabulary; as advised, the term reference field is on the product content type.
Product imports are working fine, but when I try to import product displays, the mapping for the term reference (Category vocabulary) does not appear in the target list. I’m using Feeds 7.x-2.0-alpha8, Commerce Feeds 7.x-1.3, Commerce Kickstart 7.x-1.19 and Drupal Core 7.22. Thanks in advance. I want to add a field to a product variation that lets the user choose from a list of terms in a specific vocabulary. For example, I have a vocabulary of “Pizza Recipes”. The terms in the vocabulary will be options in a select drop-down of my content. How will I do that? Example: The terms in the vocabulary must appear as options in a select drop-down. I don’t know what field type I should use. Is it a term reference or an entity reference or what? I tried both but neither gives me what I desire. Two players (player C and player G) are playing a (modified) word guessing game. Both players share the same vocabulary $V$, and words in $V$ are grouped into $K$ bins, denoted $b_1$, $b_2$, …, $b_K$. Furthermore, we know that $b_i \subset V$ and $\cup_{i=1}^{K} b_i = V$. Note that the bins may be overlapping, so there may exist cases where $b_i \cap b_j \neq \emptyset$ for $i \neq j$. The interaction protocol is described as follows: Player C uniformly chooses a word $w$ from the vocabulary $V$. Player G does not know which word $w$ is. Player G chooses one bin and asks Player C whether his/her chosen word $w$ is in the bin. If it is, the game ends. Otherwise, Player G will choose another bin. Questions: What is the best bin choosing order, and what is the expected number of bin choices under that best order? Example: Suppose we have a vocabulary consisting of ten words $V = \{w_1, w_2, \dots, w_{10}\}$ and three bins $b_1 = \{w_1, w_2, \dots, w_5\}$, $b_2 = \{w_6, w_7\}$, and $b_3 = \{w_8, w_9, w_{10}\}$.
One possible bin choosing order is $b_1 \rightarrow b_3 \rightarrow b_2$ and the expected number of bin choices is $\frac{1}{2}\cdot 1 + \frac{1}{2}\cdot\frac{3}{5}\cdot 2 + \frac{1}{2}\cdot\frac{2}{5}\cdot\frac{2}{2}\cdot 3 = 1.7$. I suspect this is the best bin choosing order, but how can we prove this result? Notes: In this problem, we do NOT have the additional knowledge that all bins are non-overlapping, in contrast to a related problem. Thanks. I have tried to give users access to vocabularies granularly. Currently it is only possible to grant access to use vocabularies and terms, and I only want certain roles to manipulate certain vocabularies; I do not want any role to be able to create new vocabularies. I have tested the vppr module and apparently it does not work well with D8. I also want that, if I give access to a certain vocabulary, it can be reached through the menu (admin_toolbar). Does anyone have any ideas or references on how to implement this? Greetings. Drupal 8 installation in English, German was added and set as the default language, vocabularies are set to English (no result even if they are in German too), terms are set to German by default, and Content translation, Configuration translation and Interface translation are enabled. On /admin/config/regional/content-language, “Content” and “Taxonomy term” are enabled (all fields). There is nothing about vocabularies. On /admin/structure/taxonomy, Operations have a “Translate” link per vocabulary, and the translation added through it is saved. But switching language with the language switcher does not show the vocabulary name translated; the table column names are translated and shown properly (as is any UI string in the table), just not the category name itself. I cleared the cache after the translation was added. Any advice is welcome.
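Returning to the word-guessing game a few posts up: since the example bins are disjoint and cover the ten-word vocabulary, the claimed optimum can be checked by brute force over all bin orders. This is only a sketch; the names `bins` and `expected_guesses` are mine, not from the original post.

```python
from itertools import permutations

# Example from the word-guessing question: V = {w1..w10},
# b1 = {w1..w5}, b2 = {w6, w7}, b3 = {w8, w9, w10} (disjoint here).
V_SIZE = 10
bins = {
    "b1": {1, 2, 3, 4, 5},
    "b2": {6, 7},
    "b3": {8, 9, 10},
}

def expected_guesses(order):
    # E[guesses] = sum over positions i of P(word first found at step i) * i
    seen = set()
    total = 0.0
    for i, name in enumerate(order, start=1):
        newly_covered = bins[name] - seen
        total += (len(newly_covered) / V_SIZE) * i
        seen |= bins[name]
    return total

results = {order: expected_guesses(order) for order in permutations(bins)}
best = min(results, key=results.get)
# best is ("b1", "b3", "b2") with expected value 1.7, matching the post.
```

Checking largest-bin-first against all six permutations supports the poster's conjecture for this instance (it does not constitute a proof, and the overlapping-bin case needs a separate argument).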
Oh so I've been wondering recently, how exactly does analytic continuation work? I've heard about it all the time as how you extend domains of functions, but like, what's the mechanism by which you do so? @Daminark You look at the full subdomain $\Omega$ of $\Bbb C$ where $f$ can be analytically continued by paths from the original point, say, $p$. Then, construct an object as follows: Take all the paths $\gamma$ starting at $p$, and patch together the disks of radius of convergence $D_1, \cdots, D_n$ covering $\gamma$ according to whether they intersect (this is just the quotient space $\bigsqcup D_i/\sim$). Do this for all paths starting at $p$. Then you'll end up constructing a Riemann surface $X$ where $f$ is actually a well-defined function $f : X \to \Bbb C$. Sorry, I took ages to find a way to write that construction @Adeek I don't have a particular view on abstraction for what it's worth. Some people care about it as a means to an end of understanding more concrete things, others seem to prefer thinking about abstract things over concrete things. I'm just one of the "let people do what they want" types And @Balarka so basically the idea would be that if you try to build a function along a circle and fail, the Riemann surface construction would simply not bring that path back to its starting point, right? Notation-wise, if $M$ is a smooth manifold, $p \in M$ and $v_p \in T_p(M)$, $v_p(f)$ means the tangent vector $v_p$ acting on $f$, and it equals the directional derivative in the direction $v$ of $f$ at $p$, which notationally is $D_vf(p)$, correct? @Perturbative In Ted's notation $Df(p)$ is the Jacobian matrix of $f$ at $p$, and $Df(p)v$ is then that matrix eating the vector $v \in T_p M$, which is the same as the directional derivative $D_v f(p)$.
@LeakyNun I think if you have "a logic" in the sense that you have some rules of inference for well-formed sentences (I don't want to make this super precise right now, I'm not an expert on this), then you can consider a category with morphisms = implications even without assigning actual truth-values to the sentences @LeakyNun this is what my category book writes on logic "We can associate to every mathematical theory in the sense of mathematical logic a category. Objects are theorems of the theory. A morphism $A\to B$ is a proof of $A \to B$. Composition is the composition of proofs" Hello, who knows this lemma in topology: let $X$ be a Hausdorff space, let $(x_n)$ be a sequence, $(x_{n_k})\subset (x_n)$ a subsequence, and $u\in X$; if there exists a further subsequence $(x_{n_{k_l}})\subset (x_{n_k})$ with $x_{n_{k_l}}\to u$, does it follow that $x_n\to u$? @TastyRomeo if a space is covered by a simply connected cover, then it has to be semilocally simply connected: for any point in the base space, you have an open set that is evenly covered, so if you have a loop in that open set, you can lift it to the universal cover and there it will be entirely contained in one leaf, so it is a loop in the universal cover which can be contracted to a point in the cover as it is simply connected; you can just push down the homotopy. For a smooth manifold $M$ of dimension $n$ and a point $p \in M$, do y'all normally think of the tangent space $T_pM$ as the image of a parameterization $\varphi : U \subseteq \mathbb{R}^n \to M$ for some open set $U$ in $\mathbb{R}^n$, or as the set of all derivations of $C^{\infty}(M)$ at $p$?
Sampling at a higher frequency will give you a greater effective number of bits (ENOB), up to the limits of the spurious free dynamic range of the Analog to Digital Converter (ADC) you are using (as well as other factors such as the analog input bandwidth of the ADC). However there are some important aspects to understand when doing this that I will detail further. This is due to the general nature of quantization noise, which under conditions of sampling a signal that is uncorrelated to the sampling clock is well approximated as a white (in frequency) uniform (in magnitude) noise distribution. Further, the Signal to Noise Ratio (SNR) of a full scale real sine-wave will be well approximated as: $$SNR = 6.02\,b + 1.76 \text{ dB}$$ where $b$ is the number of bits. For example, a perfect 12 bit ADC sampling a full scale sine wave will have an SNR of $6.02\times 12+1.76 = 74$ dB. By using a full scale sine wave, we establish a consistent reference line from which we can determine the total noise power due to quantization. Within reason, that noise power remains the same even as the sine wave amplitude is reduced, or when we use signals that are composites of multiple sine waves (meaning, via the Fourier Series Expansion, any general signal). This classic formula is derived from the uniform distribution of the quantization noise, as for any uniform distribution the variance is $\frac{A^2}{12}$, where A is the width of the distribution. This relationship and how we arrive at the formula above are detailed in the figure below, comparing the histogram and variance for a full-scale sine wave ($\sigma_s^2$) to the histogram and variance for the quantization noise ($\sigma_N^2$), where $\Delta$ is a quantization level and b is the number of bits. Therefore the sinewave has a peak to peak amplitude of $2^b\Delta$.
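Before going further, the $6.02\,b + 1.76$ dB rule itself can be verified with a short simulation: quantize a full-scale sine with an idealized mid-tread quantizer and measure the ratio of signal power to error power. This is only a sketch; the function name and the particular incommensurate test frequency are my choices, not from the answer.

```python
import math

def quantization_snr_db(bits, n=200_000):
    # Idealized b-bit mid-tread quantizer over a -1..+1 full scale.
    step = 2.0 / (2 ** bits)
    f = 0.1234567  # cycles/sample, incommensurate with the sample clock
    sig_pow = err_pow = 0.0
    for k in range(n):
        s = math.sin(2 * math.pi * f * k)   # full-scale sine input
        q = round(s / step) * step          # quantized sample
        sig_pow += s * s
        err_pow += (q - s) ** 2
    return 10 * math.log10(sig_pow / err_pow)
```

With incommensurate sampling the measured SNR lands close to the predicted $6.02\,b + 1.76$ dB (about 74 dB at 12 bits, about 50 dB at 8 bits), consistent with the white uniform model of the quantization error.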
You will see that taking the square root of the equation shown below for the variance of the sine wave $\frac{(2^b\Delta)^2}{8}$ is the familiar $\frac{V_p}{\sqrt{2}}$ as the standard deviation of a sine wave at peak amplitude $V_p$. Thus we have the variance of the signal divided by the variance of the noise as the SNR. Further, as mentioned earlier, this noise level due to quantization is well approximated as a white noise process when the sampling rate is uncorrelated with the input (which occurs with incommensurate sampling, a sufficient number of bits, and an input signal fast enough to span multiple quantization levels from sample to sample; incommensurate sampling means sampling with a clock whose frequency is not in an integer-multiple relationship with the input). As a white noise process in our digital sampled spectrum, the quantization noise power will be spread evenly from a frequency of 0 (DC) to half the sampling rate ($f_s/2$) for a real signal, or $-f_s/2$ to $+f_s/2$ for a complex signal. In a perfect ADC, the total variance due to quantization remains the same independent of the sampling rate (it is proportional to the magnitude of the quantization level, which is independent of sampling rate). To see this clearly, consider the standard deviation of a sine wave, which we reminded ourselves earlier is $\frac{V_p}{\sqrt{2}}$; no matter how fast we sample it, as long as we sample it sufficiently to meet Nyquist's criterion, the same standard deviation will result. Notice that it has nothing to do with the sampling rate itself. Similarly the standard deviation and variance of the quantization noise are independent of frequency, but as long as each sample of quantization noise is independent and uncorrelated from each previous sample, the noise is a white noise process, meaning that it is spread evenly across our digital frequency range. If we raise the sampling rate, the noise density goes down.
If we subsequently filter, since our bandwidth of interest is lower, the total noise will go down. Specifically, if you filter away half the spectrum, the noise will go down by 2 (3 dB). Filter down to 1/4 of the spectrum and the noise goes down by 6 dB, which is equivalent to gaining 1 more bit of precision! Thus the formula for SNR that accounts for oversampling is given as: $$SNR = 6.02\,b + 1.76 \text{ dB} + 10\log_{10}\left(\frac{f_s}{2\,BW}\right)$$ where $b$ is the number of bits, $f_s$ is the sampling rate, and $BW$ is the bandwidth of interest. Actual ADCs in practice will have limitations, including non-linearities, analog input bandwidth, aperture uncertainty, etc., that will limit how much we can oversample and how many effective bits can be achieved. The analog input bandwidth will limit the maximum input frequency we can effectively sample. The non-linearities will lead to "spurs", which are correlated frequency tones that will not be spread out and therefore will not benefit from the same noise processing gain we saw earlier with the white quantization noise model. These spurs are quantified on ADC datasheets as the spurious-free dynamic range (SFDR). In practice I refer to the SFDR and usually take advantage of oversampling until the predicted quantization noise is on level with the SFDR, at which point, if the strongest spur happens to be in band, there will be no further increase in SNR. To detail further I would need to refer to the specific design in more detail. All noise contributions are captured nicely in the effective number of bits (ENOB) specification also given on ADC data sheets. Basically, the actual total ADC noise expected is quantified by reversing the SNR equation that I first gave, to come up with the equivalent number of bits a perfect ADC would provide. It will always be less than the actual number of bits due to these degradation sources. Importantly, it will also go down as the sampling rate goes up, so there will be a diminishing point of return from oversampling. For example, consider an actual ADC which has a specified ENOB of 11.3 bits and SFDR of 83 dB at 100 MSPS sampling rate.
11.3 ENOB is an SNR of 69.8 dB (~70 dB) for a full scale sine wave. The actual signal sampled will likely be at a lower input level so as not to clip, but by knowing the absolute power level of a full scale sinewave, we now know the absolute power level of the total ADC noise. If for example the full scale sine wave that results in the maximum SFDR and ENOB is +9 dBm (also note that this level with best performance is typically 1-3 dB lower than the actual full scale where a sine wave would start to clip!), then the total ADC noise power will be +9 dBm - 70 dB = -61 dBm. Since the SFDR is 83 dB, we can reasonably expect to gain up to that limit by oversampling (but not more, if the spur is in our final band of interest). In order to achieve this 22 dB gain, the oversampling ratio N would need to be at least $N= 10^{\frac{83-61}{10}} = 158.5$. Therefore, if our actual real signal bandwidth of interest was 50 MHz/158.5 = 315.5 kHz, we could sample at 100 MHz and gain 22 dB, or 3.7 additional bits, from the oversampling, for a total ENOB of 11.3 + 3.7 = 15 bits. As a final note, know that Sigma Delta ADC architectures use feedback and noise shaping to achieve a much better increase in the number of bits from oversampling than what I described here of what can be achieved with traditional ADCs. We saw an increase of 3 dB/octave (every time we doubled the frequency we gained 3 dB in SNR). A simple first order Sigma Delta ADC has a gain of 9 dB/octave, while a 3rd order Sigma Delta has a gain of 21 dB/octave! (Fifth order Sigma Deltas are not uncommon!) Also see related responses at: How do you simultaneously undersample and oversample? Oversampling while maintaining noise PSD How to choose FFT depth for ADC performance analysis (SINAD, ENOB) How increasing the Signal to Quantization noise increases the resolution of ADC
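The arithmetic in the worked example can be bundled into two small helpers (the function names are mine): the ideal-ADC SNR for a given (E)NOB, and the processing gain from oversampling a bandwidth BW at rate fs.

```python
import math

def adc_snr_db(enob):
    # SNR of a full-scale sine for an ideal ADC with the given (E)NOB
    return 6.02 * enob + 1.76

def oversampling_gain_db(fs, bw):
    # Quantization noise is spread over fs/2; keeping only bw of it
    # removes the rest, for 10*log10(fs / (2*bw)) dB of processing gain.
    return 10 * math.log10(fs / (2 * bw))

snr = adc_snr_db(11.3)                       # ~69.8 dB, as in the example
gain = oversampling_gain_db(100e6, 315.5e3)  # ~22 dB for a 315.5 kHz band
extra_bits = gain / 6.02                     # ~3.7 extra effective bits
```

These reproduce the example's numbers; in a real design the gain would be capped where the in-band noise reaches the SFDR, as described above.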
Measurements of the QED structure of the photon Abstract. The structure of both quasi-real and highly virtual photons is investigated using the reaction \({\rm e^+e^-}\rightarrow{\rm e^+e^-}\mu^+\mu^-\), proceeding via the exchange of two photons. The results are based on the complete OPAL dataset taken at \({\rm e^+e^-}\) centre-of-mass energies close to the mass of the Z boson. The QED structure function \(F^\gamma _2\) and the differential cross-section \({\rm d}\sigma/{\rm d}x\) for quasi-real photons are obtained, from the measured muon momenta, as functions of the fractional momentum x carried by the struck muon in the quasi-real photon, for values of \(Q^2\) ranging from 1.5 to 400 GeV\(^2\). The differential cross-section \({\rm d}\sigma/{\rm d}x\) for highly virtual photons is measured for \(1.5< Q^2 < 30\) GeV\(^2\) and \(1.5< P^2 < 20\) GeV\(^2\), where \(Q^2\) and \(P^2\) are the negative values of the four-momentum squared of the two photons, such that \(Q^2>P^2\). Based on azimuthal correlations, the QED structure functions \(F^\gamma _{\rm A}\) and \(F^\gamma _{\rm B}\) for quasi-real photons are determined for an average \(Q^2\) of 5.4 GeV\(^2\). Keywords: Structure Function; Virtual Photon; Fractional Momentum; Azimuthal Correlation; Complete OPAL
DFT Graphical Interpretation: Centroids of Weighted Roots of Unity Introduction This is an article to hopefully give a better understanding to the Discrete Fourier Transform (DFT) by framing it in a graphical interpretation. The bin calculation formula is shown to be the equivalent of finding the center of mass, or centroid, of a set of points. Various examples are graphed to illustrate the well known properties of DFT bin values. This treatment will only consider real valued signals. Complex valued signals can be analyzed in a similar manner with the only distinction being rotation as well as rescaling will occur. Most of the signals analyzed will be simple pure tones. Geometric Series Summation Formula Check out the following factorization pattern: $$ 1 - x^2 = (1 - x)(1 + x ) $$ $$ 1 - x^3 = (1 - x)(1 + x + x^2 ) $$ $$ 1 - x^4 = (1 - x)(1 + x + x^2 + x^3 ) $$ $$ 1 - x^5 = (1 - x)(1 + x + x^2 + x^3 + x^4 ) $$ $$ 1 - x^6 = (1 - x)(1 + x + x^2 + x^3 + x^4 + x^5 ) $$ $$ 1 - x^7 = (1 - x)(1 + x + x^2 + x^3 + x^4 + x^5 + x^6 ) $$ It can be generalized using summation notation: $$ 1 - x^N = (1 - x) \sum_{n=0}^{N-1} {x^n} $$ As long as $ x \neq 1 $ both sides can be divided by $ (1 - x) $ and the equation reversed to yield: $$ \sum_{n=0}^{N-1} {x^n} = { { 1 - x^N } \over { 1 - x } } $$ The expression represented by the summation is called a geometric series and the equation is known as the Geometric Series Summation Formula. It will be used in the future so knowing how it is derived is useful. The Discrete Fourier Transform Suppose there is a signal represented by a sequence of $ N $ values $ S_n $, where $ n $ ranges from $ 0 $ to $ N-1 $. This is the mathematical summation definition of the DFT of the signal according to common convention: $$ Z_k = { 1 \over N } \sum_{n=0}^{N-1} { S_n e^{ -i2 \pi { k \over N } n } } $$ The leading $ 1/N $ value is called a normalization factor. Some conventions have a normalization factor of 1, others use $ 1/\sqrt{N} $. 
There are reasons for each. For this article, $ 1/N $ is the correct one. The negative sign in the exponent is also by convention. This article will use it because it is more of a common convention than not having it. The two alternatives are mathematically equivalent. The formula can be greatly simplified in appearance and understanding by making a substitution. $$ R_k = e^{ -i2 \pi { k \over N } } $$ Each $ R_k $ is an Nth Root of Unity. This is easily proved: $$ R_k^N = (e^{ -i2 \pi { k \over N } })^N = e^{ -i2 \pi k } = (e^{ -i2 \pi })^k = 1^k = 1 $$ Substituting the definition of $ R_k $ into the definition of the DFT gives a simpler version. $$ Z_k = { 1 \over N } \sum_{n=0}^{N-1} { S_n R_k^n } $$ The DFT definition is now recognizable as a straight up average formula: Add up N items, divide by N, and you get the average. Since the items are complex numbers, they correspond to points on the complex plane. Therefore, the average can be viewed as the centroid of the set of points. Each item is an Nth Root of Unity rescaled by the value of the signal. You can think of the Roots of Unity as being spokes on a wheel, and the rescaling factor as how much the spoke is lengthened or shortened. In the case of a complex signal, the spoke will also be rotated. DFTs of Constant Signals The simplest possible non-zero signal is a constant signal where every value is the same. $$ S_n = v $$ In this case, the constant value can be factored out of every term in the DFT summation definition and it becomes: $$ Z_k = { 1 \over N } \sum_{n=0}^{N-1} { v R_k^n } = v \cdot { 1 \over N } \sum_{n=0}^{N-1} { R_k^n } $$ The summation now becomes a geometric series and the geometric series summation formula can be applied for all values of $ k $ except $ k = 0 $. This is because $ R_0 = 1 $, which would make the denominator $ 0 $. $$ Z_k = v \cdot { 1 \over N } \cdot { { 1 - R_k^N } \over { 1 - R_k } } $$ Since $ R_k $ is an $ N $th root of unity, $ R_k^N = 1 $.
This makes the numerator equal to zero. $$ Z_k = v \cdot { 1 \over N } \cdot { 0 \over { 1 - R_k } } = 0 $$ The DFT of a constant signal is zero on all bins where $ k > 0 $. The DC Bin The first bin, $ Z_0 $, is different from the rest in that the base Root of Unity is one itself ($ R_0 = 1 $). Therefore the summation equation gets greatly simplified. $$ Z_0 = { 1 \over N } \sum_{n=0}^{N-1} { S_n R_0^n } = { 1 \over N } \sum_{n=0}^{N-1} { S_n } $$ $ Z_0 $ is the simple arithmetic average of the signal values. It is called the "DC bin" from applications where the signal is a voltage level. DC stands for Direct Current. When the DC component is subtracted from the signal, the rest of the signal is zero centered. With a real valued signal, the average will also be real valued so the imaginary part will always be zero. In the case of a constant signal, like in the previous section, the value of the DC bin is the value of the signal points. $$ Z_0 = { 1 \over N } \sum_{n=0}^{N-1} { v } = v \cdot { 1 \over N } \sum_{n=0}^{N-1}{ 1 } = v \cdot { 1 \over N } \cdot N = v $$ Skip Sizes When an Nth Root of Unity is raised to an integer power the result is also an Nth Root of Unity. $$ R_k^n = (e^{ -i2 \pi { k \over N } })^n = e^{ -i2 \pi { { kn } \over N } } = R_{kn} $$ Inserting this into the DFT definition modifies it slightly: $$ Z_k = { 1 \over N } \sum_{n=0}^{N-1} { S_n R_{kn} } $$ Unfurling the summation notation for the first six bins shows the pattern more clearly: $$ Z_0 = { 1 \over N }( S_0 R_0 + S_1 R_0 +S_2 R_0 +S_3 R_0 + ... + S_{N-1} R_0 ) $$ $$ Z_1 = { 1 \over N }( S_0 R_0 + S_1 R_1 +S_2 R_2 +S_3 R_3 + ... + S_{N-1} R_{N-1} ) $$ $$ Z_2 = { 1 \over N }( S_0 R_0 + S_1 R_2 +S_2 R_4 +S_3 R_6 + ... + S_{N-1} R_{N-2} ) $$ $$ Z_3 = { 1 \over N }( S_0 R_0 + S_1 R_3 +S_2 R_6 +S_3 R_9 + ... + S_{N-1} R_{N-3} ) $$ $$ Z_4 = { 1 \over N }( S_0 R_0 + S_1 R_4 +S_2 R_8 +S_3 R_{12} + ...
+ S_{N-1} R_{N-4} ) $$ $$ Z_5 = { 1 \over N }( S_0 R_0 + S_1 R_5 +S_2 R_{10} +S_3 R_{15} + ... + S_{N-1} R_{N-5} ) $$ What can be seen is that the skip size on the set of Roots of Unity for each bin is the same as the index for that bin. The simplest form for the subscript of the Root of Unity is $ kn $ modulo $ N $. Since $ R_0 = 1 $, all the references to $ R_0 $ could be removed from the formulas. They are left intact for the purpose of clarity. Explanation of the Figures The figures are drawn by a program custom written just for this blog article. They are designed to show graphically the relationship between the shape of the signal and the resulting point sets which come from the DFT equation. The signal value points are shown on the upper right graph. The horizontal axis spans from 0 to $ 2\pi $ angle-wise and is segmented from 0 to $N$. The data points, consisting of one frame, range from 0 to $ N-1 $, never reaching the right vertical axis. The vertical axes range from -1 to 1 and have tic marks every tenth. The six polar graphs at the bottom show the calculations of the first six bins of the DFT on the complex plane. The set of $ R_k $ points is shown around the circumference of the circle. The horizontal and vertical tic marks show tenths along the radii. The outer circle is the unit circle. The blue circle is centered on the centroid location. The first graph is for $ Z_0 $, also known as the DC bin. In the constant signal example shown in Figure 1, all the data values, and thus the centroid, fall on the same spot. The second polar graph is for $ Z_1 $. The skip size is one, so the function is wrapped once around the circle. The third polar graph is for $ Z_2 $. In a like manner, the skip size is two, so the function is wrapped twice around the circle hitting every other $ R_k $. The last three are for $ Z_3 $, $ Z_4 $, $ Z_5 $, with skip sizes of 3, 4, and 5.
Of course, these are also the number of times the function is stretched and wrapped around the circle. The data points, and the tracer lines between them, are colored to fade from green to red. This allows the sequence of points to be seen in the polar graphs. Figure 1 shows the plots of a constant signal at one half. The skip sizes for each polar graph should be determinable by the coloration of the points. The equation of the generated signal is given on the top left. The frequency is measured in cycles per frame, or equivalently, cycles per N points. A prime number for $ N (=31) $ was selected so that all the $ R_k $ points were hit no matter what the step size. Integer Frequency Signals The first set of figures shows the DFT calculations for the first five integer frequencies on the six bins. There are four main takeaways from this set of figures. First, the DC bin is zero because a tone signal with a whole number of cycles will be centered on the zero line. Second, when the number of cycles matches the number of wrappings, all the bumps line up and the centroid is off center. Third, when the number of cycles doesn't match the number of wrappings, the bumps are radially distributed, and by symmetry the centroid is on center. Fourth, the magnitude of the centroid in the matching case is half the amplitude of the signal, just as the center of a circle is at half the diameter. Non-Integer Frequency Signals This next set of figures shows samples with non-integer frequencies. Since there is not a set of whole cycles, and the signal starts at an extreme, the average value is no longer zero, so the DC bin is no longer centered. Also, the partial cycle throws off the radial symmetry, so all the bins are non-zero. The most off center results occur where the bin wrapping counts are closest to the frequency and taper off from there. The centroid switches sides on the two bins with the wrapping counts on either side of the fractional frequency. Here is the halfway point.
Notice that $ Z_3 $ and $ Z_4 $ are nearly opposite each other. Varying the Phase This set of figures shows what happens as you adjust the phase value. The phase shift translates into a rotation. The amount of the rotation is dependent on the bin number. For integer frequencies, the rotation in the corresponding bin will equal the phase shift. Varying the Magnitude Varying the magnitude is the easiest to understand. Any rescaling of the signal gets factored out of the summation and applies to the sum as a whole. Therefore the polar graphs and centroids are resized by the same factor. Signals with a DC Offset This set of figures shows the effect of moving the signal up and down vertically by adding a constant value. This changes the shapes of the polar graphs slightly, but does not have any effect on the location of the centroids, except in the DC bin. $$ Z_k = { 1 \over N } \sum_{n=0}^{N-1} { ( S_n + d ) R_k^n } = { 1 \over N } \sum_{n=0}^{N-1} { S_n R_k^n} + { 1 \over N } \sum_{n=0}^{N-1} { d R_k^n} = { 1 \over N } \sum_{n=0}^{N-1} { S_n R_k^n } + 0 $$ The Sum of Two Signals The centroid of a sum is the sum of centroids. Mathematically it is said like this: $$ Z_k = { 1 \over N } \sum_{n=0}^{N-1} { ( S_n + T_n ) R_k^n } = { 1 \over N } \sum_{n=0}^{N-1} { S_n R_k^n } + { 1 \over N } \sum_{n=0}^{N-1} { T_n R_k^n } $$ Where $ S_n $ and $ T_n $ are the two signals. In the case of two distinct integer-valued frequency signals, the results of the DFT will be completely separable. The two bin values associated with the two frequencies will be the only off center ones. Here is a sample: The only bins that are off center are $ Z_1 $ and $ Z_4 $. The angles also reflect the phase values. Bin Symmetry of Real Signals For real valued signals the top half of the DFT bin set will be the complex conjugate of the bottom half.
In terms of the polar graphs, this can be understood by recognizing that it is the equivalent centroid calculation with the signal being wrapped counter-clockwise instead of clockwise. The result is the mirror image reflected by the real axis. This is exactly what complex conjugates are. $$ R_p = \overline{ R_{N-p} } $$ $$ R_p^2 = R_{2p} = \overline{ R_{N-p}^2 } = \overline{ R_{2N-2p} } = \overline{ R_N } \cdot \overline{ R_{N-2p} } = 1 \cdot \overline{ R_{N-2p} } = \overline{ R_{N-2p} } $$ This same pattern occurs with the higher powers of $ R_p $. This shows that the powers work in reverse just as well as forward. The Nyquist Bin When $ N $ is an even number, the $ N/2 $ bin is known as the Nyquist bin. The only Roots of Unity used in the centroid calculation will be $ R_0 $ and $ R_{N/2} $, also known as 1 and -1. If the signal consists of only real values, then the resulting centroid will be real and the imaginary part will be zero. Another way to convince yourself of this is that, in the real signal case, the Nyquist bin has to equal its own conjugate. Conclusion DFT bin values can be seen as the centroids of polar graphed signal points on the complex plane. In this manner the well known properties of DFTs can be visualized qualitatively. Although this article only dealt with simple pure tone samples, the ideas can be extended to more complicated signals and even complex signals. An understanding of the principles outlined should help in knowing the strengths and weaknesses of DFT analysis in particular circumstances. [Edit 2015-06-01: Swap n and k in subscripts]
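The claims above (constant signal, tone at an integer frequency, conjugate symmetry for real signals) can all be checked with a direct implementation of the bin formula. A quick sketch in Python; the function name is mine:

```python
import cmath
import math

def dft_bins(signal):
    # Z_k = (1/N) * sum_n S_n * R_k^n, with R_k = exp(-i 2 pi k / N):
    # the centroid of the signal-weighted roots of unity.
    N = len(signal)
    return [sum(s * cmath.exp(-2j * cmath.pi * k * n / N)
                for n, s in enumerate(signal)) / N
            for k in range(N)]

N = 32
# Constant signal: only the DC bin is off center, and it equals v.
const_bins = dft_bins([0.5] * N)
# Real tone, 3 cycles per frame, amplitude 0.8: bins 3 and N-3 land
# at half the amplitude and are complex conjugates of each other.
tone = [0.8 * math.cos(2 * math.pi * 3 * n / N) for n in range(N)]
tone_bins = dft_bins(tone)
```

The same function also reproduces the DC-offset and sum-of-signals identities from the earlier sections, since it is just the summation definition evaluated literally.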
Stiffened equation of state

Revision as of 16:11, 3 December 2013

The stiffened equation of state is a simplified form of the Grüneisen equation of state [1]. When considering water under very high pressures (typical applications are underwater explosions, extracorporeal shock wave lithotripsy, and sonoluminescence) the stiffened equation of state is often used:

:<math> p = \rho(\gamma - 1)e - \gamma p^* </math>

where $e$ is the internal energy per unit mass, given in terms of the heat capacity at constant pressure $c_p$ (Eq. 15 in [2]). $\gamma$ is an empirically determined constant typically taken to be about 6.1, and $p^*$ is another constant, representing the molecular attraction between water molecules. The magnitude of the latter correction is about 2 gigapascals (20,000 atmospheres), from which the value of $p^*$ may be computed given all the other variables. Thus water behaves as though it is an ideal gas that is already under about 20,000 atmospheres (2 GPa) pressure, and this explains why water is commonly assumed to be incompressible: when the external pressure changes from 1 atmosphere to 2 atmospheres (100 kPa to 200 kPa), the water behaves as an ideal gas would when changing from 20,001 to 20,002 atmospheres (2000.1 MPa to 2000.2 MPa). This equation mispredicts the heat capacity of water, but few simple alternatives are available for severely nonisentropic processes such as strong shocks. It is useful to notice that, given this equation of state, the adiabatic law is modified from its ideal form:

:<math> p+p^* = (\gamma p_0 +p^* ) \left(\frac{\rho}{\rho_0}\right)^\gamma </math>
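As a numeric illustration of the incompressibility point made in the article: along an adiabat, doubling the external pressure from 1 atm to 2 atm barely changes the density. This is a sketch using the ratio form of the adiabatic law, $(p_2+p^*)/(p_1+p^*) = (\rho_2/\rho_1)^\gamma$, with the constants quoted above (gamma about 6.1, p* about 2 GPa); the variable names are mine.

```python
GAMMA = 6.1
P_STAR = 2.0e9   # Pa, the ~20,000 atm "molecular attraction" constant
ATM = 101325.0   # Pa

def density_ratio(p1, p2):
    # rho2/rho1 along an adiabat where (p + p*) scales as rho**gamma
    return ((p2 + P_STAR) / (p1 + P_STAR)) ** (1.0 / GAMMA)

ratio = density_ratio(ATM, 2 * ATM)
# ratio is about 1.000008: doubling the pressure compresses water
# by roughly a thousandth of a percent.
```

This is the quantitative content of the "already under 20,000 atmospheres" analogy: an extra atmosphere is a negligible perturbation on top of $p^*$.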
Borel set of ambiguous class $\alpha$ A Borel subset of a metric space, or more generally, a perfectly-normal topological space, that is at the same time a set of additive class $\alpha$ and of multiplicative class $\alpha$, i.e. belongs to the classes $F_\alpha$ and $G_\alpha$ at the same time. The Borel sets of ambiguous class 0 are the closed and open sets. Borel sets of ambiguous class 1 are sets of types $F_\sigma$ and $G_\delta$ at the same time. Any Borel set of class $\alpha$ is a Borel set of ambiguous class $\beta$ for any $\beta > \alpha$. The Borel sets of ambiguous class $\alpha$ form a field of sets. References [1] K. Kuratowski, "Topology", 1, Acad. Press (1966) (Translated from French) [2] F. Hausdorff, "Grundzüge der Mengenlehre", Leipzig (1914) (Reprinted (incomplete) English translation: Set theory, Chelsea (1978)) Comments The notations $F_\alpha$, $G_\alpha$ are still current in topology. Outside topology one more often uses the notations $\Sigma^0_\alpha$, $\Pi^0_\alpha$, respectively. For $\alpha \ge \omega$ one has $F_\alpha = \Sigma^0_\alpha$, $G_\alpha = \Pi^0_\alpha$; but for $n < \omega$ one has $F_n = \Sigma^0_{n+1}$ and $G_n = \Pi^0_{n+1}$. The notation for the ambiguous classes is $\Delta^0_\alpha = \Sigma^0_\alpha \cap \Pi^0_\alpha$. See also [a1]. References [a1] Y.N. Moschovakis, "Descriptive set theory", North-Holland (1980) How to Cite This Entry: Borel set of ambiguous class. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Borel_set_of_ambiguous_class&oldid=39789
Convergence of measures

Revision as of 15:52, 31 July 2012

A concept in measure theory, determined by a certain topology in a space of measures that are defined on a certain σ-algebra $\mathcal{B}$ of subsets of a space $X$ or, more generally, in a space $\mathcal{M} (X, \mathcal{B})$ of charges, i.e. countably-additive real (resp. complex) functions $\mu: \mathcal{B}\to \mathbb R$ (resp. $\mathbb C$), often also called $\mathbb R$ (resp. $\mathbb C$) valued or signed measures. The total variation measure of a $\mathbb C$-valued measure is defined on $\mathcal{B}$ as:
\[
\abs{\mu}(B) :=\sup\left\{ \sum_i \abs{\mu(B_i)}: \{B_i\}\subset \mathcal{B} \text{ is a countable partition of } B\right\}.
\]
In the real-valued case the above definition simplifies as
\[
\abs{\mu}(B) = \sup_{A\in \mathcal{B}, A\subset B} \left(\abs{\mu (A)} + \abs{\mu (B\setminus A)}\right).
\]
The total variation of $\mu$ is then defined as $\left\|\mu\right\|_v := \abs{\mu}(X)$. The space $\mathcal{M}^b (X, \mathcal{B})$ of $\mathbb R$ (resp. $\mathbb C$) valued measures with finite total variation is a Banach space and the following are the most commonly used topologies.

1) The norm or strong topology: $\mu_n\to \mu$ if and only if $\left\|\mu_n-\mu\right\|_v\to 0$.

2) The weak topology: a sequence of measures $\mu_n \rightharpoonup \mu$ if and only if $F (\mu_n)\to F(\mu)$ for every bounded linear functional $F$ on $\mathcal{M}^b$.
3) When $X$ is a topological space and $\mathcal{B}$ the corresponding $\sigma$-algebra of Borel sets, we can introduce on $X$ the narrow topology. In this case $\mu_n$ converges to $\mu$ if and only if \begin{equation}\label{e:narrow} \int f\, \mathrm{d}\mu_n \to \int f\, \mathrm{d}\mu \end{equation} for every bounded continuous function $f:X\to \mathbb R$ (resp. $\mathbb C$). This topology is also sometimes called the weak topology; however, such terminology is inconsistent with Banach space theory, see below. The following is an important consequence of narrow convergence: if $\mu_n$ converges narrowly to $\mu$, then $\mu_n (A)\to \mu (A)$ for any Borel set $A$ such that $\abs{\mu}(\partial A) = 0$.

4) When $X$ is a locally compact topological space and $\mathcal{B}$ the $\sigma$-algebra of Borel sets, yet another topology can be introduced, the so-called wide topology, sometimes referred to as the weak$^\star$ topology. A sequence $\mu_n\rightharpoonup^\star \mu$ if and only if \eqref{e:narrow} holds for continuous functions which are compactly supported. This topology is in general weaker than the narrow topology. If $X$ is compact and Hausdorff, the Riesz representation theorem shows that $\mathcal{M}^b$ is the dual of the space $C(X)$ of continuous functions. Under this assumption the narrow and weak$^\star$ topologies coincide with the usual weak$^\star$ topology of Banach space theory. Since in general $C(X)$ is not a reflexive space, it turns out that the narrow topology is in general weaker than the weak topology. A topology analogous to the weak$^\star$ topology is defined in the more general space $\mathcal{M}^b_{loc}$ of locally bounded measures, i.e. those measures $\mu$ such that for any point $x\in X$ there is a neighborhood $U$ with $\abs{\mu}(U)<\infty$.

References

[AmFuPa] L. Ambrosio, N. Fusco, D. Pallara, "Functions of bounded variation and free discontinuity problems". Oxford Mathematical Monographs.
The Clarendon Press, Oxford University Press, New York, 2000. MR1857292 Zbl 0957.49001
[Bo] N. Bourbaki, "Elements of mathematics. Integration", Addison-Wesley (1975) pp. Chapt. 6;7;8 (Translated from French) MR0583191 Zbl 1116.28002 Zbl 1106.46005 Zbl 1106.46006 Zbl 1182.28002 Zbl 1182.28001 Zbl 1095.28002 Zbl 1095.28001 Zbl 0156.06001
[DS] N. Dunford, J.T. Schwartz, "Linear operators. General theory", 1, Interscience (1958) MR0117523
[Bi] P. Billingsley, "Convergence of probability measures", Wiley (1968) MR0233396 Zbl 0172.21201
[Ma] P. Mattila, "Geometry of sets and measures in Euclidean spaces", Cambridge Studies in Advanced Mathematics, 44, Cambridge University Press, Cambridge, 1995. MR1333890 Zbl 0911.28005

How to Cite This Entry: Convergence of measures. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Convergence_of_measures&oldid=27264
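As an aside (not part of the original article), the narrow convergence in 3) is easy to illustrate with Dirac measures. The sketch below checks $\delta_{1/n} \to \delta_0$ on $\mathbb R$ against one test function, and shows why the boundary condition $\abs{\mu}(\partial A)=0$ is needed for $\mu_n(A)\to\mu(A)$; the test function is an arbitrary choice.

```python
import math

# Sketch: mu_n = delta_{1/n} converges narrowly to mu = delta_0 on R, i.e.
# \int f d(mu_n) = f(1/n) -> f(0) for every bounded continuous f.
f = lambda t: math.exp(-t * t)          # one bounded continuous test function

integrals = [f(1.0 / n) for n in (1, 10, 100, 1000)]
limit = f(0.0)
print(integrals[-1], limit)             # nearly equal

# Why |mu|(boundary A) = 0 matters: take A = (0, 1].  Each delta_{1/n} gives
# A full mass, but delta_0(A) = 0 -- and indeed 0 lies on the boundary of A.
mu_n_of_A = 1.0   # 1/n is in (0, 1] for every n >= 1
mu_of_A = 0.0     # 0 is not in (0, 1]
```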
Talk:Gamma-function

Revision as of 00:21, 27 April 2012

$\Gamma$-function

$ \newcommand{\Re}{\mathop{\mathrm{Re}}} $

A transcendental function $\Gamma(z)$ that extends the values of the factorial $z!$ to any complex number $z$. It was introduced in 1729 by L. Euler in a letter to Ch. Goldbach, using the infinite product
$$ \Gamma(z) = \lim_{n\rightarrow\infty}\frac{n!n^z}{z(z+1)\ldots(z+n)} = \lim_{n\rightarrow\infty}\frac{n^z}{z(1+z)(1+z/2)\ldots(1+z/n)}, $$
which was used by L. Euler to obtain the integral representation (Euler integral of the second kind, cf. Euler integrals)
$$ \Gamma(z) = \int_0^\infty x^{z-1}e^{-x} \rd x, $$
which is valid for $\Re z > 0$. The multi-valuedness of the function $x^{z-1}$ is eliminated by the formula $x^{z-1}=e^{(z-1)\ln x}$ with a real $\ln x$. The symbol $\Gamma(z)$ and the name gamma-function were proposed in 1814 by A.M. Legendre. If $\Re z < 0$ and $-k-1 < \Re z < -k$, $k=0,1,\ldots$, the gamma-function may be represented by the Cauchy–Saalschütz integral:
$$ \Gamma(z) = \int_0^\infty x^{z-1} \left( e^{-x} - \sum_{m=0}^k (-1)^m \frac{x^m}{m!} \right) \rd x. $$
In the entire plane punctured at the points $z=0,-1,\ldots$, the gamma-function satisfies a Hankel integral representation:
$$ \Gamma(z) = \frac{1}{e^{2\pi iz} - 1} \int_C s^{z-1}e^{-s} \rd s, $$
where $s^{z-1} = e^{(z-1)\ln s}$ and $\ln s$ is the branch of the logarithm for which $0 < \arg s < 2\pi$; the contour $C$ is represented in Fig. a. [FIXME] It is seen from the Hankel representation that $\Gamma(z)$ is a meromorphic function. At the points $z_n = -n$, $n=0,1,\ldots$ it has simple poles with residues $(-1)^n/n!$.
Figure: g043310a

Contents

Fundamental relations and properties of the gamma-function.

1) Euler's functional equation:
$$ z\Gamma(z) = \Gamma(z+1), $$
or
$$ \Gamma(z) = \frac{1}{z(z+1)\ldots(z+n)}\Gamma(z+n+1); $$
$\Gamma(1)=1$, $\Gamma(n+1) = n!$ if $n$ is an integer; it is assumed that $0! = \Gamma(1) = 1$.

2) Euler's completion formula:
$$ \Gamma(z)\Gamma(1-z) = \frac{\pi}{\sin \pi z}. $$
In particular, $\Gamma(1/2)=\sqrt{\pi}$;
$$ \Gamma(n+1/2) = \frac{1\cdot 3\cdots(2n-1)}{2^n}\sqrt{\pi} $$
if $n>0$ is an integer.

3) Gauss' multiplication formula:
$$ \prod_{k=0}^{m-1}\Gamma\left(z+\frac{k}{m}\right) = (2\pi)^{(m-1)/2} m^{1/2-mz}\,\Gamma(mz). $$
If $m=2$, this is the Legendre duplication formula.

4) If $\Re z \ge \delta > 0$ or $|\Im z| \ge \delta > 0$, then $\ln\Gamma(z)$ can be asymptotically expanded into the Stirling series:
$$ \ln\Gamma(z) \sim \left(z-\frac{1}{2}\right)\ln z - z + \frac{1}{2}\ln 2\pi + \sum_{n=1}^{\infty}\frac{B_{2n}}{2n(2n-1)z^{2n-1}}, $$
where $B_{2n}$ are the Bernoulli numbers. It implies the equality
$$ \Gamma(z+1) = \sqrt{2\pi z}\left(\frac{z}{e}\right)^{z}\left(1+\frac{1}{12z}+\frac{1}{288z^{2}}+O(z^{-3})\right). $$
In particular, for a positive integer $n$ this gives Stirling's formula $n! \approx \sqrt{2\pi n}\,(n/e)^{n}$. More accurate is Sonin's formula [6].

5) In the real domain, $\Gamma(x)>0$ for $x>0$ and it assumes the sign $(-1)^{n}$ on the segments $(-n,-n+1)$, $n=1,2,\ldots$ (Fig. b).

Figure: g043310b The graph of the function $y=\Gamma(x)$.

For all real $x$ the inequality $\Gamma''\,\Gamma \ge \Gamma'^{\,2}$ is valid, i.e. all branches of both $\ln|\Gamma(x)|$ and $|\Gamma(x)|$ are convex functions. The property of logarithmic convexity defines the gamma-function among all solutions of the functional equation $x\Gamma(x)=\Gamma(x+1)$ up to a constant factor. For positive values of $x$ the gamma-function has a unique minimum at $x=1.4616\ldots$ equal to $0.8856\ldots$. The local minima of the function $|\Gamma(x)|$ form a sequence tending to zero as $x\to-\infty$.

Figure: g043310c The graph of the function $y=1/\Gamma(x)$.

6) In the complex domain, for bounded $\Re z$, the gamma-function rapidly decreases as $|\Im z|\to\infty$,
$$ |\Gamma(x+iy)| \sim \sqrt{2\pi}\,|y|^{x-1/2}e^{-\pi|y|/2}, \qquad |y|\to\infty. $$

7) The function $1/\Gamma(z)$ (Fig. c) is an entire function of order one and of maximal type; asymptotically, as $r\to\infty$,
$$ \ln M(r) \sim r\ln r, \qquad \text{where } M(r) = \max_{|z|=r}\left|\frac{1}{\Gamma(z)}\right|. $$
It can be represented by the infinite Weierstrass product:
$$ \frac{1}{\Gamma(z)} = ze^{\gamma z}\prod_{n=1}^{\infty}\left(1+\frac{z}{n}\right)e^{-z/n}, $$
which converges absolutely and uniformly on any compact set in the complex plane ($\gamma$ is the Euler constant). A Hankel integral representation is valid:
$$ \frac{1}{\Gamma(z)} = \frac{1}{2\pi i}\int_{C'} e^{s}s^{-z} \rd s, $$
where the contour $C'$ is shown in Fig. d.

Figure: g043310d

G.F.
Voronoi [7] obtained integral representations for powers of the gamma-function. In applications, the so-called polygamma-functions — the $k$-th derivatives of $\ln\Gamma(z)$ — are of importance. The function (Gauss' $\psi$-function)
$$ \psi(z) = \frac{\rd \ln\Gamma(z)}{\rd z} = \frac{\Gamma'(z)}{\Gamma(z)} $$
is meromorphic, has simple poles at the points $z=0,-1,-2,\ldots$ and satisfies the functional equation
$$ \psi(z+1) = \psi(z)+\frac{1}{z}. $$
The representation of $\ln\Gamma(z)$ for $|z|<1$ yields the formula
$$ \ln\Gamma(1+z) = -\gamma z + \sum_{k=2}^{\infty}(-1)^{k}\frac{\zeta(k)}{k}z^{k}, $$
where $\zeta(k)$ is the Riemann zeta-function. This formula may be used to compute $\Gamma(z)$ in a neighbourhood of the point $z=1$. The functions $\Gamma(z)$ and $\psi(z)$ are transcendental functions which do not satisfy any linear differential equation with rational coefficients (Hölder's theorem). The exceptional importance of the gamma-function in mathematical analysis is due to the fact that it can be used to express a large number of definite integrals, infinite products and sums of series (see, for example, Beta-function). In addition, it is widely used in the theory of special functions (the hypergeometric function, of which the gamma-function is a limit case, cylinder functions, etc.), in analytic number theory, etc.

References

[1] E.T. Whittaker, G.N. Watson, "A course of modern analysis", Cambridge Univ. Press (1952)
[2] H. Bateman (ed.), A. Erdélyi (ed.), Higher transcendental functions, 1. The gamma function. The hypergeometric functions. Legendre functions, McGraw-Hill (1953)
[3] N. Bourbaki, "Elements of mathematics. Functions of a real variable", Addison-Wesley (1976) (Translated from French)
[4] Math. anal., functions, limits, series, continued fractions, Handbook Math. Libraries, Moscow (1961) (In Russian)
[5] N. Nielsen, "Handbuch der Theorie der Gammafunktion", Chelsea, reprint (1965)
[6] N.Ya. Sonin, "Studies on cylinder functions and special polynomials", Moscow (1954) (In Russian)
[7] G.F. Voronoi, "Studies of primitive parallelotopes", Collected works, 2, Kiev (1952) pp. 239–368 (In Russian)
[8] E. Jahnke, F. Emde, "Tables of functions with formulae and curves", Dover, reprint (1945) (Translated from German)
[9] A.
Angot, "Compléments de mathématiques. A l'usage des ingénieurs de l'electrotechnique et des télécommunications", C.N.E.T. (1957)

Comments

The $q$-analogue of the gamma-function is given by
$$ \Gamma_q(z) = (1-q)^{1-z}\prod_{n=0}^{\infty}\frac{1-q^{n+1}}{1-q^{n+z}}, \qquad 0<q<1. $$

References

[a1] E. Artin, "The gamma function", Holt, Rinehart & Winston (1964)
[a2] R. Askey, "The $q$-Gamma and $q$-Beta functions" Appl. Anal., 8 (1978) pp. 125–141

How to Cite This Entry: Gamma-function. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Gamma-function&oldid=25562
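These identities are easy to spot-check numerically. The sketch below (not part of the original article) uses Python's `math.gamma`, which handles real arguments; the sample points $z=0.3$ and $z=1.7$ are arbitrary.

```python
import math

# Euler's completion formula: Gamma(z) Gamma(1-z) = pi / sin(pi z)
z = 0.3
comp_lhs = math.gamma(z) * math.gamma(1 - z)
comp_rhs = math.pi / math.sin(math.pi * z)

# Gamma(1/2) = sqrt(pi)
g_half = math.gamma(0.5)
sqrt_pi = math.sqrt(math.pi)

# Legendre duplication formula (the m = 2 case of Gauss' multiplication
# formula): Gamma(z) Gamma(z + 1/2) = 2^(1-2z) sqrt(pi) Gamma(2z)
z = 1.7
dup_lhs = math.gamma(z) * math.gamma(z + 0.5)
dup_rhs = 2 ** (1 - 2 * z) * sqrt_pi * math.gamma(2 * z)

print(comp_lhs, comp_rhs)   # equal up to rounding
print(g_half, sqrt_pi)      # both 1.7724538...
print(dup_lhs, dup_rhs)     # equal up to rounding
```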
A New Contender in the Digital Differentiator Race This blog proposes a novel differentiator worth your consideration. Although simple, the differentiator provides a fairly wide 'frequency range of linear operation' and can be implemented, if need be, without performing numerical multiplications. Background In reference [1] I presented a computationally-efficient tapped-delay line digital differentiator whose $h_{ref}(k)$ impulse response is:$$ h_{ref}(k) = {-1/16}, \ 0, \ 1, \ 0, \ {-1}, \ 0, \ 1/16 \tag{1} $$ and whose $ \lvert H_{ref}(\omega) \rvert $ frequency magnitude response is shown by the blue dashed curve in Figure 1. Figure 1: Normalized frequency magnitude responses of the reference [1] differentiator, and a central-difference differentiator. For comparison purposes, Figure 1 shows the frequency magnitude response of the popular central-difference differentiator (green dotted curve), $ \lvert H_{cd}(\omega) \rvert $, whose $h_{cd}(k)$ impulse response is:$$ h_{cd}(k) = 1/2, \ 0, \ {-1/2}. \tag{2} $$ In Figure 1 we see the $ \lvert H_{ref}(\omega) \rvert $ response's increased 'frequency range of linear operation' compared to the $ \lvert H_{cd}(\omega) \rvert $ frequency magnitude response. That's the primary benefit of the $h_{ref}(k)$ differentiator. The response curves in Figure 1 are what I call "normalized" magnitude responses. I'll define what I mean by normalized later in this blog. (As a bit of trivia, it's interesting to note that the central-difference differentiator's magnitude response is equal to the first half cycle of a sine wave. I prove that fact in Appendix A.) The Proposed Differentiator The improved differentiator I propose here has the impulse response given by:$$ h_{pro}(k) = {-3/16}, \ 31/32, 0, \ {-31/32}, \ 3/16 \tag{3} $$ and whose $ \lvert H_{pro}(\omega) \rvert $ frequency magnitude response is shown by the black solid curve in Figure 2. 
Figure 2: Normalized frequency magnitude responses of the proposed differentiator, the reference [1] differentiator, and a central-difference differentiator. Figure 2 shows the benefit of the proposed $ h_{pro}(k) $ differentiator. Its $ \lvert H_{pro}(\omega) \rvert $ response has an increased-width (by roughly 33%) 'frequency range of linear operation' compared to the $ \lvert H_{ref}(\omega) \rvert $ frequency magnitude response. Based on Figure 2, the $ h_{pro}(k) $ differentiator should be useful in applications where the spectral components of the input signal are less than $ \pi/2 $ radians/sample $ (f_s/4 \ Hz.) $ The $ h_{pro}(k) $ differentiator has, due to its antisymmetrical coefficients, linear phase in the frequency domain. In addition, the differentiator's input/output time delay (group delay) is exactly two sample periods. Having a delay that's an integer number of samples makes this differentiator useful when its output must be synchronized with other time-domain sequences, such as in FM demodulation applications. Proposed Differentiator Implementation The traditional tapped-delay line implementation of the proposed differentiator is shown in Figure 3(a). A folded delay line implementation reducing the number of computations from four multiplications to two multiplications per output sample is given in Figure 3(b). And a folded multiplier-free implementation is shown in Figure 3(c) where multipliers are replaced by binary right-shift operations. In that figure the 'BRS,x' nomenclature means a binary right shift by x bits. Figure 3: Proposed $ h_{pro}(k) $ differentiator implementations: (a) traditional implementation; (b) folded implementation; (c) folded multiplier-free implementation. Differentiator Gains and Normalized Magnitude Curves For completeness I mention that the $ h_{ref}(k) $ and $ h_{pro}(k) $ differentiators have magnitude response gains greater than unity. This is shown by the actual frequency magnitude responses shown in Figure 4.
To explain what I mean by "gain", notice how the slope of the central-difference differentiator's $ \lvert H_{cd}(\omega) \rvert $ response (green dotted) curve is unity at low frequencies in Figure 4. Thus I claim the $ h_{cd}(k) $ differentiator has a gain of one. The slope of the reference [1] differentiator's $ \lvert H_{ref}(\omega) \rvert $ response (blue dashed) curve is 1.63 at low frequencies and the slope of the proposed differentiator's $ \lvert H_{pro}(\omega) \rvert $ response (black solid) curve is 1.2 at low frequencies. So I claim that the $h_{ref}(k)$ reference [1] and $ h_{pro}(k) $ proposed differentiators have gains of 1.63 and 1.2 respectively. Figure 4: Actual frequency magnitude responses of the proposed differentiator, the reference [1] differentiator, and a central-difference differentiator. Those actual magnitude response curves in Figure 4 prevent us from comparing the 'frequency ranges of linear operation' of the various differentiators. So to make that comparison I divided the Figure 4 $ \lvert H_{ref}(\omega) \rvert $ response sample values by 1.63 to create the normalized blue dashed curves in Figures 1 and 2. Likewise, I divided the Figure 4 $ \lvert H_{pro}(\omega) \rvert $ response sample values by 1.2 to create the normalized solid black curve in Figure 2. A Digital Differentiator Clarification When thinking about digital differentiators it's useful to be aware of the algebraic form of their time-domain difference equations. For example, let's assume that $ y(n) $ represents a central-difference derivative of an $ x(n) $ sequence. The correct estimation of the $ y(n) $ derivative of $ x(n) $ is:$$ \text{estimated derivative of } x(n) = y(n) = {{x(n) - x(n-2)} \over {2t_s}} \tag{4} $$ where $ t_s $ is the time between the $ x(n) $ samples measured in seconds. (Variable $ t_s $ is the 'dt' in a generic differentiator's dx/dt derivative.)
Often in the DSP literature, for convenience, the time between $ x(n) $ samples is assumed to be unity, i.e. $ t_s = 1 $ and a central-difference differentiator's difference equation takes the popular form of:$$ y'(n) = {{x(n) - x(n-2)} \over 2} . \tag {5} $$ Computing Eq. (5)'s estimated $ y'(n) $ derivative is acceptable in many applications because $ y'(n) $ will always be proportional to Eq. (4)'s $ y(n) $. But to be dimensionally correct, Eq. (4)'s $ y(n) $ should be computed, rather than Eq. (5)'s $ y'(n) $. For example, let's say the continuous red-dotted curve in Figure 5 is the $ x(t) $ instantaneous water pressure inside a water pipe measured in pounds per square inch (psi). And we have three $ x(n) $ samples of $ x(t) $ shown by the large $ x(n) $ black dots. The $ t_s $ time between samples is 0.5 seconds. Figure 5: Digital differentiator example. Estimating the slope, $ dx(t)/dt $, of the continuous $ x(t) $ signal by way of digital differentiation of the sampled $ x(n) $ sequence. To estimate the rate of pressure change at time instant two seconds $ (4t_s) $ once sample $ x(5) $ is available, using Eq. (5) gives us a $ y'(5) $ value of:$$ y'(5) = {{x(5)-x(3)} \over 2} = {{10 \ psi - 5 \ psi} \over 2} = 2.5 \ psi. $$ However, time rate of pressure change values must dimensionally be psi/second. What we should do is use Eq. (4) giving us a dimensionally-correct time rate of pressure change value of:$$ y(5) = {{x(5) - x(3)} \over {2t_s \ seconds}} = {{10 \ psi - 5 \ psi} \over {1 \ second}} = 5 \ psi/second. $$ Result $ y(5) $ tells us the water pressure increased by 5 psi during the one-second time interval from 1.5 seconds to 2.5 seconds. So my point is, Eq. (5)'s $ y'(n) $ is the popular form of a central-difference differentiator's difference equation, but Eq. (4)'s $ y(n) $ expression is the dimensionally-correct form.
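The water-pressure arithmetic above can be sketched in a few lines (values taken from the Figure 5 example); the two forms differ only by the $1/t_s$ scale factor:

```python
# Water-pressure example from Figure 5: t_s = 0.5 s, x(3) = 5 psi, x(5) = 10 psi.
t_s = 0.5                      # seconds between samples
x3, x5 = 5.0, 10.0             # psi

y_prime = (x5 - x3) / 2        # Eq. (5): assumes t_s = 1, result in "psi"
y = (x5 - x3) / (2 * t_s)      # Eq. (4): dimensionally correct, psi/second

print(y_prime)   # 2.5
print(y)         # 5.0
```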
References

Appendix A: Proof of Cent-Diff Differentiator's Sinusoidal Magnitude Response

The proof that a central-difference differentiator's magnitude response is equal to the magnitude of a sine wave proceeds as follows: With the differentiator's time-domain impulse response being $ h_{cd}(k) = [1/2, \ 0, \ -1/2] $, its z-transform is$$ H_{cd}(z) = {{z^{-0} - z^{-2}} \over 2} . \tag{A-1} $$ Multiplying Eq. (A-1) by $ z/z $ gives us a new, and equivalent, z-transform of:$$ H_{cd}(z) = {1 \over z}{{z^{1} - z^{-1}} \over 2} . \tag{A-2} $$ Replacing $ z $ with $ e^{j\omega} $ gives us the differentiator's frequency response of:$$ H_{cd}(\omega) = {1 \over {e^{j\omega}}}{{e^{j\omega} - e^{-j\omega}} \over 2} . \tag{A-3} $$ Knowing that $ (e^{j\omega} - e^{-j\omega})/2 = j\sin(\omega) $, we write:$$ H_{cd}(\omega) = {j \over {e^{j\omega}}}[\sin(\omega)] = {{e^{j\pi/2}} \over {e^{j\omega}}}[\sin(\omega)] . \tag{A-4} $$ Therefore the $ \lvert H_{cd}(\omega) \rvert $ frequency magnitude response is equal to the magnitude of a sine wave as:$$ \lvert H_{cd}(\omega) \rvert = \left\lvert\frac{e^{j\pi/2}}{e^{j\omega}}[\sin(\omega)]\right\rvert = \left\lvert {e^{-j(\omega-\pi/2)}[\sin(\omega)]} \right\rvert = \lvert \sin(\omega) \rvert . \tag{A-5} $$ Previous post by Rick Lyons: The Most Interesting FIR Filter Equation in the World: Why FIR Filters Can Be Linear Phase Next post by Rick Lyons: Implementing Simultaneous Digital Differentiation, Hilbert Transformation, and Half-Band Filtering perhaps you didn't take into account the 'normalization' that I described in my blog. Try the following MATLAB code and if you're still having problems please let me know.
[-Rick-]

clear
h_CentDiff = [1, 0, -1]/2;
h_Prop = [-3/16, 31/32, 0, -31/32, 3/16];
h_Prop_Norm = 0.7855*[-3/16, 31/32, 0, -31/32, 3/16];
[Freq_Resp_CentDiff,Freq] = freqz(h_CentDiff,1);
[Freq_Resp_Prop,Freq] = freqz(h_Prop,1);
[Freq_Resp_Prop_Norm,Freq] = freqz(h_Prop_Norm,1);
Mag_CentDiff = abs(Freq_Resp_CentDiff);
Mag_Prop = abs(Freq_Resp_Prop);
Mag_Prop_Norm = abs(Freq_Resp_Prop_Norm);
figure(1), clf
subplot(2,1,1)
plot([0, pi],[0, pi],':k') % Ideal freq mag response
hold on
plot(Freq, Mag_CentDiff,'g', Freq, Mag_Prop,'k')
hold off
axis([0, pi, 0, 3])
ylabel('Gain'), grid on, zoom on
title('CenDiff = green, Proposed = black')
subplot(2,1,2)
plot([0, pi],[0, pi],':k') % Ideal freq mag response
hold on
plot(Freq, Mag_CentDiff,'g', Freq, Mag_Prop_Norm,'k')
hold off
axis([0, pi, 0, 3]), xlabel('Freq (Radians/Sample)')
ylabel('Gain'), grid on, zoom on
title('CenDiff = green, Proposed-Normalized = black')
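For readers without MATLAB, a rough Python equivalent of the same check is sketched below (no toolboxes: `freqz` is replaced by direct evaluation of the frequency response). The taps come straight from the article; the low-frequency slopes it prints, 1.63 and about 1.19 (the article rounds the latter to 1.2), match the gains quoted above.

```python
import cmath

def mag_response(h, w):
    """|H(w)| of an FIR filter with impulse response h at radian frequency w."""
    return abs(sum(hk * cmath.exp(-1j * w * k) for k, hk in enumerate(h)))

h_cd   = [0.5, 0.0, -0.5]                   # central difference
h_ref  = [-1/16, 0, 1, 0, -1, 0, 1/16]      # reference [1] differentiator
h_prop = [-3/16, 31/32, 0, -31/32, 3/16]    # proposed differentiator

w = 0.001                                    # near-zero frequency
print(mag_response(h_cd, w) / w)    # ~1.00  (unity gain)
print(mag_response(h_ref, w) / w)   # ~1.63
print(mag_response(h_prop, w) / w)  # ~1.19
```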
AliFMDMultCuts () AliFMDMultCuts (EMethod method, Double_t fmd1i, Double_t fmd2i=-1, Double_t fmd2o=-1, Double_t fmd3i=-1, Double_t fmd3o=-1) AliFMDMultCuts (const AliFMDMultCuts &o) AliFMDMultCuts & operator= (const AliFMDMultCuts &o) void Reset () Double_t GetMultCut (UShort_t d, Char_t r, Double_t eta, Bool_t errors) const Double_t GetMultCut (UShort_t d, Char_t r, Int_t etabin, Bool_t errors) const void SetMultCuts (Double_t fmd1i, Double_t fmd2i=-1, Double_t fmd2o=-1, Double_t fmd3i=-1, Double_t fmd3o=-1) void SetMPVFraction (Double_t frac=0) void SetNXi (Double_t nXi) void SetIncludeSigma (Bool_t in) void SetProbability (Double_t cut=1e-5) void Set (EMethod method, Double_t fmd1i, Double_t fmd2i=-1, Double_t fmd2o=-1, Double_t fmd3i=-1, Double_t fmd3o=-1) void Print (Option_t *option="") const void FillHistogram (TH2 *h) const void Output (TList *l, const char *name=0) const Bool_t Input (TList *l, const char *name) EMethod GetMethod () const const char * GetMethodString (Bool_t latex=false) const

Cuts used when calculating the multiplicity. We can define our cuts in several ways (in order of priority):

Using a fixed value \( v\) — AliFMDMultCuts::SetMultCuts
Using a fraction \( f\) of the most probable value ( \( \Delta_p\)) from the energy loss fits
Using some number \( n\) of widths ( \( \xi\)) below the most probable value ( \( \Delta_p\)) from the energy loss fits
Using some number \( n\) of widths ( \( \xi+\sigma\)) below the most probable value ( \( \Delta_p\)) from the energy loss fits
Using the \( x\) value for which \( P(x>p)\) given some cut value \( p\)
Using the lower fit range of the energy loss fits

The member function AliFMDMultCuts::Reset resets all cut values, meaning the lower bound on the fits will be used by default. This is useful to ensure a fresh start. The member function AliFMDMultCuts::GetMethod will return the method identifier for the current method employed (AliFMDMultCuts::EMethod).
Likewise, the method AliFMDMultCuts::GetMethodString gives a human-readable string of the current method employed. Definition at line 38 of file AliFMDMultCuts.h. Set the cut for the specified method. Note that if method is kFixed and only fmd1i is specified, then the outer rings' cut value is increased by 20% relative to fmd1i. Also note that if method is kLandauWidth and cut2 is larger than zero, then \(\sigma\) of the fits is included in the cut value. Parameters method Method to use fmd1i Value for FMD1i fmd2i Value for FMD2i (if < 0, use fmd1i) fmd2o Value for FMD2o (if < 0, use fmd1i) fmd3i Value for FMD3i (if < 0, use fmd1i) fmd3o Value for FMD3o (if < 0, use fmd1i) Definition at line 63 of file AliFMDMultCuts.cxx. Referenced by AliFMDMultCuts(), AliFMDSharingFilter::AliFMDSharingFilter(), and DepSet().
Let us now take a closer look at the hex-fractal we sliced last week. Chopping a level 0, 1, 2, and 3 Menger sponge through our slanted plane gives the following: This suggests an iterative recipe to generate the hex-fractal. Any time we see a hexagon, chop it into six smaller hexagons and six triangles as illustrated below. Similarly, any time we see a triangle, chop it into a hexagon and three triangles like this: In the limit, each triangle and hexagon in the above image becomes a hex-fractal or a tri-fractal, respectively. The final hex-fractal looks something like this (click for larger image): Now we are in a position to answer last week's question: how can we compute the Hausdorff dimension of the hex-fractal? Let d be its dimension. Like last week, our computation will proceed by trying to compute the "d-dimensional volume" of our shape. So, start with a "large" hex-fractal and tri-fractal, each of side-length 1, and let their d-dimensional volumes be h and t respectively. [1] Break these into "small" hex-fractals and tri-fractals of side-length 1/3, so these have volumes \(h/3^d\) and \(t/3^d\) respectively (this is how "d-dimensional stuff" scales). Since $$\begin{gather*}(\text{large hex}) = 6(\text{small hex})+6(\text{small tri}) \quad \text{and}\\ (\text{large tri}) = (\text{small hex})+3(\text{small tri}),\end{gather*}$$ we find that \(h=6h/3^d + 6t/3^d\) and \(t=h/3^d+3t/3^d\). Surprisingly, this is enough information to solve for the value of \(3^d\). [2] We find \(3^d = \frac{1}{2}(9+\sqrt{33})\), so $$d=\log_3\left(\frac{9+\sqrt{33}}{2}\right) = 1.8184\ldots,$$ as claimed last week. As a final thought, why did we choose to slice the Menger sponge on this plane? Why not any of the (infinitely many) others?
Even if we only look at planes parallel to our chosen plane, a mesmerizing pattern emerges: More Information It takes a bit more work to turn the above computation of the hex-fractal's dimension into a full proof, but there are a few ways to do it. Possible methods include mass distributions [3] or similarity graphs [4]. This diagonal slice through the Menger sponge has been proposed as an exhibit at the Museum of Math. Sebastien Perez Duarte seems to have been the first to slice a Menger sponge in this way (see his rendering), and his animated cross section inspired my animation above. Thanks for reading! Notes We're assuming that the hex-fractal and tri-fractal have the same Hausdorff dimension. This is true, and it follows from the fact that a scaled version of each lives inside the other. [↩] There are actually two solutions, but the fact that h and t are both positive rules one out. [↩] Proposition 4.9 in: Kenneth Falconer. Fractal Geometry: Mathematical Foundations and Applications. John Wiley & Sons: New York, 1990. [↩] Section 6.6 in: Gerald Edgar. Measure, Topology, and Fractal Geometry (Second Edition). Springer: 2008. [↩]
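The dimension computation above can be handed to a tiny script. The two volume equations say that $(h,t)$ is an eigenvector of the coefficient matrix $\begin{pmatrix}6&6\\1&3\end{pmatrix}$ with eigenvalue $3^d$, and positivity of $h$ and $t$ picks the larger root:

```python
import math

# h = (6h + 6t)/3^d  and  t = (h + 3t)/3^d  mean 3^d is an eigenvalue of
# M = [[6, 6], [1, 3]]; the characteristic polynomial is x^2 - 9x + 12 = 0
# and positivity of (h, t) selects the larger root.
trace, det = 6 + 3, 6 * 3 - 6 * 1                  # 9 and 12
x = (trace + math.sqrt(trace ** 2 - 4 * det)) / 2  # 3^d = (9 + sqrt(33))/2
d = math.log(x, 3)
print(d)   # 1.8184...
```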
If we write the Schrödinger equation in imaginary time $\tau = it$, then one can easily show that the energy $E(\tau) = \langle \psi(\tau)| \hat{H} |\psi(\tau)\rangle$ is a diminishing quantity, i.e. $$\partial_{\tau} E(\tau) \le 0$$ Imagine that we have a state that depends on variational parameters denoted collectively by $\{R_{k} \}$. Can one prove in general that energy is a diminishing quantity if we impose some constraints on the state evolution, e.g. constant normalization? The equation of motion for $\{ R_{k}\}$ is derived from the stationary action principle: $$ S[R_{k}] = \int d\tau \langle R_{k}(\tau)| \partial_{\tau} + \hat{H} | R_{k}(\tau)\rangle - \nu(\tau) \langle R_{k}(\tau)| R_{k}(\tau) \rangle$$ It holds for a trivial case: $$|\psi(\tau)\rangle = \sum\limits_{k}C_{k}(\tau) |k\rangle$$ where $|k\rangle$ are eigenstates of the Hamiltonian.
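The trivial eigenbasis case can be checked numerically. The sketch below (an illustration, not a proof; the spectrum and amplitudes are arbitrary assumptions) evolves the coefficients as $C_k(\tau) = c_k e^{-E_k \tau}$, renormalizes, and confirms that $E(\tau)$ decreases monotonically toward the lowest eigenvalue present:

```python
import math

# Assumed toy spectrum and initial amplitudes (illustrative only).
E = [1.0, 2.5, 4.0]          # eigenvalues of H
c = [0.2, 0.7, 0.69]         # real initial amplitudes c_k

def energy(tau):
    """E(tau) = <psi|H|psi>/<psi|psi> with psi = sum_k c_k exp(-E_k tau)|k>."""
    w = [(ck * math.exp(-Ek * tau)) ** 2 for ck, Ek in zip(c, E)]
    return sum(wk * Ek for wk, Ek in zip(w, E)) / sum(w)

energies = [energy(0.5 * i) for i in range(10)]
decreasing = all(a >= b for a, b in zip(energies, energies[1:]))
print(decreasing, energies[-1])   # monotone decrease toward E_min = 1.0
```

The monotonicity here is the imaginary-time statement $\partial_\tau E = -2\,\mathrm{Var}(\hat H) \le 0$ specialized to the eigenbasis expansion.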
The way you pose your question, being so vague, makes it hard to pinpoint exactly the things you should avoid, the pitfalls you must be aware of, etc., because those things are often related to the context in which you substitute. It is common to just substitute a big expression that is repeated plenty of times with a simple variable, to make algebraic manipulation easier and quicker to perform, and then we substitute back. For example $$\left(\frac {(\cos^2(x)\cdot \sin(x) - 1)\cos(x)\sin(x)}{x^2 - 5} + 5\cos^2(x)\sin(x)\right)\cdot \cos^2(x)\sin(x)$$ To simplify that just put $a = \cos^2(x)\sin(x)$ and work from there. Or you make a substitution that puts in evidence some expression that yields a favorable result. Again in algebraic manipulation: $a^2 + 2a + 1 - b^2$, making $c = a + 1$ yields $c^2 - b^2 = (c - b)(c + b) = (a - b + 1)(a + b + 1)$ Some international maths olympiad problems even involve finding a good substitution to make them solvable! Substitution is also used a lot in limits, integration and differentiation, in a sense because it makes it easier to manipulate smaller things as well. In those types of substitutions you must make sure you check your new bounds on the integral, for example, or that your new variable is approaching the right value. For example a distracted student might not understand what this is $$\lim_{x \rightarrow 0^+} (1 + x)^{1/x} $$ But making $y = 1/x $ and noticing $x \rightarrow 0^+ \iff y \rightarrow \infty $ we get $$\lim_{y \rightarrow \infty} (1 + 1/y)^{y} = e $$ Thus substitution is really used a lot and its limits are those of your creativity. When you make a substitution though, just make sure everything stays coherent and that you did not lose any constraint/property.
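That last limit is easy to sanity-check numerically — the original form $(1+x)^{1/x}$ as $x \to 0^+$ and the substituted form $(1+1/y)^y$ as $y \to \infty$ creep toward the same value $e$ (a quick illustration, not a proof):

```python
import math

# (1 + x)^(1/x) -> e as x -> 0+, equivalently (1 + 1/y)^y -> e as y -> inf.
for x in (1e-2, 1e-4, 1e-6):
    print((1 + x) ** (1 / x))    # approaches e = 2.71828...

val = (1 + 1e-8) ** 1e8
print(abs(val - math.e))         # tiny
```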
At zero flow the pressure at the shower head is simply the hydrostatic pressure given by Pascal's law: $$p=p_0+\rho gy$$Where $p_0$ is the atmospheric pressure, $y$ the height difference between tank meniscus and shower head, $\rho$ the density of water ($g\approx 10\:\mathrm{m/s^2}$). The water pressure coming from the head is too low. What the OP really means here is that the shower only delivers a trickle of water (low flow speed). So here I'll evaluate the factors that influence that flow speed. When flow starts, $p$ is lowered by: 1. Viscous losses in the pipe: According to Darcy–Weisbach, the pressure loss in a straight pipe due to flow is given by: $$\Delta p=f_D\frac{\rho}{2}\frac{v^2}{D}L$$Where $f_D$ is a friction factor, $v$ is the flow speed ($\mathrm{m/s}$), $D$ the pipe diameter and $L$ the pipe length. For laminar flow: $$f_D=\frac{64\mu}{\rho D v}$$ Where $\mu$ is the viscosity of the fluid. So for laminar flow: $$\Delta p=\frac{32\mu v}{D^2}L$$ 2. Local resistances: Valves, bends, kinks, sudden changes in diameter etc. all cause a head loss $h_r$, usually modelled as: $$h_r=c\frac{v^2}{2g}$$ Where $c$ is a coefficient that depends on the type of local resistance. In the OP's stated problem the main local resistance is almost certainly the shower head itself. 3. Bernoulli's principle: Using Bernoulli's principle we can now write (for laminar flow): $$y=\frac{v^2}{2g}+\frac{32\mu v}{\rho gD^2}L+c_{shower}\frac{v^2}{2g}$$Or:$$y=(c_{shower}+1)\frac{v^2}{2g}+\frac{32\mu v}{\rho gD^2}L$$This is a simple quadratic equation in $v$ and if $c_{shower}$ and the other factors were known, then it could be solved quite easily. But in the absence of that information we can still say that $v$: will increase with $y$, will increase with $D$, will decrease with $L$, will decrease with $c_{shower}$. 4. Turbulent flow: In the case of turbulent flow (high $v$, $Re > 4000$), $f_D$ becomes a function of $v$, $f_D=f(v)$ and the calculation becomes more complicated.
But the general conclusions above still hold.
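As a sanity check, the quadratic above is easy to solve numerically. The parameter values below (head, pipe dimensions, and the shower-head loss coefficient) are illustrative assumptions, not figures from the original question:

```python
import math

# Hypothetical numbers -- only the formulas come from the answer above (SI units).
rho, g, mu = 1000.0, 9.81, 1.0e-3   # water density, gravity, dynamic viscosity
y, D, L = 2.0, 0.015, 5.0           # head (m), pipe diameter (m), pipe length (m)
c_shower = 10.0                     # assumed loss coefficient of the shower head

# y = (c+1) v^2 / (2g) + (32 mu L / (rho g D^2)) v  ->  a v^2 + b v - y = 0
a = (c_shower + 1.0) / (2.0 * g)
b = 32.0 * mu * L / (rho * g * D**2)

# Positive root of the quadratic
v = (-b + math.sqrt(b * b + 4.0 * a * y)) / (2.0 * a)
print(f"flow speed ~ {v:.2f} m/s")
```

Note that for numbers like these the resulting Reynolds number is well above 4000, so strictly the laminar friction term is inconsistent and the turbulent treatment (point 4) would apply; the sketch only illustrates solving the quadratic.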
Finite Lexicographic Order on Well-Ordered Sets is Well-Ordering Theorem Let $\left({S, \preceq}\right)$ be a well-ordered set, and let $T_n$ be the set of ordered $n$-tuples of elements of $S$, ordered lexicographically: $\left({x_1, x_2, \ldots, x_n}\right) \prec \left({y_1, y_2, \ldots, y_n}\right)$ if and only if: $\exists k: 1 \le k \le n$ such that $\forall 1 \le j < k: x_j = y_j$ but $x_k \prec y_k$ in $S$. Then for a given $n \in \N_{>0}$, $\preccurlyeq$ is a well-ordering on $T_n$. Proof It is straightforward to show that $\preccurlyeq$ is a total ordering on $T_n$. It remains to investigate whether $\preccurlyeq$ is a well-ordering. It is clear that $\left({T_1, \preccurlyeq}\right)$ is order isomorphic to $\left({S, \preceq}\right)$, so $\preccurlyeq$ is a well-ordering on $T_1$. Now, let us assume that $\preccurlyeq$ is a well-ordering on $T_k$ for some $k \in \N: k \ge 1$, and let $A$ be a non-empty subset of $T_{k+1}$. Let $A_1$ be the set of all of the first components of the ordered $(k+1)$-tuples that comprise $A$. Since $\preceq$ is a well-ordering on $S$, $A_1$ has a smallest element $x$. Let $A_x$ be the subset of $A$ in which the first component equals $x$. We may consider $A_x$ to be a subset of $T_k$ in which this first component $x$ has been suppressed. But we assumed that $T_k$ is well-ordered by $\preccurlyeq$. So $A_x$ contains a smallest element $\left({x, x_2, x_3, \ldots, x_{k+1}}\right)$ under $\preccurlyeq$. By the minimality of $x$ in $A_1$, this element $\left({x, x_2, x_3, \ldots, x_{k+1}}\right)$ is the smallest element of $A$ under $\preccurlyeq$. Hence, by definition, $T_{k+1}$ is well-ordered by $\preccurlyeq$. The result follows by induction. $\blacksquare$
But if you don't want to have a Google account: Chrome is really good. Much faster than FF (I can't run FF on either of the laptops here) and more reliable (it restores your previous session if it crashes with 100% certainty). And Chrome has a Personal Blocklist extension which does what you want. : ) Of course you already have a Google account but Chrome is cool : ) Guys, I feel a little defeated in trying to understand infinitesimals. I'm sure you all think this is hilarious. But if I can't understand this, then I'm yet again stalled. How did you guys come to terms with them, later in your studies? Do you know the history? Calculus was invented based on the notion of infinitesimals. There were serious logical difficulties found in it, and a new theory developed based on limits. In modern times using some quite deep ideas from logic a new rigorous theory of infinitesimals was created. @QED No. This is my question as best as I can put it: I understand that lim_{x->a} f(x) = f(a), but then to say that the gradient of the tangent curve is some value, is like saying that when x=a, then f(x) = f(a). The whole point of the limit, I thought, was to say, instead, that we don't know what f(a) is, but we can say that it approaches some value. I have a problem with showing that the limit of the following function$$\frac{\sqrt{\frac{3 \pi}{2n}} -\int_0^{\sqrt 6}(1-\frac{x^2}{6}+\frac{x^4}{120})^ndx}{\frac{3}{20}\frac 1n \sqrt{\frac{3 \pi}{2n}}}$$equals $1$, with $n \to \infty$. @QED When I said, "So if I'm working with function f, and f is continuous, my derivative dy/dx is by definition not continuous, since it is undefined at dx=0." I guess what I'm saying is that (f(x+h)-f(x))/h is not continuous since it's not defined at h=0. @KorganRivera There are lots of things wrong with that: dx=0 is wrong. dy/dx - what's y? "dy/dx is by definition not continuous" it's not a function, how can you ask whether or not it's continuous, ... etc.
In general this stuff with 'dy/dx' is supposed to help as some kind of memory aid, but since there's no rigorous mathematics behind it - all it's going to do is confuse people in fact there was a big controversy about it since using it in obvious ways suggested by the notation leads to wrong results @QED I'll work on trying to understand that the gradient of the tangent is the limit, rather than the gradient of the tangent approaches the limit. I'll read your proof. Thanks for your help. I think I just need some sleep. O_O @NikhilBellarykar Either way, don't highlight everyone and ask them to check out some link. If you have a specific user which you think can say something in particular feel free to highlight them; you may also address "to all", but don't highlight several people like that. @NikhilBellarykar No. I know what the link is. I have no idea why I am looking at it, what should I do about it, and frankly I have enough as it is. I use this chat to vent, not to exercise my better judgment. @QED So now it makes sense to me that the derivative is the limit. What I think I was doing in my head was saying to myself that g(x) isn't continuous at x=h so how can I evaluate g(h)? But that's not what's happening. The derivative is the limit, not g(h). @KorganRivera, in that case you'll need to be proving $\forall \varepsilon > 0,\,\,\,\, \exists \delta,\,\,\,\, \forall x,\,\,\,\, 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$ by picking some correct L (somehow) Hey guys, I have a short question a friend of mine asked me which I cannot answer because I have not learnt about measure theory (or whatever is needed to answer the question) yet. He asks what is wrong with \int_0^{2 \pi} \frac{d}{dn} e^{inx} dx when he applies Lebesgue's dominated convergence theorem, because apparently, if he first integrates and then differentiates, the result is 0 but if he first differentiates and then integrates it's not 0. Does anyone know?
Can anyone please help me to prove $X=M$ using the following set of equations (first-order logic) in Isabelle/HOL? $N \geq M$; $\forall n.\ 0\leq n<N \rightarrow n<M$; $X=N$; where $N, M, X$ are integer constants and $n$ is an integer variable. I have also tried java-mysql but it shows this: Reading package lists… Done Building dependency tree Reading state information… Done E: Unable to locate package java-mysql libmysql-java — Reading package lists… Done Building dependency tree Reading state information… Done E: Unable to locate package libmysql-java I have just downloaded the package and now I don't know where to place it. Apologies, I am relatively new to SharePoint/JSON, so forgive me if I am not seeing the obvious. I would be INCREDIBLY grateful if someone would help. I would like to apply conditional formatting to a date field using a nested IF statement. IF the current field is blank, then set the CSS class to 'blocked'. However, if the current field is populated, run a second IF: IF the current field is more than 1 year before today, then set the CSS class to 'blocked'; however, if the current field is within the last year, set the CSS class to 'good'. I have run the following, without success. No formatting is applied whatsoever. { "$schema": "https://developer.microsoft.com/json-schemas/sp/v2/column-formatting.schema.json", "elmType": "div", "attributes": { "class": "=if(toString(@currentField)=='', 'sp-field-severity--blocked', if([@currentField]+31104000000 < @now,'sp-field-severity--blocked', 'sp-field-severity--good'))" }, "txtContent": "@currentField" } HOWEVER, when I split the conditions into two, they work independently.
{ "$schema": "https://developer.microsoft.com/json-schemas/sp/column-formatting.schema.json", "elmType": "div", "attributes": { "class": "=if(toString(@currentField)=='','sp-field-severity--blocked', 'sp-field-severity--good')" }, "txtContent": "@currentField" } { "$schema": "https://developer.microsoft.com/json-schemas/sp/v2/column-formatting.schema.json", "elmType": "div", "attributes": { "class": "=if(@currentField+31104000000 < @now, 'sp-field-severity--blocked', 'sp-field-severity--good')" }, "txtContent": "@currentField" } My next step would be to add the corresponding CSS class icon into the field. We have a table with a column named A of type nvarchar(23). The following query always returns 23, which means that the actual length of all records is 23. select length(trim(req.A)), count(*) from tableName req group by length(trim(req.A)); |length(trim(req.A))|count(*)| ------------------------------ |23 |1006 | But when we select from this table with this query, it behaves differently and it seems that the last character is always removed in the result grid view in PL/SQL Developer. select LENGTHB(req.A) lenb, length(req.A) len, req.* from tableName req where req.A = 'NHBBBBB1398052635902235'; -- Note the equal sign and the last character (5) of the where clause the result is: |lenb|len| A | --------------------------------- |46 |23 |NHBBBBB139805263590223| As you can see, the last character (5) is removed in the select result. Can you please explain what happens? Is this related to PL/SQL Developer configs? How can this be solved? If you are looking for a letter that will get things going for you, you have found the right person to help you in writing or editing the ideal letter. If you are applying for any post or event, be it a grant, a job position, an admission or some other position or opportunity, I will create an impeccable letter of intent (or statement of purpose) that will ensure that your desire is secured.
Just provide me with a brief and all the information related to the application and I will make you a letter you will fall in love with. I have this simple if statement where there are two columns. Column 1 = Score Column 2 = Test This is the IF statement code: =IF([Score]>70,"A","F") If the score is greater than 70 it should give an A in the Test column. If it's less than 70, it's F. The picture below is the issue. Am I doing something wrong here? I know you need prepared statements and such to avoid SQL injection, and I've seen that there are different questions about exploits for SELECT, INSERT, UPDATE injectable queries. But I couldn't come up with an exploit sample for the USE statement. Suppose I have an injectable single statement that looks like this: USE `data_from_attacker`; What could the attacker do if they can put anything in place of data_from_attacker, considering I'm looking for an exploit example that is not just selecting a DB (i.e. selecting information_schema or the mysql DB seems harmless, as the next queries won't work because tables won't exist; and selecting a DB that does not exist also seems harmless)? Also, consider that MySQL will only interpret the 1st query, so the attacker cannot inject: mysql`; SELECT * FROM `users Can you find such an exploit for MySQL? The USE syntax seems very "poor" for such injection… I am working on an AWK script which should replace the value of the 3rd column from an Excel CSV sheet with a particular value, and the awk should ignore the first and the last line.
The problem is that the string I am trying to update is causing an issue. Below is the command I am using: awk -v sq="'" -F, ' t{print t} {a=t=$0} NR>2{$3=sqops_data<dbms=Teradata::instance=idw-prod>sq;t=$0} END {print a} ' OFS=, test1.csv But it is giving me a syntax error at ::. I have created the following list workflow inside SharePoint Online using SharePoint Designer: now as shown in the above screen, I am updating the current item inside the IF statements, but inside some IF statements I want to end the workflow so the other IF statements will not be executed. Is this possible? It seems I cannot define "Go to End of Workflow" inside the IF statements. Any advice or help? Thanks
Arc length formula is used to calculate the measure of the distance along the curved line making up the arc (a segment of a circle). In simple words, the distance that runs through the curved line of the circle making up the arc is known as the arc length. It should be noted that the arc length is longer than the straight-line distance between its endpoints. Formulas for Arc Length The formula to measure the length of the arc is – Arc Length Formula (if ϴ is in degrees) s = 2 π r (θ/360) Arc Length Formula (if ϴ is in radians) s = ϴ × r Arc Length Formula in Integral Form s = \(\int_{a}^{b}\sqrt{1+(\frac{dy}{dx})^2}\,dx\) Denotations in the Arc Length Formula s is the arc length r is the radius of the circle θ is the central angle of the arc Example Questions Using the Formula for Arc Length Question 1: Calculate the length of an arc if the radius of the arc is 8 cm and the central angle is 40°. Solution: Radius, r = 8 cm Central angle, θ = 40° Arc length = 2 π r × (ϴ/360) So, s = 2 × π × 8 × (40/360) = 5.585 cm Question 2: What is the arc length for a function f(x)=6 between x=4 and x=6? Solution: Since the function is a constant, its derivative is 0. So, the arc length is s = \(\int_{4}^{6}\sqrt{1+(0)^2}\,dx\) So, arc length (s) = (6-4) = 2. Practice Questions Based on Arc Length Formula What would be the length of the arc formed by 75° of a circle having a diameter of 18 cm? The length of an arc formed by 60° of a circle of radius r is 8.37 cm. Find the radius (r) of that circle. Calculate the perimeter of a semicircle of radius 1. cm using the arc length formula.
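The formulas above are easy to script. Here is a small sketch in Python (the function names are just for illustration):

```python
import math

def arc_length_deg(r, theta_deg):
    """Arc length s = 2*pi*r*(theta/360) for a central angle in degrees."""
    return 2 * math.pi * r * theta_deg / 360

def arc_length_rad(r, theta_rad):
    """Arc length s = theta * r for a central angle in radians."""
    return theta_rad * r

# Question 1 above: r = 8 cm, theta = 40 degrees
s = arc_length_deg(8, 40)
print(round(s, 3))  # 5.585
```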
I'll assume that you want the surface gravity to be the same as Earth's. This means that the average density must be 1/3 that of Earth, since for a body, if $g_s$ is the surface gravity, $\rho$ the density, and $r$ the radius, $$g_s=\frac{GM}{r^2} = \frac{G}{r^2}\cdot\frac{4}{3}\pi\rho r^3 = \frac{4}{3}\pi G\rho r$$ so $g_s \propto \rho r$, and increasing $r$ requires a corresponding decrease in $\rho$ if $g_s$ is to remain constant. Since the density of Earth is 5.5 times that of water, the density of Earth3 must be on the order of $$\rho = \frac{5.5}{3} \approx 1.8$$ (in g/cm³). This low density means there is no iron core and no magnetic field. As a result, the surface will have a slightly increased solar radiation dose. Actually, it's not clear that there could be any significant rocky material at all, since virtually no rock has a density that low, even at the low pressures associated with surface conditions. Quartz, for instance (aka "sand"), has a density of 2.6. The surface of Earth3 has to be water, and to a very considerable depth, with no reasonable expectation of land anywhere. Of course, if you allow the planetary makeup to remain the same as Earth's, the increased pressure at the core will increase the density, so the surface gravity will be somewhere north of 3 g's. I'm not competent to figure the exact increase, so someone else will have to fill this one in. Now, about those rings. Let's assume that the Earth3 system has the same proportions as Saturn. Saturn has a radius 9.88 times that of Earth, while the rings extend from 6,600 to 121,000 km above the surface. Scaling this to Earth3's radius (19,200 km) gives rings extending from 2,000 km to 37,000 km above the equator. I've not been able to find the reflectivity of Saturn's rings, but let's assume 100% for incidence angles less than 30 degrees. If Earth3 has an axial tilt the same as Earth's (23 degrees), it's clear that winters will be ferocious, as the rings will shade much of the hemisphere during deep winter.
The "vertical" extent of the ring's shadow will be $$x = 37{,}000\ \mathrm{km} \times \sin 23^\circ \approx 14{,}500\ \mathrm{km}$$ which, compared to a radius of 19,200 km, gives full shadow to $$\theta = \cos^{-1}\left(\frac{14{,}500}{19{,}200}\right) \approx 41^\circ$$ Since this is virtually all of the tropics (except for about a 2,000 km belt north or south of the equator), and much of the temperate zone (especially the warmer parts), the intensification of winter temperatures should be fairly severe. Finally, the moons. Meh. If Earth3 is like Mars, with dinky little moons, there will obviously be no tides to speak of, although the existence of tidal pools is often speculated to be a possible source of the earliest life forms. Two large moons with much the effect of Luna suggests that the two orbit each other, but I suspect that this is difficult to justify in terms of the mechanics of formation.
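The shadow numbers above can be reproduced with a few lines of Python; all inputs are the approximate figures used in this answer:

```python
import math

# Approximate figures from the discussion above
r_planet = 19200.0      # Earth3 radius, km (3x Earth)
ring_outer = 37000.0    # outer ring edge above the equator, km
tilt_deg = 23.0         # axial tilt, degrees

# "Vertical" extent of the ring shadow at solstice
x = ring_outer * math.sin(math.radians(tilt_deg))
print(f"shadow extent: {x:.0f} km")    # ~14,500 km

# Latitude bound as computed in the answer (arccos of the ratio)
theta = math.degrees(math.acos(x / r_planet))
print(f"theta: {theta:.0f} degrees")   # ~41 degrees
```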
Traditionally, these filters were implemented as analog filters, and they are different from the usual discrete-time first-order FIR pre-emphasis filter used in speech processing. The transfer function of such an analog pre-emphasis filter is $$H(s)=g\frac{1+s\tau_1}{1+s\tau_2},\qquad\tau_1\gt\tau_2,\tag{1}$$ where $g$ is the DC gain, and $\tau_1$ and $\tau_2$ are two time constants determining the locations of the zero and the pole: $$s_0=-\frac{1}{\tau_1}\\s_{\infty}=-\frac{1}{\tau_2}\tag{2}$$ If the pole (or the zero) is given in terms of a frequency $f_i$, then the following relation holds: $$\tau_i=\frac{1}{2\pi f_i},\qquad i\in\{1,2\}\tag{3}$$ So for a given frequency $f_i$ you can use $(3)$ to determine the corresponding time constant, from which you obtain the desired transfer function $(1)$. If you want to implement that filter in discrete time, the most common option is to use the bilinear transform: $$s=\frac{2}{T}\frac{z-1}{z+1}\tag{4}$$ where $T$ is the sampling period, i.e., the inverse of the sampling frequency. Note that $(4)$ is the bilinear transform without pre-warping, which means that the frequency axis will be warped, so neither the pole frequency nor the zero frequency of the analog filter will be exactly realized in the discrete-time domain. You can pre-warp the analog frequencies (or time constants) to make sure that the resulting discrete-time filter has the desired pole and zero frequencies. In order to realize a (pole or zero) frequency $f_i$ in the discrete-time domain, you need to use the following time constant in the analog filter transfer function $(1)$: $$\tau_i=\frac{T}{2\tan\left(\frac{\pi f_i}{f_s}\right)}\tag{5}$$ where $f_s=1/T$ is the sampling frequency. You can find more information on pre-warping in this answer.
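As an illustration, the pre-warped design in $(1)$–$(5)$ might be carried out with SciPy's `bilinear`; the sampling rate and corner frequencies below are arbitrary example values, not from the answer above:

```python
import numpy as np
from scipy.signal import bilinear

fs = 48000.0                      # sampling frequency (example value)
f_zero, f_pole = 1000.0, 10000.0  # desired zero/pole frequencies, f_zero < f_pole
g = 1.0                           # DC gain
T = 1.0 / fs

# Pre-warped time constants, eq. (5): tau_i = T / (2*tan(pi*f_i/fs))
tau1 = T / (2.0 * np.tan(np.pi * f_zero / fs))
tau2 = T / (2.0 * np.tan(np.pi * f_pole / fs))

# Analog H(s) = g*(1 + s*tau1)/(1 + s*tau2), coefficients in descending powers of s
b_analog = [g * tau1, g]
a_analog = [tau2, 1.0]

# Discretize via the bilinear transform, eq. (4)
b_dig, a_dig = bilinear(b_analog, a_analog, fs)

# DC gain (z = 1) should equal g; the gain at Nyquist (z = -1) equals g*tau1/tau2
print(sum(b_dig) / sum(a_dig))
```

Because z = 1 maps to s = 0 and z = −1 maps to s = ∞ under the bilinear transform, the DC and Nyquist gains of the digital filter match the analog prototype exactly, while the pre-warping in (5) pins the zero and pole to the intended frequencies.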
Volume 57, № 3, 2005 Ukr. Mat. Zh. - 2005. - 57, № 3. - pp. 291–304 We obtain exact estimates of the approximation in the metrics $C$ and $L_2$ of functions, that are defined on a sphere, by means of linear methods of summation of the Fourier series in spherical harmonics in the case where differential and difference properties of functions are defined in the space $L_2$. Polynomial Form of de Branges Conditions for the Denseness of Algebraic Polynomials in the Space $C_w^0$ Ukr. Mat. Zh. - 2005. - 57, № 3. - pp. 305–319 In the criterion for polynomial denseness in the space $C_w^0$ established by de Branges in 1959, we replace the requirement of the existence of an entire function by an equivalent requirement of the existence of a polynomial sequence. We introduce the notion of strict compactness of polynomial sets and establish sufficient conditions for a polynomial family to possess this property. Ukr. Mat. Zh. - 2005. - 57, № 3. - pp. 320–328 In this paper, we redefine the torus homotopy groups of Fox and give a proof of the split exact sequence of these groups. Evaluation subgroups are defined and are related to the classical Gottlieb subgroups. With our constructions, we recover the Abe groups and prove some results of Gottlieb for the evaluation subgroups of Fox homotopy groups. We further generalize Fox groups and define a group $\tau = \left[ \sum\left(V \times WU*\right), X\right]$ in which the generalized Whitehead product of Arkowitz is again a commutator. Finally, we show that the generalized Gottlieb group lies in the center of $\tau$, thereby improving a result of Varadarajan. Ukr. Mat. Zh. - 2005. - 57, № 3. - pp. 329–337 In this article, we study conditions for the asymptotic equivalence of differential equations in Hilbert spaces. We also discuss the relationship between the properties of solutions of differential equations of triangular form and those of truncated differential equations.
Asymptotic Behavior of Unbounded Solutions of Essentially Nonlinear Second-Order Differential Equations. I Ukr. Mat. Zh. - 2005. - 57, № 3. - pp. 338–355 We establish asymptotic representations for one class of unbounded solutions of second-order differential equations whose right-hand sides contain a sum of terms with nonlinearities of a more general form than nonlinearities of the Emden-Fowler type. Ukr. Mat. Zh. - 2005. - 57, № 3. - pp. 356–365 We investigate different measure transformations of the mapping-multiplication type in the cases where the corresponding chains of differential equations can be efficiently found and integrated. Ukr. Mat. Zh. - 2005. - 57, № 3. - pp. 366–376 We consider the problem of solvability and optimization for a pseudohyperbolic operator of the general form. We prove theorems on existence and uniqueness for various right-hand sides of the equation. The results obtained are applied to the problem of trajectory-final controllability. Ukr. Mat. Zh. - 2005. - 57, № 3. - pp. 377–387 In spaces of classical functions with power weight, we prove the correct solvability of a boundary-value problem for parabolic equations with an arbitrary power order of degeneracy of coefficients with respect to both time and space variables. Ukr. Mat. Zh. - 2005. - 57, № 3. - pp. 388–393 For a bounded operator that is not a sum of scalar and compact operators and is similar to a diagonal operator, we prove that it is a linear combination of three idempotents. It is also proved that any self-adjoint diagonal operator is a linear combination of four orthoprojectors with real coefficients. Ukr. Mat. Zh. - 2005. - 57, № 3. - pp. 394–399 We study some problems of the approximation of continuous functions defined on the real axis. As approximating aggregates, the de la Vallee-Poussin operators are used. 
We establish asymptotic equalities for upper bounds of the deviations of the de la Vallee-Poussin operators from functions of low smoothness belonging to the classes \(\hat C^{\bar \psi } \mathfrak{N}\). Ukr. Mat. Zh. - 2005. - 57, № 3. - pp. 400–420 We consider a continuous function that changes its sign on an interval finitely many times and pose the problem of the approximation of this function by a polynomial that inherits its sign. For this approximation, we obtain (in the case where this is possible) Jackson-type estimates containing modified weighted moduli of smoothness of the Ditzian-Totik type. In some cases, constants in these estimates depend substantially on the location of points where the function changes its sign. We give examples of functions for which these constants are unimprovable. We also prove theorems that are analogous, in a certain sense, to inverse theorems of approximation without restrictions. On Isometric Immersion of Three-Dimensional Geometries $SL_2$, $Nil$ and $Sol$ into a Four-Dimensional Space of Constant Curvature Ukr. Mat. Zh. - 2005. - 57, № 3. - pp. 421–426 We prove the nonexistence of isometric immersion of geometries $\text{Nil}^3$, $\widetilde{SL}_2$ into the four-dimensional space $M_c^4$ of the constant curvature $c$. We establish that the geometry $\text{Sol}^3$ cannot be immersed into $M_c^4$ if $c \neq -1$ and find the analytic immersion of this geometry into the hyperbolic space $H^4(-1)$. Ukr. Mat. Zh. - 2005. - 57, № 3. - pp. 427–431 We study classes of convex functions on $(1, \infty)$ that tend to zero at infinity. Relations between different elements of these classes are determined.
Multi-Decimation Stage Filtering for Sigma Delta ADCs: Design and Optimization During my research on digital FIR decimation filters I have been developing various Matlab scripts and functions, which I later decided to consolidate in the form of a toolbox. I developed this toolbox to assist and automate the process of designing multi-stage decimation filter(s). The toolbox is published as open source at the MathWorks web-site, and my dissertation is open to the public online as well. The toolbox has a wide set of examples to guide the user through the steps. Furthermore, there is a design template. I hope you all enjoy and get the best of it. I will be glad for your constructive feedback. Overview The Multi-Stage Decimation toolbox (MSD-toolbox) is developed in the Matlab language. Designing a decimation filter system starts with definite specifications such as the sampling frequency ($f_s$), oversampling ratio (OSR), passband frequency ($f_{pb}$), signal-to-noise ratio (SNR, regarded also as ADC resolution), in-band noise (IBN), and a stimuli bit-stream from the sigma delta modulator. It should be noted that the transition bandwidth ($\Delta f=(f_{sb}-f_{pb})/f_{sb}$) can be provided in the input specifications instead of the passband frequency. The stimuli bit-stream permits accurate analysis of the intra decimation stages. Moreover, involving the stimuli affords performing spectral analysis in addition to the frequency-domain analysis of the decimation filter, while involving the acceptable tolerance in the IBN maintains additional flexibility for filter coefficient optimization. The stimuli bit-stream can be imported from the DelSig-toolbox [1] or the DISCO-toolbox [2]. The MSD-toolbox has several routines and sub-programs for calculating implementation parameters and performance evaluation. The sub-programs and routines are based on state-of-the-art algorithms.
The essential routines are: Calculating $k$ and $M$ ($R_{T}(k,M)$) Calculating $h_k$ and $Q$ (P-M-E) Coefficient optimization (Optimization) Cost estimation (Cost) Further details can be found in my dissertation at DISSERTATION. $k$ and $M$ Calculations The optimal number of decimation stages and the decimation factor at each stage are calculated for minimum computational effort ($R_T$), as given in $$ {R_T} = {D_\infty }{f_s}\sum\limits_{i = 1}^k {\frac{{{M_i}}}{{\left( {\prod\limits_{j = 1}^i {{M_j}} } \right)\left( {1 - \frac{{{f_{sb}} + {f_{pb}}}}{{{f_s}}}\prod\limits_{j = 1}^i {{M_j}} } \right)}}} $$ The problem is constrained to $k\in\lbrace2,3,4\rbrace$ and $M_k=2$ for even $M$'s, where $M$ is the overall decimation factor (equivalent to the OSR), $f_s$ is the sampling frequency, $f_{pb}$ is the passband frequency, $f_{sb}$ is the stopband frequency, and $D_\infty$ is a function of the passband ($\delta_{pb}$) and stopband ($\delta_{sb}$) ripples, as given in $$ {D_\infty } = {\log _{10}}{\delta _{sb}}[0.005309{({\log _{10}}{\delta _{pb}})^2} + 0.07114{\log _{10}}{\delta _{pb}} - 0.4761] $$ $$ - [0.00266{({\log _{10}}{\delta _{pb}})^2} + 0.5941{\log _{10}}{\delta _{pb}} + 0.4278] $$ where $k$ is calculated for minimum $R_T$ at distinct values of $M$. Detailed analysis for optimizing the number of decimation stages and the decimation factor for each stage is given in [3], [4], and [5]. $\delta_{pb}$ and $\delta_{sb}$ Calculations The passband ripple ($\delta_{pb}$) and stopband ripple ($\delta_{sb}$) are calculated using iterative simulations preserving a predefined acceptable tolerance in the in-band noise (IBN). This is done by designing a single-stage filter and tuning $\delta_{pb}$ and $\delta_{sb}$ over a certain range or a set of discrete values. The $\delta_{sb}$ remains the same for all $k$ stages, while $\delta_{pb_{i}}=\delta_{pb}/k$ for stage $i$.
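To make the $D_\infty$ expression concrete, here is a direct transcription in Python; the ripple values in the example are arbitrary:

```python
import math

def d_inf(delta_pb, delta_sb):
    """D_inf(delta_pb, delta_sb) exactly as written in the formula above."""
    lp = math.log10(delta_pb)
    ls = math.log10(delta_sb)
    return (ls * (0.005309 * lp**2 + 0.07114 * lp - 0.4761)
            - (0.00266 * lp**2 + 0.5941 * lp + 0.4278))

# Example: 0.1% passband ripple, 0.01% (-80 dB) stopband ripple
print(d_inf(1e-3, 1e-4))  # ~3.90
```

With $D_\infty$ in hand, the $R_T$ sum above can be evaluated for each candidate factorization of $M$ to pick the cheapest stage split.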
$h_k$ and $Q$ Calculations The number of decimation stages and the decimation factor dedicated to each stage, calculated above, together with the given design specifications sustain all the parameters required for calculating the filter coefficients ($h_k$) of each decimation filter stage employing the Parks-McClellan Equiripple algorithm. Following the filter coefficient ($h_k$) calculation comes the quantization bit-width ($Q$) calculation step, in order to compute the scaled filter coefficient set ($\hat h_k$). Initially, a set of parameters is calculated for each stage, such as the baseband frequency ($f_B$), passband frequency ($f_{pb}$), and stopband frequency ($f_{sb}$). The $f_B$ is calculated by $$ {f_B} = \frac{{{f_s}}}{{2{\rm{OSR}}}} $$ where OSR is the oversampling ratio and $f_s$ is the sampling frequency. The $f_{pb}$ and $f_{sb}$ are calculated by the equations given below, and the intermediate stopband frequency ($f_{sb_{i}}$) is calculated by $$ {f_{pb}} = {f_B}(1 - \Delta f) $$ $$ {f_{sb}} = {f_B} $$ $$ {f_{s{b_i}}} = \frac{{{f_s}}}{{\prod\limits_{i = 1}^k {{M_i}} }} - {f_B}\\ $$ The FIR filter has distinct implementation types, such as standard FIR (FIR), half-band (HB), and multi-band (MB). The implementation type affects the filter response and coefficients. The Parks-McClellan Equiripple (P-M-E) algorithm is used to obtain the FIR filter coefficients ($h_k$) for the predefined filter types except the CIC filter type. Subsequently, the required quantization bit-width ($Q$) is calculated iteratively, preserving minimum mean error (ME) or minimum increase in the in-band noise (IBN). Samples References [1] R. Schreier and G.C. Temes. Understanding Delta-Sigma Data Converters. IEEE Press, Piscataway, NJ, 2005. [2] A. Buhmann, M. Keller, M. Maurer, M. Ortmanns, and Y. Manoli. DISCO - a toolbox for the discrete-time simulation of continuous-time sigma-delta modulators using MATLAB. In Proc.
Midwest Symposium on Circuits and Systems (MWSCAS'07), pages 1082--1085, Aug. 2007. [3] R. Crochiere and L. Rabiner. Optimum FIR digital filter implementations for decimation, interpolation, and narrow-band filtering. IEEE Trans. Acoustics, Speech, and Signal Processing, 23(5):444--456, 1975. [4] M. Coffey. Optimizing multistage decimation and interpolation processing. IEEE Signal Processing Letters, 10(4):107--110, 2003. [5] M. Coffey. Optimizing Multistage Decimation and Interpolation Processing: Part II. IEEE Signal Processing Letters, 14(1):24--26, 2007. Hi. Your definition of transition bandwidth being '(f_sb - f_pb)/f_sb' doesn't seem correct to me. Shouldn't that transition bandwidth be '(f_sb - f_pb)/f_s' ? Dear Rick, Thank you very much for your reply. Your remark is totally valid, however, I believe the definition of the $$\Delta f$$ is based on how it is used within the toolbox. The formulas used are: $$f_b = \frac{f_s}{2OSR}$$ $$f_p = f_b(1 - \Delta f)$$ $$f_c = \frac{f_s}{OSR} - f_b$$ Let's consider the following simple numerical example, for $$f_s = 960kHz, OSR = 24, f_p = 18kHz, f_c = 20kHz$$. $$\Delta f = (20-18)/20 = 0.1$$ $$f_p = f_b(1 - \Delta f) = 20(1 - 0.1) = 18$$ where $$f_b = \frac{f_s}{2OSR}$$ Hello ahmedshahein. I'm not able to understand your message here. You have used the phrase "baseband frequency" (f_b). I've never heard that phrase before from people when they talk or write about decimation filters. You defined that mysterious f_b variable in terms of an undefined variable that you call OSR. The phrase "oversampling ratio" makes me think of some kind of ratio (a fraction) being "something" divided by "something else." But I have no idea of what is in the numerator or denominator of that mysterious ratio. Using non-standard (non-traditional) terminology makes it hard for a reader to understand your blog. Can you tell us in words what the phrases "f_b = baseband frequency" and "OSR" mean? Thanks. Hi Rick, The terminology shall be the standard one used for Sigma-Delta modulators (SDM).
The OSR is the over-sampling ratio, and f_b is the base-band frequency of the SDM. The f_b is equivalent to the cut-off frequency of the decimation filter used for decimating the SDM output. I hope it is more clear now. Regards. I think the blog is not about decimation in general but rather about a specific corner (that of SDM), so the title is a bit misleading. Dear Kaz, Thanks for your advice, I have renamed the title. Regards.
AI News, Speech Processing for Machine Learning: Filter banks, Mel-Frequency Cepstral Coefficients (MFCCs) and What's In-Between On Thursday, October 4, 2018

Speech processing plays an important role in any speech system, whether it's Automatic Speech Recognition (ASR), speaker recognition or something else. A pre-emphasis filter is useful in several ways: (1) it balances the frequency spectrum, since high frequencies usually have smaller magnitudes compared to lower frequencies, (2) it avoids numerical problems during the Fourier transform operation and (3) it may also improve the Signal-to-Noise Ratio (SNR). The pre-emphasis filter can be applied to a signal \(x\) using the first-order filter in the following equation: \[y(t) = x(t) - \alpha x(t-1)\] which can be easily implemented in a single line of code; typical values for the filter coefficient (\(\alpha\)) are 0.95 or 0.97, pre_emphasis = 0.97. Pre-emphasis has a modest effect in modern systems, mainly because most of its motivations can be achieved using mean normalization (discussed later in this post), except for avoiding Fourier transform numerical issues, which should not be a problem in modern FFT implementations. The rationale behind framing is that frequencies in a signal change over time, so in most cases it doesn't make sense to do the Fourier transform across the entire signal: we would lose the frequency contours of the signal over time. A Hamming window has the following form: \[w[n] = 0.54 - 0.46 \cos \left( \frac{2\pi n}{N - 1} \right)\] where \(0 \leq n \leq N - 1\) and \(N\) is the window length. The final step in computing filter banks is applying triangular filters, typically 40 filters (nfilt = 40) on a Mel scale, to the power spectrum to extract frequency bands.
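The pre-emphasis equation and the Hamming window above can be sketched in plain Python (a sketch of the formulas as written, not the blog's exact NumPy code):

```python
import math

def pre_emphasis(x, alpha=0.97):
    # y(t) = x(t) - alpha * x(t-1); the first sample is passed through unchanged
    return [x[0]] + [x[t] - alpha * x[t - 1] for t in range(1, len(x))]

def hamming(N):
    # w[n] = 0.54 - 0.46 * cos(2*pi*n / (N - 1)), for 0 <= n <= N-1
    return [0.54 - 0.46 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]

y = pre_emphasis([1.0, 1.0, 1.0])   # constant input is strongly attenuated
w = hamming(5)                      # symmetric window, peak value 1 at the centre
```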
We can convert between Hertz (\(f\)) and Mel (\(m\)) using the following equations: \[m = 2595 \log_{10} \left(1 + \frac{f}{700}\right)\] \[f = 700 \left(10^{m/2595} - 1\right)\] Each filter in the filter bank is triangular, with a response of 1 at the center frequency, decreasing linearly towards 0 until it reaches the center frequencies of the two adjacent filters, where the response is 0, as shown in this figure: Filter bank on a Mel scale. This can be modeled by the following equation (taken from here): \[ H_m(k) = \begin{cases} 0 & k < f(m-1) \\ \dfrac{k - f(m-1)}{f(m) - f(m-1)} & f(m-1) \leq k \leq f(m) \\ 1 & k = f(m) \\ \dfrac{f(m+1) - k}{f(m+1) - f(m)} & f(m) \leq k \leq f(m+1) \\ 0 & k > f(m+1) \end{cases} \]
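The Hz/Mel conversion and the Mel-spaced centre frequencies of the filters can be sketched as follows (plain Python; the helper names are mine):

```python
import math

def hz_to_mel(f):
    # m = 2595 * log10(1 + f / 700)
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    # f = 700 * (10**(m / 2595) - 1)
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_points(f_low, f_high, nfilt):
    # nfilt triangular filters need nfilt + 2 edge/centre points,
    # spaced uniformly on the Mel scale and mapped back to Hertz.
    m_low, m_high = hz_to_mel(f_low), hz_to_mel(f_high)
    step = (m_high - m_low) / (nfilt + 1)
    return [mel_to_hz(m_low + i * step) for i in range(nfilt + 2)]
```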
Faddeeva Package

Revision as of 22:55, 29 October 2012

Contents

Faddeeva / complex error function

Steven G.
Johnson has written free/open-source C++ code (with wrappers for other languages) to compute the scaled complex error function $w(z) = e^{-z^2}\mathrm{erfc}(-iz)$, also called the Faddeeva function (and also the plasma dispersion function), for arbitrary complex arguments $z$ to a given accuracy. Given the Faddeeva function, one can easily compute Voigt functions, the Dawson function, and similar related functions. Download the source code from: http://ab-initio.mit.edu/Faddeeva_w.cc (updated 29 October 2012)

Usage

To use the code, add the following declaration to your C++ source (or header file): #include <complex> extern std::complex<double> Faddeeva_w(std::complex<double> z, double relerr=0); The function Faddeeva_w(z, relerr) computes $w(z)$ to a desired relative error relerr. Omitting the relerr argument, or passing relerr=0 (or any relerr less than machine precision $\varepsilon \approx 10^{-16}$), corresponds to requesting machine precision, and in practice a relative error $< 10^{-13}$ is usually achieved. Specifying a larger value of relerr may improve performance (at the expense of accuracy). You should also compile Faddeeva_w.cc and link it with your program, of course.

In terms of $w(z)$, some other important functions are:

$\mathrm{erfcx}(x) = e^{x^2} \mathrm{erfc}(x) = w(ix)$ (scaled complementary error function)

$\mathrm{erfc}(x) = \begin{cases} e^{-x^2} w(ix) & \mathrm{Re}\,x \geq 0 \\ 2 - e^{-x^2} w(-ix) & \mathrm{Re}\,x < 0 \end{cases}$ (complementary error function)

$\mathrm{erf}(x) = 1 - \mathrm{erfc}(x) = \begin{cases} 1 - e^{-x^2} w(ix) & \mathrm{Re}\,x \geq 0 \\ e^{-x^2} w(-ix) - 1 & \mathrm{Re}\,x < 0 \end{cases}$ (error function)

$\mathrm{erfi}(x) = -i\,\mathrm{erf}(ix) = -i[e^{x^2} w(x) - 1]$ (imaginary error function)

$F(x) = \frac{i\sqrt{\pi}}{2} \left[ e^{-x^2} - w(x) \right]$ (Dawson function)

Note that in the case of erf and erfc, we provide different equations for positive and negative $x$, in order to avoid numerical problems arising from multiplying exponentially large and small quantities.

Wrappers: Matlab, GNU Octave, and Python

Wrappers are available for this function in other languages. Matlab (also available here): A function Faddeeva_w(z, relerr), where the arguments have the same meaning as above (the relerr argument is optional), can be downloaded from Faddeeva_w_mex.cc (along with the help file Faddeeva_w.m).
Compile it into a MEX file with: mex -output Faddeeva_w -O Faddeeva_w_mex.cc Faddeeva_w.cc

GNU Octave: A function Faddeeva_w(z, relerr), where the arguments have the same meaning as above (the relerr argument is optional), can be downloaded from Faddeeva_w_oct.cc. Compile it into an Octave plugin with: mkoctfile -DMPICH_SKIP_MPICXX=1 -DOMPI_SKIP_MPICXX=1 -s -o Faddeeva_w.oct Faddeeva_w_oct.cc Faddeeva_w.cc

Python: Our code is used to provide scipy.special.wofz in SciPy starting in version 0.12.0 (see here).

Algorithm

This implementation uses a combination of different algorithms. For sufficiently large |z|, we use a continued-fraction expansion for w(z) similar to those described in Walter Gautschi, "Efficient computation of the complex error function," SIAM J. Numer. Anal. 7(1), pp. 187–198 (1970), and G. P. M. Poppe and C. M. J. Wijers, "More efficient computation of the complex error function," ACM Trans. Math. Soft. 16(1), pp. 38–46 (1990); the latter is TOMS Algorithm 680. Unlike those papers, however, we switch to a completely different algorithm for smaller |z|: Mofreh R. Zaghloul and Ahmed N. Ali, "Algorithm 916: Computing the Faddeyeva and Voigt Functions," ACM Trans. Math. Soft. 38(2), 15 (2011). Preprint available at arXiv:1106.0151. (I initially used this algorithm for all z, but the continued-fraction expansion turned out to be faster for larger |z|. On the other hand, Algorithm 916 is competitive or faster for smaller |z|, and appears to be significantly more accurate than the Poppe & Wijers code in some regions, e.g. in the vicinity of |z|=1 [although comparison with other compilers suggests that this may be a problem specific to gfortran]. Algorithm 916 also has better relative accuracy in Re[z] for some regions near the real-z axis. You can switch back to using Algorithm 916 for all z by changing USE_CONTINUED_FRACTION to 0 in the code.)
Note that this is SGJ's independent re-implementation of these algorithms, based on the descriptions in the papers only. In particular, we did not refer to the authors' Fortran or Matlab implementations (respectively), which are under restrictive "semifree" ACM copyright terms and are therefore unusable in free/open-source software. Algorithm 916 requires a complementary error function erfc(x) for real arguments x to be supplied as an external subroutine. More precisely, it requires the scaled function $\mathrm{erfcx}(x) = e^{x^2}\,\mathrm{erfc}(x)$. Here, we use an erfcx routine written by SGJ that uses a combination of two algorithms: a continued-fraction expansion for large x and a lookup table of Chebyshev polynomials for small x. (I initially used an erfcx function derived from the DERFC routine in SLATEC, modified by SGJ to compute erfcx instead of erfc, but the new erfcx routine is much faster.)

Test program

To test the code, a small test program is included at the end of Faddeeva_w.cc which tests w(z) against several known results (from Wolfram Alpha) and prints the relative errors obtained. To compile the test program, #define FADDEEVA_W_TEST in the file (or compile with -DFADDEEVA_W_TEST on Unix) and compile Faddeeva_w.cc. The resulting program prints SUCCESS at the end of its output if the errors were acceptable.
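As an illustration of why a dedicated scaled routine matters (a sketch using Python's standard library, unrelated to SGJ's implementation): the naive product $e^{x^2}\,\mathrm{erfc}(x)$ overflows for moderately large $x$, even though $\mathrm{erfcx}(x)$ itself is $O(1/x)$.

```python
import math

def erfcx_naive(x):
    # erfcx(x) = exp(x**2) * erfc(x); fine for small x, but exp(x*x)
    # overflows once x*x exceeds ~709, i.e. for x above roughly 26.6
    return math.exp(x * x) * math.erfc(x)

val = erfcx_naive(1.0)   # small argument: works, value is e * erfc(1)

try:
    erfcx_naive(30.0)    # large argument: exp(900) overflows
    overflowed = False
except OverflowError:
    overflowed = True
```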
License The software is distributed under the "MIT License", a simple permissive free/open-source license: Copyright © 2012 Massachusetts Institute of Technology Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Two-body problem

Set

definiendum $\langle \mathbb R^{2\times 3}, H\rangle \in \mathrm{it} $ … Classical Hamiltonian system

postulate $ H({\bf r}_1,{\bf r}_2,{\bf p}_1,{\bf p}_2) = \frac{1}{m_1}\frac{1}{2}{\bf p}_1^2 + \frac{1}{m_2}\frac{1}{2}{\bf p}_2^2 + V(|{\bf r}_1-{\bf r}_2|) $

Discussion

Equations of motion in suitable coordinates

range $ M \equiv m_1+m_2 $

range $ \mu \equiv m_1 m_2/M $

The following choice of coordinates singles out the center of mass, of which the Hamiltonian is independent.

range $ {\bf r} \equiv {\bf r}_2 - {\bf r}_1 $

range $ {\bf R} \equiv (m_1\ {\bf r}_1 + m_2\ {\bf r}_2)/M $

range $ {\bf p} \equiv (m_1\ {\bf p}_2 - m_2\ {\bf p}_1)/M $

range $ {\bf P} \equiv {\bf p}_1 + {\bf p}_2 $

range $ r \equiv |{\bf r}| $

$ H({\bf r},{\bf R},{\bf p},{\bf P}) = \frac{1}{M}\frac{1}{2}{\bf P}^2 + \frac{1}{\mu}\frac{1}{2}{\bf p}^2 + V(r) $

The Hamiltonian equations of motion give ${\bf p}(t)=\mu\frac{\partial}{\partial t}{\bf r}(t)$, and the vector ${\bf P}(t) = M\frac{\partial}{\partial t} {\bf R}(t)$ is conserved. The conserved angular momentum $L$ turns out to imply that the planar angle component of ${\bf p}(t)$, denoted $p_{\phi}(t)$, is conserved as well. We end up with planar motion; in the center of mass frame the system is reduced to only two parameters. The energy takes the form

$ H = \frac{1}{\mu}\frac{1}{2}\left(p_r^2+(L/r)^2\right) + V(r) + \text{const.} $

and we denote its value for $r(0), p_r(0)$ by $E$.
The equations of motion read

$ \mu\frac{\partial}{\partial t} r = \sqrt{2\mu(E-V(r))-(L/r)^2} $

$ \mu\frac{\partial}{\partial t} \phi = L/r^2 $

These furthermore imply the relation

$\phi = \int \frac{L/r^2}{\sqrt{2\mu(E-V(r))-(L/r)^2}} \mathrm dr + \text{const.}$

Scattering process

range $ {\bf g} \equiv \frac{\partial}{\partial t}{\bf r} $

range $ {\bf g}_{-\infty} \equiv \lim_{t\to-\infty}{\bf g} $

range $ {\bf g}_{+\infty} \equiv \lim_{t\to+\infty}{\bf g} $

Geometric considerations lead to the conclusion that there is a unit vector ${\bf \alpha}^V$, depending on the particle interaction potential $V$, such that with

range $ S_{ij} \equiv \delta_{ij}-2\alpha_i^V\alpha_j^V $

one has $ {\bf g}_{+\infty} = S\ {\bf g}_{-\infty}. $

If $b$ is the impact parameter of the collision under consideration, then for $V(r)=K/r^n$ the angle between ${\bf g}_{-\infty}$ and ${\bf g}_{+\infty}$ is given by

$2\int_0^{\bar\beta} \frac{\mathrm d\beta}{\sqrt{1-\beta^2-(\beta/b)^n}},$

where $\bar\beta$ is the smallest positive root of $1-\bar\beta^2-(\bar\beta/b)^n = 0$. For $n=1$, this gives the $\sin^{-4}$ formula of Rutherford scattering.
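As a sanity check of the reduced planar equations, one can integrate them numerically for a Kepler-type potential $V(r)=-K/r$ and verify that the energy is conserved (a sketch in dimensionless units $\mu=K=L=1$; the symbols match the text, but the code itself is illustrative):

```python
MU, K, L = 1.0, 1.0, 1.0   # reduced mass, force constant, angular momentum

def deriv(state):
    r, p_r = state
    # dr/dt = p_r / mu ;  dp_r/dt = L^2/(mu r^3) - V'(r)  with  V(r) = -K/r
    return (p_r / MU, L * L / (MU * r**3) - K / r**2)

def rk4_step(state, dt):
    # Classical 4th-order Runge-Kutta step for the 2-component state (r, p_r)
    def shift(s, k, h):
        return (s[0] + h * k[0], s[1] + h * k[1])
    k1 = deriv(state)
    k2 = deriv(shift(state, k1, dt / 2))
    k3 = deriv(shift(state, k2, dt / 2))
    k4 = deriv(shift(state, k3, dt))
    return (state[0] + dt / 6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            state[1] + dt / 6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def energy(state):
    # E = p_r^2/(2 mu) + L^2/(2 mu r^2) + V(r)
    r, p_r = state
    return p_r**2 / (2*MU) + L*L / (2*MU*r*r) - K / r

state = (1.5, 0.0)          # start at a radial turning point r = 1.5
e0 = energy(state)          # E < 0: a bound orbit oscillating in r
for _ in range(20000):
    state = rk4_step(state, 1e-3)
drift = abs(energy(state) - e0)   # should be tiny for a good integrator
```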
1. Ahmed, Babu, Chitra, David and Eesha each choose a different large number. Ahmed says, "My number is not the largest and not the smallest." Babu says, "My number is not the largest and not the smallest." Chitra says, "My number is the largest." David says, "My number is the smallest." Eesha says, "My number is not the smallest." Exactly one of the five children is lying; the others are telling the truth. Who has the largest number?
a) Eesha b) David c) Chitra d) Babu
Ans: A
Test each candidate for "largest" and count the false statements. If Ahmed, Babu or David had the largest number, his own statement would be false and Chitra's claim "My number is the largest" would also be false — two liars, which is not allowed. If Chitra had the largest number, nobody would be lying, contradicting "exactly one". Only if Eesha has the largest number is exactly one child (Chitra) lying, so Eesha has the largest number.

2. In the equation A + B + C + D + E = FG, where FG is the two-digit number whose value is 10F + G and the letters A, B, C, D, E, F and G each represent different digits, FG is as large as possible. What is the value of G?
a) 4 b) 2 c) 1 d) 3
Ans: B
FG should be as large as possible and all seven digits must be different. By trial and error: 9 + 8 + 7 + 6 + 5 = 35 (5 is repeated), 9 + 8 + 7 + 6 + 4 = 34 (4 is repeated), 9 + 8 + 7 + 5 + 4 = 33 (3 repeats), 9 + 8 + 6 + 5 + 4 = 32. None of the digits repeat in the last case, so 32 is the maximum value FG can take, and the value of G is 2.

3. A farmer has a rose garden. Every day he plucks either 7, 6, 24 or 23 roses. The rose plants are intelligent: when the farmer plucks these numbers of roses, the next day 37, 36, 9 or 18 new roses respectively bloom in the garden. On Monday he counts 189 roses in the garden. He plucks roses as per his plan on consecutive days, and new roses bloom as per the intelligence of the plants mentioned above. After some days, which of the following can be the number of roses in the garden?
a) 4 b) 7 c) 30 d) 37
Ans: A
The four pluck-and-bloom choices change the total by 37 − 7 = +30, 36 − 6 = +30, 9 − 24 = −15 and 18 − 23 = −5 per day. If he plucks 23 each day, the total decreases by 5 daily, so after 37 days the count has dropped by 185 and 189 − 185 = 4 roses are left.

4. What is the value of (44444445 × 88888885 × 44444442 + 44444438)/44444444²?
a) 88888883 b) 88888884 c) 88888888 d) 44444443
Ans: A
Let x = 44444444. Then
$\displaystyle\frac{(x + 1)(2x - 3)(x - 2) + (x - 6)}{x^2} = \frac{(x^2 - x - 2)(2x - 3) + (x - 6)}{x^2} = \frac{2x^3 - 5x^2 - x + 6 + x - 6}{x^2} = \frac{2x^3 - 5x^2}{x^2} = 2x - 5.$
Substituting the value of x in 2x − 5, we get 88888883.

4. For which of the following n is the number 2^74 + 2^2058 + 2^2n a perfect square?
a) 2012 b) 2100 c) 2011 d) 2020
Ans: D
We require $2^{74} + 2^{2058} + 2^{2n} = K^2$, i.e. ${\left( {{2^{37}}} \right)^2} + {2^{2058}} + {\left( {{2^n}} \right)^2} = K^2$. We try to write this expression as ${(a + b)^2} = {a^2} + 2ab + {b^2}$ with $a = 2^{37}$ and $b = 2^n$, so $2ab = 2^{38+n}$ must equal $2^{2058}$, which gives $n = 2020$.

5. Raj writes a two-digit number. He sees that the number exceeds four times the sum of its digits by 3. If the number is increased by 18, the result is the same as the number formed by reversing the digits. Find the number.
a) 35 b) 57 c) 42 d) 49
Ans: A
Going by the options: 35 = 4(3 + 5) + 3 = 4 × 8 + 3, and 35 + 18 = 53, which is 35 reversed.

6. The combined weight of M, D and I is 74. The sum of D and I is 46 greater than M. I is 60% less than D. What is D's weight?
Ans: 42.8
M + D + I = 74 --- (1)
(D + I) − M = 46 --- (2)
I is 60% less than D, so I = $\displaystyle\frac{4}{{10}}$D = $\displaystyle\frac{2D}{5}$ --- (3)
Adding (1) and (2) we get 2D + 2I = 120. Substituting (3) into this equation, $2D + 2\left( {\dfrac{{2D}}{5}} \right) = 120 \Rightarrow \dfrac{14D}{5} = 120 \Rightarrow 14D = 600 \Rightarrow D = \dfrac{300}{7} \approx 42.8$.

7. Father is 5 times faster than son. Father completes a work 40 days before son. If both of them work together, when will the work be complete?
a. 8 days b. 8 1/3 days c. 10 days d. 20 days
Ans: B
As efficiency is inversely proportional to days: if father's and son's efficiencies are in the ratio 5 : 1, then the days taken by them are in the ratio 1 : 5. Assume they take k and 5k days. Given that the father takes 40 days less, 5k − k = 40 $ \Rightarrow $ k = 10, so the father takes 10 days to complete the work. The total work is 10 × 5 = 50 units. Working together they complete 5 + 1 = 6 units a day, so they take 50/6 = 8 1/3 days.

8. A beaker contains 180 litres of alcohol. On day 1, 60 l of alcohol is taken out and replaced by water. On day 2, 60 l of the mixture is taken out and replaced by water, and the process continues day after day. What will be the quantity of alcohol in the beaker after 3 days?
Ans: 53.3
Use the formula: Final alcohol = Initial alcohol $\times{\left( {{\rm{1 - }}\displaystyle\frac{{{\rm{Replacement\ quantity}}}}{{{\rm{Total\ volume}}}}} \right)^{\rm{n}}}$, so Final alcohol = ${\rm{180}}{\left( {1 - \displaystyle\frac{{60}}{{180}}} \right)^3} = 180 \times {\left( {\displaystyle\frac{2}{3}} \right)^3} = 53.3$.

9. If f(f(n)) + f(n) = 2n + 3 and f(0) = 1, then f(2012) = ?
Ans: 2013
f(f(0)) + f(0) = 2(0) + 3 $ \Rightarrow $ f(1) = 3 − 1 = 2
f(f(1)) + f(1) = 2(1) + 3 $ \Rightarrow $ f(2) = 5 − 2 = 3
f(f(2)) + f(2) = 2(2) + 3 $ \Rightarrow $ f(3) = 7 − 3 = 4
..............
So f(n) = n + 1 and f(2012) = 2013.

10. What is the next term in the series 1, 7, 8, 49, 56, 57, 343, ...?
Ans: 344
1 = 1; 7 = 1 × 7; 8 = 1 × 7 + 1; 49 = 7 × 7; 56 = 8 × 7; 57 = 8 × 7 + 1; 343 = 49 × 7. The next term should be 343 + 1 = 344.

11. A 3 × 3 grid comprising 9 tiles is painted, each tile in red or blue. When the grid is rotated by 180 degrees, no difference can be spotted. How many such paintings are there?
a. 16 b. 32 c. 64 d. 256
Ans: B
Under a 180-degree rotation the 9 tiles fall into 4 pairs of tiles that swap places, plus the fixed centre tile. The two tiles in each pair must get the same colour, so there are 5 independent colour choices (4 pairs + centre), giving ${2^5}$ = 32 possibilities.
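The recurrence in problem 9 suggests the closed form f(n) = n + 1, which can be verified with a quick illustrative script (not part of the original answer key):

```python
def f(n):
    # Guessed closed form from the first few iterates: f(n) = n + 1
    return n + 1

# Check the functional equation f(f(n)) + f(n) = 2n + 3 and the seed f(0) = 1:
assert f(0) == 1
assert all(f(f(n)) + f(n) == 2 * n + 3 for n in range(100))
print(f(2012))  # 2013
```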
I have a finite harmonic potential where I trap an electron. The confinement length changes in size. Now, I'm interested in the ground state energy, so I have this 1D Poisson solver which gives me the ground state energy $E_0$ and wave function. If we now add a confinement dimension, can we estimate how $E_0$ is going to change? Let's assume the second dimension has the same shape, so our new potential well is just a finite 2D parabola. For the case on the left, we can say we're safely below the energy continuum, so the energy formula for an N-dimensional harmonic potential is a good estimate: $$E_{n_x, n_y} \approx \left(n_x + n_y + \frac{N}{2} \right)\hbar \omega $$ Adding the additional confinement will therefore just double $E_0$, from $\hbar\omega/2$ to $\hbar\omega$. But how can we estimate the new $E_0$ for the case on the right, where we're close to the continuum? My guess is that for the situation on the right, the new $E_0$ is going to be roughly in the middle between the energy continuum and the old $E_0$. However, I don't have a justification for that. How can we treat this problem? Is there some good physical argument for why this should be the case (or why it should be different)? Are there approximate formulas for this case?

Background: For my case, calculating the confinement potential (which is not exactly but roughly harmonic) is computationally intensive, which is why I don't want to compute the whole 2D potential landscape. It would be nice to solve the 1D case and then have a rough estimate for what $E_0$ is going to be in 2D.
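If the well is deep, the separable estimate can be cross-checked against a cheap 1D computation. Below is a minimal sketch (not the asker's Poisson solver) that finds the 1D harmonic ground state by a shooting method in dimensionless units ($\hbar=m=\omega=1$), where the exact value is $E_0 = 1/2$; for a separable 2D well the estimate is then simply $2E_0$:

```python
def psi_end(E, x0=-6.0, x1=6.0, n=2400):
    # Integrate psi'' = (x^2 - 2E) psi (i.e. -psi''/2 + (x^2/2) psi = E psi)
    # from deep inside the forbidden region with a 4th-order RK scheme.
    h = (x1 - x0) / n
    psi, dpsi, x = 1e-12, 1e-12, x0
    def f(x, psi, dpsi):
        return dpsi, (x * x - 2.0 * E) * psi
    for _ in range(n):
        k1 = f(x, psi, dpsi)
        k2 = f(x + h/2, psi + h/2*k1[0], dpsi + h/2*k1[1])
        k3 = f(x + h/2, psi + h/2*k2[0], dpsi + h/2*k2[1])
        k4 = f(x + h, psi + h*k3[0], dpsi + h*k3[1])
        psi  += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        dpsi += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        x += h
    return psi

# psi(x1) changes sign as E crosses the ground-state energy, so bisect:
lo, hi = 0.3, 0.7
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if psi_end(lo) * psi_end(mid) < 0:
        hi = mid
    else:
        lo = mid
E0 = 0.5 * (lo + hi)   # converges towards the exact 0.5
```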
Gamma-function

Revision as of 18:22, 27 April 2012

$\Gamma$-function

$ \newcommand{\abs}[1]{\left|#1\right|} \newcommand{\Re}{\mathop{\mathrm{Re}}} \newcommand{\Im}{\mathop{\mathrm{Im}}} $

A transcendental function $\Gamma(z)$ that extends the values of the factorial $z!$ to any complex number $z$. It was introduced in 1729 by L. Euler in a letter to Ch. Goldbach, using the infinite product $$ \Gamma(z) = \lim_{n\rightarrow\infty}\frac{n!n^z}{z(z+1)\ldots(z+n)} = \lim_{n\rightarrow\infty}\frac{n^z}{z(1+z)(1+z/2)\ldots(1+z/n)}, $$ which was used by L. Euler to obtain the integral representation (Euler integral of the second kind, cf. Euler integrals) $$ \Gamma(z) = \int_0^\infty x^{z-1}e^{-x} \rd x, $$ which is valid for $\Re z > 0$. The multi-valuedness of the function $x^{z-1}$ is eliminated by the formula $x^{z-1}=e^{(z-1)\ln x}$ with a real $\ln x$. The symbol $\Gamma(z)$ and the name gamma-function were proposed in 1814 by A.M. Legendre.
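Euler's limit definition converges slowly but is easy to check numerically; the sketch below (an illustration, not part of the original article; `lgamma` is used only to evaluate $\ln n!$) recovers $\Gamma(1/2)=\sqrt{\pi}$ and $\Gamma(5)=4!=24$ to several decimal places:

```python
import math

def gamma_euler(z, n=200000):
    # Euler's limit: Gamma(z) = lim_{n->inf} n! * n^z / (z (z+1) ... (z+n)),
    # evaluated in log space so that n! does not overflow.
    log_num = math.lgamma(n + 1) + z * math.log(n)
    log_den = sum(math.log(z + k) for k in range(n + 1))
    return math.exp(log_num - log_den)

print(gamma_euler(0.5))  # close to sqrt(pi) = 1.77245...
```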
If $\Re z < 0$ and $-k-1 < \Re z < -k$, $k=0,1,\ldots$, the gamma-function may be represented by the Cauchy–Saalschütz integral: $$ \Gamma(z) = \int_0^\infty x^{z-1} \left( e^{-x} - \sum_{m=0}^k (-1)^m \frac{x^m}{m!} \right) \rd x. $$ In the entire plane punctured at the points $z=0,-1,\ldots $, the gamma-function satisfies a Hankel integral representation: $$ \Gamma(z) = \frac{1}{e^{2\pi iz} - 1} \int_C s^{z-1}e^{-s} \rd s, $$ where $s^{z-1} = e^{(z-1)\ln s}$ and $\ln s$ is the branch of the logarithm for which $0 < \arg s < 2\pi$; the contour $C$ is represented in Fig. a. It is seen from the Hankel representation that $\Gamma(z)$ is a meromorphic function. At the points $z_n = -n$, $n=0,1,\ldots$ it has simple poles with residues $(-1)^n/n!$. Figure: g043310a

Contents

Fundamental relations and properties of the gamma-function.

1) Euler's functional equation: $$ z\Gamma(z) = \Gamma(z+1), $$ or $$ \Gamma(z) = \frac{1}{z(z+1)\ldots(z+n)}\Gamma(z+n+1); $$ $\Gamma(1)=1$, $\Gamma(n+1) = n!$ if $n$ is an integer; it is assumed that $0! = \Gamma(1) = 1$.

2) Euler's completion formula: $$ \Gamma(z)\Gamma(1-z) = \frac{\pi}{\sin \pi z}. $$ In particular, $\Gamma(1/2)=\sqrt{\pi}$; $$ \Gamma\left(n+\frac{1}{2}\right) = \frac{1\cdot 3\cdots(2n-1)}{2^n}\sqrt{\pi} $$ if $n>0$ is an integer; $$ \abs{\Gamma\left(\frac{1}{2} + iy\right)}^2 = \frac{\pi}{\cosh \pi y}, $$ where $y$ is real.

3) Gauss' multiplication formula: $$ \prod_{k=0}^{m-1} \Gamma\left( z + \frac{k}{m} \right) = (2\pi)^{(m-1)/2}m^{(1/2)-mz}\Gamma(mz), \quad m = 2,3,\ldots $$ If $m=2$, this is the Legendre duplication formula.

4) If $\Re z \geq \delta > 0$ or $\abs{\Im z} \geq \delta > 0$, then $\ln\Gamma(z)$ can be asymptotically expanded into the Stirling series: $$ \ln\Gamma(z) = \left(z-\frac{1}{2}\right)\ln z - z + \frac{1}{2}\ln 2\pi + \sum_{n=1}^m \frac{B_{2n}}{2n(2n-1)z^{2n-1}} + O\bigl(z^{-2m-1}\bigr), \quad m = 1,2,\ldots, $$ where $B_{2n}$ are the Bernoulli numbers.
It implies the equality $$ \Gamma(z) = \sqrt{2\pi}\, z^{z-1/2} e^{-z} \left( 1 + \frac{1}{12}z^{-1} + \frac{1}{288}z^{-2} - \frac{139}{51840}z^{-3} - \frac{571}{2488320}z^{-4} + O\bigl(z^{-5}\bigr) \right). $$ In particular, $$ \Gamma(1+x) = \sqrt{2\pi}\, x^{x+1/2} e^{-x + \theta/(12x)}, \quad 0 < \theta < 1. $$ More accurate is Sonin's formula [6]: $$ \Gamma(1+x) = \sqrt{2\pi}\, x^{x+1/2} e^{-x + 1/(12(x+\theta))}, \quad 0 < \theta < 1/2. $$ 5) In the real domain, $\Gamma(x) > 0$ for $x > 0$ and it assumes the sign $(-1)^{k+1}$ on the segments $-k-1 < x < -k$, $k = 0,1,\ldots$ (Fig. b). Figure: g043310b The graph of the function $\Gamma(x)$. For all real $x$ the inequality $$ \Gamma\Gamma^{\prime\prime} > \bigl(\Gamma^\prime\bigr)^2 \geq 0 $$ is valid, i.e. all branches of both $\abs{\Gamma(x)}$ and $\ln\abs{\Gamma(x)}$ are convex functions. The property of logarithmic convexity defines the gamma-function among all solutions of the functional equation $$ \Gamma(1+x) = x\Gamma(x) $$ up to a constant factor (see also the Bohr–Mollerup theorem). For positive values of $x$ the gamma-function has a unique minimum at $x=1.4616321\ldots$ equal to $0.885603\ldots$. The local minima of the function $\abs{\Gamma(x)}$ form a sequence tending to zero as $x\rightarrow -\infty$. Figure: g043310c The graph of the function $1/\Gamma(x)$. 6) In the complex domain, if $\Re z > 0$, the gamma-function rapidly decreases as $\abs{\Im z} \rightarrow \infty$, $$ \lim_{\abs{\Im z} \rightarrow \infty} \abs{\Gamma(z)}\abs{\Im z}^{(1/2)-\Re z}e^{\pi\abs{\Im z}/2} = \sqrt{2\pi}. $$ 7) The function $1/\Gamma(z)$ (Fig. c) is an entire function of order one and of maximal type; asymptotically, as $r \rightarrow \infty$, $$ \ln M(r) \sim r \ln r, $$ where $$ M(r) = \max_{\abs{z} = r} \frac{1}{\abs{\Gamma(z)}}.
$$ It can be represented by the infinite Weierstrass product: $$ \frac{1}{\Gamma(z)} = z e^{\gamma z} \prod_{n=1}^\infty \left(\left( 1 + \frac{z}{n} \right) e^{-z/n} \right), $$ which converges absolutely and uniformly on any compact set in the complex plane ($\gamma$ is the Euler constant). A Hankel integral representation is valid: $$ \frac{1}{\Gamma(z)} = \frac{1}{2\pi i} \int_{C} e^{s} s^{-z} \rd s, $$ where the contour $C$ is shown in Fig. d. Figure: g043310d

G.F. Voronoi [7] obtained integral representations for powers of the gamma-function. In applications, the so-called polygamma-functions — the $k$-th derivatives of $\ln\Gamma(z)$ — are of importance. The function (Gauss' $\psi$-function) $$ \psi(z) = \frac{\Gamma'(z)}{\Gamma(z)} $$ is meromorphic, has simple poles at the points $z = 0,-1,-2,\ldots$ and satisfies the functional equation $$ \psi(z+1) - \psi(z) = \frac{1}{z}. $$ The representation of $ $ for $ $ yields the formula $ $ where $ $ This formula may be used to compute $ $ in a neighbourhood of the point $ $. The functions $\Gamma(z)$ and $\psi(z)$ are transcendental functions which do not satisfy any linear differential equation with rational coefficients (Hölder's theorem). The exceptional importance of the gamma-function in mathematical analysis is due to the fact that it can be used to express a large number of definite integrals, infinite products and sums of series (see, for example, Beta-function). In addition, it is widely used in the theory of special functions (the hypergeometric function, of which the gamma-function is a limit case, cylinder functions, etc.), in analytic number theory, etc.

References

[1] E.T. Whittaker, G.N. Watson, "A course of modern analysis", Cambridge Univ. Press (1952) [2] H. Bateman (ed.), A. Erdélyi (ed.), "Higher transcendental functions", 1. The gamma function. The hypergeometric functions. Legendre functions, McGraw-Hill (1953) [3] N. Bourbaki, "Elements of mathematics. Functions of a real variable", Addison-Wesley (1976) (Translated from French) [4] "Math. anal., functions, limits, series, continued fractions", Handbook Math. Libraries, Moscow (1961) (In Russian) [5] N.
Nielsen, "Handbuch der Theorie der Gammafunktion", Chelsea, reprint (1965) [6] N.Ya. Sonin, "Studies on cylinder functions and special polynomials", Moscow (1954) (In Russian) [7] G.F. Voronoi, "Studies of primitive parallelotopes", Collected works, 2, Kiev (1952) pp. 239–368 (In Russian) [8] E. Jahnke, F. Emde, "Tables of functions with formulae and curves", Dover, reprint (1945) (Translated from German) [9] A. Angot, "Compléments de mathématiques. A l'usage des ingénieurs de l'electrotechnique et des télécommunications", C.N.E.T. (1957)

Comments

The $q$-analogue of the gamma-function is given by $$ \Gamma_q(z) = (1-q)^{1-z} \prod_{n=0}^\infty \frac{1-q^{n+1}}{1-q^{n+z}}, \quad 0 < q < 1. $$

References

[a1] E. Artin, "The gamma function", Holt, Rinehart & Winston (1964) [a2] R. Askey, "The $q$-Gamma and $q$-Beta functions", Appl. Anal., 8 (1978) pp. 125–141

How to Cite This Entry: Gamma-function. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Gamma-function&oldid=25595
@Mathphile I found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet, and neither a reason why the expression cannot be prime for odd $n$, although there are far more even cases without a known factor than odd cases.

@TheSimpliFire That's what I'm thinking about. I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it; it is really "too elementary", but I like surprises, if they're good.

It is in fact difficult, I did not understand all the details either. But the ECM method is analogous to the p−1 method, which works well when there is a factor p such that p−1 is smooth (has only small prime factors).

Brocard's problem is a problem in mathematics that asks to find integer values of $n$ and $m$ for which $$n!+1=m^2,$$ where $n!$ is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan. Brown numbers: pairs of numbers $(n, m)$ that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers: $(4,5)$, $(5,11)$ and $(7,71)$.

$\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur when $n$ satisfies either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474.

Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, we have $m\ne 10^k\pm1$ for $n>10$.
If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function. The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using a geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation} Thus if $n!$ has $k$ trailing zeros, then $n\in(4k,\infty)$. Since $2\cdot5^{2^k}-1\pmod{10^k}$ and $2\cdot16^{5^k}-1\pmod{10^k}$ each have at most $k$ digits, $m^2-1$ has at most $2k$ digits under the conditions in the Corollary. The Corollary therefore follows if $n!$ has more than $2k$ digits for $n>10$. From the bound $n>4k$, $n!$ has at least as many digits as $(4k)!$. 
Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}.\end{equation} Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$, where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation} Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)^2}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation} Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Then after considerable algebra it is found that \begin{align*}f'(x)&=2x(8\pi x)^{\frac1{4x}}+\frac14(8\pi x)^{\frac1{4x}}(1-\ln(8\pi x))\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x+\frac{1-\ln(8\pi x)}8\right)>0\end{align*} for $x>0$, since $\min\left\{x+\frac{1-\ln(8\pi x)}8\right\}=\frac{2-\ln\pi}8>0$ on the domain (the minimum occurs at $x=\frac18$). Thus $f$ is monotonically increasing on $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the last inequality above holds for all $k\ge2$. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$ We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction, which is better). @TheSimpliFire Hey! With $4\pmod {10}$ and $0\pmod 4$, this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have $5m_1=2(m_2-m_1)$, which means $m_1$ is even. 
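The trailing-zero count from Legendre's formula, and the bound $k<n/4$ used in the proof above, are easy to sanity-check numerically; a small sketch:

```python
def trailing_zeros_factorial(n):
    # Legendre's formula: the number of trailing zeros of n! equals the
    # number of factors of 5 in n! (factors of 2 are always more plentiful).
    k, p = 0, 5
    while p <= n:
        k += n // p
        p *= 5
    return k

# 100! ends in 24 zeros: floor(100/5) + floor(100/25) = 20 + 4
assert trailing_zeros_factorial(100) == 24
# The bound k < n/4 from the geometric-series estimate
assert all(trailing_zeros_factorial(n) < n / 4 for n in range(1, 2000))
```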
We get $4\pmod {20}$ now :P Yet again a conjecture! Motivated by Catalan's conjecture and a recent question of mine, I conjecture that for distinct, positive integers $a,b$, the only solution to the equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5)$. I anticipate that there will be much fewer solutions for incr...
In this question, all rings and algebras are commutative with identity. Let $R$ be a ring, and let $A$ be an $R$-algebra with an $R$-subalgebra $B$. Suppose that we have an $R$-algebra homomorphism $\phi: B\to R$; then we can form the tensor product $A\otimes_B R$. My question is: If the structure maps $R\to B$ and $R\to A$ are injective, is the map $R\to A\otimes_B R$ injective as well? My intuition says Yes: the tensor product $A\otimes_B R$ is a quotient $A / (b-\phi(b): b\in B)$, and since $\phi: B\to R$ is a ring homomorphism preserving elements of $R$, it's hard to see how this ideal could ever contain an element of $R$. But of course that's not enough to go on. I'm particularly interested in the case where $A = R[x_1,\ldots,x_n]$ and $B = R[x_1,\ldots,x_n]^G$ for some subgroup $G\subseteq S_n$, so if it would help to use the fact that $A$ is a polynomial ring, then by all means please do.
I tried: $\lim_{x \rightarrow 0^+}(e^{\frac{1}{x}}x^2) = \lim_{x \rightarrow 0^+} x^2 \cdot \frac{1}{e^{-\frac{1}{x}}} = \lim_{x \rightarrow 0^+} \frac{x^2}{e^{-\frac{1}{x}}} = ???$ I thought maybe I could use the substitution $y = - \frac{1}{x}$, but I don't know what to do next. I know the limit just by looking at the function: $\lim_{x \rightarrow 0^+} e^{\frac{1}{x}} = \infty$, while $x^2$ takes values close to $0$ but greater than zero. And so the answer is $\infty$, but this looks incomplete. How do I solve this analytically?
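A quick check with SymPy (a sketch; the substitution $y = 1/x$ turns the limit into $\lim_{y\to\infty} e^y/y^2$, where the exponential dominates any power of $y$):

```python
from sympy import symbols, exp, limit, oo

x, y = symbols('x y', positive=True)

# The original limit, taken from the right
assert limit(exp(1/x) * x**2, x, 0, '+') == oo

# After substituting y = 1/x: e^y / y^2 -> oo as y -> oo
assert limit(exp(y) / y**2, y, oo) == oo
```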
STATEMENT OF THE PROBLEM If $q^k n^2$ is an odd perfect number with Euler prime $q$, is $\sigma(q^k)/n + \sigma(n)/q^k$ bounded from above? MOTIVATION Let $\sigma=\sigma_{1}$ denote the classical sum-of-divisors function, and denote the abundancy index of $x \in \mathbb{N}$ by $I(x)=\sigma(x)/x$. It is known that the inequality$$I(q^k) + I(n) < \frac{\sigma(q^k)}{n}+\frac{\sigma(n)}{q^k}$$holds if and only if the biconditional$$q^k < n \iff \sigma(q^k) < \sigma(n)$$is true. This biconditional is true if $\sigma(q^k)<n$, or if $\sigma(n) \leq q^k$. (I currently am not aware of any other conditions for which the biconditional holds.) Edit (August 11 2017): The biconditional is also true when $q^k < n$. (This follows from $I(q^k)<I(n)$.) Note that if $\sigma(q^k)/n + \sigma(n)/q^k < C$ for some absolute constant $C$, then $$\sqrt{\frac{8}{5}}\frac{n}{C} < q^k < Cn,$$ so that $C > \sqrt[4]{8/5}$. However, I know that $C > \sqrt[4]{8/5}$ is far from the truth, as I have recently been able to verify that either $$\frac{\sigma(q^k)}{n} < \sqrt{2} < \frac{\sigma(n)}{q^k}$$ or $$\frac{\sigma(n)}{q^k} < \sqrt{2} < \frac{\sigma(q^k)}{n}$$ is true. In the first case, $q^k < n\sqrt{2}$, while in the second case, we have $n < q^k$. Of course, trivially we have $$I(q^k) + I(n) < I(q^k) + I(n^2) < 3.$$
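The abundancy index, and the equivalence between the displayed inequality and the biconditional, can be sanity-checked numerically for small arguments (plain integers $a,b$ standing in for $q^k$ and $n$; this is only an illustration of the underlying algebraic identity, not tied to odd perfect numbers):

```python
from fractions import Fraction
from sympy import divisor_sigma

def abundancy(x):
    # I(x) = sigma(x) / x, kept as an exact rational
    return Fraction(int(divisor_sigma(x)), x)

# Perfect numbers have abundancy index exactly 2
assert abundancy(6) == 2 and abundancy(28) == 2

def cross_sum(a, b):
    # sigma(a)/b + sigma(b)/a, the right-hand side of the displayed inequality
    return Fraction(int(divisor_sigma(a)), b) + Fraction(int(divisor_sigma(b)), a)

# I(a) + I(b) < sigma(a)/b + sigma(b)/a  iff  (a < b <=> sigma(a) < sigma(b)),
# whenever a != b and sigma(a) != sigma(b)
for a in range(2, 50):
    for b in range(2, 50):
        if a != b and divisor_sigma(a) != divisor_sigma(b):
            lhs_less = abundancy(a) + abundancy(b) < cross_sum(a, b)
            bicond = (a < b) == (divisor_sigma(a) < divisor_sigma(b))
            assert lhs_less == bicond
```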
The Black-Scholes option pricing model provides a closed-form pricing formula $BS(\sigma)$ for a European-exercise option with price $P$. There is no closed-form inverse for it, but because it has a closed-form vega (volatility derivative) $\nu(\sigma)$, and this derivative is nonnegative, we can use the Newton-Raphson method with confidence. Essentially, we choose a starting value $\sigma_0$, say from yoonkwon's post. Then we iterate $$\sigma_{n+1} = \sigma_n - \frac{BS(\sigma_n)-P}{\nu(\sigma_n)}$$ until we have reached a solution of sufficient accuracy. This only works for options where the Black-Scholes model has a closed-form solution and a nice vega. When it does not, as for exotic payoffs, American-exercise options and so on, we need a more stable technique that does not depend on vega. In these harder cases, it is typical to apply a secant method with bisection-style bounds checking. A favored algorithm is Brent's method, since it is commonly available and quite fast.
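A minimal sketch of this Newton-Raphson loop in Python for a European call without dividends (the closed-form price and vega are the standard Black-Scholes expressions; the starting value and tolerance are illustrative choices):

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black-Scholes price of a European call
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def bs_vega(S, K, T, r, sigma):
    # dPrice/dSigma = S * phi(d1) * sqrt(T); nonnegative, which is what
    # makes the Newton iteration safe here
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return S * math.exp(-0.5 * d1 * d1) / math.sqrt(2.0 * math.pi) * math.sqrt(T)

def implied_vol(P, S, K, T, r, sigma0=0.2, tol=1e-10, max_iter=100):
    # sigma_{n+1} = sigma_n - (BS(sigma_n) - P) / vega(sigma_n)
    sigma = sigma0
    for _ in range(max_iter):
        diff = bs_call(S, K, T, r, sigma) - P
        if abs(diff) < tol:
            break
        sigma -= diff / bs_vega(S, K, T, r, sigma)
    return sigma
```

In practice one also guards against a vanishing vega (deep in- or out-of-the-money, short expiry), which is exactly the regime where the bracketing methods mentioned above become necessary.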
Rotational Newton's Second Law

As we saw for linear motion, we can only go so far with energy conservation. If we want to analyze aspects of motion such as elapsed time and direction of motion, we need more than mechanical energy conservation to work with. In the linear case, we found that this meant we had to use Newton's Second Law. We now seek the rotational equivalent of that law. The rotational equivalent of Newton's Second Law must relate the reaction of the system (rotational acceleration) to an external influence (rotational force), with the degree of this effect being determined by an internal property of the system (rotational mass). That is, we need a rotational substitute for all of the participants in this formula: \[ \overrightarrow a_{cm} = \dfrac{\overrightarrow F_{net}}{m} \] We already found a rotational version of acceleration in our discussion of rotational kinematics – it is the angular acceleration. We even defined a direction for this vector using the right-hand rule. The center-of-mass qualification in the case above is unneeded for the rotational case, because the angular acceleration is the same about every point on a rigid object. We have also determined an appropriate candidate for the "rotational mass" – the rotational inertia. This is certainly a reasonable choice, for a couple of reasons. First, from our direct experience we know that it is easier to swing an object (e.g. a baseball bat) when holding the heavier end than when holding the lighter end, so the degree to which an extended object "resists" angular acceleration is determined by the distribution of its mass. Second, if the physics is to remain consistent, why would the quantity that plays the role of mass in kinetic energy be different from the quantity that plays the role of mass in the second law? 
With those two quantities established, we can now get a glimpse of what the "rotational force" is by examining the units: \[ \left[\alpha\right] = \dfrac{\left[rotational\;force\right]}{\left[I\right]} \;\;\; \Rightarrow \;\;\; \left[rotational\;force\right] = \left[\dfrac{rad}{s^2}\right]\left[kg\cdot m^2\right] = \dfrac{kg \cdot m^2}{s^2} \] This is weird... these are units of energy! We'll need to chalk this up to coincidence, since clearly the vector quantity of rotational force cannot be a measure of energy. One way to see the difference is to remember the presence of radians in the numerator, even though they are not physical units. We will soon see the source of this coincidence, and it shouldn't take long before the apparent ambiguity between this quantity and energy fades away. Alert: While the physical units are the same as those of energy, we never refer to the SI units of this quantity as "joules." Using this term implies that we are talking about energy, which we are not. Generally we stick to "newton-meters." We can't continue calling this vector "rotational force" forever, so we will henceforth refer to it by its proper name: torque. In keeping with our tradition of using Greek variables for rotational quantities, we will represent torque with \(\overrightarrow \tau\), giving as our rotational Newton's Second Law: \[\overrightarrow \alpha = \dfrac{\overrightarrow \tau_{net}}{I} \] Torque

In the cases of acceleration and inertia, we found a direct relationship between the linear and rotational quantities, so we would expect a similar relationship between force and torque. Furthermore, since the linear/rotational bridges for acceleration and inertia both require a point of reference (the pivot), we would expect the same to be true for the bridge between force and torque. The first thing we notice is that an object can experience no net force and yet still experience a nonzero rotational acceleration:
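As a minimal numerical illustration of \(\overrightarrow \alpha = \overrightarrow \tau_{net}/I\), consider a uniform rod pivoted at one end with a force applied perpendicular to the rod at the free end (this specific setup is an assumed example, not one from the text):

```python
def angular_acceleration(F, m, L):
    # Uniform rod pivoted at one end: I = m L^2 / 3
    I = m * L**2 / 3.0
    # Perpendicular force at the free end: tau = F L
    tau = F * L
    # Rotational second law: alpha = tau_net / I  (rad/s^2)
    return tau / I

# F = 6 N on a 2 kg, 1.5 m rod gives alpha = 3 F / (m L) = 6 rad/s^2
assert abs(angular_acceleration(6.0, 2.0, 1.5) - 6.0) < 1e-12
```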
1) Whenever one has a topological vector space (TVS) $V$ over some field $\mathbb{F}$, one can construct a dual vector space $V^*$ consisting of continuous linear functionals $f:V\to\mathbb{F}$. 2) Under relatively mild conditions on the topology of $V$, it is possible to turn the dual vector space $V^*$ into a TVS. One may iterate the construction of dual vector spaces, so that more generally, one may consider the double-dual vector space $V^{**}$, the triple-dual vector space $V^{***}$, etc. 3) There is a natural/canonical injective linear map $i :V\to V^{**}$. It is defined as $$i(v)(f):=f(v),\qquad\qquad v\in V, \qquad\qquad f\in V^*. $$ 4) If the map $i$ is bijective, $V\cong V^{**}$, one says that $V$ is a reflexive TVS. 5) If $V$ is an inner product space (which is a particularly nice example of a TVS), then there is a natural/canonical injective conjugate-linear map $j :V\to V^*$. It is defined as $$j(v)(w):=\langle v, w \rangle ,\qquad\qquad v,w\in V. $$ Here we follow the Dirac convention that the "bracket" $\langle\cdot, \cdot \rangle$ is conjugate-linear in the first entry (as opposed to much of the math literature). 6) The Riesz representation theorem (RRT) shows that $j$ is a bijection if $V$ is a Hilbert space. In other words, a Hilbert space is self-dual: $V\cong V^*$. If one identifies $V$ with the set of kets, and $V^*$ with the set of bras, one may interpret RRT as saying that there is a natural/canonical one-to-one correspondence between bras and kets.
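In finite dimensions the map $j$ is just "conjugate and transpose": the bra $j(v)$ acts on a ket $w$ as $\langle v, w\rangle$. A toy NumPy sketch (the particular vectors are arbitrary):

```python
import numpy as np

v = np.array([1 + 2j, 3 - 1j])
w = np.array([2 + 0j, 1 + 1j])

# j(v) is the row vector <v| obtained by conjugating v; it is a linear
# functional on kets, and v |-> j(v) is conjugate-linear.
bra_v = v.conj()

# j(v)(w) agrees with the inner product <v, w>, conjugate-linear in the
# first slot (which matches np.vdot's convention)
assert np.isclose(bra_v @ w, np.vdot(v, w))
```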
Shouldn't the electric flux through a circular disc due to a point charge kept at some finite distance from it be zero, since all field lines which enter it also exit it, and hence the net would be zero?

Your disk isn't a closed surface, so Gauss's law doesn't apply here. The idea of field lines entering and exiting relies on a field line entering on one part of the surface and exiting on another part. At a single point on a surface you don't say the field line is entering and exiting the surface. There will be a non-zero flux through the disk, since at all points on the disk the field will have components in a single direction through the disk. The only way to make the flux $0$ through the disk is if the point charge and the disk were in the same plane. This goes to show that the "entering and exiting" of field lines is a nice qualitative picture, but when you want to actually calculate flux you need to go back to the mathematical definition: $$\Phi=\int \mathbf E\cdot\text d\mathbf a$$ For your point charge example with your disk as the surface, this is obviously not $0$ since, as mentioned earlier, all values of the integrand on the disk have the same sign (positive or negative depending on the orientation of your surface and the charge in question).

The two "surfaces" of the disc are actually the "sides" of the same surface. It is obvious that field lines entering from one side exit from the other side of the surface. So this would imply that the flux is always zero! Change your thinking perspective.
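For the special case of a point charge on the axis of the disc, the flux has a closed form via the solid angle, $\Phi = \frac{q}{2\epsilon_0}\left(1 - \frac{d}{\sqrt{d^2+R^2}}\right)$, which can be checked against a direct numerical integration of $\mathbf E\cdot\text d\mathbf a$. A sketch (the geometry, a charge at distance $d$ on the axis of a disc of radius $R$, is an assumed example):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def disc_flux_closed(q, d, R):
    # Solid-angle result: Phi = q/(2 eps0) * (1 - cos(theta_max))
    return q / (2.0 * EPS0) * (1.0 - d / math.sqrt(d * d + R * R))

def disc_flux_numeric(q, d, R, n=100000):
    # Midpoint-rule integration of E_z over the disc in rings of radius s:
    # E_z = k q d / (d^2 + s^2)^(3/2),  dA = 2 pi s ds
    k = 1.0 / (4.0 * math.pi * EPS0)
    ds = R / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * ds
        total += k * q * d / (d * d + s * s) ** 1.5 * 2.0 * math.pi * s * ds
    return total

phi_c = disc_flux_closed(1e-9, 0.1, 0.2)
phi_n = disc_flux_numeric(1e-9, 0.1, 0.2)
assert phi_c > 0                       # nonzero flux, as the answer argues
assert abs(phi_c - phi_n) < 1e-6 * phi_c
```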
Linear representation theory of symmetric group:S5

Latest revision as of 05:41, 16 January 2013

This article gives specific information, namely, linear representation theory, about a particular group, namely: symmetric group:S5. View linear representation theory of particular groups | View other specific information about symmetric group:S5

This article describes the linear representation theory of symmetric group:S5, a group of order 120. We take this to be the group of permutations on the set {1, 2, 3, 4, 5}. 
Summary

| Item | Value |
| --- | --- |
| Degrees of irreducible representations over a splitting field | 1,1,4,4,5,5,6; maximum: 6, lcm: 60, number: 7, sum of squares: 120 |
| Schur index values of irreducible representations | 1,1,1,1,1,1,1; maximum: 1, lcm: 1 |
| Smallest ring of realization for all irreducible representations (characteristic zero) | the ring of integers |
| Smallest field of realization for all irreducible representations, i.e., smallest splitting field (characteristic zero) | the rational numbers; hence it is a rational representation group |
| Criterion for a field to be a splitting field | any field of characteristic not equal to 2, 3, or 5 |
| Smallest size splitting field | field:F7, i.e., the field of 7 elements |

Family contexts

| Family name | Parameter values | General discussion of linear representation theory of family |
| --- | --- | --- |
| symmetric group | degree 5 | linear representation theory of symmetric groups |
| projective general linear group of degree two | over a finite field of size 5, i.e., field:F5 | linear representation theory of projective general linear group of degree two over a finite field |

Degrees of irreducible representations

Facts to check against, for degrees of irreducible representations over a splitting field:
- Divisibility facts: degree of irreducible representation divides group order; degree of irreducible representation divides index of abelian normal subgroup.
- Size bounds: order of inner automorphism group bounds square of degree of irreducible representation; degree of irreducible representation is bounded by index of abelian subgroup; maximum degree of irreducible representation of group is less than or equal to product of maximum degree of irreducible representation of subgroup and index of subgroup.
- Cumulative facts: sum of squares of degrees of irreducible representations equals order of group; number of irreducible representations equals number of conjugacy classes; number of one-dimensional representations equals order of abelianization.

Note that the linear representation theory of 
the symmetric group of degree four works over any field of characteristic not equal to two or three, and the list of degrees is 1,1,2,3,3.

Interpretation as symmetric group

| Common name of representation | Degree | Corresponding partition | Conjugate partition | Representation for conjugate partition |
| --- | --- | --- | --- | --- |
| trivial representation | 1 | 5 | 1 + 1 + 1 + 1 + 1 | sign representation |
| sign representation | 1 | 1 + 1 + 1 + 1 + 1 | 5 | trivial representation |
| standard representation | 4 | 4 + 1 | 2 + 1 + 1 + 1 | product of standard and sign representation |
| product of standard and sign representation | 4 | 2 + 1 + 1 + 1 | 4 + 1 | standard representation |
| irreducible five-dimensional representation | 5 | 3 + 2 | 2 + 2 + 1 | other irreducible five-dimensional representation |
| irreducible five-dimensional representation | 5 | 2 + 2 + 1 | 3 + 2 | other irreducible five-dimensional representation |
| exterior square of standard representation | 6 | 3 + 1 + 1 | 3 + 1 + 1 | the same representation, because the partition is self-conjugate |

Interpretation as projective general linear group of degree two

Compare and contrast with linear representation theory of projective general linear group of degree two over a finite field.

| Description of collection of representations | Parameter for describing each representation | How the representation is described | Degree (general odd q) | Degree (q = 5) | Number (general odd q) | Number (q = 5) | Sum of squares of degrees (general odd q) | Sum of squares (q = 5) | Symmetric group name |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Trivial | -- | x ↦ 1 | 1 | 1 | 1 | 1 | 1 | 1 | trivial |
| Sign representation | -- | kernel is projective special linear group of degree two (in this case, alternating group:A5), image is {±1} | 1 | 1 | 1 | 1 | 1 | 1 | sign |
| Nontrivial component of permutation representation of the group on the projective line over F_q | -- | -- | q | 5 | 1 | 1 | q^2 | 25 | irreducible five-dimensional |
| Tensor product of sign representation and nontrivial component of permutation representation on projective line | -- | -- | q | 5 | 1 | 1 | q^2 | 25 | other irreducible five-dimensional |
| Induced from one-dimensional representation of Borel subgroup | ? | ? | q + 1 | 6 | (q - 3)/2 | 1 | (q - 3)(q + 1)^2/2 | 36 | exterior square of standard representation |
| Unclear | a nontrivial homomorphism φ : F_{q^2}^* → C^*, with the property that φ(x)^{q+1} = 1 for all x, and φ takes values other than ±1; identify φ and φ^q | unclear | q - 1 | 4 | (q - 1)/2 | 2 | (q - 1)^3/2 | 32 | standard representation, product of standard and sign |
| Total | NA | NA | NA | NA | q + 2 | 7 | q^3 - q | 120 | NA |

Character table

Facts to check against, for characters of irreducible linear representations over a splitting field: character orthogonality theorem; column orthogonality theorem; splitting implies characters form a basis for the space of class functions; character determines representation in characteristic zero; characters are cyclotomic integers; size-degree-weighted characters are algebraic integers; irreducible character of degree greater than one takes value zero on some conjugacy class; conjugacy class of more than average size has character value zero for some irreducible character; zero-or-scalar lemma.

| Representation / conjugacy class | identity (size 1) | transposition (size 10) | double transposition (size 15) | 3-cycle (size 20) | 3-cycle times transposition (size 20) | 5-cycle (size 24) | 4-cycle (size 30) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| trivial representation | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| sign representation | 1 | -1 | 1 | 1 | -1 | 1 | -1 |
| standard representation | 4 | 2 | 0 | 1 | -1 | -1 | 0 |
| product of standard and sign representation | 4 | -2 | 0 | 1 | 1 | -1 | 0 |
| irreducible five-dimensional representation | 5 | 1 | 1 | -1 | 1 | 0 | -1 |
| irreducible five-dimensional representation | 5 | -1 | 1 | -1 | -1 | 0 | 1 |
| exterior square of standard representation | 6 | 0 | -2 | 0 | 0 | 1 | 0 |

Below are the size-degree-weighted characters, i.e., these are obtained by multiplying the character value by the size of the conjugacy class and then dividing by the degree of the representation. Note that size-degree-weighted characters are algebraic integers. 
| Representation / conjugacy class | identity (size 1) | transposition (size 10) | double transposition (size 15) | 3-cycle (size 20) | 3-cycle times transposition (size 20) | 5-cycle (size 24) | 4-cycle (size 30) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| trivial representation | 1 | 10 | 15 | 20 | 20 | 24 | 30 |
| sign representation | 1 | -10 | 15 | 20 | -20 | 24 | -30 |
| standard representation | 1 | 5 | 0 | 5 | -5 | -6 | 0 |
| product of standard and sign representation | 1 | -5 | 0 | 5 | 5 | -6 | 0 |
| irreducible five-dimensional representation | 1 | 2 | 3 | -4 | 4 | 0 | -6 |
| irreducible five-dimensional representation | 1 | -2 | 3 | -4 | -4 | 0 | 6 |
| exterior square of standard representation | 1 | 0 | -5 | 0 | 0 | 4 | 0 |

GAP implementation

The degrees of irreducible representations can be computed using GAP's CharacterDegrees function:

gap> CharacterDegrees(SymmetricGroup(5));
[ [ 1, 2 ], [ 4, 2 ], [ 5, 2 ], [ 6, 1 ] ]

This means that there are 2 irreducible representations of degree 1, 2 of degree 4, 2 of degree 5, and 1 of degree 6. The characters of all irreducible representations can be computed in full using GAP's CharacterTable function:

gap> Irr(CharacterTable(SymmetricGroup(5)));
[ Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 1, -1, 1, 1, -1, -1, 1 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 4, -2, 0, 1, 1, 0, -1 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 5, -1, 1, -1, -1, 1, 0 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 6, 0, -2, 0, 0, 0, 1 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 5, 1, 1, -1, 1, -1, 0 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 4, 2, 0, 1, -1, 0, -1 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 1, 1, 1, 1, 1, 1, 1 ] ) ]
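As a quick sanity check on the character table: the rows satisfy the orthogonality relation $\sum_C |C|\,\chi_i(C)\,\chi_j(C) = |G|\,\delta_{ij}$ with $|G| = 120$. A sketch in Python/NumPy (rather than GAP), with the table entered in the wiki's column order:

```python
import numpy as np

# Conjugacy class sizes in the column order used above
sizes = np.array([1, 10, 15, 20, 20, 24, 30])

# Character table of S5; rows: trivial, sign, standard, standard x sign,
# five-dimensional, other five-dimensional, exterior square of standard
chi = np.array([
    [1,  1,  1,  1,  1,  1,  1],
    [1, -1,  1,  1, -1,  1, -1],
    [4,  2,  0,  1, -1, -1,  0],
    [4, -2,  0,  1,  1, -1,  0],
    [5,  1,  1, -1,  1,  0, -1],
    [5, -1,  1, -1, -1,  0,  1],
    [6,  0, -2,  0,  0,  1,  0],
])

# Row orthogonality: chi * diag(sizes) * chi^T = |G| * Identity
gram = chi @ np.diag(sizes) @ chi.T
assert np.array_equal(gram, 120 * np.eye(7, dtype=int))
```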
$$\frac{1 - \tau^{n}}{1 + \tau^{c}},$$ which, in turn, implies that, for example, a 30% labor income tax rate is as distorting as a 43% consumption tax rate, and so forth: $$\frac{1 - 30\%}{1 + 0\%} \approx \frac{1 - 0\%}{1 + 43\%}.$$

Figure 5: Iso Tax Revenue Curves when τ^k = 0. The Iso Tax Revenue Curves in Figure 5 are plotted so that each curve to the right is at a 10% lower aggregate tax revenue level. This illustrates the trade...
Both assumptions are founded on a misconception. The ultimate purpose when interpreting a treaty, as described in Articles 31–33 of the VCLT, is to establish the intentions of the treaty parties, or, more accurately, the meaning that the parties intended the treaty to communicate (see, for example, Navigational and Related Rights (n. 18), p. 242, para. 43). The purpose of the treaty interpretation process is thus the same as the purpose of understanding any other verbal utterance.

The overall impact of taxation varies a lot depending on which type of company pays dividends to which type of shareholder. For example, in the case of a publicly listed company with a natural person as shareholder, the combined tax burden is 40.4–43.12%, depending on whether the person's capital income is below or above EUR 30,000. Let us assume that the company's taxable profit is 100X. When the company pays 20% corporate tax, or 20X, the remaining 80X will be distributed to the shareholders. 
According to Income Tax Act Section 33 a, 85% of the paid dividend is taxable capital income.

Article 4 of the MLI concerns residence for taxpayers other than individuals and is based on the new text of Article 4(3) of the OECD Model Tax Convention, which has been included in the 2017 version of the model treaty. The goal is to ensure that a taxpayer shall be considered a resident of only one contracting state. This shall be achieved by means of a mutual agreement procedure. Furthermore, Article 4 of the MLI provides that, if the contracting states cannot come to an agreement, the taxpayer shall not be entitled to tax relief under the treaty, except to the extent agreed upon by the competent authorities.

Axel Hilling, Niklas Sandell and Anders Vilhelmsson

4.3 The Tax Agency vs. pwc

4.3.1 The pwc ruling

Around the time when the government demonstrated its commitment to fairness by introducing regulations against tax planning in partner-owned companies, the Tax Agency also decided to test whether the tax planning was compatible with current laws and regulations. As the pilot case, it chose the accounting firm pwc. In December 2012, it estimated higher payroll taxes for the company on the grounds that employees had acquired shares in the company at what was allegedly below market value.
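The 40.4–43.12% combined burden quoted above can be reproduced with a few lines (assuming, to match the quoted figures, a 20% corporate rate, 85% of the dividend taxable, and capital income tax rates of 30% below / 34% above the EUR 30,000 threshold; the two capital income rates are my assumption):

```python
def combined_burden(profit, corp_rate=0.20, taxable_share=0.85, cap_rate=0.30):
    # Corporate tax is paid first ...
    corp_tax = profit * corp_rate
    dividend = profit - corp_tax
    # ... then capital income tax on the taxable share of the dividend
    dividend_tax = dividend * taxable_share * cap_rate
    return (corp_tax + dividend_tax) / profit

# Capital income below EUR 30,000 (30% rate): 40.4% combined burden
assert abs(combined_burden(100) - 0.404) < 1e-9
# Capital income above EUR 30,000 (34% rate): 43.12% combined burden
assert abs(combined_burden(100, cap_rate=0.34) - 0.4312) < 1e-9
```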
This question makes more sense when we don't limit ourselves to real numbers. For any $n = d + 1 > 1$ and $u = (u_0, u_1, \ldots, u_d) \in \mathbb{C}^n$, consider the following product $$\Lambda(u_0,\ldots,u_d) = \prod_{(\epsilon_1,\ldots,\epsilon_d) \in \{ \pm 1 \}^d}\left( \sqrt{u_0} + \sum_{k=1}^d \epsilon_k \sqrt{u_k} \right)\tag{*1}$$ We can expand $\Lambda(\cdots)$ as a homogeneous polynomial in $\sqrt{u_k}$ of degree $2^d$: $$\Lambda(u_0,\ldots,u_d) = \sum_{(e_0,\ldots,e_d)\in \mathbb{N}^n}A_{e_0,\ldots,e_d} \prod_{k=0}^d\sqrt{u_k}^{e_k} \tag{*2}$$ whose coefficients $A_{e_0,\ldots,e_d} \in \mathbb{Z}$ vanish unless $e_0 + \cdots + e_d = 2^d$. Consider the effect of flipping the sign of $\epsilon_\ell$ for some $\ell \ge 1$. In $(*1)$, this merely rearranges the factors and leaves the value of $\Lambda(\cdots)$ untouched. In $(*2)$, the coefficient $A_{e_0,\ldots,e_d}$ picks up a factor $(-1)^{e_\ell}$. Since the value of the product doesn't change, $A_{e_0,\ldots,e_d}$ vanishes unless $e_\ell$ is even. Since this is true for every $\ell \ge 1$ and $A_{e_0,\ldots,e_d}$ vanishes unless $e_0 + \cdots + e_d = 2^d$, $A_{e_0,\ldots,e_d}$ also vanishes unless $e_0$ is even. This means that in the expansion $(*2)$, every square root appears to an even power, so all radicals disappear. As a result, $\Lambda(\cdots)$ is a homogeneous polynomial in $u_0,\ldots, u_d$ of degree $2^{d-1}$: $$\Lambda(u_0,\ldots,u_d) = \sum_{(e_0,\ldots,e_d)\in \mathbb{N}^n}B_{e_0,\ldots,e_d} \prod_{k=0}^du_k^{e_k} \tag{*3}$$ whose coefficients $B_{e_0,\ldots,e_d} \in \mathbb{Z}$ vanish unless $e_0 + \cdots + e_d = 2^{d-1}$. If $\sqrt{u_0} \pm \sqrt{u_1} \pm \cdots \pm \sqrt{u_d} = 0$ for some choice of signs of the square roots, then by construction $u_0, \ldots, u_d$ satisfy the polynomial equation $\Lambda(u_0,\ldots,u_d) = 0$. For the problem at hand, take $n = 3$ and substitute $(u_0,u_1,u_2) = (ax + \alpha, bx+\beta, cx+\gamma)$.
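For $d = 2$ the collapse of $(*1)$ into the polynomial $(*3)$ can be checked numerically; a small sketch (the closed form used below, $u^2+v^2+w^2-2(uv+vw+uw)$, is the one given at the end of the answer):

```python
import itertools
import math

def lam_product(u0, u1, u2):
    # (*1) for d = 2: the product over all four sign choices
    return math.prod(math.sqrt(u0) + e1 * math.sqrt(u1) + e2 * math.sqrt(u2)
                     for e1, e2 in itertools.product((1, -1), repeat=2))

def lam_poly(u, v, w):
    # the degree 2^{d-1} = 2 polynomial the expansion collapses to
    return u*u + v*v + w*w - 2*(u*v + v*w + u*w)
```
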
The equation $\sqrt{a x + \alpha} \pm \sqrt{b x + \beta} \pm \sqrt{cx + \gamma} = 0$ leads to a polynomial equation, homogeneous of degree $2$ in $ax, bx, cx, \alpha, \beta, \gamma$: $$\Lambda(ax+\alpha, bx+\beta, cx+\gamma) = 0$$ Expanding this polynomial in powers of $x$, we obtain a quadratic equation in $x$: $$C(\cdots) x^2 + D(\cdots) x + E(\cdots) = 0$$ It is easy to see that the coefficient $C(\cdots)$ depends only on $(a,b,c)$: setting $x$ to $1$ and $\alpha, \beta, \gamma$ to $0$, we find $C(\cdots) = \Lambda(a,b,c)$. Setting $x$ to $0$, we find $E(\cdots) = \Lambda(\alpha,\beta,\gamma)$. This leads to an equation of the form $$\Lambda(a,b,c)x^2 + D(\cdots)x + \Lambda(\alpha,\beta,\gamma) = 0$$ Now we come to the mysterious condition $\sqrt{a} \pm \sqrt{b} \pm \sqrt{c} = 0$. When this condition is fulfilled, $\Lambda(a,b,c) = 0$, and the above equation simplifies to a linear equation in $x$: $$D(\cdots)x + \Lambda(\alpha,\beta,\gamma) = 0$$ We can determine the last unknown coefficient $D(\cdots)$ by setting $x$ to $1$. In the end, we have: when $\sqrt{a} \pm \sqrt{b} \pm \sqrt{c} = 0$, the equation $\sqrt{ax+\alpha} \pm \sqrt{bx+\beta} \pm \sqrt{cx+\gamma} = 0$ leads to a linear equation in $x$: $$\Lambda(a+\alpha,b+\beta,c+\gamma)x + \Lambda(\alpha,\beta,\gamma)(1-x) = 0$$ where $$\Lambda(u,v,w) = u^2 + v^2 + w^2 - 2(uv+vw+uw)$$ In a certain sense, one can argue this equation is simple because its dependence on $x$ is linear. Unlike the general case where $\Lambda(a,b,c) \ne 0$, the solution for $x$ no longer involves any radicals. Whether one agrees this is simple is up to one's own judgement. To be honest, I don't.
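The linear equation can be exercised on a concrete instance; a sketch with made-up numbers $(a,b,c) = (1,1,4)$ (so that $\sqrt{1}+\sqrt{1}-\sqrt{4}=0$) and $(\alpha,\beta,\gamma) = (2,3,5)$, checking that the root of the linear equation really kills one sign combination of the radicals:

```python
import itertools
import math

def lam(u, v, w):
    return u*u + v*v + w*w - 2*(u*v + v*w + u*w)

a, b, c = 1, 1, 4              # sqrt(1) + sqrt(1) - sqrt(4) = 0
alpha, beta, gamma = 2, 3, 5   # arbitrary choices

# Lambda(a+alpha, b+beta, c+gamma) x + Lambda(alpha, beta, gamma) (1 - x) = 0
A = lam(a + alpha, b + beta, c + gamma)
B = lam(alpha, beta, gamma)
x = -B / (A - B)               # solve the linear equation

vals = [math.sqrt(a*x + alpha), math.sqrt(b*x + beta), math.sqrt(c*x + gamma)]
residual = min(abs(vals[0] + s1 * vals[1] + s2 * vals[2])
               for s1, s2 in itertools.product((1, -1), repeat=2))
```
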
I want to do a cross product involving a vector of Pauli matrices $\vec \sigma = \left( {{\sigma _1},{\sigma _2},{\sigma _3}} \right)$; for example, $\vec \sigma \times \left( {1,2,3} \right)$. s := Table[PauliMatrix[i], {i, 1, 3}]; Cross[s, {1, 2, 3}] The code above does not work. The only way I can think of is to use the method which I have just learned from Mr. Wizard: ReleaseHold @ Block[{PauliMatrix}, Hold @@ {Cross[s, {1, 2, 3}]}] But I feel uncomfortable writing such long code for such a simple cross product. Is there a better way? Update J.M. gives the method Cross[Unevaluated /@ PauliMatrix[Range[3]], {a, b, c}] But it turns out that when one of a, b, c is zero, the code gives an error. A remedy is given by J.M. in his comment. But I am asking here: why does it give the right answer when a, b, c are all nonzero, while it fails with a zero component?
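For reference, the expected result can be checked outside Mathematica. Here is a numpy sketch that computes $(\vec\sigma \times \vec v)_i = \epsilon_{ijk}\,\sigma_j v_k$ component by component (this is just the mathematical definition, not the Mathematica mechanism in question):

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [s1, s2, s3]

v = [1, 2, 3]

# (sigma x v)_i = sigma_j v_k - sigma_k v_j with (i, j, k) cyclic
cross = [sigma[1] * v[2] - sigma[2] * v[1],
         sigma[2] * v[0] - sigma[0] * v[2],
         sigma[0] * v[1] - sigma[1] * v[0]]
```
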
> I'm really just getting ready to explain how they show up in logic: first in good old classical "subset logic", and then in the weird new "partition logic". I am pretty excited to read what's next! I wanted to share a few puzzles I ran into a while ago related to these topics. First, some definitions... **Definition.** A poset \\((A,\leq,\wedge,\vee)\\) with a join \\(\vee\\) and a meet \\(\wedge\\) is called a **lattice**. (Note: Lattices ***must*** obey the anti-symmetry law!) **Definition.** The **product poset** of two posets \\((A,\leq_A)\\) and \\((B,\leq_B)\\) is \\((A \times B, \leq_{A\times B})\\) where $$ (a_1,b_1) \leq_{A\times B} (a_2,b_2) \Longleftrightarrow a_1 \leq_A a_2 \text{ and } b_1 \leq_B b_2 $$ **Definition.** Let \\((A,\leq)\\) be a poset. The **diagonal function** \\(\Delta : A \to A\times A\\) is defined: $$ \Delta(a) := (a,a) $$ --------------------------- Let \\(A\\) be a lattice. **MD Puzzle 1**: Show that \\(\Delta\\) is monotonically increasing on \\(\leq_{A\times A}\\) **MD Puzzle 2**: Find the *right adjoint* \\(r : A\times A \to A\\) to \\(\Delta\\) such that: $$ \Delta(x) \leq_{A\times A} (y,z) \Longleftrightarrow x \leq_{A} r(y,z) $$ **MD Puzzle 3**: Find the *left adjoint* \\(l : A\times A \to A\\) to \\(\Delta\\) such that: $$ l(x,y) \leq_{A} z \Longleftrightarrow (x,y) \leq_{A\times A} \Delta(z) $$ **MD Puzzle 4**: Consider \\(\mathbb{N}\\) under the partial ordering \\(\cdot\ |\ \cdot\\), where $$ a\ |\ b \Longleftrightarrow a \text{ divides } b $$ What are the adjoints \\(l\\) and \\(r\\) in this case?
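If you want to sanity-check candidate answers to Puzzles 2 and 3 later (mild spoiler: on a totally ordered chain, the lattice meet and join degenerate to min and max), the adjunction laws can be brute-forced on a small chain; a sketch:

```python
from itertools import product

A = range(-3, 4)  # a small chain; on a total order, meet = min and join = max

def holds_right(r):
    # Delta(x) <= (y, z) in A x A  iff  x <= r(y, z)
    return all((x <= y and x <= z) == (x <= r(y, z))
               for x, y, z in product(A, repeat=3))

def holds_left(l):
    # l(x, y) <= z  iff  (x, y) <= Delta(z) in A x A
    return all((l(x, y) <= z) == (x <= z and y <= z)
               for x, y, z in product(A, repeat=3))
```
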
name: Leon Derczynski status: Assistant Professor of Computer Science ErhvervsForsker: 2019-2022, InnovationsFonden PhD 1,07M DKK. Deep Learning Generative Models for Content Structuring. Role: PI for ITU. NLPL: 2017-2020, NordForsk. www.nlpl.eu. Nordic Language Processing Laboratory. A cross-Nordic collaboration of high-performance computing resources and natural language processing resources, between universities and e-infrastructure organisations. Role: PI for ITU. COMRADES: 2016-2018, EC H2020 IA €2.0M. www.comrades-project.eu. Collective platform for community resilience & social innovation during crises. Role: Co-I for U.Sheffield. PHEME: 2014-2017, EC FP7 CP €4.3M. www.pheme.eu. Computing Veracity – the Fourth Challenge of Big Data. Pheme builds technology for assessing how true the claims made online are. This timely project was rated "excellent" at final evaluation. Role: co-author, scientific co-ordinator. uComp: 2013-2016, EC CHIST-ERA €1.25M. www.ucomp.eu. Embedded Human Computation for Knowledge Extraction and Evaluation. uComp built extensive resources for crowd-sourcing and social media processing, including an easy corpus construction tool integrated with GATE. Role: named researcher. TrendMiner: 2011-2014, EC FP7 CP €3.7M. Trendminer on CORDIS. Large-scale, Cross-lingual Trend Mining and Summarisation of Real-time Media Streams. 2018.08.2x: Program co-chair at COLING 2018, Santa Fe 2018.11.01: Dimensions of Variation in User-generated Text at the Workshop on Noisy User-generated Text (W-NUT), Brussels 2018.11.08: Fake News and Troll Detection at SLTC, Stockholm 2018.11.15: Endurance Care at Connecting for Connected Health, Copenhagen 2019.Q1: Guest lecturing ML & NLP at Innopolis University, Kazan, Russian Federation 2019.05.06: Opening keynote at Nordic Disinformation conference 2019.05.23: Automatic Detection of Fake News at PET, the Danish Security and Intelligence Service 2018. COLING 2018:
Proceedings of the 27th International Conference on Computational Linguistics. ISCRAM 2018: Helping Crisis Responders Find the Informative Needle in the Tweet Haystack. I'm always open to supervising motivated and capable students, for thesis or other project work. Open projects are described on a dedicated page: View open research projects. 2019.05 I wrote a series of data-driven articles on the Danish national election for the press (Danish): Mandag Morgen 2018.10 Dagbladenes Bureau interviewed me on using AI to hire people (Danish): Ansat af en maskine 2018.08 I was program co-chair for COLING 2018 (we had 1018 full paper submissions) 2018.07 Read an interview with me in "Alt om Data" (Danish): Nye sandheder om falske nyheder Brown clustering (generalised): \[ MI(C_i,C_j)= p(\left< C_i,C_j\right>)\ \log_2{\frac{p(\left< C_i,C_j\right>)}{p(\left< C_i,*\right>)\ p(\left<*,C_j\right>)}} \] \[ AMI(C) = \sum_{C_i,C_j\in C}{MI(C_i,C_j)} \] \[ C_{i\leftarrow j} = \left( C \setminus \left\{C_i,C_j\right\} \right) \cup \left\{C_i \cup C_j \right\} \] \[ 0 < a \leq ||C|| \] \[ i,j \in [1..a] \] \[ \DeclareMathOperator*{\argmax}{arg\,max} \hat{\pi}(C) = \argmax_{C_i,C_j\in C,i\neq j}{\ AMI(C) - AMI(C_{i\leftarrow j})}. \] NLP group at the IT University of Copenhagen: three faculty; the group co-ordinates the Data Science program. For details and research specialisations, visit the webpage: nlp.itu.dk ITU is an agile, non-traditional, expanding university. I think of it as the startup of unis. Projects are easy to take on and novel ideas have plenty of institutional support. It achieves one of the highest rates of external funding per faculty member in the country, and the highest ratio of female to male student applicants (29% of Bachelor in Software Development course applicants were female in 2018).
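The generalised Brown clustering formulas above can be sketched directly in code. This is a toy sketch: the bigram counts and the frozenset cluster representation are made up here, and the merge criterion implements the displayed argmax literally.

```python
from collections import Counter
from itertools import combinations
from math import log2

# Toy bigram counts over words; a cluster C_i is a frozenset of words.
bigrams = Counter({("a", "b"): 2, ("b", "c"): 1, ("a", "c"): 1})
total = sum(bigrams.values())

def p_pair(ci, cj):
    # p(<C_i, C_j>): probability a bigram starts in C_i and ends in C_j
    return sum(n for (x, y), n in bigrams.items() if x in ci and y in cj) / total

def p_left(ci):
    return sum(n for (x, _), n in bigrams.items() if x in ci) / total

def p_right(cj):
    return sum(n for (_, y), n in bigrams.items() if y in cj) / total

def mi(ci, cj):
    p = p_pair(ci, cj)
    return p * log2(p / (p_left(ci) * p_right(cj))) if p > 0 else 0.0

def ami(C):
    return sum(mi(ci, cj) for ci in C for cj in C)

def merged(C, ci, cj):
    # C_{i <- j}: replace C_i and C_j by their union
    return (C - {ci, cj}) | {ci | cj}

def best_merge(C):
    # the displayed criterion: argmax over pairs of AMI(C) - AMI(C_{i <- j})
    return max(combinations(C, 2), key=lambda pr: ami(C) - ami(merged(C, *pr)))

C = {frozenset({"a"}), frozenset({"b"}), frozenset({"c"})}
ci, cj = best_merge(C)
```
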
The building is situated just between the national broadcaster, DR, and the KUA campus, and has a professional, light and spacious feel. Natural Language Processing / Text Mining. Fake news, veracity & stance: how do we determine the truth of claims on the web? What behaviours exist around false news? How do we know that the data your system is processing is genuine and accurate? Noisy text processing, including social media text and clinical text (medics' notoriously bad handwriting does, sometimes, map also to keyboard skills). Clinical text mining, and pre-clinical public health, continuing previous associations with Mayo Clinic, NHS SLaM, and Harvard Children's hospital. Information extraction: who did what to whom, and where and when? This is explained in the text but harder to automatically extract and reason about. Danish NLP: improve the places that you live in. keywords: natural language processing, machine learning, veracity, social media, semantics, artificial intelligence, dansk
Quasilinear nonhomogeneous Schrödinger equation with critical exponential growth in $\mathbb{R}^n$ DOI: http://dx.doi.org/10.12775/TMNA.2015.029 Abstract In this paper, using variational methods, we establish the existence and multiplicity of weak solutions for nonhomogeneous quasilinear elliptic equations of the form $$-\Delta_n u + a(x)|u|^{n-2}u = b(x)|u|^{n-2}u + g(x)f(u) + \varepsilon h \quad \mbox{in } \mathbb{R}^n,$$ where $n \geq 2$, $\Delta_n u \equiv \operatorname{div}(|\nabla u|^{n-2}\nabla u)$ is the $n$-Laplacian and $\varepsilon$ is a positive parameter. Here the function $g(x)$ may be unbounded in $x$ and the nonlinearity $f(s)$ has critical growth in the sense of the Trudinger-Moser inequality; more precisely, $f(s)$ behaves like $e^{\alpha_0 |s|^{n/(n-1)}}$ as $s\to+\infty$ for some $\alpha_0>0$. Under suitable assumptions and based on a Trudinger-Moser type inequality, our results are proved using the Ekeland variational principle, minimization and the mountain-pass theorem. Keywords: Variational methods; Trudinger-Moser inequality; critical points; critical exponents; $n$-Laplacian
Getting Started¶ MathJax allows you to include mathematics in your web pages, using LaTeX, MathML, or AsciiMath notation, and the mathematics will be processed using JavaScript to produce HTML, SVG, or MathML equations for viewing in any modern browser. There are two ways to access MathJax: the easiest way is to use the copy of MathJax available from a distributed network service such as cdnjs.com, but you can also download and install a copy of MathJax on your own server, or use it locally on your hard disk (with no need for network access). All three of these are described below, with links to more detailed explanations. This page gives the quickest and easiest ways to get MathJax up and running on your web site, but you may want to read the details in order to customize the setup for your pages. Using a Content Delivery Network (CDN)¶ The easiest way to use MathJax is to link directly to a public installation available through a Content Distribution Network (CDN). When you use a CDN, there is no need to install MathJax yourself, and you can begin using MathJax right away. The CDN will automatically arrange for your readers to download MathJax files from a fast, nearby server. To use MathJax from a CDN, you need to do two things: Link to MathJax in the web pages that are to include mathematics. Put mathematics into your web pages so that MathJax can display it. To jump start using cdnjs, you accomplish the first step by putting <script type="text/javascript" async src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.3.0/MathJax.js?config=TeX-MML-AM_CHTML"></script> into the <head> block of your document. (It can also go in the <body> if necessary, but the head is to be preferred.) This will load MathJax from the distributed server, configure it to recognize mathematics in TeX, MathML, and AsciiMath notation, and ask it to generate its output using HTML with CSS to display the mathematics.
Warning The TeX-MML-AM_CHTML configuration is one of the most general (and thus largest) combined configuration files. We list it here because it will quickly get you started using MathJax. It is probably not the most efficient configuration for your purposes, and other combined configuration files are available. You can also provide additional configuration parameters to tailor one of the combined configurations to your needs, or use our development tools to generate your own combined configuration file. More details about the configuration process can be found in the Loading and Configuring MathJax instructions. Note To see how to enter mathematics in your web pages, see Putting mathematics in a web page below. Installing Your Own Copy of MathJax¶ We recommend using a CDN service if you can, but you can also install MathJax on your own server, or locally on your own hard disk. To do so you will need to do the following things: Obtain a copy of MathJax and make it available on your server or hard disk. Configure MathJax to suit the needs of your site. Link MathJax into the web pages that are to include mathematics. Put mathematics into your web pages so that MathJax can display it. Obtaining and Installing MathJax¶ The easiest way to set up MathJax is to obtain the v2.1 archive from the MathJax download page (you should obtain a file named something like mathjax-MathJax-v2.1-X-XXXXXXXX.zip where the X's are random-looking numbers and letters). This archive includes both the MathJax code and the MathJax webfonts, so it is the only file you need. Note that this is different from v1.0 and earlier releases, which had the fonts separate from the rest of the code. Unpack the archive and place the resulting MathJax folder onto your web server at a convenient location where you can include it into your web pages. For example, making MathJax a top-level directory on your server would be one natural way to do this.
That would let you refer to the main MathJax file via the URL /MathJax/MathJax.js from within any page on your server. Note: While this is the easiest way to set up MathJax initially, there is a better way to do it if you want to be able to keep your copy of MathJax up-to-date. That uses the Git version control system, and is described in the Installing MathJax document. If you prefer using Subversion, you can also use that to get a copy of MathJax (see Installing MathJax via SVN). Once you have MathJax set up on your server, you can test it using the files in the MathJax/test directory. If you are putting MathJax on a server, load them in your browser using their web addresses rather than opening them locally (i.e., use an http:// URL rather than a file:// URL). When you view the index.html file, after a few moments you should see a message indicating that MathJax appears to be working. If not, check that the files have been transferred to the server completely and that the permissions allow the server to access the files and folders that are part of the MathJax directory. (Be sure to verify the MathJax folder's permissions as well.) Check the server log files for any errors that pertain to the MathJax installation; this may help locate problems in the permissions or locations of files. Configuring your copy of MathJax¶ When you include MathJax into your web pages as described below, it will load the file config/TeX-AMS-MML_HTMLorMML.js (i.e., the file named TeX-AMS-MML_HTMLorMML.js in the config folder of the main MathJax folder). This file preloads all the most commonly-used components of MathJax, allowing it to process mathematics that is in the TeX or LaTeX format, or in MathML notation. It will produce output in MathML form if the user's browser supports that sufficiently, and will use HTML-with-CSS to render the mathematics otherwise.
There are a number of other prebuilt configuration files that you can choose from as well, or you could use the config/default.js file and customize the settings yourself. The combined configuration files are described more fully in Common Configurations, and the configuration options are described in Configuration Options. Note: The configuration process changed between MathJax v1.0 and v1.1, so if you have existing pages that use MathJax v1.0, you may need to modify the tag that loads MathJax so that it conforms with the new configuration process. See Installing and Configuring MathJax for more details. Putting mathematics in a web page¶ To put mathematics in your web page, you can use TeX and LaTeX notation, MathML notation, AsciiMath notation, or a combination of all three within the same page; the MathJax configuration tells MathJax which you want to use, and how you plan to indicate the mathematics when you are using TeX notation. The configuration file used in the examples above tells MathJax to look for both TeX and MathML notation within your pages. Other configuration files tell MathJax to use AsciiMath input. These three formats are described in more detail below. TeX and LaTeX input¶ Mathematics that is written in TeX or LaTeX format is indicated using math delimiters that surround the mathematics, telling MathJax what part of your page represents mathematics and what is normal text. There are two types of equations: ones that occur within a paragraph (in-line mathematics), and larger equations that appear separated from the rest of the text on lines by themselves (displayed mathematics). The default math delimiters are $$...$$ and \[...\] for displayed mathematics, and \(...\) for in-line mathematics. Note in particular that the $...$ in-line delimiters are not used by default. That is because dollar signs appear too often in non-mathematical settings, which could cause some text to be treated as mathematics unexpectedly.
For example, with single-dollar delimiters, "... the cost is $2.50 for the first one, and $2.00 for each additional one ..." would cause the phrase "2.50 for the first one, and" to be treated as mathematics since it falls between dollar signs. For this reason, if you want to use single-dollars for in-line math mode, you must enable that explicitly in your configuration:

<script type="text/x-mathjax-config">
MathJax.Hub.Config({
  tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}
});
</script>
<script type="text/javascript" src="path-to-mathjax/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>

See the config/default.js file, or the tex2jax configuration options page, for additional configuration parameters that you can specify for the tex2jax preprocessor, which is the component of MathJax that identifies TeX notation within the page. See the TeX and LaTeX page for more on MathJax's support for TeX, and in particular how to deal with single dollar signs in your text when you have enabled single dollar-sign delimiters. Here is a complete sample page containing TeX mathematics (also available in the test/sample-tex.html file):

<!DOCTYPE html>
<html>
<head>
<title>MathJax TeX Test Page</title>
<script type="text/x-mathjax-config">
  MathJax.Hub.Config({tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}});
</script>
<script type="text/javascript" src="https://example.com/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
</head>
<body>
When $a \ne 0$, there are two solutions to \(ax^2 + bx + c = 0\) and they are
$$x = {-b \pm \sqrt{b^2-4ac} \over 2a}.$$
</body>
</html>

Since the TeX notation is part of the text of the page, there are some caveats that you must keep in mind when you enter your mathematics. In particular, you need to be careful about the use of less-than signs, since those are what the browser uses to indicate the start of a tag in HTML. Putting a space on both sides of the less-than sign should be sufficient, but see TeX and LaTeX support for details.
If you are using MathJax within a blog, wiki, or other content management system, the markup language used by that system may interfere with the TeX notation used by MathJax. For example, if your blog uses Markdown notation for authoring your pages, the underscores used by TeX to indicate subscripts may be confused with the use of underscores by Markdown to indicate italics, and the two uses may prevent your mathematics from being displayed. See TeX and LaTeX support for some suggestions about how to deal with the problem. There are a number of extensions for the TeX input processor that are loaded by the TeX-AMS-MML_HTMLorMML configuration. These include: TeX/AMSmath.js, which defines the AMS math environments and macros, TeX/AMSsymbols.js, which defines the macros for the symbols in the msam10 and msbm10 fonts, TeX/noErrors.js, which shows the original TeX code rather than an error message when there is a problem processing the TeX, and TeX/noUndefined.js, which prevents undefined macros from producing an error message, and instead shows the macro name in red. Other extensions may be loaded automatically when needed. See TeX and LaTeX support for details on the other TeX extensions that are available. MathML input¶ For mathematics written in MathML notation, you mark your mathematics using standard <math> tags, where <math display="block"> represents displayed mathematics and <math display="inline"> or just <math> represents in-line mathematics. Note that this will work in HTML files, not just XHTML files (MathJax works with both), and that the web page need not be served with any special MIME-type. Also note that, unless you are using XHTML rather than HTML, you should not include a namespace prefix for your <math> tags; for example, you should not use <m:math> except in a file where you have tied the m namespace to the MathML DTD by adding the xmlns:m="http://www.w3.org/1998/Math/MathML" attribute to your file's <html> tag.
Although it is not required, it is recommended that you include the xmlns="http://www.w3.org/1998/Math/MathML" attribute on all <math> tags in your document (and this is preferred to the use of a namespace prefix like m: above, since those are deprecated in HTML5) in order to make your MathML work in the widest range of situations. Here is a complete sample page containing MathML mathematics (also available in the test/sample-mml.html file):

<!DOCTYPE html>
<html>
<head>
<title>MathJax MathML Test Page</title>
<script type="text/javascript" src="https://example.com/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
</head>
<body>
<p>When
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mi>a</mi><mo>≠</mo><mn>0</mn>
</math>,
there are two solutions to
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mi>a</mi><msup><mi>x</mi><mn>2</mn></msup>
  <mo>+</mo> <mi>b</mi><mi>x</mi>
  <mo>+</mo> <mi>c</mi> <mo>=</mo> <mn>0</mn>
</math>
and they are
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
  <mi>x</mi> <mo>=</mo>
  <mrow>
    <mfrac>
      <mrow>
        <mo>−</mo> <mi>b</mi> <mo>±</mo>
        <msqrt>
          <msup><mi>b</mi><mn>2</mn></msup>
          <mo>−</mo>
          <mn>4</mn><mi>a</mi><mi>c</mi>
        </msqrt>
      </mrow>
      <mrow> <mn>2</mn><mi>a</mi> </mrow>
    </mfrac>
  </mrow>
  <mtext>.</mtext>
</math>
</p>
</body>
</html>

When entering MathML notation in an HTML page (rather than an XHTML page), you should not use self-closing tags, but should use explicit open and close tags for all your math elements. For example, you should use <mspace width="5pt"></mspace> rather than <mspace width="5pt" /> in an HTML document. If you use the self-closing form, some browsers will not build the math tree properly, and MathJax will receive a damaged math structure, which will not be rendered as the original notation would have been. Typically, this will cause parts of your expression to not be displayed.
Unfortunately, there is nothing MathJax can do about that, since the browser has incorrectly interpreted the tags long before MathJax has a chance to work with them. The component of MathJax that recognizes MathML notation within the page is called the mml2jax extension, and it has only a few configuration options; see the config/default.js file or the mml2jax configuration options page for more details. See the MathML page for more on MathJax's MathML support. AsciiMath input¶ MathJax v2.0 introduced a new input format: AsciiMath notation. For mathematics written in this form, you mark your mathematical expressions by surrounding them in "back-ticks", i.e., `...`. Here is a complete sample page containing AsciiMath notation (also available in the test/sample-asciimath.html file):

<!DOCTYPE html>
<html>
<head>
<title>MathJax AsciiMath Test Page</title>
<script type="text/javascript" src="https://example.com/MathJax.js?config=AM_HTMLorMML-full"></script>
</head>
<body>
<p>When `a != 0`, there are two solutions to `ax^2 + bx + c = 0` and
they are</p>
<p style="text-align:center">
  `x = (-b +- sqrt(b^2-4ac))/(2a) .`
</p>
</body>
</html>

The component of MathJax that recognizes AsciiMath notation within the page is called the asciimath2jax extension, and it has only a few configuration options; see the config/default.js file or the asciimath2jax configuration options page for more details. See the AsciiMath support page for more on MathJax's AsciiMath support. Where to go from here?¶ If you have followed the instructions above, you should now have MathJax installed and configured on your web server, and you should be able to use it to write web pages that include mathematics. At this point, you can start making pages that contain mathematical content! You could also read more about the details of how to customize MathJax. If you are trying to use MathJax in blog or wiki software or in some other content-management system, you might want to read about using MathJax in popular platforms.
If you are working on dynamic pages that include mathematics, you might want to read about the MathJax Application Programming Interface (its API), so you know how to include mathematics in your interactive pages.
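As a sketch of what such a dynamic page might look like (the element id and the injected TeX below are made up for illustration; MathJax.Hub.Queue with a "Typeset" call is the v2 API mechanism for re-rendering dynamically inserted mathematics):

```html
<div id="output"></div>
<script type="text/javascript">
  // Insert new TeX into the page, then queue a typeset call so that
  // MathJax re-processes only the updated element.
  var out = document.getElementById("output");
  out.innerHTML = "\\(E = mc^2\\)";
  MathJax.Hub.Queue(["Typeset", MathJax.Hub, out]);
</script>
```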
I've been trying to wrap my head around the current I measured through the collector branch vs. the expected calculation for collector current, and believe I have a knowledge gap. The setup of the circuit is shown above, with the LED being a blue LED with a forward voltage given as 0.3 V, which seems implausible to me. To find the collector current \$i_C\$ I measured the voltage drop across the 330 ohm resistor as 1.9855 V and calculated \$i_C=\frac{V_{RL}}{R_{L}}=1.9855/327\approx 6\,mA\$. From my prior calculations, I chose a base resistor that would give 20 mA for my collector current with \$\beta=100\$ as a rough estimate, as our beta is unknown but in the range of 100-300. I'm not sure if my formula for collector current is incorrect, as I'm expecting quite a bit more current. I tried another method, which I'm not sure is right, where I did \$i_C=\frac{V_s-V_f{_{LED}}-V_{C}}{327}=\frac{5-0.3-0.00985}{327}\approx 14\,mA\$ which was quite a bit higher than my other calculation; however, when I put the forward voltage of the LED at 3 V and put it back into the equation, I get about 6 mA. So I'm not quite sure if the given forward voltage of the diode is incorrect, as a 0.3 V forward voltage seems extremely low for an LED, or if my calculations are wrong. I haven't learnt much about diodes or transistors besides a basic overview before this experiment, so I'm quite confused about where to go from here.
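To make the two calculations concrete, here is a sketch using the numbers from the question (the 327 ohm resistance and roughly 0.01 V collector voltage are the measured values quoted above):

```python
Vs = 5.0        # supply voltage, volts
R = 327.0       # measured load resistance, ohms
Vc = 0.00985    # measured collector voltage (transistor near saturation)

def ic(vf_led):
    # KVL around the collector branch: Vs - Vf(LED) - i*R - Vc = 0
    return (Vs - vf_led - Vc) / R

i_low = ic(3.0)   # with a plausible blue-LED forward voltage: about 6 mA
i_high = ic(0.3)  # with the quoted 0.3 V figure: about 14 mA
```

The fact that the 3 V assumption reproduces the measured ~6 mA is exactly the inconsistency the question is pointing at.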
Let $C([0,\infty), \mathbb{R})$ be the canonical space of continuous functions. Assume $(\Omega, \mathcal{F}, \{\mathcal{F}_{t}\}_{t\geq 0})$ is a measurable space with a filtration. Let $P, Q$ be two probability measures on $(\Omega, \mathcal{F})$. Assume $X_{t}$ and $Y_{t}$ are two stochastic processes adapted to $\{\mathcal{F}_{t}\}_{t\geq 0}$. If for any Borel set $A$ and any $t$, $$ P(X_{t}\in A)= Q(Y_{t}\in A), $$ can we conclude that the law on $C([0,\infty), \mathbb{R})$ induced by $(X_{t}, P)$ is the same as that of $(Y_{t}, Q)$? Any references are very appreciated. No. Try $(X_t)$ standard Brownian motion and $Y_t=\sqrt{t}\cdot Y_1$ for every $t$, where $Y_1$ is standard normal. If only the marginals match, it is not true; here is a nice counterexample, the fake Brownian motion. Best regards
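The counterexample can be checked numerically (a sketch with made-up sample size and seed): both processes have $N(0,t)$ marginals, but for $Y_t=\sqrt{t}\,Y_1$ the increment $Y_2-Y_1$ is perfectly correlated with $Y_1$, unlike Brownian motion, so the path laws differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Brownian motion sampled at t = 1 and t = 2: independent N(0, 1) increments
b1 = rng.standard_normal(n)
b2 = b1 + rng.standard_normal(n)

# Fake version: Y_t = sqrt(t) * Z with a single standard normal Z per path
z = rng.standard_normal(n)
y1, y2 = z, np.sqrt(2.0) * z

corr_bm = np.corrcoef(b1, b2 - b1)[0, 1]    # near 0: independent increments
corr_fake = np.corrcoef(y1, y2 - y1)[0, 1]  # 1 up to rounding: same Z
```
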
What is a Prime Number? An integer, say $ p $ [ $ \ne {0} $ & $ \ne { \pm{1}} $ ], is said to be a prime integer iff its only factors (or divisors) are $ \pm{1} $ & $ \pm{p} $. A few easy examples are: $ \pm{2}, \pm{3}, \pm{5}, \pm{7}, \pm{11}, \pm{13} $, etc. This list goes on to infinity, and mathematicians keep hunting for a prime larger than the largest known, because prime numbers follow no obvious pattern (no one can simply guess the next prime after a given one). As of now the biggest known prime is $ M\text{-}47 $, the 47th known Mersenne prime. It has the enormous value $ 2^{43112609} -1 $. It is very hard to write on paper because it consists of $ 12978189 $ digits. »M-47 was discovered in 2008. Other Large Prime Numbers • The second largest known prime is $ M\text{-}46 = 2^{42643801}-1 $ with $ 12837064 $ digits. »Discovered in 2009. • The third largest is $ M\text{-}45 $. »Value: $ 2^{37156667}-1 $ »Digits: $ 11185272 $ »Discovered: 2008. • The fourth largest is $ M\text{-}44 = 2^{32582657}-1 $. »Digits: $ 9808358 $ »Discovered: 2006. • The fifth largest is $ M\text{-}43 = 2^{30402457}-1 $. »Digits: $ 9152052 $ »Discovered: 2005. • The sixth largest is $ M\text{-}42 $. »Value: $ 2^{25964951}-1 $ »Digits: $ 7816230 $ »Discovered: 2005. • The seventh largest is $ M\text{-}41 $. »Value: $ 2^{24036583}-1 $ »Digits: $ 7235733 $ »Discovered: 2004. • The eighth largest is $ M\text{-}40 $. »Value: $ 2^{20996011}-1 $ »Digits: $ 6320430 $ »Discovered: 2003. • The ninth largest is $ M\text{-}39 $. »Value: $ 2^{13466917}-1 $ »Digits: $ 4053946 $ »Discovered: 2001. • The tenth largest known prime is $ 19249 \times 2^{13018586}+1 $ (note that this one is not a Mersenne prime, despite the M-38 label it is sometimes given). »Digits: $ 3918990 $ »Discovered: 2007. A Note for Newbies $ 2^n $ means the product of $ n $ copies of $ 2 $. For example: $ 2^5 $ means $ 2 \times 2 \times 2 \times 2 \times 2 = 32 $. Prime Numbers and the Year 2011 If we take the 11 consecutive prime numbers from 157 to 211 and sum them up, we get 2011.
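The closing claim is easy to verify with a quick sketch:

```python
def is_prime(n):
    # trial division is plenty for numbers this small
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# the 11 consecutive primes from 157 to 211
primes = [p for p in range(157, 212) if is_prime(p)]
```
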
Feel free to ask questions, send feedback and even point out mistakes.
There are proofs that treat the cases of real and non-real $\chi$ on an equal footing. One proof is in Serre's Course in Arithmetic, which the answers by Pete and David are basically about. That method uses the (hidden) fact that the zeta-function of the $m$-th cyclotomic field has a simple pole at $s = 1$, just like the Riemann zeta-function. Here is another proof, which focuses only on the $L$-function of the character $\chi$ under discussion, the $L$-function of the conjugate character, and the Riemann zeta-function. Consider the product$$H(s) = \zeta(s)^2L(s,\chi)L(s,\overline{\chi}).$$This function is analytic for $\sigma > 0$, with the possible exception of a pole at $s = 1$. (As usual I write $s = \sigma + it$.) Assume $L(1,\chi) = 0$. Then also $L(1,\overline{\chi}) = 0$. So in the product defining $H(s)$, the double pole of $\zeta(s)^2$ at $s = 1$ is cancelled, and $H(s)$ is therefore analytic throughout the half-plane $\sigma > 0$. For $\sigma > 1$, we have the exponential representation $$H(s) = \exp\left(\sum_{p, k} \frac{2 + \chi(p^k) + \overline{\chi}(p^k)}{kp^{ks}}\right),$$where the sum is over $k \geq 1$ and primes $p$. If $p$ does not divide $m$, then we write $\chi(p) = e^{i\theta_p}$ and find $$\frac{2 + \chi(p^k) + \overline{\chi}(p^k)}{k} = \frac{2(1 + \cos(k\theta_p))}{k} \geq 0.$$ If $p$ divides $m$, then this term is $2/k > 0$. Either way, inside that exponential is a Dirichlet series with nonnegative coefficients, so when we exponentiate and rearrange terms (on the half-plane of absolute convergence, namely $\sigma > 1$), we see that $H(s)$ is a Dirichlet series with nonnegative coefficients. A lemma of Landau on Dirichlet series with nonnegative coefficients then assures us that the Dirichlet series representation of $H(s)$ is valid on any half-plane to which $H(s)$ can be analytically continued. To get a contradiction at this point, here are several methods.
[Edit: The answer by J.H.S. contains the slickest argument I have seen, due to Bateman, so let me put it here. The idea is to look at the coefficient of $1/p^{2s}$ in the Dirichlet series for $H(s)$. By multiplying out the $p$-part of the Euler product, the coefficient of $1/p^s$ is $2 + \chi(p) + \overline{\chi}(p)$, which is nonnegative, but the coefficient of $1/p^{2s}$ is $(\chi(p) + \overline{\chi}(p) + 1)^2 + 1$, which is not only nonnegative but in fact greater than or equal to 1. Therefore if $H(s)$ has an analytic continuation along the real line out to the number $\sigma$, then for real $s \geq \sigma$ we have $H(s) \geq \sum_{p} 1/p^{2s}$. The hypothesis that $L(1,\chi) = 0$ makes $H(s)$ analytic for all complex numbers with positive real part, so we can take $s = 1/2$ and get $H(1/2) \geq \sum_{p} 1/p$, which is absurd since that series over the primes diverges. QED!] If you are willing to accept that $L(s,\chi)$ (and therefore $L(s,\overline{\chi})$) has an analytic continuation to the whole plane, or at least out to the point $s = -2$, then $H(s)$ extends to $s = -2$. The Dirichlet series representation of $H(s)$ is convergent at $s = -2$ by our analytic continuation hypothesis, and it shows $H(-2) > 1$; alternatively, the exponential representation implies that at least $H(-2) \not= 0$. But $\zeta(-2) = 0$, so $H(-2) = 0$. Either way, we have a contradiction. There is a similar argument, pointed out to me by Adrian Barbu, that does not require analytic continuation of $L(s,\chi)$ beyond the half-plane $\sigma > 0$. If you are willing to accept that $\zeta(s)$ has zeros in the critical strip $0 < \sigma < 1$ (a region in which the Dirichlet series and exponential representations of $H(s)$ are both valid, since $H(s)$ is analytic on $\sigma > 0$), then you can evaluate the exponential representation of $H(s)$ at such a zero to get a contradiction: the exponential of a convergent series is nonzero, while $H$ vanishes there.
Of course the amount of analysis that lies behind this is more substantial than what is used to continue $L(s,\chi)$ out to $s = -2$. For a third argument, consider $H(s)$ as $s \rightarrow 0^{+}$. We need to accept that $H$ is bounded as $s \rightarrow 0^{+}$. (It's even holomorphic there, but we don't quite need that.) For real $s > 0$ and a fixed prime $p_0$ (not dividing $m$, say), we can bound $H(s)$ from below by the sum of the $p_0$-power terms in its Dirichlet series, since all the coefficients are nonnegative. The sum of these terms is exactly the $p_0$-Euler factor of $H(s)$, so we have the lower bound $$H(s) > \frac{1}{(1 - p_0^{-s})^2(1 - \chi(p_0)p_0^{-s})(1 - \overline{\chi}(p_0)p_0^{-s})} = \frac{1}{(1 - p_0^{-s})^2(1 - (\chi(p_0)+ \overline{\chi}(p_0))p_{0}^{-s} + p_0^{-2s})}$$for real $s > 0$. The right side tends to $\infty$ as $s \rightarrow 0^{+}$, because $(1 - p_0^{-s})^2 \rightarrow 0$. We have a contradiction. QED These three arguments at some point use knowledge beyond the half-plane $\sigma > 0$ or a nontrivial zero of the zeta-function. Granting any of those lets you see easily that $L(s,\chi)$ can't vanish at $s = 1$, but that "granting" may seem overly technical. If you want a proof for the real and complex cases uniformly which does not go outside the region $\sigma > 0$, use the method in the answer by Pete or David [edit: or use the method I edited in as the first one in this answer].
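The nonnegativity of the coefficients in the exponential representation is also easy to check numerically. A sketch for one concrete character of our choosing (an order-4 character mod 5, sending the generator 2 to $i$); this is an illustration, not part of the argument:

```python
# A Dirichlet character mod 5 of order 4: 2 generates (Z/5Z)^*, so set chi(2^j) = i^j.
chi = {pow(2, j, 5): 1j ** j for j in range(4)}

# The coefficient of 1/p^{ks} inside the exponential is (2 + chi(p^k) + conj(chi(p^k)))/k,
# which should equal 2(1 + cos(k*theta_p))/k >= 0 for every p not dividing 5.
for p in [2, 3, 7, 11, 13, 17]:
    for k in range(1, 8):
        val = chi[pow(p, k, 5)]          # chi is multiplicative: chi(p^k) = chi(p)^k
        coeff = (2 + val + val.conjugate()) / k
        assert abs(coeff.imag) < 1e-12 and coeff.real >= 0
print("all log-coefficients nonnegative")
```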
Astrophysics > Cosmology and Nongalactic Astrophysics Title: Distinguishing between $Λ$CDM and modified gravity with future observations of cosmic growth rate (Submitted on 30 Dec 2015) Abstract: The probability of distinguishing between the $\Lambda$CDM model and modified gravity is studied using future observations of the growth rate of cosmic structure (the Euclid redshift survey). Adopting the extended DGP model, the Kinetic Gravity Braiding model, and the Galileon model as modified gravity, we compare the cosmic growth rate predicted by these models to mock observational data. The growth rate $f\sigma_8$ in the original DGP model is suppressed compared with the $\Lambda$CDM case, for the same value of the current matter density parameter $\Omega_{m,0}$, because of the suppression of the effective gravitational constant. In the case of the kinetic gravity braiding model and the Galileon model, the growth rate $f\sigma_8$ is enhanced compared with the $\Lambda$CDM case, for the same value of $\Omega_{m,0}$, because of the enhancement of the effective gravitational constant. For future observational data of the cosmic growth rate (Euclid), the compatible values of $\Omega_{m,0}$ differ between models; furthermore, $\Omega_{m,0}$ can be stringently constrained. Thus, we find the $\Lambda$CDM model is distinguishable from modified gravity by combining the Euclid growth rate data with other observations. Submission history From: Koichi Hirano [view email] [v1] Wed, 30 Dec 2015 19:37:50 GMT (2911kb)
Mathematics CUI: LaTeX Resources This page contains the LaTeX templates of CIIT Mathematics: the MSc Project and MS Thesis templates. Templates Download a zip file given below and extract it by right-clicking on the file. MSc Project Template: cui-math-project.zip MS Thesis Template: cui-ms-thesis.zip Installing LaTeX Three separate pieces of software need to be installed to run LaTeX on Windows: MiKTeX: available at http://www.miktex.org/download A PDF viewer (we recommend Sumatra PDF; it works best with LaTeX) A LaTeX editor (TeXstudio is our choice, but you can use any other) After installing the above software, configure Sumatra PDF and TeXstudio for forward and inverse search. Tips & Tricks To activate forward and inverse search, go to TeXstudio, open Options > Configure TeXstudio > Commands > External PDF viewer, and replace the command with the following code: For a 32-bit operating system "C:\PROGRA~1\SumatraPDF\SumatraPDF.exe" -reuse-instance -forward-search "?c:am.tex" @ -inverse-search """""C:\PROGRA~1\TeXstudio\texstudio.exe"""" """%%f""" -line %%l" "?am.pdf" For a 64-bit operating system (if 64-bit SumatraPDF is installed) "C:\PROGRA~1\SumatraPDF\SumatraPDF.exe" -reuse-instance -forward-search "?c:am.tex" @ -inverse-search """""C:\PROGRA~2\TeXstudio\texstudio.exe"""" """%%f""" -line %%l" "?am.pdf" For a 64-bit operating system (if 32-bit SumatraPDF is installed) "C:\PROGRA~2\SumatraPDF\SumatraPDF.exe" -reuse-instance -forward-search "?c:am.tex" @ -inverse-search """""C:\PROGRA~2\TeXstudio\texstudio.exe"""" """%%f""" -line %%l" "?am.pdf" In the code above, we assumed that Windows is installed on drive C. If it is installed on a different drive, please substitute that drive letter for C. We also assumed that the programs are installed in their default locations. In the Build settings, select "PDF Viewer: External PDF Viewer". LaTeX Codes To write an inline equation Use a $\$ $ (dollar) sign to write an equation or symbols inside a statement or sentence, e.g.
Let $I$ be an interval in $\mathbb{R}$ and $f:I\to \mathbb{R}$ be a function To write an equation Use double dollar signs $(\$\$)$ to write a displayed equation, e.g., $$ \sin^2 \theta + \cos^2 \theta =1 $$ If you wish the equation number to appear automatically with this equation, then write it in the following way: \begin{equation} \sin^2 \theta + \cos^2 \theta =1 \end{equation} For more tips and tricks related to TeXstudio, please visit: http://www.texstudio.org/#features cui Last modified: 14 months ago by Administrator
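Putting the snippets above together, a minimal compilable document might look like this (a sketch; the amsmath/amssymb packages and the label name are our additions, needed for `\eqref` and `\mathbb` respectively):

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}  % \eqref and \mathbb need these packages
\begin{document}

Inline math: let $I$ be an interval in $\mathbb{R}$ and
$f:I\to\mathbb{R}$ be a function.

A numbered display equation:
\begin{equation}
  \sin^2\theta + \cos^2\theta = 1
  \label{eq:pyth}
\end{equation}

Equation~\eqref{eq:pyth} can now be referenced by its automatic number.

\end{document}
```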
As @DeanB said, the assignment shows exactly how to do what you are asking. It seems you are just stuck on some sort of mental block; it happens to everyone! For the low-pass circuit: You are given the following schematic: And an equation that describes its transfer function; they even solved for the cutoff frequency for you, which comes out as follows:$$ \omega_c = \frac{1}{\sqrt{R_1R_2C_1C_2}}$$From here it is just some simple math to figure out what values you need for the low-pass circuit elements. For the peak-detection circuit: You are given the following basic circuit: This circuit is a little less intuitive than the low-pass filter, and I can see how it could be a bit confusing. Conceptually, since you have already put your signal through a low-pass filter, the peak detector is looking for quick rises in the input signal. These quick rises correspond to bass hits. If you have a bigger input signal, your output capacitor C1 will charge more fully and your light will stay on longer. Your design factor here is essentially just the decay time of the peak-detection circuit, which is determined by: $$\tau_{td} = R_1C_1$$ So if your $\tau_{td}$ is too small the light won't stay on very long, and if it is too big you will miss bass hits. This is where the engineering comes in, and it is up to you, as the circuit designer, to play around with the values and create a usable circuit. Best of luck, and let me know if you have any more questions.
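To get a feel for the numbers, both design formulas can be evaluated directly. A sketch with hypothetical component values (the bass-range target and the part values are our assumptions, not from the assignment):

```python
import math

def cutoff_hz(r1, r2, c1, c2):
    """f_c = omega_c / (2*pi) = 1 / (2*pi*sqrt(R1*R2*C1*C2)) for the low-pass above."""
    return 1.0 / (2.0 * math.pi * math.sqrt(r1 * r2 * c1 * c2))

def decay_time(r1, c1):
    """Peak-detector decay time tau_td = R1*C1."""
    return r1 * c1

r1 = r2 = 10e3     # 10 kOhm (assumed values)
c1 = c2 = 100e-9   # 100 nF
print(cutoff_hz(r1, r2, c1, c2))   # ~159 Hz, in the bass range
print(decay_time(100e3, 10e-6))    # ~1 s hold time for the peak detector
```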
The key to producing a nice tone is a mix of two facts: first, offering the air stream a somewhat geometrically regular hole, where a stationary wave with a certain frequency can be born and other possible frequencies are filtered out; and second, a low enough stream velocity, so that no turbulence can happen and the flow is ordered (laminar). The whistle is produced as an intermediate solution between having a hole that is small enough for the precise stationary wavelengths to happen, and big enough for the air not to flow at much too high a speed. That is why people with their fingers in their mouth can whistle louder: by putting in the fingers, they create a richer and bigger aperture than a simple, small hole, so air can flow slowly enough; but, at the same time, the fingers combined with the lips keep the involved geometrical distances conveniently small, so that the stationary waves can happen. When you blow too strongly or through an irregularly shaped hole, air flows in a disordered way, vibrating with thousands of different modes, and that is why you hear the typical "hiss" sound: a hiss is a mix of lots of frequencies (technically called white noise). Physics is not physics without at least a little maths, so now follow me. You surely know Newton's second law: $ma = F$ First, we take the force per unit volume, so that we use the density instead of the mass. At the same time, we write the acceleration as the derivative of the velocity: $\rho \frac{dv}{dt} = f$ because, as you know, the acceleration is the amount of change of the velocity per unit time. When we deal with fluid mechanics, that amount of change is due to two terms: one deals with the change in time of the velocity at a fixed point of space, and the other is due to the change of the velocity from one point to another.
That is written in this way: $\rho (\frac{\partial \mathbf{v}}{\partial t} + \mathbf{v} \cdot \nabla \mathbf{v}) = f$ (For those of you who see it for the first time, don't worry about the triangle; it is a kind of sophisticated derivative. Just follow what the terms mean.) The $f$, as you know, stands for forces (due to weight, springs, etc.). In fluid dynamics, we like to give a special role to two kinds of forces, so we make them appear separately in the equation: $\rho (\frac{\partial \mathbf{v}}{\partial t} + \mathbf{v} \cdot \nabla \mathbf{v}) = -\nabla p + \frac{1}{\mathrm{Re}} \nabla^2 \mathbf{v} +f$ The term with the $p$ is due to the change in pressure from one point to another. The other one, with the $\mathrm{Re}$, is due to the viscosity, i.e. a measure of how strongly the fluid resists changes in shape (honey has a higher viscosity than water). This is the Navier-Stokes equation for an incompressible flow (air is compressible, but in the range of speeds involved in a whistle it is very well approximated by this equation; for the purists: the Navier-Stokes equation is experimentally found to hold, up to a good degree of approximation, in turbulent flows too). $\mathrm{Re}$ stands for the "Reynolds number". It is a somewhat heuristic quantity that is proportional to the velocity but inversely proportional to the viscosity. It also depends on the geometrical dimensions of the problem, so it is not straightforward to derive its value. The important fact when you whistle is that, if you blow strongly, and thus increase the velocity of the air, you get a big Reynolds number, and then the forces due to viscosity become less important in the equation (because the viscous term has $\mathrm{Re}$ in the denominator). The viscous forces are the ones that most help maintain the flow geometrically ordered (laminar). When their contribution is not dominant, the flow becomes disordered (turbulent).
A turbulent flow has no mechanical properties that are stable enough in time for a stationary wave to establish itself, so your nice whistle vanishes and is replaced by a randomly fluctuating mix of thousands of frequencies: the white noise of the hiss...
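To put rough numbers on this, here is a sketch of the Reynolds-number estimate. The aperture size, the blowing speeds, and the choice of air's kinematic viscosity (about $1.5\times 10^{-5}\,\mathrm{m^2/s}$ near room temperature) are all illustrative assumptions:

```python
def reynolds(velocity_m_s, length_m, nu=1.5e-5):
    """Re = v*L/nu, with nu the kinematic viscosity (default: air near 20 C)."""
    return velocity_m_s * length_m / nu

# Gentle whistle: ~5 m/s through a ~5 mm lip aperture
print(reynolds(5.0, 0.005))   # ~1700: the ordered (laminar) regime
# Blowing hard: ~30 m/s through the same aperture
print(reynolds(30.0, 0.005))  # ~10000: turbulence, i.e. the hiss
```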
Publications (abstract excerpts): • We use persistent homology to build a quantitative understanding of large complex systems that are driven far from equilibrium; in particular, we analyze image time series of flow field patterns from… • We present a generalization of the induced matching theorem of Bauer and Lesnick (in: Proceedings of the thirtieth annual symposium on computational geometry, 2014) and use it to prove a… • The Euler calculus—an integral calculus based on Euler characteristic as a valuation on constructible functions—is shown to be an incisive tool for answering questions about injectivity and… • For positive semidefinite $n\times n$ matrices $A$ and $B$, the singular value inequality $(2+t)s_{j}(A^{r}B^{2-r}+A^{2-r}B^{r})\leq 2s_{j}(A^{2}+tAB+B^{2})$ is shown to hold for $r=\frac{1}{2}, 1,…$ • We explore the chaotic dynamics of Rayleigh-Bénard convection using large-scale, parallel numerical simulations for experimentally accessible conditions. We quantify the connections between the… • We present a generalization of the induced matching theorem of [1] and use it to prove a generalization of the algebraic stability theorem for R-indexed pointwise finite-dimensional persistence… • We probe the effectiveness of using topological defects to characterize the leading Lyapunov vector for a high-dimensional chaotic convective flow field. This is accomplished using large-scale…
Let $(T_t)$ be a strongly continuous semigroup of positive operators on $C(K)$, where $K$ is a compact space. Assume also that $T_t1 =1 $ for every $t\geq 0$. (This is also called a Feller semigroup.) Since $K$ is compact, we know that there exists a probability measure $\mu$ on $K$ satisfying $\mu T^*_t = \mu $ for every $t\geq 0$ (i.e. $\mu$ is invariant). My question is: to show that $\mu$ is the unique invariant probability distribution, is it sufficient to show that $(T_t)$ is irreducible? Recall that a semigroup is by definition irreducible if the resolvent $R_\lambda=(\lambda-L)^{-1}$ (where $L$ is the generator of $(T_t)$) maps, for sufficiently large $\lambda$, nonnegative nonzero functions to strictly positive functions. I thought this should be true by applying some version of the Krein-Rutman theorem, but did not find a suitable reference. The closest I found is Proposition 3.5 on p. 185 of this book http://www.springer.com/mathematics/algebra/book/978-3-540-16454-8 , from which, if I understand well, I can just conclude that $\text{dim (ker } L) = 1$, but not $\text{dim (ker } L^*) = 1$.
The explicit formula for cup product on group cohomology is as simple as can be. For simplicity let's consider integer coefficients $H^*(G;\mathbb{Z})$, although this works for any coefficients as long as they're untwisted. Let's define group cohomology using inhomogeneous cochains; thus we take the abelian groups $C^n(G;\mathbb{Z}) :=$ functions from $G^n$ to $\mathbb{Z}$, endowed with a differential $d: C^n \to C^{n+1}$, and then $H^n(G;\mathbb{Z})$ is the usual cohomology $\ker d_n/\operatorname{im} d_{n-1}$. Anyway, cup product is a map from $H^k(G) \otimes H^m(G)$ to $H^{k+m}(G)$, and it comes from a map $C^k(G) \otimes C^m(G)$ to $C^{k+m}(G)$. Namely, given two cochains $f: G^k \to \mathbb{Z}$ and $g: G^m \to \mathbb{Z}$, define $$ f \wedge g: G^{k+m} \to \mathbb{Z} $$by $$ f\wedge g(x_1,...x_{k+m}) = f(x_1,...x_k)g(x_{k+1},...x_{k+m}) $$ You can check by hand that the differential interacts with this operation by $$ d(f \wedge g) = df \wedge g + (-1)^k f \wedge dg $$ Thus this "wedge product" of cochains descends to a product on group cohomology, and this is exactly cup product. This is also how cup product is defined for de Rham cohomology; differential forms have a natural wedge product which satisfies $d(f \wedge g) = df \wedge g + (-1)^k f \wedge dg$, and so this induces the cup product on $H^*(M;R)$. Topologically, cup product is the composition of $$ H^k(Y) \otimes H^m(Y) \to H^{k+m}(Y \times Y) \to H^{k+m}(Y) $$ where the first map is the Künneth map (just pullback by the two projections $Y \times Y \to Y$), and the second map is restriction to the diagonal. Applying this perspective to group cohomology, we would first define $f \times g : (G \times G)^{k+m} \to \mathbb{Z}$ by $$ f \times g ((x_1,y_1),...(x_{k+m},y_{k+m})) = f(x_1,...x_k)g(y_{k+1},...,y_{k+m}). $$ Upon restriction to the diagonal $G < G \times G$, $f \times g$ restricts to $f \wedge g$ above.
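The "check by hand" step can also be done mechanically for a small group. A sketch for $G=\mathbb{Z}/2$ with trivial $\mathbb{Z}$ coefficients (the function names and the random test cochains are ours), verifying $d(f \wedge g) = df \wedge g + (-1)^k f \wedge dg$ pointwise:

```python
from itertools import product
import random

G = [0, 1]                       # Z/2, written additively
mul = lambda a, b: (a + b) % 2   # the group operation

def d(f, n):
    """Differential of an inhomogeneous n-cochain f: G^n -> Z (trivial action)."""
    def df(*x):                  # x has n + 1 entries
        total = f(*x[1:])
        for i in range(1, n + 1):
            merged = x[:i - 1] + (mul(x[i - 1], x[i]),) + x[i + 1:]
            total += (-1) ** i * f(*merged)
        total += (-1) ** (n + 1) * f(*x[:n])
        return total
    return df

def wedge(f, k, g, m):
    """(f ^ g)(x_1..x_{k+m}) = f(x_1..x_k) * g(x_{k+1}..x_{k+m})."""
    return lambda *x: f(*x[:k]) * g(*x[k:])

# Check the Leibniz rule for two random 1-cochains (k = m = 1)
random.seed(0)
tf = {a: random.randint(-3, 3) for a in G}
tg = {a: random.randint(-3, 3) for a in G}
f = lambda a: tf[a]
g = lambda a: tg[a]
lhs = d(wedge(f, 1, g, 1), 2)
rhs = lambda *x: (wedge(d(f, 1), 2, g, 1)(*x)
                  - wedge(f, 1, d(g, 1), 2)(*x))   # (-1)^k with k = 1
ok = all(lhs(*x) == rhs(*x) for x in product(G, repeat=3))
print(ok)  # True
```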
User:Nikita2 Latest revision as of 21:58, 17 June 2018 I am Nikita Evseev, Novosibirsk, Russia. My research interests are in Analysis and Sobolev spaces. Pages of which I am contributing and watching Analytic function | Cauchy criterion | Cauchy integral | Condition number | Continuous function | D'Alembert criterion (convergence of series) | Dedekind criterion (convergence of series) | Derivative | Dini theorem | Dirichlet-function | Ermakov convergence criterion | Extension of an operator | Fourier transform | Friedrichs inequality | Fubini theorem | Function | Functional | Generalized derivative | Generalized function | Geometric progression | Hahn-Banach theorem | Harmonic series | Hilbert transform | Hölder inequality | Lebesgue integral | Lebesgue measure | Leibniz criterion | Leibniz series | Lipschitz Function | Lipschitz condition | Luzin-N-property | Newton-Leibniz formula | Newton potential | Operator | Poincaré inequality | Pseudo-metric | Raabe criterion | Riemann integral | Series | Sobolev space | Vitali theorem | TeXing I'm keen on improving the appearance of EoM articles by rewriting formulas and math symbols in TeX. Now there are 3040 (out of 15,890) articles with the Category:TeX done tag.
To render $\sum_{n=1}^{\infty}n!z^n$, just type \sum_{n=1}^{\infty}n!z^n. You may look at Category:TeX wanted. How to Cite This Entry: Nikita2. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Nikita2&oldid=38876
Condensed Matter > Mesoscale and Nanoscale Physics Title: The interplay of electron-photon and cavity-environment coupling on the electron transport through a quantum dot system (Submitted on 15 Aug 2019) Abstract: We theoretically investigate the characteristics of the electron transport through a two-dimensional quantum dot system in the $xy$-plane coupled to a photon cavity and a photon reservoir, the environment. The electron-photon coupling, $g_{\gamma}$, and the cavity-reservoir coupling, $\kappa$, are tuned to study the system in the weak, $g_{\gamma} \leq \kappa$, and the strong coupling regime, $g_{\gamma} > \kappa$. An enhancement of the current is seen with increasing both $g_{\gamma}$ and $\kappa$ in the weak coupling regime for both $x$- and $y$-polarization of the photon field. This is a direct consequence of the Purcell effect. The current enhancement is due to the contribution of the photon replica states to the electron transport, in which intraband transitions play an important role. The properties of the electron transport are drastically changed in the strong coupling regime with an $x$-polarized photon field, in which the current is suppressed with increasing $g_{\gamma}$ but still increases with $\kappa$. This behavior of the current is related to the population of purely electronic states and the depopulation of photon replica states. Submission history From: Nzar Rauf Abdullah [view email] [v1] Thu, 15 Aug 2019 19:08:13 GMT (38kb)
I'd like to use as definition of "exact (forward) linear prediction" that given a finite number of the first consecutive samples of a signal, all of the following samples can be predicted with zero residual error by a linear predictor with the same finite number of coefficients. Your thinking is correct. Consider the linear discrete signal and a linear predictor: $$\begin{cases}x[k] = a_0 + a_1 k&\text{(signal)}\\x[k] = c_1 x[k - 1] + c_2 x[k - 2]&\text{(exact predictor)}\end{cases}\\\begin{align}a_0 + a_1 k &= c_1 \big(a_0 + a_1 (k - 1)\big) + c_2 \big(a_0 + a_1 (k - 2)\big)\\\Rightarrow a_0 + a_1 k &= c_1 (a_0 + a_1 k - a_1) + c_2 (a_0 + a_1 k - 2 a_1)\\\Rightarrow a_0 + a_1 k &= c_1 a_0 + c_1 a_1 k - c_1 a_1 + c_2 a_0 + c_2 a_1 k - 2 c_2 a_1\end{align}\\\Rightarrow\begin{cases}a_0 = c_1 a_0 - c_1 a_1 + c_2 a_0 - 2 c_2 a_1\\a_1 = c_1 a_1 + c_2 a_1\end{cases}\\\Rightarrow\begin{cases}c_1 = 2\\c_2 = -1\end{cases}\\\Rightarrow x[k] = 2x[k - 1] - x[k - 2]\quad\text{(exact predictor)}$$ Such a signal can be exactly predicted from the previous two samples. This is the degree 1 predictor from Laurent's answer. A sinusoid of a known frequency $\omega$ and unknown phase and amplitude can be predicted from two previous samples. 
The recursive part of the filter in Goertzel algorithm does this, which can be confirmed by the following, using trigonometric identities $\cos(\alpha)\cos(\beta) = \frac{\cos(\alpha - \beta) + \cos(\alpha + \beta)}{2}$ and $\cos(-\alpha) =\cos(\alpha):$ $$\begin{cases}x[k] = A\cos(\omega k + \omega_0)&\text{(signal)}\\x[k] = 2\cos(\omega)x[k-1] - x[k-2]&\text{(exact predictor)}\end{cases}\\A\cos(\omega k + \omega_0) = 2\cos(\omega) A \cos\big(\omega (k - 1) + \omega_0\big) - A \cos\big(\omega (k - 2) + \omega_0\big)\\\Rightarrow \cos(\omega k + \omega_0) = 2\cos(\omega) \cos\big(\omega (k - 1) + \omega_0\big) - \cos\big(\omega (k - 2) + \omega_0\big)\\\Rightarrow \cos(\omega k + \omega_0) = \cos\big(\omega - \omega (k - 1) - \omega_0\big) + \cos\big(\omega + \omega (k - 1) + \omega_0\big) - \cos\big(\omega (k - 2) + \omega_0\big)\\\Rightarrow \cos(\omega k + \omega_0) = \cos\big(\omega - \omega k + \omega - \omega_0\big) + \cos\big(\omega + \omega k - \omega + \omega_0\big) - \cos\big(\omega k - 2\omega + \omega_0\big)\\\Rightarrow \cos(\omega k + \omega_0) = \cos\big(- \omega k + 2 \omega - \omega_0\big) + \cos\big(\omega k + \omega_0\big) - \cos\big(\omega k - 2\omega + \omega_0\big)\\\Rightarrow \cos(\omega k + \omega_0) - \cos\big(\omega k - 2 \omega + \omega_0\big) = \cos\big(\omega k + \omega_0\big) - \cos\big(\omega k - 2\omega + \omega_0\big)\\$$ This exact predictor was also presented in Laurent's answer, recognizing that the coefficient for the previous sample is the same in both: $2 - 4\sin^2(\frac{\omega}{2}) = 2\cos(\omega),$ where $\omega = \frac{2\pi h}{T}.$ Fat32's answer mentions more general all-pole signal models that can also be exactly predicted. A sinusoid with an exponentially decaying envelope is a basic example. 
Wikipedia's table of common Z-transform pairs hints at that signal–predictor pair, among others: $$\begin{cases}x[k] = a^k A\cos(\omega k + \omega_0)&\text{(signal)}\\x[k] = 2a\cos(\omega)x[k - 1] - a^2 x[k - 2]&\text{(exact predictor)}\end{cases}$$ MATLAB's lpc is not exact because it assumes that the data continues beyond its start and end as zero-valued, in order to use an autocorrelation-based method. For your own arbitrary data you can use the following Octave script to find the coefficients that minimize the sum of squared residual error of predicting those data points that are preceded by enough ($N$ or more) data points to enable their prediction: L = 10; #Data length N = 3; #Number of prediction coefficients k = 1:L; #Index variable x = 123*k.^2 + 456*k + 789 #Test data (2nd degree polynomial) m = x(toeplitz(N:L-1, flip(1:N))); #What we predict with v = x(N+1:L)'; #What we want to predict c = m\v #Least squares solve prediction coefficients The above example gives as output the degree 2 polynomial prediction coefficients, the same as in Laurent's answer: c = 3.0000 -3.0000 1.0000 The sum of squared prediction errors is just numerical error from rounding: >> sumsq(v - m*c) ans = 1.4128e-21 With $N$ as small as possible for the given data to make the prediction error virtually zero, the choice of cost function (here least squares) does not matter. With larger $N$, there would be an extra degree of freedom in the solution that might enable the solver to minimize numerical error by the cost function, allowing its choice to affect the result. The tail of a pole-zero signal model can also be exactly predicted.
Such a signal can be written in form: $$x[k] = \sum_{n=0}^M b_n \delta[k - n] - \sum_{n=1}^N a_n x[k - n],\quad\text{where }\delta[k] = \begin{cases}1&\text{if }k = 0\\0&\text{otherwise}\end{cases}$$ If $k > M,$ then $\delta[k - n]$ is always zero and each sample value will only depend on the previous $N$ sample values: $$x[k] = \sum_{n=1}^N -a_n x[k - n],\quad\text{if }k > M$$ with coefficients $-a_n$. For general signals, linear prediction alone is not enough, and the error produced by it must be corrected for in order to exactly produce the desired signal. This is called linear predictive coding. Correction is not added after predicting the complete signal, but to each prediction of a sample before doing the next prediction. This way the prediction on each sample is done based on the exact past signal and not on erroneous past predictions: $$\begin{array}{rl}\hat x[k] = \sum_{n=1}^N c_n x[k - n]&\quad\text{prediction}\\e[k] = x[k] - \hat x[k]&\quad\text{prediction error}\\x[k] = \hat x[k] + e[k] = \sum_{n=1}^N c_n x[k - n] + e[k]&\quad\text{corrected prediction}\end{array}$$ The literature seems to prefer this sign convention in the error. The script in this answer minimizes $\sum_k e^2[k]$ for other than the first $N$ samples. Let's consider again signals for which exact prediction is possible. Either a history of $N$ samples of $x[k]$ must be properly initialized, or alternatively, using the same coefficients, linear predictive coding can be used with non-zero corrections for the first $N$ samples, typically with the history of $x[k]$ set to zero. After this "warm-up", the rest of $x[k]$ will be exactly predicted without auxiliary information or correction.
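Putting the sinusoid case into code, here is a sketch (in Python/NumPy rather than Octave) of the "warm-up then exact prediction" idea; the frequency, amplitude, and phase are arbitrary test values of ours:

```python
import numpy as np

w = 0.3                          # known frequency (arbitrary test value)
k = np.arange(50)
x = 1.7 * np.cos(w * k + 0.5)    # the signal to predict

xhat = np.zeros_like(x)
xhat[:2] = x[:2]                 # "warm-up": correct the first N = 2 samples
for n in range(2, len(x)):       # then predict from past predictions only
    xhat[n] = 2.0 * np.cos(w) * xhat[n - 1] - xhat[n - 2]

print(np.max(np.abs(xhat - x)))  # residual is pure rounding error
```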
Liouville equation The Liouville equation $\def\phi{\varphi}\partial_t\partial_\tau\phi(t,\tau) = e^{\phi(t,\tau)}$ or$$\phi_{t\tau} = e^\phi\tag{a1}$$is a non-linear partial differential equation (cf. Differential equation, partial) that can be linearized and subsequently solved. Namely, it can be transformed into the linear wave equation $$u_{t\tau} = 0\tag{a2}$$by either of the following two differential substitutions (see [Li], formulas (4) and (2)): $$\def\ln{\mathrm{ln\;}}\phi = \ln\big(\frac{2u_t u_\tau}{u^2}\big),\quad \phi = \ln\big(\frac{2u_t u_\tau}{\cos^2 u}\big).\tag{a3}$$In other words, the formulas (a3) provide the general solution to the Liouville equation, in terms of the well-known general solution $u=f(t)+g(\tau)$ of the wave equation (a2). More generally, one may consider the equation $$\phi_{t\tau} = F(\phi)\tag{a4}$$ with an arbitrary function $F$. The Liouville equation (a1) is invariant under the infinite group of point transformations $$\bar t = \alpha(t),\ \bar\tau = \beta(\tau), \ \bar\phi = \phi - \ln \alpha'(t) - \ln \beta'(\tau)\tag{a5}$$ with arbitrary invertible differentiable functions $\alpha(t)$ and $\beta(\tau)$. The infinitesimal generator of this group is: $$X=\xi(t)\frac{\partial}{\partial t} + \eta(\tau)\frac{\partial}{\partial\tau} - (\xi'(t)+\eta'(\tau))\frac{\partial}{\partial\phi},$$ where $\xi(t)$, $\eta(\tau)$ are arbitrary functions and $\xi'(t)$, $\eta'(\tau)$ are their first derivatives. It is shown in [Li2] that the equation (a4), and in particular the Liouville equation, does not admit non-trivial (i.e. non-point) Lie tangent transformations. In addition to the transformations (a3), it is known (see, e.g., [Ib]) that the Liouville equation is related to the wave equation (a2) by the following Bäcklund transformation: $$\phi_t - u_t+ a e^{(\phi+u)/2} = 0,\quad \phi_\tau + u_\tau + \frac{2}{a} e^{(\phi-u)/2} = 0.$$ By letting $x=t+\tau$, $y=i(t-\tau)$ in (a1), (a2) and (a3), where $i = \sqrt{-1}$, one can transform the elliptic Liouville equation $\phi_{xx}+\phi_{yy} = e^\phi$ into the Laplace equation $u_{xx}+u_{yy} = 0$.
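As a quick check of the first substitution in (a3): with the general solution $u = f(t)+g(\tau)$ of (a2), one computes

```latex
\varphi = \ln 2 + \ln f'(t) + \ln g'(\tau) - 2\ln\bigl(f(t)+g(\tau)\bigr),
\qquad
\varphi_t = \frac{f''(t)}{f'(t)} - \frac{2f'(t)}{f(t)+g(\tau)},
\qquad
\varphi_{t\tau} = \frac{2f'(t)\,g'(\tau)}{\bigl(f(t)+g(\tau)\bigr)^2} = e^{\varphi},
```

so (a1) is indeed satisfied.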
References [Ib] N.H. Ibragimov, "Transformation groups applied to mathematical physics", Reidel (1985) (In Russian) MR0785566 Zbl 0558.53040 [Ib2] "CRC Handbook of Lie group analysis of differential equations" N.H. Ibragimov (ed.), 1, CRC (1994) pp. Chapt. 12.3 MR1278257 Zbl 0864.35001 [Li] J. Liouville, "Sur l'équation aux différences partielles $\frac{d^2\log\lambda}{du\; dv} \pm \frac{\lambda}{2\alpha^2} = 0\;$" J. Math. Pures Appl., 8 (1853) pp. 71–72 [Li2] S. Lie, "Discussion der Differentialgleichung $\frac{d^2z}{dx\; dy} = F(z)$" Lie Arch. VI, 6 (1881) pp. 112–124 (Reprinted as: S. Lie: Gesammelte Abhandlungen, Vol. 3, pp. 469–478) Zbl 13.0297.01 How to Cite This Entry: Liouville equation. Encyclopedia of Mathematics.URL: http://www.encyclopediaofmath.org/index.php?title=Liouville_equation&oldid=21581
The $GF$ in $GF(p^n)$ is not a function — it just stands for "Galois field (of $p^n$ elements)". As for what a Galois field is, it's a finite set of things (which we might represent e.g. with the numbers from $0$ to $p^n-1$), with some mathematical operations (specifically, addition and multiplication, and their inverses) defined on them that let us calculate with these things as if they were ordinary numbers, but so that the results of the calculations always stay inside the finite set. Specifically, we require the operations defined on the elements of this finite set to satisfy the field axioms, which include the usual associativity, commutativity and distributivity rules you probably learned in high school algebra class. In particular, we also require every element $a$ to have an additive inverse $-a$ and (if $a \ne 0$) a multiplicative inverse $1/a$, such that $a + (-a) = 0$ and $a \times (1/a) = 1$. Thus, as long as you just stick to algebraic manipulations using these rules, the Galois fields all look exactly like the field of real numbers you're familiar with. To learn how to do arithmetic (i.e. actual calculations with actual numbers) in Galois fields, it's probably easiest (particularly if you're already familiar with modular arithmetic) to start with the prime Galois fields $GF(p)$. Arithmetic in these fields is simply ordinary addition and multiplication of integers modulo some prime $p$. 
For example, in $GF(3)$, there are three numbers ($0$, $1$ and $2$) with the following addition and multiplication rules (and their commutative variants): $$\begin{aligned}0 + 0 &= 0 & 1 + 1 &= 2 & 0 \times 0 &= 0 & 1 \times 1 &= 1 \\0 + 1 &= 1 & 1 + 2 &= 0 & 0 \times 1 &= 0 & 1 \times 2 &= 2 \\0 + 2 &= 2 & 2 + 2 &= 1 & 0 \times 2 &= 0 & 2 \times 2 &= 1 \\\end{aligned}$$ The only unusual rules here are $1 + 2 = 0$, $2 + 2 = 1$ and $2 \times 2 = 1$; those are the cases where, in normal arithmetic, the result would've been equal to or greater than $3$, so it's "wrapped around" by subtracting $3$. From these rules, we can also determine the inverses. It turns out that $-0 = 0$, $-1 = 2$ and $-2 = 1$ (since $1 + 2 = 0$) and $1/1 = 1$ and $1/2 = 2$ (since $2 \times 2 = 1$), so they indeed do exist and belong to the field. (I'll leave verifying that the inverses are all unique, and that these rules indeed also satisfy all the other field axioms, as an exercise.) Why do we require $p$ to be prime, then? Well, it turns out that if a number $m$ is not prime, then some numbers won't have multiplicative inverses modulo $m$: for example, there is no integer $a$ such that $2 \times a = 1 \pmod 4$. Thus, the integers modulo $m$ don't actually form a field (but only a ring) unless $m$ is prime. However, if $m$ is a prime power — i.e. a number of the form $m = p^n$, for some prime $p$ and some positive integer $n$ — then we can still save things by changing the rules on how addition and multiplication work. This is where all that stuff about the vectors and polynomials comes in. However, what you should keep in mind is that they don't really represent any added complexity, just alternative ways of looking at things. 
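The $GF(3)$ tables above can be reproduced mechanically. Here is a minimal sketch of $GF(p)$ arithmetic in Python (the function names are mine, not standard):

```python
p = 3  # any prime

def gf_add(a, b):
    return (a + b) % p

def gf_mul(a, b):
    return (a * b) % p

def gf_neg(a):
    return (-a) % p

def gf_inv(a):
    # Fermat's little theorem: a^(p-1) = 1, so a^(p-2) is 1/a; requires a != 0
    return pow(a, p - 2, p)
```

For instance `gf_add(1, 2)` gives `0` and `gf_inv(2)` gives `2`, matching the "wrapped around" rules above.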
Fundamentally, we're still dealing with a set of $p^n$ numbers and some arithmetic operations defined on them — it just turns out that, for example, the multiplication rule we'll need to use is pretty easy to describe if we identify each number in the field with a polynomial, at least if you still remember the rules for adding and multiplying (and dividing) polynomials that you may have also learned in high school. So, let's start with vectors and addition. Given a number $a \in \{0, \dotsc, p^n-1\}$, we can represent it in a natural way with $n$ base-$p$ digits $a_0, \dotsc, a_{n-1}$, such that $$a = a_0 + a_1 p + a_2 p^2 + \dotsb + a_{n-1} p^{n-1}.$$ This is exactly how you'd represent a binary number modulo $2^n$ as a string of $n$ bits. We could also call the string a vector, since that's basically what a vector is: a finite-length sequence of numbers. Now, when you normally add numbers together digit by digit, you need to keep track of carries. On the other hand, when you add two vectors together, it's simpler: you just add up the corresponding numbers in each vector separately. Now, it just turns out that the addition rule we'll need to use, to make the field axioms work in a field with $p^n$ elements, is exactly this "carryless addition". For example, in $GF(4)$ (which we often write as $GF(2^2)$ to emphasize that $4 = 2^2$ is indeed a prime power) the addition rules look like this: $$\begin{aligned}0 + 0 &= 0 & 0 + 2 &= 2 & 1 + 1 &= 0 & 1 + 3 &= 2 & 2 + 3 &= 1 \\0 + 1 &= 1 & 0 + 3 &= 3 & 1 + 2 &= 3 & 2 + 2 &= 0 & 3 + 3 &= 0 \\\end{aligned}$$ They look a lot more logical if you write them out in binary: $$\begin{aligned}00 + 00 &= 00 & 00 + 10 &= 10 & 01 + 01 &= 00 & 01 + 11 &= 10 & 10 + 11 &= 01 \\00 + 01 &= 01 & 00 + 11 &= 11 & 01 + 10 &= 11 & 10 + 10 &= 00 & 11 + 11 &= 00 \\\end{aligned}$$ Here you can see that we're just adding up the digits modulo $2$ and ignoring any carries. 
You might also recognize this "addition" rule as the same operation as bitwise XOR. This is not unique to $GF(4)$; the elements of $GF(2^n)$ for any $n$ can be represented as $n$-bit bitstrings, and their addition as bitwise XOR. So, now we know how to add numbers in $GF(p^n)$. What about multiplication, then? Well, this is where the polynomials come in. You see, one way to describe the multiplication rule is to imagine that the digits $a_0, \dotsc, a_{n-1}$ of the number $a$ are the coefficients of a polynomial $$a[x] = a_0 + a_1 x + a_2 x^2 + \dotsb + a_{n-1} x^{n-1}$$ with the unknown $x$. (Here, the variable $x$ is purely a formal placeholder; we'll never assign it a value, so you'll never have to worry about "what's $x$?". It's just there so that we can use the high school algebra rules for manipulating polynomials in $x$.) Then, to multiply two numbers $a$ and $b$, we just take their respective polynomials $a[x]$ and $b[x]$, multiply them together using the high school algebra rules (doing all the internal arithmetic modulo $p$), and take the coefficients of the result. This is all pretty simple: remember that polynomial multiplication is also pretty straightforward, being just like normal digit-by-digit multiplication except that, again, there are no carries. But wait! Won't the multiplication sometimes give me terms of order $x^n$ or higher? If I include them in the digit string, wouldn't that result in a number larger than $p^n-1$? Well, yes. That's why there's another step: after the multiplication, we need to reduce the result modulo a suitable polynomial (specifically, an irreducible monic polynomial of order $n$). That is to say, we take the result of the multiplication and divide it by this reducing polynomial (which we can again do with high school polynomial long division), again remembering to do all arithmetic on the coefficients modulo $p$, and keep the remainder (which will be of order $x^{n-1}$ or less). 
Of course, in practice it's usually more efficient to do the reduction during the multiplication, so that you don't need to store lengthy intermediate results. OK, so where does that reducing polynomial come from? Well, it turns out that we have some latitude in choosing it, since there are usually several polynomials that will work. Each of them will give a different multiplication rule, although all the fields so constructed (and, more generally, all finite fields with $p^n$ elements) are isomorphic, in the sense that, for any two Galois fields $A$ and $B$ of $p^n$ elements, there's an invertible function $f: A \to B$ mapping one field to the other so that $f(a + b) = f(a) + f(b)$ and $f(a \times_A b) = f(a) \times_B f(b)$, where $\times_A$ and $\times_B$ denote the multiplication operators of the two fields. Thus, when we only care about the general algebraic properties of the field, and not about the specific representation of the numbers, it's common to speak of "the" Galois field of order $p^n$, even though it might have multiple representations. If you're asked to calculate something in a specific Galois field, the reducing polynomial will normally be given for you. For example, if you're writing an AES implementation, it uses $GF(2^8)$ with the reducing polynomial $x^8 + x^4 + x^3 + x + 1$ (where, since $p=2$, all coefficients are either $0$ or $1$). If you get to choose your own representation, and thus your own reducing polynomial, you should generally try to pick something that makes the calculations easy, subject to the irreducibility constraints stated above. Often this means picking something with as few non-zero coefficients as possible, and with all those coefficients occurring on low-order terms (except for the $x^n$ term, which must have a coefficient of $1$ for the polynomial to be monic and of order $n$, of course). 
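To make the reduction step concrete, here is a sketch (names mine) of multiplication in AES's $GF(2^8)$ with the reducing polynomial $x^8 + x^4 + x^3 + x + 1$ (bit pattern `0x11B`), interleaving the reduction with the multiplication as just described:

```python
def gf256_mul(a: int, b: int) -> int:
    """Multiply in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1 (0x11B)."""
    r = 0
    while b:
        if b & 1:        # lowest bit of b set: add (XOR) the current multiple of a
            r ^= a
        a <<= 1          # multiply a by x
        if a & 0x100:    # a degree-8 term appeared: reduce modulo 0x11B
            a ^= 0x11B
        b >>= 1
    return r
```

For example, `gf256_mul(0x53, 0xCA)` returns `0x01`, the standard illustration that `0x53` and `0xCA` are multiplicative inverses in this field.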
I could go into more detail on how to implement multiplication in binary Galois fields $GF(2^n)$, since what I've written above is still on a rather abstract level, and since — especially to someone coming from a programming background — the theory is often more complicated than the actual code (or at least looks that way). However, to be honest, I'm more familiar with the theoretical side of things myself, and in any case this answer is already more than long enough. Wikipedia does have a nice article on finite field arithmetic that you could start with, though. Oh, and what about Galois fields that are not of order $p^n$ for some prime $p$ and positive integer $n$? Well, it turns out that there aren't any — you just can't satisfy the field axioms if the number of elements has two distinct prime factors. So, alas, there's no such thing as $GF(6)$ or $GF(10)$.
Please assume that this graph is a highly magnified section of the derivative of some function, say $F(x)$. Let's denote the derivative by $f(x)$. Let's denote the width of a sample by $h$ where $$h\rightarrow0$$ Now, for finding the area under the curve between the bounds $a$ and $b$ we can a... @Ultradark You can try doing a finite difference to get rid of the sum and then compare term by term. Otherwise I am terrible at anything to do with primes; I don't know the identities of $\pi(n)$ well. @Silent No, take for example the prime 3. 2 is not a residue mod 3, so there is no $x\in\mathbb{Z}$ such that $x^2-2\equiv 0 \pmod 3$. However, you have two cases to consider: the first where $\left(\frac{2}{p}\right)=-1$ and $\left(\frac{3}{p}\right)=-1$ (in which case what does $\left(\frac{6}{p}\right)$ equal?), and the case where one or the other of $\left(\frac{2}{p}\right)$ and $\left(\frac{3}{p}\right)$ equals 1. Also, probably something useful for congruences, if you didn't already know: if $a_1\equiv b_1 \pmod p$ and $a_2\equiv b_2 \pmod p$, then $a_1a_2\equiv b_1b_2 \pmod p$. Is there any book or article that explains the motivations of the definitions of group, ring, field, ideal etc. of abstract algebra and/or gives a geometric or visual representation of Galois theory? Jacques Charles François Sturm ForMemRS (29 September 1803 – 15 December 1855) was a French mathematician. == Life and work == Sturm was born in Geneva (then part of France) in 1803. The family of his father, Jean-Henri Sturm, had emigrated from Strasbourg around 1760, about 50 years before Charles-François's birth. His mother's name was Jeanne-Louise-Henriette Gremay. In 1818, he started to follow the lectures of the academy of Geneva. In 1819, the death of his father forced Sturm to give lessons to children of the rich in order to support his own family. In 1823, he became tutor to the son... I spent my career working with tensors. You have to be careful about defining multilinearity, domain, range, etc.
Typically, tensors of type $(k,\ell)$ involve a fixed vector space, not so many letters varying. UGA definitely grants a number of masters to people wanting only that (and sometimes admitted only for that). You people at fancy places think that every university is like Chicago, MIT, and Princeton. hi there, I need to linearize a nonlinear system about a fixed point. I've computed the Jacobian matrix, but one of its elements is undefined at the fixed point. What is a better approach to this issue? The element is $(24 x_2 + 5\cos(x_1)\,x_2)/|x_2|$. The fixed point is $x_1=0$, $x_2=0$. Consider the following integral: $\int \frac{1}{4}\cdot\frac{1}{1+(u/2)^2}\,dx$. Why does it matter if we put the constant $\frac14$ in front of the integral versus keeping it inside? The solution is $\frac12\arctan(u/2)$. Or am I overlooking something? *it should be $du$ instead of $dx$ in the integral **and the solution is missing a constant $C$, of course. Is there a standard way to divide radicals by polynomials? Stuff like $\frac{\sqrt a}{1 + b^2}$? My expression happens to be in a form I can normalize to that, just the radicand happens to be a lot more complicated. In my case, I'm trying to figure out how to best simplify $\frac{x}{\sqrt{1 + x^2}}$, and so far, I've gotten to $\frac{x \sqrt{1+x^2}}{1+x^2}$, and it's pretty obvious you can move the $x$ inside the radical. My hope is that I can somehow remove the polynomial from the bottom entirely, so I can then multiply the whole thing by a square root of another algebraic fraction. Complicated, I know, but this is me trying to see if I can skip calculating the Euclidean distance twice going from atan2 to something in terms of asin for a thing I'm working on. "... and it's pretty obvious you can move the $x$ inside the radical" To clarify this in advance, I didn't mean literally move it verbatim, but via $x \sqrt{y} = \text{sgn}(x) \sqrt{x^2 y}$. (Hopefully, this was obvious, but I don't want to confuse people on what I meant.) Ignore my question.
I'm coming to the realization that it's just not working how I would've hoped, so I'll just go with what I had before.
Since the set over which you are minimizing is compact, there exists a minimizer. Since the set is convex and the objective function is convex, the set of minimizers is convex. The KKT conditions now tell you what form any minimizer must take: either $x_i = 0$, or the numbers $a_i/b_i \exp(-x_i/b_i)$ all equal the same value $\lambda> 0$. This doesn't prove uniqueness yet, but let's look further. To find the minimizer, proceed as follows. Define $g_i(\lambda) = \max(0,- b_i \log (\frac{b_i \lambda}{a_i}))$ for $i = 1, \dots, N$. These functions are strictly decreasing for $\lambda < a_i/b_i$ and equal to 0 for $\lambda \ge a_i/b_i$. (I have simply solved each KKT condition for $x_i$.) Now consider the function $$F(\lambda) = \sum_{i=1}^N g_i(\lambda) -x.$$ This function is strictly decreasing for $\lambda < \max_i \frac{a_i}{b_i}$, and constant, equal to $-x < 0$, for $\lambda \ge \max_i \frac{a_i}{b_i}$. Moreover, $F(\lambda)$ is positive for $\lambda$ close to 0. Therefore, there is a unique $\lambda_0$ with $F(\lambda_0) = 0$. Set $$\boxed{x_i = g_i(\lambda_0)}$$ and that is your solution. By this argument, the solution is unique. I don't see how one can get a closed-form solution, but your minimization problem has now been reduced to finding the zero of a well-defined scalar function. Moreover, this shows how the solution changes as $x$ changes: for large $x>0$, all $x_i$ are positive. As $x$ decreases, all the $x_i$ decrease as well (at different rates) until they hit 0 one after the other. For very small $x$, only a single $x_i$ is positive.
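Since the problem is now a scalar root-find, a numerical sketch is straightforward. Below, bisection is done on a logarithmic scale; the data $a_i, b_i$ and the budget $x$ are assumed positive, so $F$ changes sign on the bracket:

```python
import math

def solve(a, b, x, iters=200):
    """Sketch: find lambda0 with F(lambda0) = 0, then return x_i = g_i(lambda0)."""
    def g(lam):
        # each KKT condition solved for x_i, clipped at 0
        return [max(0.0, -bi * math.log(bi * lam / ai)) for ai, bi in zip(a, b)]

    def F(lam):
        return sum(g(lam)) - x

    # F(lo) > 0 for tiny lo; F(hi) = -x < 0 at hi = max_i a_i/b_i
    lo, hi = 1e-300, max(ai / bi for ai, bi in zip(a, b))
    for _ in range(iters):
        mid = math.sqrt(lo * hi)   # geometric midpoint suits the logarithmic g_i
        if F(mid) > 0:
            lo = mid
        else:
            hi = mid
    return g(hi)
```

For example, with $a = (1, 2)$, $b = (1, 1)$ and $x = 1$ one gets $\lambda_0 = \sqrt{2/e}$ and $x_2 - x_1 = \log 2$, consistent with the KKT stationarity condition.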
Support This page explains the main topics related to wave forecasting that at first sight may appear tough to the majority of users. What does UTC mean? UTC stands for Coordinated Universal Time. For weather and ocean forecast purposes, we can say that it is equivalent to Greenwich Mean Time (GMT). Time zones around the world are expressed as offsets from UTC. The offset must be added to UTC to obtain the local time of a time zone. For instance, to obtain local time in Rome, we must add one hour in winter and two hours in summer (due to daylight saving time) to UTC. What does the Iribarren number represent? The Iribarren number $\xi_0$ is one of the most important parameters in coastal oceanography. It relates the geometrical characteristics of the profile to those of the incoming waves. It can be thought of as the slope of the dimensionless profile. The dimensionless profile can be obtained by dividing the vertical and horizontal profile distances $z$ and $x$ by characteristic lengths of the wave field. The characteristic vertical length of the wave field is the significant wave height $H_s$. The characteristic horizontal length is $\sqrt{H_s L_0}$. In mathematical terms, we have the slope of the profile $\beta = z/x$, and the slope of the dimensionless profile $$\xi_0=\frac{z/H_s}{x/\sqrt{H_s L_0}}=\frac{\beta}{\sqrt{H_s/L_0}}.$$ The Iribarren number reminds us that everything is relative in this world. In particular, it tells us that long waves on mild beaches behave in the same way as short waves on steep beaches. A widely used wave breaking classification distinguishes three breaker types defined by Iribarren number ranges:
Spilling breaking occurs for $\xi_b<0.4$;
Plunging breaking occurs for $0.4<\xi_b<1.5$;
Surging breaking occurs for $\xi_b>1.5$;
where the subscript $b$ indicates that the significant wave height at the breaking point is used in the Iribarren number computation. How does WW3 compute the significant wave height?
WaveWatch3 is a phase-averaged model and is thus unable to compute the significant wave height $H_s$ from individual waves. What WaveWatch3 actually computes is the zeroth-order moment wave height $H_{m0}$: $$H_{m0}=4\sqrt{\int\!\!\int S(f,\theta)\,df\,d\theta},$$ where $S(f,\theta)$ is the directional wave spectrum, $f$ is the frequency and $\theta$ is the propagation direction. Under the assumptions that the wave spectrum is narrow-banded and that the statistical distribution of wave heights is well described by a Rayleigh distribution, the discrepancy between the significant wave height and the zeroth-order moment wave height is negligible: $H_s \approx H_{m0}$. We will not go into much detail here but, for the sake of simplicity, we can say that the more a sea state is represented by a clean swell outside the wave generation area, the more valid the two mentioned assumptions are.
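Both quantities discussed on this page reduce to a few lines of code. Here is a sketch (function names are mine) of the Iribarren number with its breaker classification, and of $H_{m0}$ computed from a spectrum discretized on a uniform frequency-direction grid:

```python
import math

def iribarren(beta, Hs, L0):
    """Iribarren number: xi = beta / sqrt(Hs / L0), beta = profile slope."""
    return beta / math.sqrt(Hs / L0)

def breaker_type(xi_b):
    """Breaker classification by Iribarren number at the breaking point."""
    if xi_b < 0.4:
        return "spilling"
    elif xi_b < 1.5:
        return "plunging"
    return "surging"

def hm0(S, df, dtheta):
    """Zeroth-moment wave height H_m0 = 4 sqrt(m0), with m0 approximated
    from a directional spectrum S[i][j] sampled on a uniform df x dtheta grid."""
    m0 = sum(sum(row) for row in S) * df * dtheta
    return 4.0 * math.sqrt(m0)
```

For example, a slope of 0.1 with $H_s/L_0 = 0.01$ gives $\xi_0 = 1$, i.e. a plunging breaker by the classification above.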
Product . category theory Collection context $F:\mathrm{Ob}_{{\bf C}^{\bf 2}}$ range $a,b:\mathrm{Ob}_{\bf 2},\ a\neq b$ definition $\langle Fa\times Fb, \pi\rangle := \mathrm{lim}\,F$ Discussion Elaboration ${\bf C}^{\bf 2}$ is the functor category whose objects are the functors ${\bf 2}\longrightarrow{\bf C}$ and whose morphisms are natural transformations, i.e. families of ${\bf 2}$-indexed arrows in ${\bf C}$. So $\pi:\prod_{x:\mathrm{Ob}_{\bf 2}}\left(Fa\times Fb\right)\to Fx$, i.e. $\pi_a:(Fa\times Fb)\to Fa$ and $\pi_b:(Fa\times Fb)\to Fb$. Idea We first discuss the concept in the category of sets. Say we want to specify the binary product of $A$ and $B$ in ${\bf C}$. We can do this in the language of cones, by considering the discrete category with only two objects $a,b$ and no non-identity arrows, and defining a functor by $Fa:=A$ and $Fb:=B$. A cone is any object $N$ together with two arrows $\psi_A:N\to A$ and $\psi_B:N\to B$. If there is a limit cone, let's call its tip $A\times B$; then you can put the two arrows together to define a map $u(n):=\langle\psi_A(n),\psi_B(n)\rangle$ from $N$ to $A\times B$, and then $\psi_A(n)=\pi_A(u(n))$ and $\psi_B(n)=\pi_B(u(n))$. If the objects of a category are propositions, then the product is $\land$, i.e. 'and': from $A\land B$ you can derive $A$ as well as $B$. The coproduct turns out to be $\lor$, i.e. 'or': from either $A$ or $B$, you can derive $A\lor B$. While the category ${\bf C}$ might have a billion ways to “look at” $A$ and $B$, category theory works out that these will always just be some arrow composed with the projections of the limit cone - that's sort of the “why” answer to why projection operators are a ubiquitous concept. Alternative definitions If ${\bf C}$ has a terminal object $T$, the product $A\times B$ is the pullback $A\times_T B$. Reference Wikipedia: Product (category theory)
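In the category of sets the universal property is easy to exhibit concretely; a minimal sketch in Python (names mine): given any cone $(\psi_A, \psi_B)$ out of $N$, the mediating map $u = \langle\psi_A, \psi_B\rangle$ satisfies $\pi_A \circ u = \psi_A$ and $\pi_B \circ u = \psi_B$.

```python
def pair(psi_A, psi_B):
    """The mediating arrow u = <psi_A, psi_B> into the product A x B."""
    return lambda n: (psi_A(n), psi_B(n))

# the projections of the limit cone
def pi_A(p):
    return p[0]

def pi_B(p):
    return p[1]
```

For example, with `psi_A = lambda n: 2*n` and `psi_B = lambda n: n + 1`, composing `pi_A` with `pair(psi_A, psi_B)` recovers `psi_A` pointwise.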
(50g) Simpson's rule for f(x,y) 11-17-2018, 09:27 PM (This post was last modified: 11-18-2018 11:14 PM by peacecalc.) Post: #1 (50g) Simpson's rule for f(x,y) Hello friends, as Eddie Shore showed us HERE, there is an algorithm for integrating a function of two variables with Simpson's rule and a matrix. He implemented this for the HP 71B. I did the same thing for the HP 50G, but not so elegantly; it is brute force. I implemented this formula (with $h_t$, $h_s$ the step sizes in $t$ and $s$): \[ F = \int_a^b\int_c^d f(t,s)\,dt\,ds \;\approx\; \frac{h_s}{3}\,\frac{h_t}{3}\left( S(c) + S(d) + \sum_{i\ \mathrm{even}} 2\,S(s_i) + \sum_{i\ \mathrm{odd}} 4\,S(s_i) \right), \] where \[ S(s) = f(a,s) + f(b,s) + \sum_{j\ \mathrm{even}} 2\,f(t_j,s) + \sum_{j\ \mathrm{odd}} 4\,f(t_j,s), \] and the sums run over the interior indices \(1 \le j \le k-1\) and \(1 \le i \le m-1\). That looks horrible, but I used the stack to sum up all the function values and multiplied them afterwards with 2 or 4 (only values with odd indices are multiplied by 4, those with even indices by 2). In the FOR loops I used not integer indices but the values of the variables themselves (the HP 50g is very happy to use a real variable in a FOR loop). For instance, I used my little program to estimate the integrals of spherical harmonic functions multiplied with a light function, to get the coefficients. One angle goes from 0 to pi, the other from 0 to 2pi. With N = 15 the HP 50g has to calculate 30*60 = 1800 function values, and it takes 2 minutes on average. That seems very long, but it is faster than the built-in function \[ \int \]. I have the impression that the built-in function works with recursion when you have more variables. Code: 11-17-2018, 10:31 PM Post: #2 RE: (50g) Simpson's rule for f(x,y) .
Hi, peacecalc: (11-17-2018 09:27 PM)peacecalc Wrote: as Eddie Shore showed us HERE, there is an algorithm for integrating a function of two variables with Simpson's rule and a matrix. He implemented this for the HP 71B. I didn't see Eddie's post at the time but he's wrong on one count, namely (my bolding): Eddie W. Shore Wrote: On the HP 71B, matrices cannot be typed directly; elements have to be stored and recalled one element at a time. The program presented does not use modules. That's not correct. The HP-71B's BASIC language allows for filling in all elements of an arbitrary-size matrix at once by including the values in one or more DATA statements and then reading them all into the matrix using a single READ statement, no extra ROM modules needed. Thus, this lengthy initialization part in Eddie's code:
14 DIM I(5,5)
20 I(1,1) = 1
21 I(1,2) = 4
22 I(1,3) = 2
23 I(1,4) = 4
24 I(1,5) = 1
25 I(2,1) = 4
26 I(2,2) = 16
27 I(2,3) = 8
28 I(2,4) = 16
29 I(2,5) = 4
30 I(3,1) = 2
31 I(3,2) = 8
32 I(3,3) = 4
33 I(3,4) = 8
34 I(3,5) = 2
35 I(4,1) = 4
36 I(4,2) = 16
37 I(4,3) = 8
38 I(4,4) = 16
39 I(4,5) = 4
40 I(5,1) = 1
41 I(5,2) = 4
42 I(5,3) = 2
43 I(5,4) = 4
44 I(5,5) = 1
can be replaced by this much shorter, much faster version (OPTION BASE 1 assumed):
14 DATA 1,4,2,4,1,4,16,8,16,4,2,8,4,8,2,4,16,8,16,4,1,4,2,4,1
20 DIM I(5,5) @ READ I
where the READ I fills in all the data into the matrix with a single statement, no individual assignments or loops needed, thus it's much faster and uses less program memory. Notice that this also works for arbitrary numerical expressions in the DATA, i.e. the following hypothetical code would work OK:
10 DATA 5,-3,2.28007e20,X,2*Z+SIN(Y),FNF(X+Y,X-Y),FNZ(2+FNF(C+D),3/FNF(6,8)),43
20 DIM M(2,4) @ READ M
Anyway, Simpson's rule is suboptimal for integration purposes, either one-dimensional or multi-dimensional.
There are much better methods providing either significantly increased accuracy for the same number of function evaluations or the same accuracy with fewer evaluations. V. 11-19-2018, 08:33 PM Post: #3 RE: (50g) Simpson's rule for f(x,y) Hello Valentin, I second your statement: Quote: Anyway, Simpson's rule is suboptimal for integration purposes, either one-dimensional or multi-dimensional. There are much better methods providing either significantly increased accuracy for the same number of function evaluations or the same accuracy with fewer evaluations. But as I wrote, with double integrals the built-in solution for the HP 50g is slower than the brute-force Simpson rule.
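For reference, the composite rule peacecalc describes in post #1 (Simpson weights 1 at the ends, 4 on odd interior indices, 2 on even ones, applied along both axes) can be sketched in Python as:

```python
def simpson2d(f, a, b, c, d, k, m):
    """Composite Simpson approximation of the double integral of f
    over [a, b] x [c, d], with k and m (both even) subintervals per axis."""
    assert k % 2 == 0 and m % 2 == 0
    ht, hs = (b - a) / k, (d - c) / m

    def w(j, n):
        # Simpson weights: 1 at the endpoints, 4 on odd, 2 on even indices
        return 1 if j in (0, n) else (4 if j % 2 else 2)

    total = 0.0
    for i in range(k + 1):
        for j in range(m + 1):
            total += w(i, k) * w(j, m) * f(a + i * ht, c + j * hs)
    return total * ht * hs / 9.0
```

Since Simpson's rule is exact for cubics, `simpson2d(lambda t, s: t*s, 0, 1, 0, 1, 2, 2)` already returns the exact value 0.25.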
Let us now take a closer look at the hex-fractal we sliced last week. Chopping a level 0, 1, 2, and 3 Menger sponge through our slanted plane gives the following: This suggests an iterative recipe to generate the hex-fractal. Any time we see a hexagon, chop it into six smaller hexagons and six triangles as illustrated below. Similarly, any time we see a triangle, chop it into a hexagon and three triangles like this: In the limit, each hexagon and triangle in the above image becomes a hex-fractal or a tri-fractal, respectively. The final hex-fractal looks something like this (click for larger image): Now we are in a position to answer last week’s question: how can we compute the Hausdorff dimension of the hex-fractal? Let $d$ be its dimension. Like last week, our computation will proceed by trying to compute the “$d$-dimensional volume” of our shape. So, start with a “large” hex-fractal and tri-fractal, each of side-length 1, and let their $d$-dimensional volumes be $h$ and $t$ respectively. [1] Break these into “small” hex-fractals and tri-fractals of side-length 1/3, so these have volumes \(h/3^d\) and \(t/3^d\) respectively (this is how “$d$-dimensional stuff” scales). Since $$\begin{gather*}(\text{large hex}) = 6(\text{small hex})+6(\text{small tri}) \quad \text{and}\\ (\text{large tri}) = (\text{small hex})+3(\text{small tri}),\end{gather*}$$ we find that \(h=6h/3^d + 6t/3^d\) and \(t=h/3^d+3t/3^d\). Surprisingly, this is enough information to solve for the value of \(3^d\). [2] We find \(3^d = \frac{1}{2}(9+\sqrt{33})\), so $$d=\log_3\left(\frac{9+\sqrt{33}}{2}\right) = 1.8184\ldots,$$ as claimed last week. As a final thought, why did we choose to slice the Menger sponge on this plane? Why not any of the (infinitely many) others?
Even if we only look at planes parallel to our chosen plane, a mesmerizing pattern emerges: More Information It takes a bit more work to turn the above computation of the hex-fractal’s dimension into a full proof, but there are a few ways to do it. Possible methods include mass distributions [3] or similarity graphs [4]. This diagonal slice through the Menger sponge has been proposed as an exhibit at the Museum of Math. Sebastien Perez Duarte seems to have been the first to slice a Menger sponge in this way (see his rendering), and his animated cross section inspired my animation above. Thanks for reading! Notes [1] We’re assuming that the hex-fractal and tri-fractal have the same Hausdorff dimension. This is true, and it follows from the fact that a scaled version of each lives inside the other. [2] There are actually two solutions, but the fact that $h$ and $t$ are both positive rules one out. [3] Proposition 4.9 in: Kenneth Falconer. Fractal Geometry: Mathematical Foundations and Applications. John Wiley & Sons: New York, 1990. [4] Section 6.6 in: Gerald Edgar. Measure, Topology, and Fractal Geometry (Second Edition). Springer: 2008.
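The dimension computation above can also be double-checked numerically: the two volume equations say that $3^d$ is the positive (Perron) eigenvalue of the substitution matrix $\begin{pmatrix}6 & 6\\ 1 & 3\end{pmatrix}$, so a couple of lines suffice:

```python
import math

# h = (6h + 6t)/3^d and t = (h + 3t)/3^d mean 3^d is the largest
# eigenvalue of [[6, 6], [1, 3]]: lambda^2 - 9*lambda + 12 = 0.
tr, det = 6 + 3, 6 * 3 - 6 * 1
lam = (tr + math.sqrt(tr * tr - 4 * det)) / 2   # = (9 + sqrt(33)) / 2
d = math.log(lam) / math.log(3)                 # = 1.8184...
```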
Search Now showing items 1-5 of 5 Forward-backward multiplicity correlations in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ... Inclusive, prompt and non-prompt J/$\psi$ production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-07-10) The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T} \rangle$ for inclusive J/$\psi$ have been measured with ALICE for Pb-Pb collisions ...
Just to expand on the comment of @Myself above... Adjoining algebraics as matrix maths Sometimes in mathematics or computation you can get away with adjoining an algebraic number $\alpha$ to some simpler ring $R$ of numbers like the integers or rationals $\mathbb Q$, and these characterize all of your solutions. If this number obeys the algebraic equation $$\alpha^n = \sum_{k=0}^{n-1} q_k \alpha^k$$ for some $q_k \in R,$ we call the above polynomial equation $Q(\alpha) = 0$, and then we can adjoin this number by using polynomials of degree $n - 1$ with coefficients from $R$ and evaluated at $\alpha$: the ring is formally denoted $R[\alpha]/Q(\alpha),$ "the quotient ring of the polynomials with coefficients in $R$ of some parameter $\alpha,$ given their equivalence modulo polynomial division by $Q(\alpha)$." If you write down the action of the multiplication $(\alpha\cdot)$ on the vector in $R^n$ corresponding to such a polynomial in this ring, it will look like the matrix$$\alpha \leftrightarrow A = \begin{bmatrix}0 & 0 & 0 & \dots & 0 & q_0\\1 & 0 & 0 & \dots & 0 & q_1 \\0 & 1 & 0 & \dots & 0 & q_2 \\\dots & \dots & \dots & \dots & \dots & \dots \\0 & 0 & 0 & \dots & 0 & q_{n-2} \\0 & 0 & 0 & \dots & 1 & q_{n-1} \\\end{bmatrix},$$and putting such a matrix in row-reduced echelon form is so simple that we can immediately see that $A$ itself is invertible whenever $R$ is a field and $q_0 \ne 0$; the full ring $R[\alpha]/Q(\alpha)$ is a field when, in addition, $Q$ is irreducible over $R$ (as it is in the examples below). The matrix $\sum_k p_k A^k$ is then a matrix representation of the polynomial $\sum_k p_k \alpha^k$ which implements all of the required operations as matrix operations. A simple example before \$#!& gets real.
For example, there is a famous "O(1)" solution to generating the $k^\text{th}$ Fibonacci number which comes from observing that the recurrence $F_k = F_{k-1} + F_{k-2}$ can be solved by other functions for other boundary conditions than $F_{0,1} = 0,1$, and one very special set of solutions looks like $F_k = \phi^k$ for some $\phi$. Plugging and chugging, we get the algebraic equation $\phi^2 = \phi + 1,$ which we can solve for the golden ratio $\varphi = (1 + \sqrt{5})/2$ and its negative reciprocal $\bar\varphi = -\varphi^{-1}= (1-\sqrt{5})/2.$ However, since the Fibonacci recurrence relation is linear, this means that any linear combination $$F_n = A \varphi^n + B \bar\varphi^n$$obeys the Fibonacci recurrence, and we can actually just choose $A = \sqrt{1/5},\; B = -\sqrt{1/5}$ to get the standard $F_{0,1} = 0,1$ starting points: this is the Fibonacci sequence defined purely in terms of exponentiation. But there is a problem with using this on a computer: the Double type that a computer has access to has only finite precision and the above expressions will round off wildly. What we really want is to use our arbitrary-precision Integer type to calculate this. We can do this with matrix exponentiation in a couple of different ways. The first would be to adjoin the number $\sqrt{5}$ to the integers, solving $\alpha^2 = 5.$ Then our ring consists of the numbers $a + b \sqrt{5}$ which are the matrices: $$\begin{bmatrix}a & 5b\\b & a\end{bmatrix}.$$ And that's easy-peasy to program. Your "unit vectors" can similarly be chosen as $1$ and $\varphi$ however, and that leads to the "no-nonsense" matrix $$a + b \varphi = \begin{bmatrix}a & b\\ b & a + b\end{bmatrix}$$ which I'm calling "no-nonsense" because for $a = 0, b = 1$ this is actually the Fibonacci recurrence relation on vectors $[F_{n-1}, F_{n}],$ which is a way to get to this result without going through the above hoops.
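Putting the pieces together, here is a sketch of exponentiation-by-squaring carried out directly in the ring $\mathbb{Z}[\varphi]/(\varphi^2 - \varphi - 1)$, representing $a + b\varphi$ as an integer pair; since $\varphi^k = F_{k-1} + F_k\,\varphi$, the $\varphi$-coefficient of $\varphi^k$ is $F_k$:

```python
def fib(k: int) -> int:
    """F_k via fast powering of phi in Z[phi]/(phi^2 - phi - 1)."""
    def mul(p, q):
        # (a + b phi)(c + d phi) = ac + (ad + bc) phi + bd phi^2
        #                        = (ac + bd) + (ad + bc + bd) phi
        a, b = p
        c, d = q
        return (a * c + b * d, a * d + b * c + b * d)

    result, base = (1, 0), (0, 1)   # result = 1, base = phi
    while k:                         # exponentiation by squaring
        if k & 1:
            result = mul(result, base)
        base = mul(base, base)
        k >>= 1
    return result[1]                 # coefficient of phi in phi^k
```

Because everything stays in arbitrary-precision integers, there is no round-off, unlike the closed form with floating-point $\sqrt 5$.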
There is also an interesting "symmetric" version where $\varphi$ and $\bar\varphi$ are our "unit vectors"; the matrix works out to $$a \varphi + b \bar\varphi \leftrightarrow \begin{bmatrix}2a-b & -a+b\\ a-b & -a+2b\end{bmatrix}.$$

(In any case, it turns out that the supposedly "O(1)" algorithm is not: even when we exponentiate-by-squaring, we have to perform $\log_2(k)$ multiplications of numbers $m_i = F_{2^i}$ which grow asymptotically like $F_k \approx \varphi^k/\sqrt{5},$ taking some $O(k^2)$ time, just like adding up the numbers directly. The big speed gain is that the adding-bignums code will "naturally" allocate new memory for each bignum and will therefore write something like $O(k^2)$ bits to memory if you don't specially make it intelligent; the exponentiation improves this to $O(k \log k)$ and possibly even to $O(k),$ since the worst of these multiplications happen only at the very end.)

Going off of the algebraics into complex numbers

Interestingly, we don't need to restrict ourselves to real numbers when we do the above. We know that in $\mathbb R$ there is no $x$ satisfying $x^2 = -1,$ so the above prescribes that we extend our field to $$a + b \sqrt{-1} \leftrightarrow \begin{bmatrix}a & -b\\b & a\end{bmatrix}.$$ When we replace $a$ with $r\cos\theta$ and $b$ with $r\sin\theta,$ we find that these "complex numbers" are all just scaled rotation matrices: $$ r (\cos\theta + i~\sin\theta) \leftrightarrow r \begin{bmatrix}\cos\theta & -\sin\theta\\\sin\theta & \cos\theta\end{bmatrix} = r~R_\theta,$$ giving us an immediate geometric understanding of a complex number as a scaled rotation (and then analytic functions are just the ones which locally look like a scaled rotation).

Going way off the algebraics into infinitesimals.

Another interesting way to go with this is to adjoin a term $\epsilon$ which is not zero, but which squares to zero.
This idea formalizes the notion of an "infinitesimal" with no real effort, although, as mentioned before, the resulting algebra is doomed to be a ring rather than a field. (We could adjoin an inverse $\infty = \epsilon^{-1}$ too, but presumably we'd then have $\infty^2 = \infty,$ which breaks associativity, $(\epsilon\cdot\infty)\cdot\infty \ne \epsilon\cdot(\infty\cdot\infty),$ unless we insert more infinities to push the problem out to infinity.)

Anyway, we then have the matrix $$a + b \epsilon \leftrightarrow a I + b E = \begin{bmatrix}a & 0\\ b & a\end{bmatrix}.$$ It's precisely the transpose of what you were looking at. Following the rules, $(a + b \epsilon)^n = a^n + n~a^{n-1}~b~\epsilon,$ with all of the other terms vanishing. By Taylor expansion we find that $$f(x + \epsilon) = \sum_n \frac{f^{(n)}(x)}{n!}\, \epsilon^n = f(x) + f'(x)~\epsilon,$$ and this is the property that you have seen in your own examination.

We can similarly keep infinitesimals out to second order with a 3×3 matrix $$a + b \epsilon + c \epsilon^2 \leftrightarrow \begin{bmatrix}a & 0 & 0\\b & a & 0\\c & b & a\end{bmatrix}.$$ Then $f(x I + E) = f(x)~I + f'(x)~E + f''(x)~E^2 / 2$ straightforwardly.
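To see the dual-number trick in action, here is a minimal sketch (helper names are my own) that evaluates a polynomial at the matrix $x I + E$ and reads $f(x)$ and $f'(x)$ off the result, exactly as described above:

```python
def mat_mul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def poly_at_matrix(coeffs, M):
    """Evaluate sum_k coeffs[k] * M^k by Horner's rule (coeffs[0] is the constant term)."""
    I = [[1, 0], [0, 1]]
    acc = [[0, 0], [0, 0]]
    for c in reversed(coeffs):
        acc = mat_mul(acc, M)
        acc = [[acc[i][j] + c * I[i][j] for j in range(2)] for i in range(2)]
    return acc

x = 3
M = [[x, 0], [1, x]]       # the matrix for x + epsilon, with epsilon^2 = 0
f = [5, 0, -2, 1]          # f(t) = t^3 - 2t^2 + 5

val = poly_at_matrix(f, M)
print(val[0][0])           # f(3)  = 27 - 18 + 5 = 14
print(val[1][0])           # f'(3) = 3*9 - 4*3  = 15
```

The derivative appears in the off-diagonal entry with no symbolic differentiation at all; this is the germ of forward-mode automatic differentiation.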
In last week’s discussion of proofs by contradiction and nonconstructive proofs, we showed:

Theorem: There exist irrational numbers \(x\) and \(y\) with the property that \(x^y\) is rational.

However, our proof was nonconstructive: it did not pinpoint explicit values of \(x\) and \(y\) that satisfy the condition, instead proving only that such numbers must exist. Would a more constructive proof be more satisfying? Let’s see! I claim that \(x=\sqrt{2}\) and \(y=\log_2 9\) work: we already know \(\sqrt{2}\) is irrational, \(y=\log_2 9\) can be similarly proved to be irrational (try this!), and $$x^y = \sqrt{2}^{\log_2 9} = \sqrt{2}^{\log_{\sqrt{2}}3}=3,$$ which is rational.

Let’s further discuss why last week’s proof was less satisfying. The following rephrasing of that proof may help shed some light on the situation:

Proof: Assume the theorem were false, so that any time \(x\) and \(y\) were irrational, \(x^y\) would also be irrational. This would imply that \(\sqrt{2}^{\sqrt{2}}\) is irrational, and by applying our assumption again, \(\left(\sqrt{2}^{\sqrt{2}}\right)^{\sqrt{2}}\) would also be irrational. But this last number equals 2, which is rational. This contradiction disproves our assumption and thereby proves the theorem, QED.

So perhaps this argument seems less satisfactory simply because it is, at its core, a proof by contradiction. It does not give us evidence for the positive statement “\(x\) and \(y\) exist”, but instead only for the negative statement “\(x\) and \(y\) don’t not exist.” (Note the double negative.) This distinction is subtle, but a similar phenomenon can be found in the English language: the double negative “not bad” does not mean “good” but instead occupies a hazy middle ground between the two extremes. And even though we don’t usually think of such a middle ground existing between logic’s “true” and “false”, proofs by contradiction fit naturally into this haze.
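As a quick numerical sanity check of the explicit pair \(x=\sqrt{2}\), \(y=\log_2 9\) above (this is floating-point arithmetic, so it is evidence, not a proof):

```python
import math

x = math.sqrt(2)    # irrational
y = math.log2(9)    # log_2 9, also irrational
print(x ** y)       # approximately 3, up to floating-point rounding
assert math.isclose(x ** y, 3.0, rel_tol=1e-12)
```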
In fact, these ideas motivate a whole branch of mathematical logic called constructive logic, which disallows double negatives and proofs by contradiction, instead requiring concrete, constructive justifications for all statements.

But wait: last week’s proof that \(\sqrt{2}\) is irrational used contradiction, and therefore is not acceptable in constructive logic. Can we prove this statement constructively? We must show that \(\sqrt{2}\) is not equal to any rational number; what does it even mean to do this constructively? First, we turn it into a positive statement: we must show that \(\sqrt{2}\) is unequal to every rational number. And how do we constructively prove that two numbers are unequal? By showing that they are measurably far apart. So, here is a sketch of a constructive proof: \(\sqrt{2}\) is unequal to every rational number \(a/b\) because $$\left|\sqrt{2} - \frac{a}{b}\right| \ge \frac{1}{3b^2}.$$ See if you can verify this inequality! [1]

PS. In case you are still wondering whether \(\sqrt{2}^{\sqrt{2}}\) is rational or irrational: it is irrational (moreover, transcendental), but the only proof that I know uses a very difficult theorem of Gelfond and Schneider.
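The inequality in the sketch above can be checked exhaustively for small denominators using exact rational arithmetic (the helper below is my own; it avoids floating point by squaring both sides, which is valid since both are nonnegative where it matters):

```python
from fractions import Fraction

def far_from_sqrt2(a, b):
    """Check |sqrt(2) - a/b| >= 1/(3 b^2) exactly, for positive integers a, b.

    The bound holds iff a/b + c <= sqrt(2) or a/b - c >= sqrt(2), where
    c = 1/(3 b^2); we test each case by comparing squares with 2.
    """
    r = Fraction(a, b)
    c = Fraction(1, 3 * b * b)
    below = (r + c) ** 2 <= 2                    # a/b <= sqrt(2) - c
    above = r - c >= 0 and (r - c) ** 2 >= 2     # a/b >= sqrt(2) + c
    return below or above

# Holds for every a/b checked, including the very good approximations
# 7/5, 17/12, 41/29, 99/70 from the continued fraction of sqrt(2).
assert all(far_from_sqrt2(a, b) for b in range(1, 100) for a in range(1, 3 * b))
```

The constant 3 is what makes the check pass: the continued-fraction convergents of \(\sqrt{2}\) come within roughly \(1/(2\sqrt{2}\,b^2)\) of it, and \(2\sqrt{2} < 3\).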
The nearshore is defined as the part of the ocean where the depth is so small that wind-generated waves are influenced by the bottom. It is usual to identify the depth at which waves start to “feel” the presence of the bottom as approximately half the wave length. In deep water the profile of swell waves is nearly sinusoidal, showing smooth crests and troughs. In contrast, nearshore waves adjust their profile shape according to the bottom variations: as swell waves approach the shore, they steepen, sharpen, pitch forward and eventually break in shallow water.

The process of steepening is related to the increase in wave height and the decrease in wave length and celerity. The wave height variation is usually referred to as shoaling. Moreover, waves propagating into shallower depths sharpen and pitch forward, losing the symmetric shape, with respect to both the horizontal and vertical axes, that characterizes deep-water swell. In the nearshore, wave crests become progressively sharper while wave troughs flatten. In addition, shoaling waves show a steep forward face and a relatively gently sloping rear face. Finally, the wave front becomes increasingly steep until the wave turns unstable, the crest tumbles forward and the wave breaks.

Wave breaking occurs when the fluid velocity at the upper part of the crest exceeds the wave celerity (the velocity at which the wave is travelling). Breaking waves inject fluid at the surface, developing a turbulent, air-entrained front, the roller, in which water tumbles down toward the trough. The size of the vortices generated at the surface controls the penetration of turbulence and is determined by the breaker type. The wide range of breaking forms that occur in coastal areas is usually classified according to a visual impression of the process. The Iribarren number \xi_b is a surf similarity parameter widely used to describe wave breaking.
It is defined as the ratio between the bottom slope \beta and the square root of the breaking wave steepness H_b/L_0: \xi_b=\frac{\beta}{\sqrt{H_b/L_0}}.

Three main breaker types are identified. Spilling breakers occur on gently sloping beaches during high-energy sea states (\xi_b<0.4); breaking starts on a small scale as a plume of water and air bubbles forms at the crest and slides down the front. Plunging breakers occur on steeper slopes (0.4<\xi_b<1.5) and are characterized by an overturning jet in which the water at the crest curls over and plunges down in free fall, hitting the trough ahead. Surging breakers occur close to the shoreline, on the beach face itself (\xi_b>1.5); the toe of the wave surges up the beach face while the crest collapses.

On most beaches, wave breaking takes place when the breaking index (the ratio between the wave height and the water depth) approaches the value of 0.8. However, this value is not fixed: it can change depending on the Iribarren number. In general terms, the larger the Iribarren number, the shallower the depth at which wave breaking occurs. Thus, plunging and surging breakers show larger breaking indexes than spilling breakers. This result is mainly related to the shoaling processes, described at the beginning of this post, which require time and space to develop. In other terms, given an incoming wave (with height H and period T) and a water depth outside the surf zone, the steeper the beach the more symmetric the wave at that depth, since the shoaling processes (steepening, sharpening, …) have not had space to develop. Steep slopes therefore don’t provide enough space (and time) for waves to adjust their shape, thus delaying wave breaking.

Besides beach slope and wave steepness (combined in the Iribarren number), the wind also plays a role in the initiation of breaking.
Onshore winds blowing on the rear face of the waves enhance wave instability, causing breaking to occur in deeper water than would be expected in the absence of wind.

You may have noticed that the word friction has not even been mentioned up to now. In fact, friction has essentially no effect on wave breaking: friction at the bottom only causes a small energy loss in shoaling waves, and bottom-induced energy dissipation in the nearshore plays only a secondary role in the energy budget. The main agent for wave energy dissipation in shallow water is breaking, which ultimately converts mechanical energy into turbulence. This strong dissipation happens in a relatively narrow region compared to the long distance traveled by waves from their generation in the open sea. The process of wave breaking makes the surf zone the most dynamic region of the ocean.
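The classification above can be sketched in a few lines of Python (function names are my own; note that the post defines \xi_b with the slope \beta directly, while some references use \tan\beta):

```python
import math

def iribarren(beta, H_b, L_0):
    """Surf similarity parameter xi_b = beta / sqrt(H_b / L_0), as defined in the post.

    beta : bottom slope, H_b : breaking wave height, L_0 : deep-water wave length.
    """
    return beta / math.sqrt(H_b / L_0)

def breaker_type(xi_b):
    """Classify a breaker using the thresholds quoted in the post."""
    if xi_b < 0.4:
        return "spilling"
    elif xi_b <= 1.5:
        return "plunging"
    else:
        return "surging"

# A gently sloping beach (1:50) with steep storm waves gives a small xi_b,
# hence spilling breakers.
xi = iribarren(0.02, 2.0, 100.0)
print(xi, breaker_type(xi))
```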
I've got the following question, which drives me almost crazy, as the answer seems to be simple. Given a morphism $p:V\to S$ of schemes of finite type over some base field, assume that $p$ has all the properties of a vector space over $S$; that is, there should be morphisms $+:V\times_S V \to V$, $\cdot: A^1_S \times_S V \to V$, $0:S\hookrightarrow V$ and $-:V\to V$ over $S$ satisfying the axioms of a module over the ring object $A^1_S$. If $p:V\to S$ is locally trivial, $p$ is called a vector bundle, and the dimensions of the fibers are (locally) constant.

My question is whether the converse also holds under the assumption that $S$ is smooth; that is, if $S$ is smooth and the dimensions of the fibers of $p$ are (locally) constant, is $p$ a vector bundle?

What is known: If $p$ is affine, then $V=Spec_S( Sym^* E)$ is the relative spectrum of the symmetric algebra of some coherent sheaf $E$ on $S$. The fiber dimensions of $E$ must be constant by assumption, and since $S$ is smooth, $E$ is locally free and hence $p$ is locally trivial. For general $p$ there is an $A^1_S$-linear morphism $f:V\to Spec_S(p_\ast \mathcal{O}_V)$ over $S$ with $p_\ast \mathcal{O}_V\cong Sym^* E$, where $E$ is the subsheaf of fiberwise linear functions. If $p$ is flat, $f$ is an isomorphism on fibers, and $E$ must again be locally free by assumption. In particular, $f$ is a bijection on points, and using some version of Zariski's main theorem, one should be able to show that $f$ is an isomorphism. But how can we deduce flatness of $p$ from the assumption on the fiber dimension?

I would be very grateful if anyone could provide a solution. Best wishes, Sven