Multitype pair correlation function (cross-type)

Calculates an estimate of the cross-type pair correlation function for a multitype point pattern.

Usage

pcfcross(X, i, j, ..., r = NULL, kernel = "epanechnikov", bw = NULL, stoyan = 0.15, correction = c("isotropic", "Ripley", "translate"), divisor = c("r", "d"))

Arguments

X: The observed point pattern, from which an estimate of the cross-type pair correlation function \(g_{ij}(r)\) will be computed. It must be a multitype point pattern (a marked point pattern whose marks are a factor).
i: The type (mark value) of the points in X from which distances are measured. A character string (or something that will be converted to a character string). Defaults to the first level of marks(X).
j: The type (mark value) of the points in X to which distances are measured. A character string (or something that will be converted to a character string). Defaults to the second level of marks(X).
...: Ignored.
r: Vector of values for the argument \(r\) at which \(g(r)\) should be evaluated. There is a sensible default.
kernel: Choice of smoothing kernel, passed to density.default.
bw: Bandwidth for smoothing kernel, passed to density.default.
stoyan: Coefficient for the default bandwidth rule; see Details.
correction: Choice of edge correction.
divisor: Choice of divisor in the estimation formula: either "r" (the default) or "d". See Details.

Details

The cross-type pair correlation function is a generalisation of the pair correlation function pcf to multitype point patterns. For two locations \(x\) and \(y\) separated by a distance \(r\), the probability \(p(r)\) of finding a point of type \(i\) at location \(x\) and a point of type \(j\) at location \(y\) is $$ p(r) = \lambda_i \lambda_j g_{i,j}(r) \,{\rm d}x \, {\rm d}y $$ where \(\lambda_i\) is the intensity of the points of type \(i\). For a completely random Poisson marked point process, \(p(r) = \lambda_i \lambda_j\) so \(g_{i,j}(r) = 1\).
Indeed for any marked point pattern in which the points of type \(i\) are independent of the points of type \(j\), the theoretical value of the cross-type pair correlation is \(g_{i,j}(r) = 1\). For a stationary multitype point process, the cross-type pair correlation function between marks \(i\) and \(j\) is formally defined as $$ g_{i,j}(r) = \frac{K_{i,j}^\prime(r)}{2\pi r} $$ where \(K_{i,j}^\prime\) is the derivative of the cross-type \(K\) function \(K_{i,j}(r)\) of the point process. See Kest for information about \(K(r)\). The command pcfcross computes a kernel estimate of the cross-type pair correlation function between marks \(i\) and \(j\). If divisor="r" (the default), then the multitype counterpart of the standard kernel estimator (Stoyan and Stoyan, 1994, pages 284--285) is used. By default, the recommendations of Stoyan and Stoyan (1994) are followed exactly. If divisor="d" then a modified estimator is used: the contribution from an interpoint distance \(d_{ij}\) to the estimate of \(g(r)\) is divided by \(d_{ij}\) instead of by \(r\). This usually improves the bias of the estimator when \(r\) is close to zero. There is also a choice of spatial edge corrections (which are needed to avoid bias due to edge effects associated with the boundary of the spatial window): correction="translate" is the Ohser-Stoyan translation correction, and correction="isotropic" or "Ripley" is Ripley's isotropic correction. The choice of smoothing kernel is controlled by the argument kernel, which is passed to density. The default is the Epanechnikov kernel. The bandwidth of the smoothing kernel can be controlled by the argument bw. Its precise interpretation is explained in the documentation for density.default. For the Epanechnikov kernel with support \([-h,h]\), the argument bw is equivalent to \(h/\sqrt{5}\). If bw is not specified, the default bandwidth is determined by Stoyan's rule of thumb (Stoyan and Stoyan, 1994, page 285) applied to the points of type \(j\).
That is, \(h = c/\sqrt{\lambda}\), where \(\lambda\) is the (estimated) intensity of the point process of type \(j\), and \(c\) is a constant in the range from 0.1 to 0.2. The argument stoyan determines the value of \(c\).

Value

Essentially a data frame containing columns: the vector of values of the argument \(r\) at which the function \(g_{i,j}\) has been estimated, and the theoretical value \(g_{i,j}(r) = 1\) for independent marks.

See Also

Mark connection function markconnect.

Aliases

pcfcross

Examples

# NOT RUN {
data(amacrine)
p <- pcfcross(amacrine, "off", "on")
p <- pcfcross(amacrine, "off", "on", stoyan=0.1)
plot(p)
# }

Documentation reproduced from package spatstat, version 1.59-0, License: GPL (>= 2)
I would like to use the CKM matrix in the Wolfenstein parameterization because I want to keep an imaginary part. I get $A, \lambda, \bar{\eta}, \bar{\rho}$ from the PDG, and these values have uncertainties. I have no problem calculating the uncertainty of a real-valued quantity, but I don't know how to calculate it for a quantity with an imaginary part. PS. I use Mathematica to calculate the uncertainties. Thanks.
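One common approach, sketched below in Python rather than Mathematica, is to propagate the parameter uncertainties to the real and imaginary parts separately by first-order (linear) error propagation with numerical derivatives. The Wolfenstein expression used here is only the leading order in $\lambda$, and the numerical values are illustrative placeholders, not authoritative PDG inputs:

```python
import numpy as np

# Hedged sketch: linear error propagation applied separately to the real and
# imaginary parts of a complex function of the Wolfenstein parameters.
# Parameter values below are illustrative placeholders, not PDG numbers.

def v_td(A, lam, rho_bar, eta_bar):
    # Leading-order Wolfenstein expression; higher orders in lambda dropped.
    return A * lam**3 * (1.0 - rho_bar - 1j * eta_bar)

params = np.array([0.83, 0.225, 0.16, 0.35])   # A, lambda, rho_bar, eta_bar
sigmas = np.array([0.01, 0.0007, 0.01, 0.01])  # their 1-sigma uncertainties

def propagate(f, p, s, h=1e-7):
    """Return (value, sigma_re, sigma_im) from numerical gradients."""
    val = f(*p)
    grads = []
    for i in range(len(p)):
        dp = p.copy()
        dp[i] += h
        grads.append((f(*dp) - val) / h)   # forward-difference derivative
    grads = np.array(grads)
    sig_re = np.sqrt(np.sum((grads.real * s) ** 2))
    sig_im = np.sqrt(np.sum((grads.imag * s) ** 2))
    return val, sig_re, sig_im

val, s_re, s_im = propagate(v_td, params, sigmas)
print(f"V_td = ({val.real:.5f} ± {s_re:.5f}) + ({val.imag:.5f} ± {s_im:.5f})i")
```

The same separation into real and imaginary parts works in Mathematica: propagate through `Re[f]` and `Im[f]` independently, since each is a real function of the four real inputs.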
Energy and Power

Motivating question

Consider a simple circuit where a DC voltage source of $v$ volts is connected to a 1 Ohm resistor, as shown in the figure in the left panel. How much power is dissipated by the resistor? The power dissipated is given by $v^2/R = v^2$ watts, or joules/sec. How much energy is dissipated in the resistor over a 1 minute time duration? Since energy is the integral of power, the energy dissipated is $v^2 \times 60 = 60 v^2$ joules. Now consider the same circuit but with a voltage source whose voltage varies with time as shown in the panel on the right, i.e., the voltage at time $t$ is $x(t)$. Let us now consider the question of how much energy is dissipated in the resistor over the entire time interval $(-\infty,\infty)$. At any given time $t$, the (instantaneous) power is given by $x^2(t)$ and the overall energy is given by $\int_{-\infty}^{\infty} x^2(t) \, dt$. Notice that we said that the power dissipated at time $t$ is $x^2(t)$, but can we define one value for the power dissipated when the voltage source is $\underline{x}(t)$? This would represent the average energy dissipated per unit time when the voltage signal is $x(t)$. Indeed, we define such a quantity below. We refer to the energy and power dissipated by the resistor as the energy and power associated with the voltage or signal $x(t)$.

Definition of Energy and Power

The energy and power of a CT signal $\underline{x}(t)$ and a DT signal $\underline{x}[n]$ are defined as

$$E_x = \int_{-\infty}^{\infty} |x(t)|^2 \, dt, \qquad P_x = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} |x(t)|^2 \, dt \tag{1}$$

$$E_x = \sum_{n=-\infty}^{\infty} |x[n]|^2, \qquad P_x = \lim_{N \to \infty} \frac{1}{2N+1} \sum_{n=-N}^{N} |x[n]|^2$$

These definitions apply to both real and complex signals $x(t)$ and $x[n]$.

Power of periodic signals

Consider a periodic CT signal $\underline{x}(t)$ with time period $T_0$, such as the example shown in the figure below. For such a signal, the energy of the signal given by $\int_{-\infty}^{\infty} |x(t)|^2 dt$ is infinite.
Since the power is the average energy per period, the power is given by

$$P_x = \frac{1}{T_0} \int_{t_0}^{t_0+T_0} |x(t)|^2 \, dt \tag{2}$$

where $t_0$ is an arbitrary time instant starting from which we measure the time period. Since the signal is periodic, $t_0$ can be arbitrary, i.e., regardless of which time interval we choose to measure the energy over, as long as we measure over a time interval equal to one time period, the result is identical. If $x[n]$ is a periodic DT signal with time period $N_0$, the power of the periodic signal is defined as

$$P_x = \frac{1}{N_0} \sum_{n=n_0}^{n_0+N_0-1} |x[n]|^2 \tag{3}$$

Just as in the CT case, $n_0$ can be arbitrary and the choice of $n_0$ does not affect the result.

Energy as the strength of a signal

Even though we used the circuit example as a motivation to define the energy of a signal, the definition of energy is not confined to signals which can be interpreted as a voltage waveform. Rather, the energy of a signal can be used as a measure of the strength of a signal. Often, we encounter situations where we would like to measure the strength of a signal or compare the strengths of two signals, and the energy of the signal provides such a quantitative measure. The above definition of energy is indeed only one of many possible choices, and there are other ways to define the strength of a signal. For example, one can take the maximum value attained by the signal as one measure of strength, or the sum of the absolute values of a DT signal as another. All these measures are meaningful, and depending on the decision we would like to make, we must choose the appropriate measure. The energy of a signal as defined in (1) is commonly used, and in this course it will be our default definition of energy.
The definition of energy is closely related to what in mathematics is called the norm of a vector.

Energy type and power type signals

$\underline{x}(t)$ is an energy type signal if $0<E_x<\infty$. $\underline{x}(t)$ is a power type signal if $0<P_x<\infty$. Clearly, for any nonzero periodic signal $E_x$ is not bounded, and hence periodic signals cannot be energy type signals. If the energy within one period of the signal is bounded, then the power will be bounded, and hence such signals will be power type signals.

Example Problems

Example 2: Let $x(t) = A \cos\left(\omega_0 t+ \theta\right)$. Is this a power signal or an energy signal?

Example 3: What is the power of the signal $x(t) = e^{j\omega_0 t}$, where $T_0=\frac{2\pi}{\omega_0}$?

Example 4: Compute the energy of the signal $x[n]$ given by (8).
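As a check on Example 2, the power of $x(t) = A\cos(\omega_0 t + \theta)$ can be estimated numerically by averaging $|x(t)|^2$ over one period, following the periodic-power definition; a minimal sketch (the values of $A$, $\omega_0$, $\theta$ are arbitrary illustrations):

```python
import numpy as np

# Sketch: estimate the power of x(t) = A*cos(w0*t + theta) by averaging
# |x(t)|^2 over exactly one period, per the periodic-power definition.
A, w0, theta = 2.0, 2 * np.pi * 5.0, 0.3
T0 = 2 * np.pi / w0

# Uniform samples spanning one full period (endpoint excluded to avoid
# double-counting t = 0 and t = T0).
t = np.linspace(0.0, T0, 100000, endpoint=False)
x = A * np.cos(w0 * t + theta)
power = np.mean(x**2)   # discrete average approximates (1/T0) * integral

print(power)
```

Analytically the power is $A^2/2$, independent of $\omega_0$ and $\theta$, so the printed value should be very close to 2.0 for $A = 2$.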
Volume of a Tetrahedron \[V= \frac{1}{3} Base \: Area \times Height\] The diagram shows a tetrahedron with sides of length \[2x\]. The base is an equilateral triangle of area \[\frac{1}{2} \times 2x \times 2x \times \sin 60^\circ =x^2 \sqrt{3}\] To find the height, first find the distance from a vertex of the base to the centre of the base. Divide the base into three equal triangles by drawing lines from the centre to the vertices. Each triangle formed has an angle of 120 degrees opposite a side of length \[2x\]. Using the Cosine Rule gives \[(2x)^2=d^2+d^2-2d \times d \times \cos 120^\circ =2d^2-2d^2 \times \left(- \frac{1}{2}\right) = 3d^2 \rightarrow d = \frac{2x}{\sqrt{3}}\] Now form the right-angled triangle as shown and use Pythagoras' Theorem to find the height. The height is \[\sqrt{(2x)^2 - \left(\frac{2x}{\sqrt{3}}\right)^2}= \frac{2x \sqrt{2}}{\sqrt{3}}\] The volume is then \[\frac{1}{3} Base \: Area \times Height = \frac{1}{3} \times x^2 \sqrt{3} \times \frac{2x \sqrt{2}}{\sqrt{3}} = \frac{2x^3 \sqrt{2}}{3}\]
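The result can be sanity-checked numerically: placing a regular tetrahedron of edge \[2x\] in coordinates and computing its volume with the scalar triple product \[|\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c})|/6\] should reproduce \[\frac{2x^3 \sqrt{2}}{3}\]. A small sketch:

```python
import numpy as np

# Check of the worked result V = 2*sqrt(2)*x^3/3 for a regular tetrahedron
# of side 2x, using the scalar-triple-product volume |a . (b x c)| / 6
# with the fourth vertex at the origin.
x = 1.7          # arbitrary positive value
s = 2 * x        # edge length

# Vertices of a regular tetrahedron with edge s (base in the plane z = 0;
# apex above the base centroid at height s*sqrt(2/3)).
a = np.array([s, 0.0, 0.0])
b = np.array([s / 2, s * np.sqrt(3) / 2, 0.0])
c = np.array([s / 2, s * np.sqrt(3) / 6, s * np.sqrt(2.0 / 3.0)])

vol_triple = abs(np.dot(a, np.cross(b, c))) / 6.0
vol_formula = 2 * np.sqrt(2) * x**3 / 3

print(vol_triple, vol_formula)
```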
There are 2 observers both facing the same direction (let's assume the positive y-axis in a 3D system). Initially, the observation vectors are parallel. The second observer spots the target. So, in terms of spherical co-ordinates, the observer tilts by angle $\phi$, pans by angle $\theta$ and measures the range of the target from its position to be $R$. Now, the first observer is only aware of the following information: the relative position of the second observer from the first observer $(\Delta x, \Delta y, \Delta z)$ and the observation information from the second observer $(R, \theta, \phi)$. For example, the first observer is at the origin $(0, 0, 0)$, the second observer is at the position $(2,0,2)$ and the target is at $(1,1,1)$. Initially, the observers are both looking along the +y-axis. The second observer tilts down by $45^\circ$ ($\phi=-\pi/4$), and pans left by $45^\circ$ ($\theta=-\pi/4$). The distance between the two points is measured ($R=\sqrt3$). The first observer is given the following inputs: $(\Delta x, \Delta y, \Delta z)$ = $(2, 0, 2)$ $(R, \theta, \phi)$ = $(\sqrt3, -\pi/4, -\pi/4)$ Based on this information alone, how can I calculate the required pan ($\theta^F$) and tilt ($\phi^F$) of the first observer's observation vector so that it would be looking directly at the target found by the second observer?
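A sketch of one way to do the reconstruction, under an assumed convention (both observers initially face +y; pan $\theta$ rotates about the z-axis, positive toward +x; tilt $\phi$ is elevation, positive up), so a unit observation vector is $(\sin\theta\cos\phi,\ \cos\theta\cos\phi,\ \sin\phi)$. The idea: convert $(R,\theta,\phi)$ to a Cartesian offset, add $(\Delta x,\Delta y,\Delta z)$, and convert back. Here the example's known target is used to synthesize observer 2's measurement so the round trip can be checked:

```python
import numpy as np

# Assumed convention: forward is +y; pan theta rotates about z (positive
# toward +x); tilt phi is elevation (positive up). Unit observation vector:
# (sin(theta)*cos(phi), cos(theta)*cos(phi), sin(phi)).

def spherical_to_cart(R, theta, phi):
    return R * np.array([np.sin(theta) * np.cos(phi),
                         np.cos(theta) * np.cos(phi),
                         np.sin(phi)])

def cart_to_pan_tilt(v):
    return np.arctan2(v[0], v[1]), np.arcsin(v[2] / np.linalg.norm(v))

# Observer 2 sits at delta relative to observer 1 and observes the target.
delta = np.array([2.0, 0.0, 2.0])
target = np.array([1.0, 1.0, 1.0])
offset = target - delta                 # target as seen from observer 2
R2 = np.linalg.norm(offset)             # sqrt(3)
theta2, phi2 = cart_to_pan_tilt(offset)

# Observer 1 reconstructs the target from (delta, R2, theta2, phi2) alone,
# then converts the reconstructed position to its own pan and tilt.
target_rel_obs1 = delta + spherical_to_cart(R2, theta2, phi2)
theta_F, phi_F = cart_to_pan_tilt(target_rel_obs1)
print(np.degrees(theta_F), np.degrees(phi_F))
```

Note that with this convention the exact angles for the example come out as $\theta_2 = \arctan(-1/1)$ and $\phi_2 = \arcsin(-1/\sqrt3) \approx -35.26^\circ$ rather than exactly $-45^\circ$, so the question's quoted $(-\pi/4, -\pi/4)$ appears to be rounded.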
Session 37 - First Results from the Solar and Heliospheric Observatory (SOHO). Display session, Tuesday, June 11, Tripp Commons. We present the first solar EUV spectral atlas in the wavelength range 500 -- 1600 Å. The spectra were recorded with the Solar Ultraviolet Measurements of Emitted Radiation (SUMER) instrument, which is part of the ESA/NASA Solar and Heliospheric Observatory (SOHO). The solar spectrum below 1200 Å is not very well known. Thus, the present spectral atlas, and SUMER observations in general, represent an important new diagnostic tool for studying essential physical parameters of the solar atmosphere. It includes emission from atoms and ions in the temperature range 10^4 to 2 \times 10^6 K. Thus, emission lines and continua emitted from the lower chromosphere to the lower corona can be studied. The atlas is also useful as a planning tool for SUMER studies to determine useful dwell times and possible blends, and to select proper data extraction windows. The angular resolution of SUMER is close to 1 arcsec, but the atlas presented here represents an average along part of the 1-arcsec wide slit, typically 30 arcsec. The spectral resolving power of the instrument is \lambda/\Delta \lambda = 17770-38300. For more details about the SUMER instrument we refer to Wilhelm et al. (Solar Physics, 162, 189, 1995). The spectral data in this atlas were obtained with the spectrometer slit positioned at the center of the solar disk with a dwell time of 300 s to bring up weak lines and continua. The full spectral range was put together from a number of exposures, each covering approximately 20 Å in 1st order, on the coated, and therefore most sensitive, part (KBr) of the detector. 1st and 2nd order spectra are superimposed. The spectral atlas is available in a computer-readable format together with an IDL program to read and display the data using a widget interface.
The atlas and the programs can be obtained via the World Wide Web (http://hydra.mpae.gwdg.de/mpae_projects/SUMER/sumer.html) or by contacting one of the authors.
This is a natural question which confused me a lot. I think it is generally true, but I have no idea how to prove it. Also, can anyone give a counter-example? The question follows from a problem in Topics in Algebra by Herstein. The problem is to show that $\sqrt{2}+\sqrt[3]{5}$ is algebraic over $\mathbb{Q}$ of degree 6. If I could prove the proposition $[F(\alpha+\beta):F(\beta)]=[F(\alpha):F(\beta)]$, then I could solve this question. However, it now seems impossible to do so.
In a tech-savvy world where electronics are part of nearly every aspect of modern life, society has become increasingly energy-conscious when it comes to powering off to conserve as much energy as possible. But what if people learned that their devices are still energy vacuums when they are not in use? Unplugging household electronic devices can save small amounts of energy per household, which adds up to substantial quantities if the entire country adopted these good practices. The goal of this task is to determine how much energy would be saved in the average American household if the owners unplugged their devices when not in use. According to a study conducted by the Natural Resources Defense Council in 2015, it was estimated that there are about 65 devices in the average American home, with an average of 53 plugged-in devices and about 12 permanently connected devices such as furnaces. The vast majority of these devices are constantly consuming energy, even when they are thought to be turned off. The EPA has made large efforts to help consumers make environmentally and energy-conscious decisions by providing them with as much information as possible about the most “green” products on the market today. Not all energy consumption should be treated equally, and not all models of products are on a level playing field either. For example, one of the products from Energy Star’s 2017 list of the most efficient washers is a large Samsung front-load washer (model WF45M51**A*). Energy Star estimates that if the average household does approximately 6 loads of laundry per week, or about 295 loads per year, this washer is expected to use 80 kWh of energy per year. Other, less efficient washers can use anywhere from 500 to 1300 watts. For the purposes of this comparison, assume an average, not very efficient washer uses 900 watts.
If the same household did about 6 loads of laundry per week, or about 295 loads per year, they would use roughly 131 kWh of energy per year. One household alone could save about 50 kWh of energy per year, and when multiplied by approximately 126 million U.S. households, that is an estimate of \[ 6.5 \times 10^9 \text{ kWh of energy saved in a single year}\] This shows that the efficiency of a washing machine model makes a bigger impact than unplugging it ever would. Although saving significant quantities of energy may be a big enough incentive for some people to start practicing eco-friendly behaviors, others might need a monetary enticement for their practices to seem worthwhile. According to the Natural Resources Defense Council issue paper of 2015, “always-on energy use by inactive devices translates to approximately $19 billion a year—about $165 per U.S. household on average—and 50 large (500-megawatt) power plants’ worth of electricity”. Using 2015 census information of about 117 million U.S. households, it is approximated that \[117 \text{ million households} \times \ $165 \text{ per household on average} \approx $19 \text{ billion}\] The survey conducted in Northern California by the Natural Resources Defense Council found that idle electricity accounted for a significant 23% of total household energy consumption. Huge fractions of energy like this should make communities and the country as a whole want to step back and reflect on where they are going wrong in energy consumption. You may be wondering how it is possible that so much energy is going to waste. We live in a society where people want their things to be ready to use instantaneously at the touch of their fingers. This has resulted in the development of many items having the option of “standby” mode or putting an item to “sleep,” but not shutting down. These different modes still draw power even when the devices are completely inactive.
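The washer arithmetic above can be checked directly. All figures below are the essay's own; the 131 kWh/yr figure implies roughly a 33-minute cycle at 900 W, which is treated here simply as the essay's assumption:

```python
# Quick check of the essay's washer savings arithmetic (figures as given).
loads_per_year = 295
efficient_kwh = 80            # Energy Star washer, kWh per year
inefficient_kwh = 131         # assumed 900 W washer, kWh per year
households = 126e6            # approximate number of U.S. households

savings_per_home = inefficient_kwh - efficient_kwh   # 51 kWh per year
national_savings = savings_per_home * households     # kWh per year nationwide

print(national_savings)   # on the order of 6.4e9, matching the 6.5e9 estimate
```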
This type of energy consumption does not visibly affect the consumer, but rather adds to the large total of 1,375 billion kilowatt-hours used in U.S. homes annually, amounting to 15% of all U.S. greenhouse gas emissions. If the majority of U.S. households made an effort to “fully turn off devices, unplug, and avoid using standby mode when devices are not in use, it would save users a total of $8 billion on their utility bills annually, 64 billion kilowatt-hours of electricity use per year, and prevent 44 million metric tons of CO2 emissions from being put into the atmosphere.” In one year alone this is a substantial amount of CO2 kept out of the atmosphere, so it is remarkable to think that humans have the power to change the direction of the planet’s climate by collectively practicing good electronic habits. It is important to note that not all energy consumption should be weighted equally. Consumer electronics, such as TVs, computers, printers, and gaming consoles, accounted for more than half of all wasted household energy. Other electronics account for far less. The “Mathematics for Sustainability” textbook notes how misleading multiplication can lead people to draw conclusions by adding together very many small quantities to get what seems like a substantial amount of something (Mathematics 61). The notion that “if everyone unplugged their cell phone charger when not in use, there would be enough extra electricity to power half a million homes” is misleading because the percent of energy saved is extremely minimal per household, even though it can seem substantial on a national scale. \[ \frac {500,000}{130,000,000} \times 100 \text{ percent} \approx \text{ only } 0.4 \text{ percent of American household energy use}\] This is not to say that small actions do not add up, but some have a bigger impact than others. It is important to be aware of the large amount of energy that is wasted by keeping electronics on standby versus completely powered off.
Much of the progress that is taking place to improve efficiency is through the manufacturer. Groups like Energy Star help promote environmentally advantageous products to the consumer while encouraging manufacturers to advance their technology as well. We are living in a time where there are more options for clothes washers, dishwashers, fridges, and TVs than ever, so the consumer has the option to choose the most efficient products on the market. The more people are aware of ways to curb energy use, the better they can reduce the amount of energy wasted, money spent, and CO2 emissions that result from energy being drained by electronics in idle mode. Sources: https://www.energystar.gov/most-efficient/me-certified-clothes-washers/ https://www.nrdc.org/sites/default/files/home-idle-load-IP.pdf http://aip.scitation.org/doi/full/10.1063/1.3558870 https://energy.gov/energysaver/estimating-appliance-and-home-electronic-energy-use https://pdfs.semanticscholar.org/935d/2551560a7dcfe40e23eb281aba4b746a373c.pdf The Mathematics for Sustainability online textbook by John Roe, Russell deForest, Sara Jamshidi
In Frank Schorfheide's class notes on likelihood functions of DSGE models, he expresses the value of the likelihood function for a given vector of parameters $\theta$, and time series $Y^T$ as: $$p(Y^{T}|\theta)=(2\pi)^{-nT/2}\left(\prod_{t=1}^{T}\left|F_{t|t-1}\right|\right)^{-1/2}exp\{-\frac{1}{2}\sum_{t=1}^{T}v_{t}F_{t|t-1}v_t\prime\}$$ where $v_t$ is the innovation in $y$ $$v_t=y_t-\hat{y}_{t|t-1}$$ and the marginal distribution of $y_t$ is $$y_t|Y^{t-1}\sim\mathcal{N}\left(\hat{y}_{t|t-1}, F_{t|t-1}\right)$$ I've just got a few questions about what these terms look like. First, does anyone have an idea what $n$ is in the exponent of the first term in the first equation? I think it might be a misprint, but I'm not sure. Second, what does $F_{t|t-1}$ look like? For an $n\times 1$ vector $y$ I'm picturing an $n\times n$ matrix, but what would the values of $F_{j,k}$ be equal to? I'm picturing the covariance between $y_{t,j}$ and $\hat{y}_{t|t-1,k}$ - is that correct? Lastly, I'm assuming from the results of my code that the value the likelihood function returns is a scalar, but it doesn't look like the formula produces one -- for an $n\times 1$ vector $y$, wouldn't the second term in the first equation be $n\times n$? Or do you think it's meant to be the determinant of $F$?
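For what it's worth, here is a hedged numerical sketch of how such a Gaussian prediction-error (Kalman-filter) likelihood is typically evaluated, with toy innovations and covariances rather than Schorfheide's actual model: $n$ is taken as the dimension of $y_t$, $F_{t|t-1}$ as the $n\times n$ conditional covariance of $y_t$ given $Y^{t-1}$, and the exponent as $-\frac12\sum_t v_t' F_{t|t-1}^{-1} v_t$ (note the inverse), which makes each term, and the log-likelihood, a scalar:

```python
import numpy as np

# Hedged sketch: Gaussian prediction-error log-likelihood with toy data.
# n = dimension of y_t; F_t = n x n covariance of the innovation
# v_t = y_t - yhat_{t|t-1}. Each quadratic form v' F^{-1} v is a scalar,
# so the log-likelihood is a scalar.

rng = np.random.default_rng(0)
n, T = 2, 5
Fs = [np.eye(n) * (1.0 + 0.1 * t) for t in range(T)]   # toy covariances
vs = [rng.standard_normal(n) for _ in range(T)]        # toy innovations

log_lik = -0.5 * n * T * np.log(2 * np.pi)
for v, F in zip(vs, Fs):
    sign, logdet = np.linalg.slogdet(F)                # log|F_t|
    log_lik += -0.5 * logdet - 0.5 * (v @ np.linalg.solve(F, v))

print(np.ndim(log_lik))   # 0: the log-likelihood is a scalar
```

This is consistent with reading the $(2\pi)^{-nT/2}$ term as having $n$ equal to the number of observables per period, and the quoted exponent as a transcription that dropped the $F^{-1}$.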
Radical expressions can also be written without using the radical symbol. We can use rational (fractional) exponents. The index must be a positive integer. If the index [latex]n[/latex] is even, then [latex]a[/latex] cannot be negative. We can also have rational exponents with numerators other than 1. In these cases, the exponent must be a fraction in lowest terms. We raise the base to a power and take an nth root. The numerator tells us the power and the denominator tells us the root. All of the properties of exponents that we learned for integer exponents also hold for rational exponents. A General Note: Rational Exponents Rational exponents are another way to express principal nth roots. The general form for converting between a radical expression with a radical symbol and one with a rational exponent is [latex]{a}^{\frac{m}{n}}={\left(\sqrt[n]{a}\right)}^{m}=\sqrt[n]{{a}^{m}}[/latex] How To: Given an expression with a rational exponent, write the expression as a radical. Determine the power by looking at the numerator of the exponent. Determine the root by looking at the denominator of the exponent. Using the base as the radicand, raise the radicand to the power and use the root as the index. Example 11: Writing Rational Exponents as Radicals Write [latex]{343}^{\frac{2}{3}}[/latex] as a radical. Simplify. Solution The 2 tells us the power and the 3 tells us the root. [latex]{343}^{\frac{2}{3}}={\left(\sqrt[3]{343}\right)}^{2}=\sqrt[3]{{343}^{2}}[/latex] We know that [latex]\sqrt[3]{343}=7[/latex] because [latex]{7}^{3}=343[/latex]. Because the cube root is easy to find, it is easiest to find the cube root before squaring for this problem. In general, it is easier to find the root first and then raise it to a power. [latex]{343}^{\frac{2}{3}}={\left(\sqrt[3]{343}\right)}^{2}={7}^{2}=49[/latex] Try It 11 Write [latex]{9}^{\frac{5}{2}}[/latex] as a radical. Simplify. Example 12: Writing Radicals as Rational Exponents Write [latex]\frac{4}{\sqrt[7]{{a}^{2}}}[/latex] using a rational exponent.
Solution The power is 2 and the root is 7, so the rational exponent will be [latex]\frac{2}{7}[/latex]. We get [latex]\frac{4}{{a}^{\frac{2}{7}}}[/latex]. Using properties of exponents, we get [latex]\frac{4}{\sqrt[7]{{a}^{2}}}=4{a}^{\frac{-2}{7}}[/latex]. Try It 12 Write [latex]x\sqrt{{\left(5y\right)}^{9}}[/latex] using a rational exponent. Example 13: Simplifying Rational Exponents Simplify: [latex]5\left(2{x}^{\frac{3}{4}}\right)\left(3{x}^{\frac{1}{5}}\right)[/latex] and [latex]{\left(\frac{16}{9}\right)}^{-\frac{1}{2}}[/latex] Solution [latex]\begin{array}{cc}30{x}^{\frac{3}{4}}{x}^{\frac{1}{5}}\hfill & \text{Multiply the coefficients}.\hfill \\ 30{x}^{\frac{3}{4}+\frac{1}{5}}\hfill & \text{Use properties of exponents}.\hfill \\ 30{x}^{\frac{19}{20}}\hfill & \text{Simplify}.\hfill \end{array}[/latex] [latex]\begin{array}{cc}{\left(\frac{9}{16}\right)}^{\frac{1}{2}}\hfill & \text{ }\text{Use the definition of negative exponents}.\hfill \\ \sqrt{\frac{9}{16}}\hfill & \text{ }\text{Rewrite as a radical}.\hfill \\ \frac{\sqrt{9}}{\sqrt{16}}\hfill & \text{ }\text{Use the quotient rule}.\hfill \\ \frac{3}{4}\hfill & \text{ }\text{Simplify}.\hfill \end{array}[/latex] Try It 13 Simplify [latex]{\left(8x\right)}^{\frac{1}{3}}\left(14{x}^{\frac{6}{5}}\right)[/latex].
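The two worked answers can be verified numerically; a quick sketch:

```python
# Numeric check of the worked examples: 343^(2/3) and (16/9)^(-1/2).
val1 = 343 ** (2 / 3)        # (cube root of 343)^2 = 7^2 = 49
val2 = (16 / 9) ** (-1 / 2)  # sqrt(9/16) = 3/4
print(round(val1, 9), round(val2, 9))
```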
I have the following partial differential equation: I'm asked to prove that if $f\equiv 0$, then the total energy (kinetic energy + potential energy) of the system decreases with time. What is the expression for the energy of this system? I know what the expression for the energy is for parabolic or hyperbolic partial differential equations. But this, clearly, is neither. UPDATE: If we define the energy to be $\frac{1}{2}(u_t)^2+\frac{1}{2}\sum\limits_{ij}a^{ij}u_{x_i}u_{x_j}$, then it seems that $\frac{dE}{dt}=-\int{d(u_t)^2}$. I don't quite understand how one gets this final expression.
Concerning electromagnetism, textbooks often refer to the Duality Theorem. Sometimes it is presented like this: «Consider Maxwell's Equations (with phasors) and a known field $\mathbf{E}_1$, $\mathbf{H}_1$: $\nabla \times \mathbf{E}_1 = - j \omega \mu \mathbf{H}_1$ $\nabla \times \mathbf{H}_1 = j \omega \epsilon \mathbf{E}_1$ If $\mathbf{E}_1$ is replaced with $\mathbf{H}_2$ (the magnetic field of another electromagnetic field: $\mathbf{E}_2$, $\mathbf{H}_2$), $\mathbf{H}_1$ is replaced with $-\mathbf{E}_2$, $\mu$ is replaced with $\epsilon$, and $\epsilon$ with $\mu$, then the above equations become respectively $\nabla \times \mathbf{H}_2 = j \omega \epsilon \mathbf{E}_2$ $\nabla \times \mathbf{E}_2 = - j \omega \mu \mathbf{H}_2$» They are valid Maxwell's Equations too. But what follows from this substitution? 1) Should it be $\mathbf{E}_1 = \mathbf{H}_2$ and $\mathbf{H}_1 = -\mathbf{E}_2$? That is dimensionally incorrect. 2) I have also read $\mathbf{E}_1 = \eta \mathbf{H}_2$ and $\eta \mathbf{H}_1 = -\mathbf{E}_2$. My question is twofold: what is the advantage of the Duality Theorem, and which of the two forms just written is correct? Thank you anyway!
Pg 171 of "Tensors, Relativity and Cosmology": The non-relativistic limit of the metric in a static gravitational field is defined as $$ds^2=\left(1+\frac{2 \phi}{c^2}\right)(dx^0)^2+g_{\alpha \beta}dx^\alpha dx^\beta \tag{1}$$ where $\alpha, \beta=1,2,3$. In the non-relativistic limit in the static gravitational field, with the approximate metric given by (1), the only non-trivial component of the Ricci tensor is the one with $k=n=0$: $$R_{00}=\partial_0 \Gamma^j_{0j}-\partial_j\Gamma^j_{00}+\Gamma^p_{0j}\Gamma^j_{p0}-\Gamma^p_{00}\Gamma^j_{pj} \tag{2}$$ But why? I understand that in a static gravitational field the components of the metric tensor $g_{kn}$ $(k,n=1,2,3,4)$ are independent of the time coordinate (i.e. $\partial_0 g_{kn}=0$), and could this be related to the answer to my question? I tried to verify this with other spatial components like $R_{\alpha \beta}$ by expressing the Christoffel symbols in terms of the metric tensor, but none of them seem to cancel out completely.
I am able to prove the iff in the forward direction. But I am having trouble proving the statement in the other direction. I am trying to use the definition of dense, but I am not getting anywhere with it. Let $U \subseteq X$ be an open set and let $S \subseteq X$ be the set in question. Denseness is equivalent to intersecting every open set non-trivially, hence $U \cap \bar{S}^c \neq \emptyset$. Considering properties of the complement, this is the same as saying $U$ is not a subset of $\bar{S}$. Since the closure of $S$ does not contain a nonempty open set, it has empty interior and so $S$ is nowhere dense. Let $S^{c}$ denote the complement of $S$ in $X$ and $cl(S)$ the closure of $S$ in $X$. To prove that $S$ is nowhere dense, we must show that any open set $U \subseteq cl(S)$ is empty. Let $U$ be such a set. The complements are included in the opposite order, that is, $cl(S)^{c} \subseteq U^{c}$. Taking the closures of both sides and noting that $U^{c}$ is closed, this gives $cl(cl(S)^{c}) \subseteq U^{c}$. But $cl(S)^{c}$ is by assumption dense, which (by definition) means that $cl(cl(S)^{c}) = X$. We thus obtain the relation $X \subseteq U^{c}$. This is possible only if $U = \emptyset$. Hence any open subset of $cl(S)$ must be empty, and thus $S$ is nowhere dense. Let $A^c$ be the complement of $A$, $\overline{A}$ the closure of $A$, and $A^o$ the interior of $A$. We define: 1) $A$ is dense iff $\overline{A}=X$. 2) $A$ is nowhere dense iff $(\overline{A})^o=\varnothing$. Edit: We will use the fact that $(A^c)^c=A$ (involution) in the following proof. First we prove the following lemma: $$ \overline{A}=((A^c)^o)^c\tag1 $$ By definition, $A^o$ (the interior of $A$) is the largest open set contained in $A$. So $(A^c)^o$ is the largest open set contained in $A^c$, i.e. $(A^c)^o\subseteq A^c$. Thus $A=(A^c)^c\subseteq ((A^c)^o)^c$. Since $(A^c)^o$ is open, $((A^c)^o)^c$ is closed. So this means that $((A^c)^o)^c$ is the smallest closed set containing $A$.
By the definition of closure, $(1)$ follows. Now if $A$ is nowhere dense, then $(\overline{A})^o=\varnothing$, so by $(1)$ and involution $$ \overline{(\overline{A})^c}=((((\overline{A})^{c})^c)^o)^c=((\overline{A})^o)^c=\varnothing^c=X $$ i.e. $(\overline{A})^c$ is dense. Second, if $(\overline{A})^c$ is dense, then $\overline{(\overline{A})^c}=X$ and $$ ((\overline{A})^o)^c=((((\overline{A})^{c})^c)^o)^c=\overline{(\overline{A})^c}=X $$ So $(\overline{A})^o=\varnothing$, i.e. $A$ is nowhere dense. Assume that the complement of $cl(A)$ is dense. Then $cl[(cl(A))^c] = X$, so $(int(cl(A)))^c = X$. Thus $int(cl(A))$ is empty. Hence $A$ is nowhere dense.
In the $S'$ frame, your variables are $x' = x - t\cdot u \cos\theta $ and $y' = y - t\cdot u \sin\theta$. If you do the change of variables, you get that the motion is now described by $$x' = 0$$$$y' = -\frac{g}{2}t^2$$ So in your new frame of reference you have vertical free fall from rest. This is not very helpful in finding out when or where the projectile hits the ground, but it is very relevant if you want to know where the projectile will be after releasing it from a plane moving at constant velocity: right below it all the time. Disregarding air resistance, of course. EDIT: The primed system is moving with velocity $(u \cos\theta, u\sin\theta)$, so if you have a velocity in the unprimed system, to convert it to the primed system you have to subtract the velocity of the origin: $$\vec{v'} = \vec{v} - (u \cos\theta, u\sin\theta)$$ Integrating this, you can get the relation for the position vector: $$\vec{r'} = \vec{r} - (u \cos\theta, u\sin\theta)t + \vec{r}_0$$ where $\vec{r}_0$ is the position of the origin of the primed system at $t=0$. Both systems share an origin at $t=0$, so $\vec{r}_0=\vec{0}$. Now replace $\vec{r'}=(x',y')$ and $\vec{r}=(x,y)$ and you will get the equations above.
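The transformation can be verified numerically; a small sketch (the values of $g$, $u$, $\theta$ are arbitrary):

```python
import numpy as np

# Sketch: verify that projectile motion, viewed from a frame moving with the
# launch velocity (u*cos(theta), u*sin(theta)), reduces to vertical free fall.
g, u, theta = 9.81, 20.0, np.radians(35)
t = np.linspace(0.0, 3.0, 7)

# Ground-frame trajectory of a projectile launched from the origin.
x = u * np.cos(theta) * t
y = u * np.sin(theta) * t - 0.5 * g * t**2

# Galilean transformation to the co-moving frame.
xp = x - u * np.cos(theta) * t
yp = y - u * np.sin(theta) * t

print(np.max(np.abs(xp)))   # 0: no horizontal motion in the primed frame
```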
Let $h[n]$ be the impulse response of a linear and time-invariant (LTI) system. If the signal $x[n]$ is input to the system, the output signal from the system is given by: $y[n]=\sum_{k=-\infty}^{\infty}x[k]h[n-k]$ This operation is called convolution, and we say that the signal $y[n]$ is the convolution of the signal $x[n]$ with the signal $h[n]$, denoted $y[n] = x[n] * h[n]$. Thus, to compute the signal $y[n]$, perform the following steps: Think of $x[n]$ and $h[n]$ as signals $x[k]$ and $h[k]$ respectively, i.e., with the independent variable being $k$ instead of $n$. Flip $h[k]$ about the $y$-axis to obtain the signal $h[-k]$. To compute the signal $y[n]$ for a fixed value of $n$, shift the signal $h[-k]$ by $n$ units to the right to obtain the signal $h[n-k]$. When $n$ is negative, this amounts to shifting the signal $h[-k]$ to the left, which is mathematically equivalent to shifting right by a negative number. Compute $w_n[k] = x[k]h[n-k]$, i.e., $w_n[k]$ is the product of the signals $x[k]$ and $h[n-k]$. Compute $y[n] =\sum_{k}w_n[k] =\sum_{k}x[k]h[n-k]$ by summing the values of $w_n[k]$ over all values of $k$. This gives the value of the signal $y[n]$ for one value of $n$. Repeat this procedure for every integer value of $n$, i.e., $n \in \{\dots,-3,-2,-1,0,1,2,3,\dots\}$, to obtain the full signal $y[n]$. In practice, it is easiest to start with large negative values of $n$ and increase $n$.
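The recipe above can be sketched directly in code (a naive implementation for finite-length signals, both assumed to start at $n=0$):

```python
import numpy as np

def convolve_direct(x, h):
    """Direct evaluation of y[n] = sum_k x[k] h[n-k] for finite signals."""
    N = len(x) + len(h) - 1          # output length of the full convolution
    y = np.zeros(N)
    for n in range(N):
        for k in range(len(x)):
            if 0 <= n - k < len(h):  # h[n-k] is zero outside its support
                y[n] += x[k] * h[n - k]
    return y
```

For finite signals this agrees with `np.convolve(x, h)`, which computes the same sum much faster.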
(This is the case $a=\frac16$ of ${_2F_1\left(a ,a ;a +\tfrac12;-u\right)}=2^{a}\frac{\Gamma\big(a+\tfrac12\big)}{\sqrt\pi\,\Gamma(a)}\int_0^\infty\frac{dx}{(1+2u+\cosh x)^a}.\,$ There is also $a=\frac13$ and $a=\frac14$.) Note: After investigating $a=\frac13$ and $a=\frac14$, I wondered if there was one for $a=\frac16$. And happily there was: $$\frac{1}{\color{blue}{432}^{1/4}\,K(k_3)}\,\int_0^1 \frac{dx}{\sqrt{1-x}\,\sqrt[6]{x^5+\tfrac{125}3x^6}}=\,_2F_1\big(\tfrac16,\tfrac16;\tfrac23;-\tfrac{125}{3}\big)=\frac{2}{3^{5/6}}$$ $$\frac{1}{\color{blue}{432}^{1/4}\,K(k_3)}\,\int_0^1 \frac{dx}{\sqrt{1-x}\,\sqrt[6]{x^5+2^7\phi^9\, x^6}}=\,_2F_1\big(\tfrac16,\tfrac16;\tfrac23;-2^7\phi^9\big)=\frac{3}{5^{5/6}}\phi^{-1}$$ The first was found by computer search and, from previous posts, the denominator was enough to give me a clue that $\tau=\frac{1+3\sqrt{-3}}2$ was involved. After fiddling around with some equations, a third conjecture can be made: that there is an infinite family of algebraic numbers $\alpha$ and $\beta$ such that $$_2F_1\left(\frac16,\frac16;\frac23;-\alpha\right)=\beta$$ Conjecture: "Let $\tau = \frac{1+p\sqrt{-3}}{2}$ with integer $p>1$. Then $\alpha$ is the root of an analogous quadratic, $$16\cdot\color{blue}{432}\,\alpha(1+\alpha)=-j(\tau)$$ with the $j$-function $j(\tau)$. And if odd $p=3k\pm1$ is a prime, then $\alpha$ and $\beta^6$ are algebraic numbers of degree $k$." $$\begin{array}{|c|c|c|c|c|}\hline p&\tau&\alpha(\tau)&\beta(\tau)&\text{Deg}\\\hline 3&\frac{1+3\sqrt{-3}}2&\frac{125}3& \large\frac2{3^{5/6}} &1\\5&\frac{1+5\sqrt{-3}}2&2^7\phi^9& \large\frac3{5^{5/6}}\phi^{-1} &2\\7&\frac{1+7\sqrt{-3}}2&\Big(\frac{129 + 29\sqrt{21}}2\Big)^3& \large\frac47 \frac1{U_{21}^{1/2}} &2\\11&\frac{1+11\sqrt{-3}}2& x_1 & \large\frac6{11} x_2 &4 \\13&\frac{1+13\sqrt{-3}}2& y_1 & \large\frac7{13} y_2 &4 \\\hline\end{array}$$ $U_{21}=\frac{5+\sqrt{21}}2$ is a fundamental unit, while $x_i,y_i$ are roots of quartics which are rather tedious to write down. And so on.
Q: How do we prove this conjecture? (And the other two?)
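The first closed-form evaluation is at least easy to check numerically, e.g. with mpmath (whose `hyp2f1` continues ${}_2F_1$ analytically outside the unit disk):

```python
from mpmath import mp, hyp2f1, mpf

mp.dps = 30  # work with 30 significant digits

# 2F1(1/6, 1/6; 2/3; -125/3) should equal 2 / 3^(5/6)
lhs = hyp2f1(mpf(1)/6, mpf(1)/6, mpf(2)/3, -mpf(125)/3)
rhs = 2 / mpf(3)**(mpf(5)/6)
print(lhs - rhs)  # ~0 to working precision
```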
I have some questions about the notation used in Section 9.2, Lack of Inherent Superiority of Any Classifier, in Duda, Hart and Stork's Pattern Classification. First let me quote some relevant text from the book: For simplicity consider a two-category problem, where the training set $D$ consists of patterns $x^i$ and associated category labels $y_i = \pm 1$ for $i = 1,..., n$ generated by the unknown target function to be learned, $F(x)$, where $y_i = F(x^i)$. Let $H$ denote the (discrete) set of hypotheses, or possible sets of parameters to be learned. A particular hypothesis $h(x) \in H$ could be described by quantized weights in a neural network, or parameters $\theta$ in a functional model, or sets of decisions in a tree, and so on. Furthermore, $P(h)$ is the prior probability that the algorithm will produce hypothesis $h$ after training; note that this is not the probability that $h$ is correct. Next, $P(h|D)$ denotes the probability that the algorithm will yield hypothesis $h$ when trained on the data $D$. In deterministic learning algorithms such as the nearest-neighbor and decision trees, $P(h|D)$ will be everywhere zero except for a single hypothesis $h$. For stochastic methods (such as neural networks trained from random initial weights), or stochastic Boltzmann learning, $P(h|D)$ can be a broad distribution. Let $E$ be the error for a zero-one or other loss function. The expected off-training-set classification error when the true function is $F(x)$ and the probability for the $k$th candidate learning algorithm is $P_k(h(x)|D)$ is given by $$ \mathcal{E}_k(E|F,n) = \sum_{x\notin D} P(x) [1-\delta(F(x), h(x))] P_k(h(x)|D) $$ Theorem 9.1.
(No Free Lunch) For any two learning algorithms $P_1(h|D)$ and $P_2(h|D)$, the following are true, independent of the sampling distribution $P(x)$ and the number $n$ of training points: Uniformly averaged over all target functions $F$, $\mathcal{E}_1 (E|F, n) - \mathcal{E}_2(E|F, n) = 0$. For any fixed training set $D$, uniformly averaged over $F$, $\mathcal{E}_1 (E|F, D) - \mathcal{E}_2(E|F, D) = 0$. Part 1 is actually saying $$\sum_F \sum_D P(D|F) [\mathcal{E}_1 (E|F, n) - \mathcal{E}_2(E|F, n)] = 0$$ Part 2 is actually saying $$\sum_F [\mathcal{E}_1 (E|F, D) - \mathcal{E}_2(E|F, D)] = 0$$ My questions are: In the formula for $\mathcal{E}_k(E|F,n)$, i.e. $$ \mathcal{E}_k(E|F,n) = \sum_{x\notin D} P(x) [1-\delta(F(x), h(x))] P_k(h(x)|D), $$ can I replace $P_k(h(x)|D)$ with $P_k(h|D)$ and move it outside the sum $\sum_{x \notin D}$, because it is really a distribution of $h$ over $H$ given $D$ for the $k$th stochastic learning algorithm? Given that the $k$th candidate learning algorithm is a stochastic method, why is there no sum over $h$, i.e. $\sum_{h \in H}$, in the formula for $\mathcal{E}_k(E|F,n)$? How are $\mathcal{E}_i (E|F, D)$ and $\mathcal{E}_i (E|F, n)$ different from each other? Does $\mathcal{E}_i (E|F, D)$ mean the off-training-set error rate given a training set $D$? Does $\mathcal{E}_i (E|F, n)$ mean the off-training-set error rate averaged over all training sets of a given size $n$? If yes, why does part 1 of the NFL theorem average $\mathcal{E}_i (E|F, n)$ over training sets again by writing $\sum_D$, and why is there no average over all training sets of size $n$ in the formula for $\mathcal{E}_k(E|F,n)$? In part 1 of the NFL theorem, does $\sum_D$ mean summing over all training sets with a fixed training size $n$? If we further sum over all possible values in $\mathbb{N}$ of the training size $n$ in part 1, the result is still 0, right? In the formula for $\mathcal{E}_k(E|F,n)$, if I change $\sum_{x \notin D}$ to $\sum_x$, i.e.
$x$ is not necessarily restricted to be outside the training set, will both parts of the NFL theorem still be true? If the true relation between $x$ and $y$ is not assumed to be a deterministic function $F$ with $y=F(x)$, but instead conditional distributions $P(y|x)$, or a joint distribution $P(x,y)$, which is equivalent to knowing $P(y|x)$ and $P(x)$ (also see my other question), then I can change $\mathcal{E}_k (E|F,n)$ to be $$ \mathcal{E}_k(E|P(x,y),n) = \mathbb{E}_{x,y} [1-\delta(y, h(x))] P_k(h(x)|D) $$ (with the strange $P_k(h(x)|D)$ pointed out in questions 1 and 2). Are the two parts of the NFL theorem still true? Thanks and regards!
Consider the variable-coefficient, real-valued wave equation $$ u_{tt} - \nabla \cdot (c^2 \nabla u) + qu = 0, \quad u(x,0) = \phi(x), \quad u_t(x, 0) = \psi(x), $$ where $c, q \geq 0$ depend only on $x$. Then we define the total energy at time $t$ of a $C^2$ solution $u$ as $$ E(t) = \frac{1}{2}\int_\Omega (u_t^2 + c(x)^2|\nabla u|^2 + q(x)u^2) \, dx. $$ The goal is to show that the total energy is constant given some boundary conditions. To do this, we differentiate $E$ with respect to $t$, but I have some questions about the presented derivation. The notes I'm following claim that $$ \frac{dE}{dt}(t) = \frac{1}{2}\int_\Omega (u_tu_{tt} + c(x)^2 \nabla u \cdot \nabla u_t + q(x)uu_t)\, dx. $$ However, my calculations have that, e.g.: $$ \frac{d}{dt} u_t^2 = 2\left(\frac{d}{dt}u_t\right)\left(\frac{d}{dt}u\right) = 2u_{tt}u_t.$$ Where does the extra factor of $2$ appear in my derivation? With the $\nabla$ term, I'm having even more trouble recovering the solution's expression. I expand as: $ \begin{align*} \frac{\partial}{\partial t}|\nabla u|^2 &= \frac{\partial}{\partial t}\left(\sum_{i=1}^n\left(\frac{\partial u}{\partial x_i}\right)^2 + \left(\frac{\partial u}{\partial t}\right)^2\right)\\ &= \sum_{i=1}^n 2\frac{\partial^2 u}{\partial t \partial x_i} \frac{\partial u}{\partial t} + 2 \frac{\partial^2 u}{\partial t^2}\frac{\partial u}{\partial t} \\ \end{align*} $ which isn't the form in the notes. Any guidance on how the notes get their final expression would be much appreciated.
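As a quick symbolic sanity check on the time-derivative bookkeeping (a sketch in one space dimension, where $|\nabla u|^2 = u_x^2$ and contains no $u_t$ term):

```python
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)
ut, ux = sp.diff(u, t), sp.diff(u, x)

# chain rule: d/dt (u_t^2) = 2 u_t u_tt  (the 2 would cancel a 1/2 in E)
assert sp.simplify(sp.diff(ut**2, t) - 2 * ut * sp.diff(u, t, 2)) == 0

# gradient term: d/dt (u_x^2) = 2 u_x u_xt, with u_x (not u_t) as the factor
assert sp.simplify(sp.diff(ux**2, t) - 2 * ux * sp.diff(ux, t)) == 0
```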
I am trying to get the probability density function of $Z=X-Y$, given that $f_X(x)$ and $f_Y(y)$ are known and both variables are chi-square distributed, with $X \ge 0$ and $Y \ge 0$. So how can I get $f_Z(z)$? Let's assume that $X$ and $Y$ are independent, and both follow a $\chi^2$-distribution with $\nu$ degrees of freedom. Then $Z = X-Y$ follows a variance-gamma distribution, symmetric about the origin, with parameters $\lambda = \frac{\nu}{2}$, $\alpha=\frac{1}{2}$, $\beta=0$, and $\mu = 0$. The best way to see this is through the moment-generating function: $$ \mathcal{M}_X(t) = \mathcal{M}_Y(t) = \left(1-2 \, t\right)^{-\nu/2} $$ Then $$ \mathcal{M}_Z(t) = \mathcal{M}_X(t) \mathcal{M}_Y(-t) = \left( 1-4 t^2 \right)^{-\nu/2} = \left( \frac{1/4}{1/4-t^2}\right)^{\nu/2} $$ We now see that this matches the moment-generating function of the variance-gamma distribution: $$ \mathcal{M}_{\rm{V.G.}(\lambda,\alpha,\beta,\mu)}(t) = \mathrm{e}^{\mu t} \left( \frac{\alpha^2 -\beta^2}{\alpha^2 - (\beta+t)^2 } \right)^\lambda $$ For the said parameters, $\mu=\beta=0$, $\alpha=\frac{1}{2}$ and $\lambda=\frac{\nu}{2}$, the density has the following form: $$ f_Z(z) = \frac{1}{2^{\nu}\sqrt{\pi}\,\Gamma\left(\frac{\nu}{2}\right)} \vert z \vert^{\tfrac{\nu-1}{2}} K_\tfrac{\nu-1}{2}\left(\frac{\vert z \vert}{2} \right) $$ The function $f_Z(z)$ is continuous at $z=0$ for $\nu > 1$, with $$ \lim_{z \to 0} f_Z(z) =\frac{1}{4 \sqrt{\pi }} \frac{\Gamma \left(\frac{\nu }{2}-\frac{1}{2}\right)}{ \Gamma \left(\frac{\nu }{2}\right)} $$
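As a numerical sanity check, the variance-gamma density with kernel argument $|z|/2$ reduces at $\nu=2$ to a Laplace density with scale 2 (since $\chi^2_2$ is exponential with rate $1/2$), and it should integrate to 1 for any $\nu$. A short script, using only standard SciPy special functions:

```python
import numpy as np
from scipy.special import kv, gamma
from scipy.integrate import quad

def f_Z(z, nu):
    """Density of Z = X - Y, X and Y iid chi-square(nu), Bessel-K form."""
    az = np.abs(z)
    return (az**((nu - 1) / 2) * kv((nu - 1) / 2, az / 2)
            / (2**nu * np.sqrt(np.pi) * gamma(nu / 2)))

# nu = 2: Z should be Laplace with scale 2, i.e. (1/4) exp(-|z|/2)
assert abs(f_Z(1.0, 2) - 0.25 * np.exp(-0.5)) < 1e-12

# normalization for nu = 4 (integrate the positive half, double by symmetry)
half, _ = quad(f_Z, 0, np.inf, args=(4,))
assert abs(2 * half - 1) < 1e-6
```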
What is the best way to simulate the short rate $r(t)$ in a simple one-factor Hull-White process? Suppose I have $$ dr(t) = (\theta(t)-\alpha r(t))dt+\sigma dW_t $$ where $\theta(t)$ is calibrated to the swap curve, and the constants $\alpha$ and $\sigma$ are calibrated to caps using the closed-form solution for zero-coupon bond options. The best way I can think to do it is an Euler discretisation, that is: $$ r(t+\Delta t) = r(t) + \theta(t)\Delta t - \alpha r(t) \Delta t + \sigma \sqrt {\Delta t} Z $$ where $Z \sim N(0,1)$. In this case, I need $t$ to go from 0 to 10 years, ideally in 0.25-year increments. But with Euler, I'd need to use a small $\Delta t$, so perhaps 0.025 or less? Once I have a string of $r(t)$ values, I can easily calculate $P(t,T)$ zero-coupon bonds. I'd appreciate any other ideas, or if someone could point me in the right direction. I'm quite new to rates modelling!
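A minimal sketch of the Euler scheme (with a constant $\theta$ standing in for the calibrated curve; all names are illustrative). Since the Hull-White SDE is linear with Gaussian transitions, an exact-in-distribution step based on $e^{-\alpha\Delta t}$ is also possible and removes the discretisation bias at coarse steps, but plain Euler looks like this:

```python
import numpy as np

def hull_white_euler(r0, theta, alpha, sigma, T, dt, rng=None):
    """Euler-Maruyama path of dr = (theta(t) - alpha r) dt + sigma dW.
    theta is a function of t (pass a constant via lambda t: c)."""
    rng = rng or np.random.default_rng(0)
    n = int(round(T / dt))
    r = np.empty(n + 1)
    r[0] = r0
    for i in range(n):
        dW = np.sqrt(dt) * rng.standard_normal()
        r[i + 1] = r[i] + (theta(i * dt) - alpha * r[i]) * dt + sigma * dW
    return r

# sanity check: with sigma = 0 and constant theta, the rate mean-reverts
# toward theta / alpha = 0.04
path = hull_white_euler(r0=0.02, theta=lambda t: 0.004, alpha=0.1,
                        sigma=0.0, T=100.0, dt=0.01)
```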
I need to estimate the baseline hazard function $\lambda_0(t)$ in a time-dependent Cox model $\lambda(t) = \lambda_0(t) \exp(Z(t)'\beta)$. From the survival course I took, I remember that directly differentiating the cumulative hazard function ($\lambda_0(t) dt = d\Lambda_0(t)$) would not give a good estimator, because the Breslow estimator is a step function. So, is there any function in R that I could use directly? Or any reference on this topic? I am not sure if it is worth opening another question, so I will just add some background on why the baseline hazard function is important for me. The formula below estimates the probability that the survival time for one subject is larger than that for another. Under a Cox model setting, the baseline hazard function $\lambda_0(t)$ is required. $P(T_1 > T_2 ) = - \int_0^\infty S_1(t) dS_2(t) = - \int_0^\infty S_1(t)S_2(t)\lambda_2(t)dt $
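In R, `basehaz()` in the survival package returns the cumulative baseline hazard from a fitted `coxph` model; smoothing its increments with a kernel then gives an estimate of $\lambda_0(t)$ itself. For intuition, here is a bare-bones sketch of the Breslow estimator of $\Lambda_0(t)$ (Python, time-fixed covariates, no tied event times; all names are illustrative):

```python
import numpy as np

def breslow_cumhaz(times, events, lp):
    """Breslow estimate of the cumulative baseline hazard Lambda_0(t).
    times: observed times; events: 1 = event, 0 = censored;
    lp: linear predictor z_i' beta for each subject."""
    order = np.argsort(times)
    times = np.asarray(times, float)[order]
    events = np.asarray(events)[order]
    risk = np.exp(np.asarray(lp, float)[order])
    # denominator at each time: sum of exp(lp) over subjects still at risk
    denom = np.cumsum(risk[::-1])[::-1]
    jumps = np.where(events == 1, 1.0 / denom, 0.0)
    return times, np.cumsum(jumps)

# with beta = 0 and no censoring this reduces to the Nelson-Aalen estimator
t, L = breslow_cumhaz([1.0, 2.0, 3.0], [1, 1, 1], [0.0, 0.0, 0.0])
assert np.allclose(L, [1/3, 1/3 + 1/2, 1/3 + 1/2 + 1])
```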
The problem has nothing in particular to do with logarithms -- it is a general phenomenon that we cannot always represent the true result of a computation exactly on paper, and so we have to settle for approximations. For example, we have $\log_{10}(13) = 1.1139433523...$. The true, mathematical value of $\log_{10}(13)$ is an exact, precise number: the number such that 10 to that power is 13 exactly. There's nothing approximate about that, except that writing that number down in decimals is not possible; we would need to write an infinity of digits to do so. But that is not much different from, say, square roots, which I assume you will have seen already. We have $\sqrt{13} = 3.6055512754...$ -- this is again an exact number that we cannot write down exactly, so for practical calculations we have to settle for approximations. Even plain old arithmetic shows this, for example in division: $1\div 13 = 0.0769230769...$. Here it so happens that the decimals repeat, so we can write exactly $1\div 13 = 0.\overline{076923}$ -- but that is not a very useful representation of the number for further calculations, so in practice we will just choose to cut it off after some number of significant digits. Taking a ten-digit approximation, as I've done in the above examples, gives more than enough precision for most everyday purposes. In other words, the approximations work quite well for the purposes they're made for:$$ \begin{align} 10^{1.1139433523} &= 12.999999999795350... & (\text{pretty darn close to }13) \\3.6055512754^2 &= 12.999999999538566... & (\text{pretty darn close to }13) \\0.0769230769\times 13 &= \phantom{1}0.9999999997 & (\text{pretty darn close to }1)\phantom{1}\end{align} $$ In most cases, as you see here, we get about as many correct digits when we undo the operation that produced our approximation as there are in the approximation itself.
In the particular case of logarithms, a good rule of thumb is that the number of digits after the decimal point in the approximation of the logarithm should be about the same as the total number of correct digits in the antilogarithm. In higher (abstract) mathematics, the use of approximations is untidy and distracting, just as you have noticed. We usually deal with that by computing as few actual numbers as possible while we're manipulating formulas. Generally, we prefer to give the exact formula for the answer we've produced, such that the reader can decide for himself how many digits he wants to compute it to, if he's interested in the numeric value. So "the result is $\log_{10}(13)$" is a better exact answer to a question than "the result is $1.1139433523...$", and one is usually expected to leave it in that form unless there's a specific reason to want a decimal representation.
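The round-trip claims above are easy to reproduce (Python floats carry about 15-16 significant digits, comfortably more than the ten-digit approximations used here):

```python
import math

# undoing each ten-digit approximation recovers the original to ~1e-10
assert abs(10 ** 1.1139433523 - 13) < 1e-8       # pretty darn close to 13
assert abs(3.6055512754 ** 2 - 13) < 1e-8        # pretty darn close to 13
assert abs(0.0769230769 * 13 - 1) < 1e-8         # pretty darn close to 1

# and the ten-digit values themselves match the exact quantities
assert abs(math.log10(13) - 1.1139433523) < 1e-10
assert abs(math.sqrt(13) - 3.6055512754) < 1e-10
```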
This is a simple variation on the so-called "Twin Paradox", which is not a paradox in the logical sense (i.e. not a logical contradiction). Each cycle of the oscillator's motion is like the journey of the spacefaring twin. One possible cycle on a spacetime diagram is drawn below (source: Wikipedia "Twin Paradox" article with my own additions). The idealized path, where the oscillator accelerates at infinite acceleration to change its velocity from 0 to $+V$, then from $+V$ to $-V$, is shown in black. The actual path would be more like the green one, but the principles are the same: because the motion is accelerated, there is a different Lorentz transformation between the Earthbound twin's inertial frame and the frames momentarily comoving with the accelerating twin at each point on the path, possibly a different transformation at each point. In the idealized case, there are only two momentarily comoving frames: that moving at $+V$ relative to the stationary observer and that moving at $-V$ thereto. On the green curve, the spacefarer would smoothly pass through all intermediate momentarily comoving frames as well. When the oscillator re-unites with the stationary observer so that they can compare clocks, the oscillator's clock shows a shorter time to have passed for the oscillator than for the stationary observer. In summary, no single Lorentz transformation transforms between the two observers' co-ordinates; a different one applies at each point on the oscillator's world line. The OP asks: "Also I am not arguing which frame is moving wrt which, as is the case with twin paradox. I believe that it is easier to be realized as real experiment. I am sure that any experiment will show that the oscillating clock will delay and this delay will not depend from the frequency of oscillation but only from velocity."
You asked which velocity is used in the Lorentz transformation, and the answer is that there is not one transformation but a smoothly varying set of Lorentz transformations between the at-rest observer and every inertial frame momentarily comoving with the oscillator at each point on its world line. If you want to know the time dilation to check the plausibility of an experiment, then that's easy: if the path is sinusoidal such that $z = a \sin(\omega\,t)$ then the line element is $$\mathrm{d}\tau^2 = \mathrm{d}t^2 - \frac{\mathrm{d}z^2}{c^2} = \mathrm{d}t^2 \left(1-\frac{a^2\,\omega^2}{c^2}\cos^2(\omega\,t)\right)$$ whence when the oscillator completes a half cycle and has returned to the stationary observer, the oscillator's clock shows: $$\tau = \int\limits_0^{\frac{\pi}{\omega}} \sqrt{1-\frac{a^2\,\omega^2}{c^2}\,\cos^2(\omega\,t)}\,\mathrm{d}t \approx \frac{\pi}{\omega}\left(1-\frac{a^2\,\omega^2}{4\,c^2}\right)$$ (You can easily do this integral with complete elliptic integrals if you need to broaden it to near light speeds.) So let's say you have an atomic clock oscillating back and forth such that $a=1\,{\rm m}$ and $\omega = \pi\,{\rm s^{-1}} \approx \sqrt{g/a}$, equivalent to a frequency of about $\frac{1}{2}\,{\rm Hz}$ (inertially induced stresses on the instrument of $1\,g$ seem reasonably manageable, so that we can get rid of or account for stress-induced errors in the clock system). Then each second (half period), the oscillator's clock "loses" $\pi^2/(4\,c^2)$ seconds, amounting to $2.7\times 10^{-17}$ seconds. This is equivalent to about a nanosecond every fourteen months. So it would be well measurable with today's technology; let's assume we're really good experimentalists and can design a clock which would withstand the stresses from $10\,g$; in this case, we'd cut that figure down one hundredfold, so the difference is 1 nanosecond every four days. Given these figures, I'd be surprised if the experiment had not already been done.
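The half-cycle proper-time deficit is tiny ($\sim 10^{-17}\,$s), so computing $\pi/\omega - \tau$ by direct subtraction loses all precision in double arithmetic; integrating the deficit in the algebraically equivalent form $\varepsilon\cos^2(\omega t)\,/\,\big(1+\sqrt{1-\varepsilon\cos^2(\omega t)}\big)$ with $\varepsilon = a^2\omega^2/c^2$ avoids the cancellation. A sketch:

```python
import numpy as np
from scipy.integrate import quad

c = 299_792_458.0
a, omega = 1.0, np.pi                  # 1 m amplitude, ~0.5 Hz oscillation
eps = (a * omega / c)**2
half_period = np.pi / omega

def deficit(t):
    """Integrand of (dt - dtau), rearranged to avoid 1 - sqrt(1 - tiny)."""
    u = eps * np.cos(omega * t)**2
    return u / (1 + np.sqrt(1 - u))

loss, _ = quad(deficit, 0.0, half_period)
print(loss)   # ~2.7e-17 s lost per half cycle, as in the text
```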
prettify-symbols-mode is a recent feature of Emacs and it’s very nice. And it looks like it can replace TeX-fold-mode in the future. But, at the time of writing, prettify-symbols-mode doesn’t seem to work well with AUCTeX unless you enable two workarounds together. Tested versions: AUCTeX 11.89.7 (latest version from GNU ELPA) GNU Emacs 25.1 Usual way to enable pretty symbols If you want to enable it for elisp buffers, you can add: (add-hook 'emacs-lisp-mode-hook 'prettify-symbols-mode) Then something like (lambda () (blah)) in elisp buffers should display as (λ () (blah)). If you want to enable it also for other lisp buffers, scheme mode buffers etc, you can adjust the following code: (dolist (mode '(scheme emacs-lisp lisp clojure)) (let ((here (intern (concat (symbol-name mode) "-mode-hook")))) ;; (add-hook here 'paredit-mode) (add-hook here 'prettify-symbols-mode))) If you want to enable for all buffers, you can add: (global-prettify-symbols-mode 1) And then for major modes of your interest, you may want to adjust the buffer-local prettify-symbols-alist accordingly, following the simple example code you can find from the documentation for prettify-symbols-mode. Expected way to use with AUCTeX Following code may be expected to work: (add-hook 'TeX-mode-hook 'prettify-symbols-mode) If it works, then \alpha, \beta, \leftarrow and so on should display as α, β, ←, … for TeX file buffers. I do not doubt that it will just work fine in future versions of AUCTeX, and if you are reading this as an old article, it is possible that just upgrading your AUCTeX package may be enough to make that line work as you expected. If it doesn’t work, then try making the following two changes. 
First change: Instead of adding to the hook directly, try adding a delayed version, like so: (defun my-delayed-prettify () (run-with-idle-timer 0 nil (lambda () (prettify-symbols-mode 1)))) (add-hook 'TeX-mode-hook 'my-delayed-prettify) This way, (prettify-symbols-mode 1) is guaranteed to run after the style hooks and not before. I don’t know what style hooks do, but it looks like they may reset/erase font-lock stuff you have set up. If pretty symbols still don’t show up in AUCTeX buffers, then try adding the following change, in addition to the above change. Second change: This one isn’t really about adding something. It is about removing. Remove the following line from your dotemacs if any: (require 'tex) tex.el will load anyway if you visit a TeX file with Emacs. This is a strange change to make, indeed. You should also remove the following line that is commonly used by MiKTeX users: (require 'tex-mik) tex-mik.el is a good small library, but tex-mik.el loads tex.el. Feel free to copy parts of tex-mik.el and paste them into your dotemacs if you want. You can ensure you have removed every call of (require 'tex) from your dotemacs by appending the following line to the end of your dotemacs and then restarting Emacs to see if the warning message shows up: (if (featurep 'tex) (warn "(require 'tex) is still somewhere!")) If there was some code in your dotemacs that relied on the fact that (require 'tex) was called before, then you have to wrap that code with the with-eval-after-load macro, like this: (with-eval-after-load 'tex (add-to-list 'TeX-view-program-selection '(output-pdf "SumatraPDF")) (add-to-list 'TeX-view-program-list `("SumatraPDF" my--sumatrapdf)))
The hyperbolic tangent function \({\rm tanh}\) is often used to generate stretched structured grids. In this blog post, I will introduce some examples I have found in the references. Example #1 [1] \begin{equation} y_j = \frac{1}{\alpha}{\rm tanh} \left[\xi_j {\rm tanh}^{-1}\left(\alpha\right)\right] + 1\;\;\;\left( j = 0, \dots, N_2 \right), \tag{1} \end{equation} with \begin{equation} \xi_j = -1 + 2\frac{j}{N_2}, \tag{2} \end{equation} where \(\alpha\) is an adjustable parameter of the transformation \((0<\alpha<1)\) and \(N_2\) is the number of grid intervals in that direction. As shown in the following figure, the grid points cluster more toward both ends as the parameter \(\alpha\) approaches 1. Example #2 [2] \begin{equation} y_j = 1 -\frac{{\rm tanh}\left[ \gamma \left( 1 - \frac{2j}{N_2} \right) \right]}{{\rm tanh} \left( \gamma \right)}\;\;\;\left( j = 0, \dots, N_2 \right), \tag{3} \end{equation} where \(\gamma\) is the stretching parameter and \(N_2\) is the number of grid intervals in that direction. Grid Images: Coming soon. References [1] H. Abe, H. Kawamura and Y. Matsuo, Direct Numerical Simulation of a Fully Developed Turbulent Channel Flow With Respect to the Reynolds Number Dependence. J. Fluids Eng. 123(2), 382-393, 2001. [2] J. Gullbrand, Grid-independent large-eddy simulation in turbulent channel flow using three-dimensional explicit filtering. Center for Turbulence Research Annual Research Briefs, 2003.
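Both transformations are easy to code and sanity-check: they map \(j = 0, \dots, N_2\) monotonically onto \([0, 2]\), with points clustered near the walls. A sketch:

```python
import numpy as np

def stretch1(N, alpha):
    """Example #1: y_j = tanh(xi_j * atanh(alpha)) / alpha + 1, 0 < alpha < 1."""
    xi = -1 + 2 * np.arange(N + 1) / N
    return np.tanh(xi * np.arctanh(alpha)) / alpha + 1

def stretch2(N, gamma):
    """Example #2: y_j = 1 - tanh(gamma (1 - 2j/N)) / tanh(gamma)."""
    j = np.arange(N + 1)
    return 1 - np.tanh(gamma * (1 - 2 * j / N)) / np.tanh(gamma)

y = stretch1(32, 0.98)
assert abs(y[0]) < 1e-12 and abs(y[-1] - 2) < 1e-12   # spans [0, 2]
assert np.all(np.diff(y) > 0)                         # strictly monotone
assert np.diff(y)[0] < np.diff(y)[16]                 # finer near the wall
```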
I have variables $x \in \{0,1,\dots,5\}$ and $y \in \{0,1\}$, where $$y = \begin{cases} 0 & \text{if } x = 5\\ 1 & \text{if } x \neq 5\end{cases}$$ My problem is to maximize $y$. How can I express this with linear constraints? I tried approaches like those in "Cast to boolean, for integer linear programming", but they won't work if the problem is to maximize $y$.
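One standard linearization (a sketch; not necessarily the only option): since $x$ is integral with $x \le 5$, the condition $x \neq 5$ is the same as $5 - x \ge 1$, so the constraint pair $y \le 5 - x$ and $5y \ge 5 - x$ pins $y$ to the indicator for every feasible $x$, regardless of the objective. A brute-force check over all cases:

```python
# verify that  y <= 5 - x  and  5*y >= 5 - x  encode y = [x != 5]
# for x in {0,...,5} and y in {0,1}
for x in range(6):
    feasible = [y for y in (0, 1) if y <= 5 - x and 5 * y >= 5 - x]
    assert feasible == ([0] if x == 5 else [1])
```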
The proof of coherence in monoidal categories in CWM is based on the existence of a monoidal category free over a singleton. Denoting this category by $\mathcal{W}=\left(\mathcal{W}_{0},\square,e_{0},\hat{\alpha},\hat{\lambda},\hat{\rho}\right)$ it can be observed that $\mathcal{W}_{0}$ is a thin groupoid. Its objects are so-called 'binary words'. For every pair $u,v\in\mathcal{W}_{0}$ the homset $\mathcal{W}_{0}\left(u\square v,v\square u\right)$ contains exactly one arrow, and denoting it by $\hat{\gamma}_{u,v}$ it seems to me that $\left(\mathcal{W}_{0},\square,e_{0},\hat{\alpha},\hat{\lambda},\hat{\rho},\hat{\gamma}\right)$ can be recognized as a commutative monoidal category. My questions are: 1) Can $\left(\mathcal{W}_{0},\square,e_{0},\hat{\alpha},\hat{\lambda},\hat{\rho},\hat{\gamma}\right)$ be classified as a commutative monoidal category free over a singleton? 2) If the answer to the first question is 'yes', then can coherence in commutative monoidal categories be proved the same way (used in CWM) as in the proof for monoidal categories? I think that I am overlooking complications, because the proof in CWM of coherence for monoidal categories appears to be more complex.
Prove compact subsets of metric spaces are closed Note, this question is more of analyzing an incorrect proof of mine rather than supplying a correct proof. My Attempted Proof Suppose $X$ is a metric space. Let $A \subset X$ be a compact subset of $X$ and let $\{V_{\alpha}\}$ be an open cover of $A$. Then there are finitely many indices $\alpha_{i}$ such that $A \subset V_{\alpha_{1}} \cup \ ... \ \cup V_{\alpha_{n}}$. Now let $x$ be a limit point of $A$. Assume $x \not\in A$. If $x \not\in A$ put $\delta = \inf \ \{\ d(x, y) \ | \ y \in A\}$. Take $\epsilon = \frac{\delta}{2}$, then $B_d(x, \epsilon) \cap A = \emptyset$ so that a neighbourhood of $x$ does not intersect $A$ asserting that $x$ cannot be a limit point of $A$, hence $x \in A$ so that $A$ is closed. $\square$. Now there must be something critically wrong in my proof, as I don't even use the condition that $A$ is compact anywhere in the contradiction that I establish. The above proof would assert that every subset of a metric space is closed. I think my error must be in the following argument : $\delta = \inf \ \{\ d(x, y) \ | \ y \in A\}$. For if we take $X = \mathbb{R}$ and $A = (0, 1) \subset \mathbb{R}$, then $\delta = 0$ if $x = 1$ or $x = 0$. Am I correct in analyzing this aspect of my proof?
I am just wondering if I know enough prerequisites! A basic understanding of what's going on can be gained using just Kepler's laws and Newtonian mechanics. A simple way of dealing with multiple gravitational sources is to selectively ignore all but one of them. This is the patched conic approximation. Which gravitating body is in play? That depends on whether the spacecraft is inside the gravitational sphere of influence of one of the planets. If that is the case, you ignore the Sun and all the other planets. If the spacecraft is outside all planetary spheres of influence, you ignore all of the planets. With this treatment, the spacecraft is always subject to one and only one gravitating body. The spacecraft's trajectory is a piecewise continuous set of Keplerian segments. Suppose a spacecraft is on an elliptical orbit about the Sun that brings the spacecraft inside of a planet's sphere of influence. At the point where the spacecraft crosses that sphere, the trick is to switch from looking at the trajectory as an elliptical orbit about the Sun to a hyperbolic orbit about the planet. This is a reference frame change. The spacecraft's position and velocity with respect to the planet are the vector differences between the spacecraft's and planet's heliocentric positions and velocities. The hyperbolic trajectory that results will soon carry the spacecraft out of the planet's sphere of influence. This trajectory will preserve the magnitude of the spacecraft's planet-centered velocity, but not its direction. Another change of reference frames is performed as the spacecraft exits the sphere of influence, but this time back to heliocentric coordinates. While the planetary encounter doesn't change the magnitude of the planet-centered velocity, it does change the magnitude of the Sun-centered velocity. (Update) Details: I'll first give a brief overview of the Keplerian orbit of a test mass about a central mass.
The mass of the test mass is many, many orders of magnitude smaller than that of the central body. A spacecraft orbiting a planet, for example, qualifies as a test mass (mass ratio $10^{-20}$ or smaller). Key concepts: $\mu$ - The central body's gravitational parameter, conceptually $\mu = GM$, but generally $\mu$ is known to much greater precision than are $G$ and $M$. $\vec r$ - The position of the test mass relative to the central body. $\vec v$ - The velocity of the test mass relative to the central body. $\vec h = \vec r \times \vec v$ - The specific angular momentum of the test mass. $\nu$ - The true anomaly of the test mass, measured with respect to the periapsis point. $e$ - The eccentricity of the orbit of the test mass about the central body. $r$ - The magnitude of $\vec r$. $v$ - The magnitude of $\vec v$. $a$ - The semi-major axis length of the test mass's orbit about the central body. $r = \frac {a(1-e^2)}{1+e\cos\nu}$ - Kepler's first law. $r_p = a(1-e)$ - Periapsis distance; closest approach of the test mass and central body. $v_\infty = \sqrt{\frac \mu {-a}}$ - Hyperbolic orbit excess velocity; the speed as $r\to\infty$. $\frac {v^2}{\mu} = \frac 2 r - \frac 1 a$ - The vis viva equation, which provides a mechanism for computing $a$. $\vec e = \frac {\vec v \times \vec h}{\mu} - \frac {\vec r}{r}$ - The test mass's eccentricity vector relative to the central body. $\hat x_o = \frac {\vec e}{e}$ - The x-hat axis of the orbit, which points from the central body toward the periapsis point. $\hat z_o = \frac {\vec h}{h}$ - The z-hat axis of the orbit, which points away from the orbital plane, in the direction of positive angular momentum. $\hat y_o = \hat z_o \times \hat x_o$ - The y-hat axis of the orbit, defined to complete an xyz right-handed coordinate system.
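The element formulas above translate directly into code (a sketch; the checks use normalized units with $\mu = 1$):

```python
import numpy as np

def elements(r_vec, v_vec, mu):
    """Orbital elements of a test mass from its state about the central body."""
    r, v = np.linalg.norm(r_vec), np.linalg.norm(v_vec)
    a = 1.0 / (2.0 / r - v**2 / mu)                       # vis viva
    h_vec = np.cross(r_vec, v_vec)                        # specific ang. mom.
    e_vec = np.cross(v_vec, h_vec) / mu - np.asarray(r_vec) / r
    return a, e_vec, h_vec

# circular orbit: a = r and the eccentricity vector vanishes
a, e_vec, _ = elements([1.0, 0, 0], [0, 1.0, 0], 1.0)
assert abs(a - 1) < 1e-12 and np.allclose(e_vec, 0)

# hyperbolic state at periapsis: a < 0 and e = 1 + v_inf^2 r_p / mu
a, e_vec, _ = elements([1.0, 0, 0], [0, 2.0, 0], 1.0)
assert abs(a + 0.5) < 1e-12 and abs(np.linalg.norm(e_vec) - 3) < 1e-12
```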
In the limit $r\to \infty$, the test mass will suffer no change in speed but will be subject to a change in velocity given by $\Delta \vec v = 2v_\infty \cos \nu_m \, \hat x_o$, where $\nu_m$ is the maximum true anomaly, given by $1+e\cos \nu_m = 0$. Thus $\cos \nu_m = -1/e = -1/(1-r_p/a) = -1/(1+v_\infty^2 r_p/\mu)$. The change in velocity is thus given by $\Delta \vec v = -2\,v_\infty/(1+v_\infty^2 r_p / \mu) \, \hat x_o$. Note that this says that too low or too high a hyperbolic excess velocity both result in a small $\Delta v$. The largest $\Delta v$ for a given periapsis distance results when $v_\infty = \sqrt{\mu/r_p}$ (in which case the deflection angle is 60°). Suppose the spacecraft enters a planet's sphere of influence (a sphere of radius $r_{\text{soi}} = a_p (m_p/m_\odot)^{2/5}$ about the planet) at a position $\vec r_0$ with respect to the planet and with some velocity $\vec v_0$ with respect to the planet. The semi-major axis length, specific angular momentum vector, and eccentricity vector of the spacecraft's hyperbolic orbit about the planet can be calculated given this initial state and the planet's gravitational parameter. Per the patched conic approximation, each of these will be a constant of motion. (Note that the "length" in "semi-major axis length" is a bit of a misnomer; it will be negative in the case of a hyperbolic orbit.) The periapsis distance can then be calculated. Some of the above calculations simplify with the additional assumption that the initial velocity is very close to the hyperbolic excess velocity. (What's one more simplifying assumption on top of the huge assumption of patched conics?) With one more simplifying assumption, that the time spent inside the planet's sphere of influence is small, the $\Delta v$ from the flyby can be approximated as impulsive. These key simplifying assumptions give mission planning programs something that can be dealt with.
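From $\Delta \vec v = 2 v_\infty \cos\nu_m\,\hat x_o$ and $\cos\nu_m = -1/e$, the magnitude of the flyby kick is $2v_\infty/(1+v_\infty^2 r_p/\mu)$ and the turn angle of the excess velocity is $2\arcsin(1/e)$. A sketch in normalized units:

```python
import numpy as np

def flyby_dv(v_inf, r_p, mu):
    """|delta-v| imparted by an unpowered flyby with periapsis r_p."""
    return 2 * v_inf / (1 + v_inf**2 * r_p / mu)

def turn_angle_deg(v_inf, r_p, mu):
    """Deflection of the hyperbolic excess velocity: 2 asin(1/e)."""
    e = 1 + v_inf**2 * r_p / mu
    return np.degrees(2 * np.arcsin(1 / e))

# best excess velocity for a given periapsis is sqrt(mu/r_p):
# then |dv| = v_inf and the turn angle is 60 degrees
assert abs(flyby_dv(1.0, 1.0, 1.0) - 1.0) < 1e-12
assert abs(turn_angle_deg(1.0, 1.0, 1.0) - 60.0) < 1e-9

# too slow or too fast both give a smaller kick
assert flyby_dv(0.5, 1.0, 1.0) < 1.0 and flyby_dv(2.0, 1.0, 1.0) < 1.0
```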
This is a very large and complex search space, and some of the optimization parameters are difficult to express numerically. No matter how good a plan is in terms of low Earth departure velocity, a plan that involves a correction burn on July 4 and a planetary encounter on Christmas Day is not a good plan. No matter how good a plan is in terms of nominally low Earth departure velocity, if the plan is extremely sensitive to errors it is not a good plan. People are still better than machines at weeding out plans that get in the way of people doing what they are wont to do (e.g., taking the Fourth of July off, along with not working from a day or two before Christmas to a day or two after New Year's), and people are still better than machines at weeding out plans that are ultra-sensitive to errors. Mission planners still like their porkchop plots. Unfortunately, porkchop plots are computationally expensive to produce. Multiple porkchop plots strung together (e.g., for a planetary gravitational assist) are extremely expensive to produce. Multiple planetary encounters (e.g., Cassini) mean stringing together a lot of porkchop plots. Hence all the simplifying assumptions. All those simplifying assumptions mean that the nominal plan is ultimately flawed. Not badly flawed, but flawed nonetheless. A solver that doesn't make all those simplifying assumptions is needed. Unfortunately, there are no practical, generic closed-form solutions to the N-body problem. The only way around this is numerical propagation. Now we can throw all kinds of kinks at the solver: multiple gravitational bodies, some of which have a non-spherical gravity field, relativistic effects, and so on. This is not something that can be done from the start. It is something that can be done to polish up the solutions from an overly-simplified mission planning perspective. Note: I am not disparaging those mission planning efforts.
The mission planning search space is so large that simplifying assumptions are an absolute necessity lest we have to wait until the next millennium (985 years away) for a solution.
Title Global calibrations for the non-homogeneous Mumford-Shah functional Publication Type Journal Article Year of Publication 2002 Authors Morini, M Journal Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 1 (2002) 603-648 Abstract Using a calibration method we prove that, if $\Gamma\subset \Omega$ is a closed regular hypersurface and if the function $g$ is discontinuous along $\Gamma$ and regular outside, then the function $u_{\beta}$ which solves $$ \begin{cases} \Delta u_{\beta}=\beta(u_{\beta}-g) & \text{in $\Omega\setminus\Gamma$} \\ \partial_{\nu} u_{\beta}=0 & \text{on $\partial\Omega\cup\Gamma$} \end{cases} $$ is in turn discontinuous along $\Gamma$ and is the unique absolute minimizer of the non-homogeneous Mumford-Shah functional $$ \int_{\Omega\setminus S_u}|\nabla u|^2 \, dx + \mathcal{H}^{n-1}(S_u) + \beta\int_{\Omega\setminus S_u}(u-g)^2 \, dx, $$ over $SBV(\Omega)$, for $\beta$ large enough. Applications of the result to the study of the gradient flow by the method of minimizing movements are shown. URL http://hdl.handle.net/1963/3089
The unitriangular group $UT_n(\Bbb Z)$ is the group of all $n \times n$ invertible upper triangular matrices with 1 in each entry of the main diagonal, and integer entries everywhere else in the triangle. Show that this group is nilpotent, and that its nilpotence class is $n$. Definition (upper central series): For any group $G$ define the following subgroups inductively: $$Z_0(G) = 1, \qquad Z_1(G) = Z(G)$$ and $Z_{i+1}(G)$ is the subgroup of $G$ containing $Z_i(G)$ such that $$Z_{i+1}(G)/Z_i(G) = Z(G/Z_i(G)).$$ The chain of subgroups $$Z_0(G) \leq Z_1(G) \leq Z_2(G) \leq \cdots$$ is called the upper central series of $G$. Definition (nilpotent): A group $G$ is called nilpotent if $Z_c(G) = G$ for some $c \in \Bbb Z$. The smallest such $c$ is called the nilpotence class of $G$. To show that it is nilpotent, I think it is sufficient to show that it is a $p$-group; i.e. $|UT_n(\Bbb Z)| = p^{\alpha}$ where $p$ is a prime number and $\alpha$ is a positive integer. I feel like there must be some sort of algorithm to calculate how many possible matrices we can get, similar to the formula for finding the order of $GL_n(\Bbb F)$, the general linear group. I tried Googling, but I can't find a formula for the unitriangular group $UT_n(\Bbb Z)$. To show the nilpotence class is $n$, I have to prove that $Z_n(G) = G$, and that $n$ is the smallest such integer. So by the given definition above, I know that $Z_n(G)/Z_{n-1}(G) = Z(G/Z_{n-1}(G))$. How can I manipulate this to arrive at $Z_n(G) = G$, and also show that $n$ is the smallest such integer?
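(Not a proof, but a numerical illustration of where the nilpotency comes from: in $UT_3(\Bbb Z)$, group commutators push nonzero entries toward the upper-right corner, and the result is central. numpy is used here purely for the matrix arithmetic.)

```python
import numpy as np

I3 = np.eye(3, dtype=np.int64)

A = I3.copy(); A[0, 1] = 1          # I + e_{12}
B = I3.copy(); B[1, 2] = 1          # I + e_{23}

def inv(M):
    # the inverse of a unitriangular integer matrix is again integer,
    # so rounding the floating-point inverse is exact here
    return np.rint(np.linalg.inv(M)).astype(np.int64)

def comm(M, N):
    # group commutator [M, N] = M^{-1} N^{-1} M N
    return inv(M) @ inv(N) @ M @ N

C = comm(A, B)      # equals I + e_{13}: the nonzero entry moved up and right
# C lies in the center of UT_3(Z), so one more commutator is trivial:
D = comm(C, A)      # identity matrix
```

Each commutator shifts nonzero off-diagonal entries one superdiagonal further out, which is exactly why the upper central series terminates after finitely many steps.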
This is the first in a series of posts about computational aeroacoustics, abbreviated as CAA. Aeroacoustics is mainly concerned with the generation of sound or noise by a fluid flow. Such flows can be studied by both experimental and computational methods. In an experimental approach, the aerodynamically generated sound is measured in an anechoic chamber, which is designed to prevent reflections of sound from the walls, making the room echo-free. The following video, created by Microsoft, will help us to understand the structure of such a chamber. The computational techniques for simulating flow-generated noise can be classified into two broad categories: Direct Approaches The governing equations of compressible fluid flow are the compressible Navier-Stokes equations, and they also describe the generation and propagation of acoustic noise, so we can solve computational aeroacoustics problems by solving the transient compressible Navier-Stokes equations both in the source region, where the flow disturbances generate noise, and in the propagation region, where the generated acoustic waves propagate. Acoustic waves have to be resolved in both regions so that the noise can be accurately simulated at the observation locations. However, solving the Navier-Stokes equations with a fine mesh over large domains to determine far-field noise is computationally very expensive. As is the case in the experimental approaches, it is essential to prevent the reflection of the acoustic waves at the artificially truncated boundary of the computational domain (i.e., it does not stretch to infinity) in order to obtain an accurate result. A variety of numerical techniques have been developed for this purpose: Navier-Stokes Characteristic Boundary Conditions (NSCBC) Artificial dissipation and damping in an absorbing zone Grid stretching and numerical filtering in a "sponge layer" or "exit zone" Perfectly matched layer (PML) Hybrid Approaches (Acoustic Analogy) David P.
Lockard and Jay H. Casper [1] state that: The physics-based, airframe noise prediction methodology under investigation is a hybrid of aeroacoustic theory and computational fluid dynamics (CFD). The near-field aerodynamics associated with an airframe component are simulated to obtain the source input to an acoustic analogy that propagates sound to the far field. The acoustic analogy employed within this current framework is that of Ffowcs Williams and Hawkings, who extended the analogies of Lighthill and Curle to the formulation of aerodynamic sound generated by a surface in arbitrary motion. Lighthill’s analogy Lighthill derived the following wave equation \eqref{eq:Lighthill} from the compressible Navier-Stokes equations: \begin{align} \frac{\partial^2 \rho}{\partial t^2} - c_0^2 \nabla^2 \rho = \frac{\partial^2 T_{ij}}{\partial x_i \partial x_j}, \tag{1} \label{eq:Lighthill} \end{align} where the so-called Lighthill (turbulence) stress tensor is expressed as \begin{align} T_{ij} = \rho u_i u_j + \left( p-c_0^2 \rho \right)\delta_{ij} -\mu \left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} - \frac{2}{3}\delta_{ij}\frac{\partial u_k}{\partial x_k} \right), \tag{2} \label{eq:Tij} \end{align} \(\delta_{ij}\) is the Kronecker delta and \(c_0\) is the speed of sound in the medium in its equilibrium state. Curle’s analogy The existence of objects (solid walls) is not considered in Lighthill’s theory; Curle extended the theory so that solid boundaries can be dealt with.
The density variation at the observer location \(\boldsymbol{x}\) is calculated from the following equation \eqref{eq:Curle} \begin{align} \rho^{\prime}(\boldsymbol{x}, t) &= \rho(\boldsymbol{x}, t) - \rho_0 \\ &= \frac{1}{4 \pi c_0^2} \frac{\partial^2}{\partial x_i \partial x_j} \int_{V}\frac{[T_{ij}]}{r}dV - \frac{1}{4 \pi c_0^2} \frac{\partial}{\partial x_i}\int_{S}\frac{[P_i]}{r}dS, \tag{3} \label{eq:Curle} \end{align} where \begin{align} P_i = -n_j \left\{ \delta_{ij}p -\mu \left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} - \frac{2}{3}\delta_{ij}\frac{\partial u_k}{\partial x_k} \right) \right\}, \tag{4} \label{eq:Pi} \end{align} \(r\) is the distance between the receiver \(\boldsymbol{x}\) and the source position \(\boldsymbol{y}\) in \(V\) (or on \(S\)), and the operator \([\,]\) denotes evaluation at the retarded time \(t-r/c_0\). Ffowcs Williams and Hawkings (FW-H) analogy References (English) [1] Permeable Surface Corrections for Ffowcs Williams and Hawkings Integrals [2] CFD Online – Acoustic Solver with openfoam [3] WIKIBOOKS – Engineering Acoustics/Analogies in aeroacoustics References (Japanese) [4] 加藤千幸, Prediction of flow-generated noise by large-scale computational fluid analysis (in Japanese) [5] 飯田明由, Noise generation mechanisms of fans (blowers) and measures for noise reduction and quieting, with case studies (in Japanese) [6] 大嶋拓也, A study on numerical prediction of aerodynamic sound generated by rows of columnar objects in a flow (in Japanese) [7] Newsletters Nagare Dec. issue 2010
We consider an oscillatory motion with small amplitude in a compressible fluid as shown in the following picture. It shows the distribution of the sound pressure (acoustic pressure), which is defined as the local pressure deviation from the equilibrium pressure \(p_0\) caused by the sound wave propagating from left to right. Since we consider small oscillations, we can write the local pressure \(p\) and density \(\rho\) in the form \begin{align} p &= p_0 + p^{\prime}, \tag{1a} \label{eq:pressure} \\ \rho &= \rho_0 + \rho^{\prime}, \tag{1b} \label{eq:density} \end{align} where \(p_0\) and \(\rho_0\) are the constant equilibrium pressure and density and \(p^{\prime}\) and \(\rho^{\prime}\) are their variations in the sound wave (\(p^{\prime} \ll p_0, \rho^{\prime} \ll \rho_0\)), so the above figure is the contour of \(p^{\prime}\). We hereafter ignore the fluid viscosity so that only the effect of compressibility is taken into account. Then, the governing equations of the fluid flow are the continuity equation \begin{align} \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \boldsymbol{u}) = 0 \tag{2} \label{eq:continuity} \end{align} and Euler’s equation \begin{align} \frac{\partial \boldsymbol{u}}{\partial t} + (\boldsymbol{u} \cdot \nabla)\boldsymbol{u} + \frac{1}{\rho}\nabla p = 0 \tag{3} \label{eq:euler} \end{align} where \(\boldsymbol{u}\) is the velocity field. Substituting eqns. \eqref{eq:pressure} and \eqref{eq:density} into the governing equations and neglecting small quantities of the second order, we get \begin{align} \frac{\partial \rho^{\prime}}{\partial t} + \rho_0 \nabla \cdot \boldsymbol{u} = 0, \tag{4} \label{eq:continuity2} \end{align} and \begin{align} \frac{\partial \boldsymbol{u}}{\partial t} + \frac{1}{\rho_0} \nabla p^{\prime} = 0.
\tag{5} \label{eq:euler2} \end{align} We notice that a sound wave in an ideal fluid is adiabatic, and the following relationship holds between the small changes in the pressure and density \begin{align} p^{\prime} = \left(\frac{\partial p}{\partial \rho} \right)_s \rho^{\prime} \tag{6} \label{eq:adiabatic} \end{align} where the subscript s denotes that the partial derivative is taken at constant entropy. Substituting it into \eqref{eq:continuity2}, we get \begin{align} \frac{\partial p^{\prime}}{\partial t} + \rho_0 \left(\frac{\partial p}{\partial \rho} \right)_s \nabla \cdot \boldsymbol{u} = 0. \tag{7} \label{eq:continuity3} \end{align} If we introduce the velocity potential \(\boldsymbol{u} = \nabla \phi\), we can derive the relationship between \(p^{\prime}\) and the potential \(\phi\) from \eqref{eq:euler2} \begin{align} p^{\prime} = -\rho_0 \frac{\partial \phi}{\partial t}. \tag{8} \label{eq:pandphi} \end{align} We then obtain the following wave equation from \eqref{eq:continuity3} \begin{align} \frac{\partial^2 \phi}{\partial t^2} - c^2 \Delta \phi = 0 \tag{9} \label{eq:waveEqn} \end{align} where \(c\) is the speed of sound in an ideal fluid \begin{align} c = \sqrt{\left(\frac{\partial p}{\partial \rho} \right)_s}. \tag{10} \label{eq:soundSpeed} \end{align} Applying the gradient operator to \eqref{eq:waveEqn}, we find that each of the three components of the velocity \(\boldsymbol{u}\) satisfies an equation of the same form, and on differentiating \eqref{eq:waveEqn} with respect to time we see that the pressure \(p^{\prime}\) (and therefore \(\rho^{\prime}\)) also satisfies the wave equation. – Landau and Lifshitz, Fluid Mechanics In a travelling plane wave, we find \begin{align} u_x = \frac{p^{\prime}}{\rho_0 c}. \tag{11} \label{eq:uxandp} \end{align} Substituting here from \eqref{eq:adiabatic} \(p^{\prime} = c^2 \rho^{\prime}\), we find the relation between the velocity and the density variation: \begin{align} u_x = \frac{c \rho^{\prime}}{\rho_0}.
\tag{12} \label{eq:uxandrho} \end{align} – Landau and Lifshitz, Fluid Mechanics This book is freely accessible from the link. The following picture shows the velocity distribution \(u_x\) calculated using a compressible solver in OpenFOAM at the time corresponding to the pressure variation shown in the above picture. We can calculate the velocity using \eqref{eq:uxandp} \begin{align} u_x = \frac{10}{1.2 \times 340} \approx 0.0245 \ \mathrm{m/s} \end{align} and it is in good agreement with the result obtained using OpenFOAM.
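The same check can be written as a few lines of Python; the values \(p^{\prime} = 10\) Pa, \(\rho_0 = 1.2\) kg/m³, and \(c = 340\) m/s are those quoted above for air.

```python
# Plane-wave relations (11) and (12) with the values quoted above
rho0 = 1.2          # kg/m^3, equilibrium air density
c = 340.0           # m/s, speed of sound
p_prime = 10.0      # Pa, sound-pressure amplitude

u_x = p_prime / (rho0 * c)        # eq. (11): u_x = p'/(rho_0 c)
rho_prime = p_prime / c**2        # adiabatic relation p' = c^2 rho'
u_x_alt = c * rho_prime / rho0    # eq. (12): must agree with eq. (11)
```

Both routes give the same particle velocity, about 0.0245 m/s, which is the number compared against the OpenFOAM result above.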
Note : In this question I speak more from a calculation/operational point of view, as opposed to a more theoretical (Analysis) point of view. When studying Differential Calculus, I found that there was very little that I had to memorize. Virtually all calculation aspects, such as finding derivatives etc., and some theorems, could all be derived on the spot through basic methods. As examples, through basic implicit differentiation, one could prove the inverse function theorem, within a few lines. $$\text{Inverse Function Theorem}\ \ \ \ (f^{-1})'(x) = \frac{1}{f'(f^{-1}(x))}$$ Or if I wanted to find $\dfrac{d}{dx}\ \tan^{-1}(x)$, I could use the inverse function theorem and with the help of a trigonometric identity find the derivative quite easily. I didn't have to memorize $\dfrac{d}{dx}\ \tan^{-1}(x) = \dfrac{1}{1+x^2}$. In fact, apart from the derivatives of $\sin(x)$, $\sinh(x)$ and $\cos(x)$, $\cosh(x)$, I didn't memorize any of the other derivatives for trigonometric functions, I would just re-derive them using basic differentiation rules each time. However I noticed that when studying Integral Calculus, there tends to be a lot more that one just has to commit to memory. For example if I wanted to evaluate the following integral $$\int \dfrac{1}{1+x^2}\ dx$$ The only way I could ever evaluate the integral, would be if I knew $\dfrac{d}{dx}\ \tan^{-1}(x) = \dfrac{1}{1+x^2}$, which would require that I had memorized the derivative (something I tried my best not to do when studying differential calculus). When studying Mathematics, for the most part (and within reason of course) I try my best never to memorize what I can re-derive/prove. I've found that this approach helps improve my skills, and pushes me to search for the deepest possible understanding. 
But it seems that there are some things that just have to be committed to memory to be able to make any sort of progress, and this troubles me quite a bit, as I'm not sure what I should just be memorizing, and what I should really be working to understand as deeply as possible. Furthermore, integration is a very heuristic process, whereas differentiation is a more algorithmic process. Generally we try to get integrals into forms we already know so that we can evaluate them (with the exception of the Risch algorithm), or it would be impossible to evaluate them by any other means. Wouldn't that require one to memorize the various types of possible integrals? First off, am I looking at this wrong? Are there ways one can reprove results, or evaluate integrals, in a manner that doesn't require one to just memorize and recall a list of formulas like a parrot? What aspects of integral calculus would you say just have to be memorized, i.e., what results in integral calculus are close to impossible to re-derive or prove on the spot? Where does one draw the line between what should be studied long and hard for the deepest possible understanding, and what should just be memorized? Lastly, correct me if I'm wrong, but is it true that as one makes the transition into higher mathematics (analysis and beyond), there are some things that you just have to commit to memory to be able to make any sort of progress?
Demonstrating the Periodic Spectrum of a Sampled Signal Using the DFT One of the basic DSP principles states that a sampled time signal has a periodic spectrum with period equal to the sample rate. The derivation of this principle can be found in textbooks [1,2]. You can also demonstrate it numerically using the Discrete Fourier Transform (DFT). The DFT of the sampled signal x(n) is defined as: $$X(k)=\sum_{n=0}^{N-1}x(n)e^{-j2\pi kn/N} \qquad (1)$$ where X(k) = discrete frequency spectrum of time sequence x(n), n = time index, k = frequency index, and N = number of samples of x(n). The time and frequency variables are related to n and k as follows: t = n*T_s (2) and f = k/(N*T_s) = k*f_s/N (3), where T_s is the sample time interval in seconds and f_s = 1/T_s is the sampling frequency in Hz. While n has a range of 0 to N-1, the range of k depends on the frequency range over which we want to compute X(k). For example, if we let k = 0 to N-1, Equation 3 yields a frequency range of f = 0 to f_s(N-1)/N, which is the usual range used for the DFT. For our demonstration, we'll evaluate X(k) over the wider range of k = -2N to 2N-1, which gives a frequency range of f = -2f_s to f_s(2N-1)/N. The Appendix lists Matlab code that uses Equation 1 to compute X(k) for an example real-valued time sequence of length N = 32. Running the code generates Figure 1, which shows the time sequence, the magnitude of the DFT, and the dB-magnitude of the DFT. As advertised, the spectrum is periodic, with period f_s. Looking at Equation 1, we see that the value of the complex exponent repeats every time k crosses a multiple of +/-N, which coincides with frequencies that are multiples of +/-f_s. Now that we have demonstrated the periodicity of X(k), it's obvious why the DFT is normally evaluated over only N values of k: all of the information about the spectrum is contained in that range. However, we have a choice over the particular N values of k that we use, as shown in Figure 2.
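The same demonstration can be sketched in Python/numpy (the article's own code, in Matlab, appears in the Appendix): evaluate Equation 1 at index k and at k + N and confirm that the values agree for every k in the extended range.

```python
import numpy as np

N = 32
rng = np.random.default_rng(0)
x = rng.standard_normal(N)              # arbitrary real-valued test sequence

def dft(x, k):
    # Equation (1), evaluated at a single (possibly out-of-range) index k
    n = np.arange(len(x))
    return np.sum(x * np.exp(-2j * np.pi * k * n / len(x)))

# periodicity: X(k + N) = X(k) for every k, i.e. the spectrum repeats
# with period fs
for k in range(-2 * N, 2 * N):
    assert np.isclose(dft(x, k), dft(x, k + N))
```

The assertion holds because the complex exponential in Equation 1 is unchanged when k is shifted by N, exactly as noted in the text.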
The top plot shows our demonstration X(k). The middle plot shows just the samples of X(k) for the usual range of k = 0 to N-1, while the bottom plot shows just the samples for k = -N/2 to N/2-1, an equally valid range. Let's look a little closer at the DFT evaluated over k = -N/2 to N/2-1. Figure 3 shows the real part, imaginary part, and magnitude of the DFT. The top two plots illustrate another property of the DFT: for a real time sequence, the DFT has a real part that is an even function and an imaginary part that is an odd function. This property also holds for the DFT evaluated over k = 0 to N-1; but in that case, the even and odd properties are defined with respect to f_s/2 Hz, instead of 0 Hz. Finally, we should note that Equation 1, while useful for our demonstrations, is not the most efficient way to find the DFT. For efficient computation, we would use the Fast Fourier Transform (FFT) algorithm [3]. Figure 1. Periodic spectrum of a sampled time signal. Top: sampled time signal. Middle: spectrum magnitude. Bottom: spectrum dB-magnitude. Figure 2. DFT magnitude |X(k)| for different frequency ranges. Top: k = -2N to 2N-1 results in f = -2f_s to f_s(2N-1)/N. Middle: conventional DFT; k = 0 to N-1 results in f = 0 to f_s(N-1)/N. Bottom: DFT centered at f = 0; k = -N/2 to N/2-1 results in f = -f_s/2 to f_s(N/2-1)/N. Figure 3. DFT evaluated over k = -N/2 to N/2-1, or f = -f_s/2 to f_s(N/2-1)/N. Top: real part, showing even symmetry. Middle: imaginary part, showing odd symmetry. Bottom: magnitude. Appendix: Matlab Code to Evaluate the DFT over Several Periods of Its Spectrum For this example, the time signal is a Hann, or Hanning, pulse [4, 5] with the formula x(n) = K*(1 - cos(2*pi*n/P)), where P = 9 and n = 0:P. This pulse is then padded with leading and following zeros to length N = 32.
% extended_dft.m  3/4/19  Neil Robertson
% Compute the DFT using its definition.
% k = -2*N:2*N-1, so f = -2fs to 2fs.
%
fs= 100;      % Hz sample frequency (arbitrary value)
N= 32;        % number of time samples
% sampled time signal of length N
x= [0 0 0 0 7 24 43 55 55 43 24 7 0 0 0 0 0 0 0 0 0 0 0 0 ...
0 0 0 0 0 0 0 0]/258;
%
% Compute DFT
M= 4*N;       % number of frequency samples
for k= -M/2:M/2-1                    % frequency index
    f(k+ M/2+ 1)= k*fs/N;            % frequency vector.  index range is 1:M
    sum_n= 0;
    for n= 0:N-1                     % time index
        sum_n= sum_n + x(n+1)*exp(-j*2*pi*k*n/N);   % DFT sum for X(k)
    end
    X(k+ M/2+ 1)= sum_n;             % DFT vector.  index range is 1:M
end
Xreal= real(X);
Ximag= imag(X);
Xmag= sqrt(Xreal.^2 + Ximag.^2);     % DFT magnitude vector
XdB= 20*log10(Xmag);                 % DFT dB-magnitude vector
%
% plot x, Xmag, and XdB
subplot(311),stem(0:N-1,x),grid
axis([0 N-1 0 .25]),xlabel('n')
subplot(312),stem(f,Xmag,'markersize',4),grid
axis([-2*fs 2*fs 0 1.1])
xticks([-2*fs -3*fs/2 -fs -fs/2 0 fs/2 fs 3*fs/2 2*fs])
xticklabels({'-2fs','-3fs/2','-fs','-fs/2','0','fs/2','fs','3fs/2','2fs'})
subplot(313),plot(f,XdB,'.-','markersize',7),grid
axis([-2*fs 2*fs -50 5])
xticks([-2*fs -3*fs/2 -fs -fs/2 0 fs/2 fs 3*fs/2 2*fs])
xticklabels({'-2fs','-3fs/2','-fs','-fs/2','0','fs/2','fs','3fs/2','2fs'})
ylabel('dB')

References
1. Oppenheim, Alan V. and Schafer, Ronald W., Discrete-Time Signal Processing, Prentice Hall, 1989, section 3.2.
2. Rice, Michael, Digital Communications, a Discrete-Time Approach, Pearson Prentice Hall, 2009, section 2.6.1.
3. Lyons, Richard G., Understanding Digital Signal Processing, 2nd Ed., Prentice Hall, 2004, Chapter 4.
4. Mathworks website, https://www.mathworks.com/help/signal/ref/hann.html
5. Oppenheim and Schafer, Op. Cit., p. 447.

March 2019 Neil Robertson

Hi Neil, That is neat and useful. By the way, I happened to compare the DFT equation vs. the iDFT.
Apart from scaling, they look exactly the same except for the j/-j operator. Or am I wrong? So I tried both equations on a single tone @ +0.1 fs and can see the DFT gives a single line @ +0.1 fs as expected, while the iDFT gives the same but @ -0.1 fs. I wonder if the sign of the (j) operator is actually chosen on an arbitrary basis between the DFT and iDFT. Moreover, I find it a bit hard to imagine how come one way the equation converts to the frequency domain while the other way it converts to the time domain. Thanks Kaz Hi Kaz, I'm glad you liked the article. The continuous Fourier transform is the integral of f(t)exp(-jwt)dt. The inverse is the integral of 1/(2pi) * F(w)exp(jwt)dw. If I ever knew how this arises, I have forgotten. Here is an article on computing the IDFT, by Rick Lyons: Thanks Neil, swapping Re/Im at input/output to the fft, or inverting (Im), in effect forces +f of the input to -f, then back to +f at the output... and this agrees with my test above. In practice we do rewiring of Re/Im as it costs nothing. The other three methods are costly but interesting. Thus it confirms my conclusion that the DFT and iDFT are almost the same apart from a transpose of the Re/Im inputs/outputs or cos/sin terms. I can imagine the DFT correlates a set of cos/sin and picks up the maximum as new samples are indexed from zero to N-1. Hence it measures frequency content. The iDFT does the same but using transposed cos/sin. How come this means un-correlating and converting from the frequency domain to the time domain? I can't "imagine". Regards Kaz Hi kaz. Both the DFT and the iDFT perform correlations. Hi Rick, Thanks, yes the equations are very much the same concept, but this by itself raises the question. Imagine a single tone, do the fft, and we get a single line. If we now think of the fft as a conversion from the time domain to the frequency domain then all makes sense, as we are correlating with a set of bin frequencies and end up with a single line as expected. No problem in concept. If we do the ifft on this single line we expect a single tone, and we get it.
However, for the ifft we are correlating the same way as if we had entered a single line to the fft. The ifft equation should conceptually convert from the frequency domain back to the time domain, but it uses the same equation that we used to move from the time domain to the frequency domain. So we are still trying to find out how much there is of each bin frequency, as if we were not in the frequency domain but converting from the time domain to the frequency domain. I can see that mathematically the ifft inverts the fft, but I believe as such the concept of time domain versus frequency domain is just a convenient byproduct. Kaz The choice of sign for which is forward and which is backward is by convention. So is the normalization factor. The reason that the convention of using a negative sign is a little preferable is how the arg of the bin value relates to the phase of the signal. This is clearest in the complex case: $$ x[n] = e^{j(\omega n + \phi)} $$ Suppose the DFT frame is a whole number of cycles, and $ k = \omega \cdot \frac{N}{2\pi} $; then $$ X[k] = (\text{value}) \cdot e^{j\phi} $$ By using the negative sign in the forward DFT the signs on $\phi$ align, which makes sense. What I have an issue with is the normalization factor convention. I can understand a factor of "1" in library routines to save a multiply, but when you are doing the math, it is less sensible. The "true" normalization factor should be $ \frac{1}{\sqrt{N}} $, but that is a pain to deal with in practicality. It is the "true" one because it applies to both the forward and inverse DFT, making the operations "multiplications of unitary matrices" in Linear Algebra terminology. The common convention of using "1" for the forward and "1/N" for the inverse is backward in my opinion. It should be the reverse. If you use the reverse, as I advocate, then the $(\text{value})$ in the above equation becomes "1" for complex signals and "1/2" for real ones, making the interpretation of bin values independent of the sample count.
This is much more sensible, and it is why I stubbornly use a "1/N" in all my articles even though it is immaterial for frequency calculations, since those are amplitude independent. In addition, in the cases when your DFT frame covers a whole number of cycles of a periodic signal, the coefficients of the Fourier Series for the continuous function can be read straight from the DFT bin values. This should be the clincher. Ced Thanks Ced, For FPGA/ASIC platforms, as opposed to math or soft platforms, we have commercial fft/ifft cores that scale by sqrt(N) for either direction. Still not practical due to bit growth limitation. I normally scale the output back by 1/sqrt(N) using a precomputed single constant plus one multiplier. I am still not sure how to mentally explain the ifft equation... Kaz Conceptually (and literally), the bin values are the coefficients of the Sine and Cosine functions that reconstruct your function. The inverse DFT is that reconstruction. Hi Neil. Your interesting blog brings up a philosophical question that deserves contemplation. That is, "Is it possible for a discrete sampled sine wave to have a frequency more positive than Fs/2 Hz?" Rick, I assume you mean a real sine wave. Let its frequency = f0. According to all the textbooks, it has an infinite number of frequency pairs at +/-f0, fs+/-f0, 2*fs+/-f0... You could approach this in the real world by sampling a sine wave with a pulse train having very narrow pulses. The amplitude of the resulting spectral lines would roll off slowly vs. f, and the amplitude would be proportional to the energy in the pulse. If the pulse width were, say, 1 ns, it would be good to have a large pulse amplitude (100 volts?). Then you could put a bpf centered at the frequency of one of the spectral lines and you should get a sine out. Neil One detail: when I refer to sampling the sinewave with a pulse train with narrow pulses, this could either be a switch or a multiplier.
If it is a multiplier, the amplitude of the pulse train obviously affects the output amplitude. The final filtered amplitude would then depend on the pulse width and pulse amplitude. Neil When converted through a DAC you can exploit the faster copies at multiples of the sample rate, provided you filter accordingly. Kaz I guess the philosophical part is when you decide that the ADC output samples are impulses. This is what makes the math work. Then, when you get to the DAC, you have to convert the perfect impulses into something else -- typically square pulses, which cause the higher frequency copies to follow the sinx/x curve. Neil @Neil. Hi. For me the philosophical question is: "Is it possible to generate a discrete sampled sine wave (a sequence of numbers) whose frequency is more positive than +Fs/2 Hz?" And while you're at it, can you tell me how many angels can fit on the head of a pin? @kaz. Hi. Your 1st sentence implies to me that you believe that no discrete sine wave can have a frequency more positive than +Fs/2 Hz. Am I correct? Hi Rick, Yes, the discrete domain cannot have frequencies outside the Nyquist rule. Any frequency outside this range cannot exist in the discrete domain. However, I noticed some programmers computationally enter high frequencies in the generation equation that will appear as an alias within the legal range. If we sample a high frequency signal from the real world at the ADC it will alias into the legal domain. If we want the DAC to produce higher frequencies outside the legal range then yes, you can exploit copies at the DAC output. I am sure you know that, but curious what is in your mind. Kaz @Neil. Hi. This signal you refer to that can be applied to a bandpass filter, is that signal an analog signal or a discrete sequence of numerical samples (a discrete signal)? Rick, My signal was an analog signal that is the product of a sinewave and a pulse train. So it does not fit your description of a discrete signal.
Any analog sinewave above fs/2 will produce an alias below fs/2 when discretized. I guess we can believe in the higher frequency images or not, as we prefer. Neil Hi, I have a question: I don't understand why we consider k = N/2. What is the problem with k = N? Hi Saraktb, There is no problem with letting k go from 0 to N-1. However, since the spectrum of a real signal is symmetrical, all the information is contained in the range k = 0 to N/2-1. The range of k = N/2 to N-1 is just an image. Note the Matlab fft function returns all N points of the DFT, so you can choose to keep all N points or just look at N/2 points. Regards, Neil
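The 1/√N normalization discussed in the comments above can be checked directly: with that scaling the DFT matrix is unitary, so the inverse transform is just the conjugate transpose. A minimal numpy sketch:

```python
import numpy as np

N = 8
n = np.arange(N)
# DFT matrix with the symmetric 1/sqrt(N) scaling
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

# unitary: F^H F = I, so the inverse DFT uses the same 1/sqrt(N) factor
assert np.allclose(F.conj().T @ F, np.eye(N))

x = np.random.default_rng(1).standard_normal(N)
X = F @ x                               # forward transform
assert np.allclose(F.conj().T @ X, x)   # inverse recovers x
```

With this convention the transform also preserves the vector's energy (Parseval's relation), which is one way to see why it is the "true" normalization.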
Sine-cubed function This article is about a particular function from a subset of the real numbers to the real numbers. Information about the function, including its domain, range, and key data relating to graphing, differentiation, and integration, is presented in the article. View a complete list of particular functions on this wiki For functions involving angles (trigonometric functions, inverse trigonometric functions, etc.) we follow the convention that all angles are measured in radians. Thus, for instance, an angle of 90 degrees is measured as $\pi/2$. Contents Definition The sine-cubed function is the function obtained by composing the cube function with the sine function. For brevity, we write $\sin^3x$ or $(\sin x)^3$. Key data Item Value Default domain all real numbers, i.e., all of $\mathbb{R}$ range the closed interval $[-1,1]$; absolute maximum value: 1, absolute minimum value: -1 period $2\pi$ local maximum values and points of attainment All local maximum values are equal to 1, and they are attained at all points of the form $\pi/2 + 2n\pi$, where $n$ varies over integers. local minimum values and points of attainment All local minimum values are equal to -1, and they are attained at all points of the form $-\pi/2 + 2n\pi$, where $n$ varies over integers. points of inflection (both coordinates) All points of the form $(n\pi, 0)$, as well as points of the form $(x, (2/3)^{3/2})$ with $\sin x = \sqrt{2/3}$ and $(x, -(2/3)^{3/2})$ with $\sin x = -\sqrt{2/3}$, where $n$ varies over integers. derivative $3\sin^2x\cos x$ second derivative $6\sin x\cos^2x - 3\sin^3x = 3\sin x(2 - 3\sin^2x)$ antiderivative $\frac{\cos^3x}{3} - \cos x + C$ important symmetries odd function (follows from the fact that a composite of odd functions is odd, and the cube function and sine function are both odd) half turn symmetry about all points of the form $(n\pi, 0)$ mirror symmetry about all lines $x = \pi/2 + n\pi$. Identities We have the identity: $\sin^3x = \frac{3\sin x - \sin 3x}{4}$ Graph Here is the basic graph. Here is a more close-up graph. The thick black dots correspond to local extreme values, and the thick red dots correspond to points of inflection. Differentiation First derivative To differentiate once, we use the chain rule for differentiation.
Explicitly, we consider the function as the composite of the cube function and the sine function, so the cube function is the outer function and the sine function is the inner function. We get: $\frac{d}{dx}\sin^3x = 3\sin^2x\cos x$.

Integration

First antiderivative: standard method

We rewrite $\sin^3x = (1-\cos^2x)\sin x$ and then do integration by $u$-substitution where $u = \cos x$. Explicitly: $\int \sin^3x\,dx = \int (1-\cos^2x)\sin x\,dx$. Now put $u = \cos x$. We have $du = -\sin x\,dx$, so we can replace $\sin x\,dx$ by $-du$, and we get: $\int (u^2-1)\,du$. By polynomial integration, we get: $\frac{u^3}{3} - u + C$. Plugging back $u = \cos x$, we get: $\int \sin^3x\,dx = \frac{\cos^3x}{3} - \cos x + C$. Here, $C$ is an arbitrary real constant.

First antiderivative: using the triple angle formula

An alternate method for integrating the function is to use the identity $\sin^3x = \frac{3\sin x - \sin 3x}{4}$. We thus get: $\int \sin^3x\,dx = \frac{\cos 3x}{12} - \frac{3\cos x}{4} + C$. This answer looks superficially different from the other answer. However, using the identity $\cos 3x = 4\cos^3x - 3\cos x$, we can verify that the antiderivatives are exactly the same.

Repeated antidifferentiation

The antiderivative of $\sin^3$ involves $\cos^3$ and $\cos$, both of which can be antidifferentiated, and this in turn involves $\sin^3$ and $\sin$. We can thus antidifferentiate (i.e., integrate) the function any number of times, with the antiderivative expression alternating between a cubic function of sine and a cubic function of cosine.

Power series and Taylor series

Computation of power series

We can use the identity $\sin^3x = \frac{3\sin x - \sin 3x}{4}$ together with the power series $$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \dots = \sum_{k=0}^\infty \frac{(-1)^k x^{2k+1}}{(2k+1)!}.$$ Plugging into the formula, we get: $$\sin^3x = \frac14\sum_{k=0}^\infty \frac{(-1)^k\left(3 - 3^{2k+1}\right) x^{2k+1}}{(2k+1)!}.$$ The first few terms are $x^3 - \dfrac{x^5}{2} + \dfrac{13x^7}{120} - \dots$
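The derivative and the two antiderivatives above can be sanity-checked numerically; a small sketch (the grid and tolerances are arbitrary choices):

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 2001)
f = np.sin(x) ** 3

df = 3 * np.sin(x) ** 2 * np.cos(x)            # claimed first derivative
F1 = np.cos(x) ** 3 / 3 - np.cos(x)            # u-substitution antiderivative
F2 = np.cos(3 * x) / 12 - 3 * np.cos(x) / 4    # triple-angle antiderivative

# Central-difference derivatives should reproduce the claims (interior points).
assert np.allclose(np.gradient(F1, x)[1:-1], f[1:-1], atol=1e-4)
assert np.allclose(np.gradient(f, x)[1:-1], df[1:-1], atol=1e-4)
# The two antiderivatives agree (here their constant difference happens to be 0).
assert np.allclose(F1, F2, atol=1e-12)
```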
How to prove that: $$ \frac{\textrm{d}}{\textrm{d}x}\int^{g(x)}_{h(x)}f(t)\,\textrm{d}t =f(g(x))g'(x)-f(h(x))h'(x)? $$ Let us first assume that $f$ has a primitive, which we shall refer to as $F$. By the fundamental theorem of calculus, we have: $$\int_{h(x)}^{g(x)}{f(t)\:dt}=F(g(x))-F(h(x))$$ By the chain rule, we have: $$\frac{d}{dx}\left(F\circ g\right)=F'(g(x))g'(x)$$ As we know that $\frac{d}{dx}F(x)=f(x)$, we have: $$\frac{d}{dx}\left(F(g(x))-F(h(x))\right)=F'(g(x))g'(x)-F'(h(x))h'(x)\\=f(g(x))g'(x)-f(h(x))h'(x)$$ Which means that: $$\frac{d}{dx}\int_{h(x)}^{g(x)}f(t)\:dt=f(g(x))g'(x)-f(h(x))h'(x)$$ Q.E.D. Use the Fundamental Theorem of Calculus and the Chain Rule. More informally, in order to find our definite integral, we find an antiderivative $F(t)$ of $f(t)$, and then "plug in." Our definite integral is equal to $$F(g(x))-F(h(x)).$$ Differentiate. We get $g'(x)F'(g(x))-h'(x)F'(h(x))$. But $F'(t)=f(t)$. To add some variety to these answers, I propose to split the integral into two, by taking advantage of additivity over disjoint unions of intervals. What I mean is: $$\frac{d}{dx}\left(\int\limits_{h(x)}^{g(x)}f(t)dt\right)=\frac{d}{dx}\left(\int\limits_{h(x)}^{x_0}f(t)dt+\int\limits_{x_0}^{g(x)}f(t)dt\right)=\frac{d}{dx}\left(\int\limits_{h(x)}^{x_0}f(t)dt\right)+\frac{d}{dx}\left(\int\limits_{x_0}^{g(x)}f(t)dt\right).$$ Then we swap the extremes of integration in the first integral and get a minus sign. Now, I assume we know how to calculate those derivatives, and to prove they are $f(g(x))g'(x)$ and $f(h(x))h'(x)$, for which you can still use the other answers.
So this approach builds on a simpler case: the one with one limit of integration fixed and the other one moving with $x$.
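The formula is also easy to verify numerically for a concrete case. A sketch with the hypothetical choices $f(t)=e^{-t^2}$ (no elementary primitive), $g(x)=x^2$, $h(x)=\sin x$; none of these come from the question, and any smooth choices would do:

```python
import numpy as np

f = lambda t: np.exp(-t**2)   # integrand
g = lambda x: x**2            # upper limit
h = lambda x: np.sin(x)       # lower limit

def I(x, n=20001):
    """Integral of f from h(x) to g(x) by the trapezoid rule."""
    t = np.linspace(h(x), g(x), n)
    y = f(t)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t) / 2))

x0, eps = 1.3, 1e-5
lhs = (I(x0 + eps) - I(x0 - eps)) / (2 * eps)          # d/dx of the integral
rhs = f(g(x0)) * 2 * x0 - f(h(x0)) * np.cos(x0)        # Leibniz rule
assert abs(lhs - rhs) < 1e-5
```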
Really stumped on this one. I would really like an example or situation where an estimator $B$ would be both consistent and biased. The simplest example I can think of is the sample variance that comes intuitively to most of us, namely the sum of squared deviations divided by $n$ instead of $n-1$: $$S_n^2 = \frac{1}{n} \sum_{i=1}^n \left(X_i-\bar{X} \right)^2$$ It is easy to show that $E\left(S_n^2 \right)=\frac{n-1}{n} \sigma^2$ and so the estimator is biased. But assuming finite variance $\sigma^2$, observe that the bias goes to zero as $n \to \infty$ because $$E\left(S_n^2 \right)-\sigma^2 = -\frac{1}{n}\sigma^2 $$ It can also be shown that the variance of the estimator tends to zero, and so the estimator converges in mean square. Hence, it also converges in probability. A simple example would be estimating the parameter $\theta > 0$ given $n$ i.i.d. observations $y_i \sim \text{Uniform}\left[0, \,\theta\right]$. Let $\hat{\theta}_n = \max\left\{y_1, \ldots, y_n\right\}$. For any finite $n$ we have $\mathbb{E}\left[\hat{\theta}_n\right] < \theta$ (so the estimator is biased), but in the limit it will equal $\theta$ with probability one (so it is consistent). Consider any unbiased and consistent estimator $T_n$ and a sequence $\alpha_n$ converging to 1 ($\alpha_n$ need not be random) and form $\alpha_nT_n$. It is biased, but consistent since $\alpha_n$ converges to 1. From wikipedia: Loosely speaking, an estimator $T_n$ of parameter $\theta$ is said to be consistent if it converges in probability to the true value of the parameter: $$\underset{n\to\infty}{\operatorname{plim}}\;T_n = \theta.$$ Now recall that the bias of an estimator is defined as: $$\operatorname{Bias}_\theta[\,\hat\theta\,] = \operatorname{E}_\theta[\,\hat{\theta}\,]-\theta $$ The bias is indeed nonzero, and the convergence in probability remains true. In a time series setting with a lagged dependent variable included as a regressor, the OLS estimator will be consistent but biased.
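The sample-variance example lends itself to a quick Monte-Carlo check; a sketch (the sample size, variance, seed and tolerances are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, sigma2 = 5, 200_000, 4.0

x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
xbar = x.mean(axis=1, keepdims=True)
s2_biased = ((x - xbar) ** 2).sum(axis=1) / n          # divide by n
s2_unbiased = ((x - xbar) ** 2).sum(axis=1) / (n - 1)  # divide by n-1

# E[S_n^2] = (n-1)/n * sigma^2 = 3.2 here, i.e. bias = -sigma^2/n = -0.8.
assert abs(s2_biased.mean() - (n - 1) / n * sigma2) < 0.05
assert abs(s2_unbiased.mean() - sigma2) < 0.05
```

With $n$ held small the downward bias of the $1/n$ version is plainly visible; letting $n$ grow shrinks it, which is the consistency half of the story.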
The reason for this is that in order to show unbiasedness of the OLS estimator we need strict exogeneity, $E\left[\varepsilon_{t}\mid x_{1},\, x_{2},\,\ldots,\, x_{T}\right]=0 $, i.e. that the error term, $\varepsilon_{t} $, in period $t $ is uncorrelated with all the regressors in all time periods. However, in order to show consistency of the OLS estimator we only need contemporaneous exogeneity, $E\left[\varepsilon_{t}\mid x_{t}\right]=0 $, i.e. that the error term, $\varepsilon_{t} $, in period $t $ is uncorrelated with the regressors, $x_{t} $, in period $t $. Consider the AR(1) model: $y_{t}=\rho y_{t-1}+\varepsilon_{t},\;\varepsilon_{t}\sim N\left(0,\:\sigma_{\varepsilon}^{2}\right)$ with $x_{t}=y_{t-1} $ from now on. First I show that strict exogeneity does not hold in a model with a lagged dependent variable included as a regressor. Let's look at the correlation between $\varepsilon_{t} $ and $x_{t+1}=y_{t} $: $$E\left[\varepsilon_{t}x_{t+1}\right]=E\left[\varepsilon_{t}y_{t}\right]=E\left[\varepsilon_{t}\left(\rho y_{t-1}+\varepsilon_{t}\right)\right] =\rho E\left(\varepsilon_{t}y_{t-1}\right)+E\left(\varepsilon_{t}^{2}\right) =E\left(\varepsilon_{t}^{2}\right)=\sigma_{\varepsilon}^{2}>0 \quad (Eq. (1)).$$ If we assume sequential exogeneity, $E\left[\varepsilon_{t}\mid y_{1},\: y_{2},\:\ldots,\, y_{t-1}\right]=0 $, i.e. that the error term, $\varepsilon_{t} $, in period $t $ is uncorrelated with the regressors in all previous time periods and the current one, then the first term above, $\rho E\left(\varepsilon_{t}y_{t-1}\right) $, will disappear. What is clear from the above is that unless we have strict exogeneity, the expectation $E\left[\varepsilon_{t}x_{t+1}\right]=E\left[\varepsilon_{t}y_{t}\right]\neq0 $. However, it should be clear that contemporaneous exogeneity, $E\left[\varepsilon_{t}\mid x_{t}\right]=0 $, does hold. Now let's look at the bias of the OLS estimator when estimating the AR(1) model specified above.
The OLS estimator of $\rho $, $\hat{\rho} $, is given as: $$\hat{\rho}=\frac{\frac{1}{T}\sum_{t=1}^{T}y_{t}y_{t-1}}{\frac{1}{T}\sum_{t=1}^{T}y_{t}^{2}}=\frac{\frac{1}{T}\sum_{t=1}^{T}\left(\rho y_{t-1}+\varepsilon_{t}\right)y_{t-1}}{\frac{1}{T}\sum_{t=1}^{T}y_{t}^{2}}=\rho+\frac{\frac{1}{T}\sum_{t=1}^{T}\varepsilon_{t}y_{t-1}}{\frac{1}{T}\sum_{t=1}^{T}y_{t}^{2}} \quad (Eq. (2))$$ Then take the expectation of $Eq. (2)$ conditional on all previous, contemporaneous and future values, $y_{1},\, y_{2},\,\ldots,\, y_{T-1} $: $$E\left[\hat{\rho}\mid y_{1},\, y_{2},\,\ldots,\, y_{T-1}\right]=\rho+\frac{\frac{1}{T}\sum_{t=1}^{T}E\left[\varepsilon_{t}\mid y_{1},\, y_{2},\,\ldots,\, y_{T-1}\right]y_{t-1}}{\frac{1}{T}\sum_{t=1}^{T}y_{t}^{2}} $$ However, we know from $Eq. (1)$ that $E\left[\varepsilon_{t}y_{t}\right]=E\left(\varepsilon_{t}^{2}\right) $, so that $E\left[\varepsilon_{t}\mid y_{1},\, y_{2},\,\ldots,\, y_{T-1}\right]\neq0 $, meaning that $\frac{\frac{1}{T}\sum_{t=1}^{T}E\left[\varepsilon_{t}\mid y_{1},\, y_{2},\,\ldots,\, y_{T-1}\right]y_{t-1}}{\frac{1}{T}\sum_{t=1}^{T}y_{t}^{2}}\neq0 $ and hence $E\left[\hat{\rho}\mid y_{1},\, y_{2},\,\ldots,\, y_{T-1}\right]\neq\rho $; the estimator is biased: $$E\left[\hat{\rho}\mid y_{1},\, y_{2},\,\ldots,\, y_{T-1}\right]=\rho+\frac{\frac{1}{T}\sum_{t=1}^{T}E\left[\varepsilon_{t}\mid y_{1},\, y_{2},\,\ldots,\, y_{T-1}\right]y_{t-1}}{\frac{1}{T}\sum_{t=1}^{T}y_{t}^{2}}=\rho+\frac{\frac{1}{T}\sum_{t=1}^{T}E\left(\varepsilon_{t}^{2}\right)y_{t-1}}{\frac{1}{T}\sum_{t=1}^{T}y_{t}^{2}}=\rho+\frac{\frac{1}{T}\sum_{t=1}^{T}\sigma_{\varepsilon}^{2}y_{t-1}}{\frac{1}{T}\sum_{t=1}^{T}y_{t}^{2}}. $$
All I assume to show consistency of the OLS estimator in the AR(1) model is contemporaneous exogeneity, $E\left[\varepsilon_{t}\mid x_{t}\right]=E\left[\varepsilon_{t}\mid y_{t-1}\right]=0 $, which leads to the moment condition $E\left[\varepsilon_{t}x_{t}\right]=0 $ with $x_{t}=y_{t-1} $. As before, the OLS estimator of $\rho $, $\hat{\rho} $, is given as: $$\hat{\rho}=\frac{\frac{1}{T}\sum_{t=1}^{T}y_{t}y_{t-1}}{\frac{1}{T}\sum_{t=1}^{T}y_{t}^{2}}=\frac{\frac{1}{T}\sum_{t=1}^{T}\left(\rho y_{t-1}+\varepsilon_{t}\right)y_{t-1}}{\frac{1}{T}\sum_{t=1}^{T}y_{t}^{2}}=\rho+\frac{\frac{1}{T}\sum_{t=1}^{T}\varepsilon_{t}y_{t-1}}{\frac{1}{T}\sum_{t=1}^{T}y_{t}^{2}} $$ Now assume that $\operatorname{plim}\frac{1}{T}\sum_{t=1}^{T}y_{t}^{2}=\sigma_{y}^{2} $ where $\sigma_{y}^{2} $ is positive and finite, $0<\sigma_{y}^{2}<\infty $. Then, as $T\rightarrow\infty $, and as long as a law of large numbers (LLN) applies, we have $\operatorname{plim}\frac{1}{T}\sum_{t=1}^{T}\varepsilon_{t}y_{t-1}=E\left[\varepsilon_{t}y_{t-1}\right]=0 $. Using this result we have: $$\underset{T\rightarrow\infty}{\operatorname{plim}}\,\hat{\rho}=\rho+\frac{\operatorname{plim}\frac{1}{T}\sum_{t=1}^{T}\varepsilon_{t}y_{t-1}}{\operatorname{plim}\frac{1}{T}\sum_{t=1}^{T}y_{t}^{2}}=\rho+\frac{0}{\sigma_{y}^{2}}=\rho $$ Thereby it has been shown that the OLS estimator of $\rho $, $\hat{\rho} $, in the AR(1) model is biased but consistent. Note that this result holds for all regressions where the lagged dependent variable is included as a regressor.
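The biased-but-consistent behaviour of $\hat{\rho}$ is easy to see in simulation; a sketch (the true $\rho$, sample sizes, replication counts and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.5

def ols_rho(T, reps=20_000):
    """Average OLS estimate of rho over many simulated AR(1) paths of length T."""
    est = np.empty(reps)
    for r in range(reps):
        eps = rng.normal(size=T)
        y = np.empty(T)
        y[0] = eps[0]
        for t in range(1, T):
            y[t] = rho * y[t - 1] + eps[t]
        est[r] = (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])
    return est.mean()

small, large = ols_rho(20), ols_rho(400, reps=2_000)
assert small < rho - 0.02        # noticeable downward bias at T = 20
assert abs(large - rho) < 0.02   # bias nearly gone at T = 400
```

The downward small-sample bias seen here is the well-known finite-$T$ bias of OLS in autoregressions, vanishing at rate $1/T$.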
Let $ E $ be an algebraic extension of $ F $, let $ x \in E $, and let $ \sigma: E \to E $ be an automorphism of $ E $ fixing $ F $. Prove that $ \sigma(x) $ and $ x $ are conjugate over $ F $. I am just starting to learn about field extensions, automorphisms and Galois theory, and a lot of this still confuses me, so any help with this question is really appreciated. In order to prove that $ \sigma(x) $ and $ x $ are conjugate over $ F $, I need to find an irreducible polynomial $ p(x) \in F[x] $ such that $ p(x) = p(\sigma(x)) = 0 $. If $ x \in F $, then the proof is complete, but I am still stuck on the case where $ x\notin F $. There is a theorem in my book which says that if $ F $ is a field and $ \alpha $ and $ \beta $ are algebraic over $ F $ with $ \deg(\alpha, F) = n $, then the map $$ \psi_{\alpha, \; \beta}(c_{0} + c_{1}\alpha + \dots + c_{n - 1}\alpha^{n - 1}) = c_{0} + c_{1}\beta + \dots + c_{n - 1}\beta^{n - 1} $$ is an isomorphism of $ F(\alpha) $ onto $ F(\beta) $ if and only if $ \alpha $ and $ \beta $ are conjugate over $ F $. I attempted this approach but failed to prove that $ \psi_{\sigma(x), \; x} $ is an isomorphism.
I came across the following problem, used as an end-of-chapter review in my textbook. How can I prove the following algebraically? $$\binom{2n}{2} = 2 \binom{n}{2} + n^2$$ using $\binom{n}{2} = \dfrac{n(n-1)}{2}$? $$\binom{2n}{2} = \frac{2n(2n-1)}{2} = 2n(n-1) = 2 \frac{n(n-1)}{2} + n^2 = 2\binom{n}{2} + n^2 $$ Is this proof correct? If not, how do I fix it?
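Whatever one makes of the intermediate steps, the identity itself is easy to brute-force check:

```python
from math import comb

# Verify C(2n, 2) == 2*C(n, 2) + n^2 for a range of n.
for n in range(200):
    assert comb(2 * n, 2) == 2 * comb(n, 2) + n * n
```

A check like this does not replace an algebraic proof, but it quickly rules out a typo in the statement before one hunts for a derivation.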
The divergence (Gauss-Green) theorem can be used to define the improper integral of the divergence of (weakly) singular vector fields $\mathbf{F}$ with isolated singular points $\mathbf{p}_o=(x_0,y_0,z_0)\in V$. Customarily, the definition goes as follows: $$\begin{split}\int\limits_{V}\nabla\cdot\mathbf{F}(x,y,z)\,\mathrm{d}V& \triangleq \lim_{R\to 0} \Bigg[\,\int\limits_{V\setminus B(\mathbf{p}_o,R)} \nabla\cdot\mathbf{F}(x,y,z)\, \mathrm{d}V - \int\limits_{\partial B(\mathbf{p}_o,R)} \mathbf{F}(x,y,z) \cdot \hat{n}\ dS\Bigg]\\\\&\triangleq \int\limits_{\partial V} \mathbf{F}(x,y,z) \cdot \hat{n}\ dS,\end{split}\label{1}\tag{1}$$ where the small excised volume $\delta$ is customarily chosen to be a small ball $B(\mathbf{p}_o,R)$ with radius $R>0$ centered at the singular point of $\mathbf{F}$. The definition is clearly consistent if and only if the limits of the two integrals in formula \eqref{1} exist and are finite. An example. The most famous example of the use of \eqref{1} as a definition is perhaps the calculation of the integral of the divergence of the following field: $$\begin{split}\mathbf{F}(x,y,z)&=\nabla{\bigg[\sqrt{(x-x_0)^2+(y-y_0)^2+(z-z_0)^2 }\,\bigg]^{-1}}\\&=\nabla\frac{1}{|\;\mathbf{p}-\mathbf{p}_o|}\end{split}$$ where $\mathbf{p}=(x,y,z)\in V$. This vector field is, apart from a multiplicative constant, the gradient of the fundamental solution of the Laplacian: therefore, the integral of the divergence of this vector field is zero on every domain $V\subset\Bbb R^3\setminus\{\mathbf{p}_o\}$, since $\nabla\cdot\mathbf{F}$ is zero on such domains.
However, applying \eqref{1} we have $$\begin{split}\int\limits_{V}\nabla\cdot\mathbf{F}(\mathbf{p})\,\mathrm{d}V&=-\lim_{R\to 0} \int\limits_{ \partial B(\mathbf{p}_o,R)} \nabla\frac{1}{|\;\mathbf{p}-\mathbf{p}_o|} \cdot\hat{n}\, \mathrm{d}S \\&= -\lim_{R\to 0} \int\limits_{ \partial B(\mathbf{p}_o,R)} \frac{ \partial }{\partial \hat{n}} \frac{1}{|\;\mathbf{p}-\mathbf{p}_o|}\, \mathrm{d}S\\&=-\lim_{R\to 0} \int\limits_{ \partial B(\mathbf{p}_o,R)} \frac{ \partial }{\partial r} \frac{1}{r}\,\mathrm{d}S\\&=\lim_{R\to 0} \frac{1}{R^2} \int\limits_{ \partial B(\mathbf{p}_o,R)} \mathrm{d}S = 4\pi,\end{split}$$ and thus we can also define the flux of $\mathbf{F}$ through $\partial V$. Final notes. All of the above development is done under the hypothesis $\delta=B(\mathbf{p}_o,R)$: however, formula \eqref{1} is valid for more general classes of small volumes $\delta$, provided that the singularity of $\mathbf{F}$ is sufficiently "weak". Let us make precise the meaning of "weak singularity" in the context of fields with a single isolated singularity. The field $\mathbf{F}$ being singular, we can say that, near the singular point $\mathbf{p}_o\in V$, $$|\mathbf{F}(\mathbf{p})|\le K\,{|\;\mathbf{p}-\mathbf{p}_o|^{-\alpha(|\mathbf{p}-\mathbf{p}_o|)}}\label{2}\tag{2}$$ where $\alpha:\Bbb R_+\to \Bbb R_+$ is a non-negative function ($\alpha\ge0$). Then we have that $$\lim_{R\to 0}\Bigg|\int\limits_{ \partial B(\mathbf{p}_o,R)}\mathbf{F}(\mathbf{p})\cdot\hat{n}\, \mathrm{d}S \Bigg|<\infty\iff \lim_{R\to 0}R^{-\alpha(R)+2}<\infty\iff \lim_{R\to 0} \alpha(R)\le 2$$ Note that this condition is stronger than simple local integrability of the field $\mathbf{F}$: local integrability only implies that $$\lim_{R\to 0} \alpha(R)<3$$ in estimate \eqref{2}, so there are locally integrable vector fields for which formula \eqref{1} is not applicable.
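The fact behind the example, namely that the outward flux of $\nabla\frac{1}{|\mathbf{p}-\mathbf{p}_o|}$ through any surface enclosing the singularity equals $-4\pi$ (the boundary term in \eqref{1} carries a minus sign, which produces the $4\pi$ above), can be checked by Monte-Carlo integration over a sphere not even centered at $\mathbf{p}_o$. The offset, sample size and tolerance below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)

p_o = np.array([0.2, -0.1, 0.05])   # hypothetical singular point inside the unit sphere

def F(p):
    # F = grad(1/|p - p_o|) = -(p - p_o)/|p - p_o|^3
    d = p - p_o
    r = np.linalg.norm(d, axis=-1, keepdims=True)
    return -d / r**3

# Monte-Carlo flux of F through the unit sphere: surface area times the
# average of F . n over uniform random surface points (outward normal n = p).
n = 400_000
p = rng.normal(size=(n, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
flux = 4 * np.pi * np.mean(np.sum(F(p) * p, axis=1))

assert abs(flux - (-4 * np.pi)) < 0.05
```

Moving `p_o` anywhere inside the sphere leaves the flux unchanged, illustrating why the defined integral does not depend on the shape of the excised volume for this field.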
As I stated clearly at the beginning of my answer, formula \eqref{1} is not really a divergence (Gauss-Green) theorem for singular vector fields: it is a definition which uses the standard theorem, applied to regions where $\mathbf{F}$ is non-singular (obtained by cutting out small volumes $\delta$), to extend its range of applicability to a class of singular vector fields. Therefore you will not find it stated as a theorem; however, in books on partial differential equations which do not use the theory of distributions, \eqref{1} is silently used in the proof of Green's formula. See for example Tikhonov and Samarskii [1], chapter IV, §2.1, pp. 316-318. [1] A. N. Tikhonov and A. A. Samarskii (1990) [1963], "Equations of Mathematical Physics", New York: Dover Publications, pp. XVI+765, ISBN 0-486-66422-8, MR0165209, Zbl 0111.29008.
For the one-dimensional case there is a nice connection between the Radon-Nikodym derivative and the "classical" derivative on the real line. Is there some kind of analogue for higher-dimensional cases? Among other connections, the Radon-Nikodym derivative allows you to get the change of variable formula in $\mathbb{R}^n$. Setup: Let $K\subset\mathbb{R}^n$ be compact and equal to the closure of its interior, let $U$ be an open neighborhood of $K$, and consider a $C^1$ map $T:U\to\mathbb{R}^n$ satisfying $|T(x)-T(y)|>\lambda|x-y|$ for all $x,y\in K$ and some $\lambda>0$. Then $T$ is one-to-one onto its image and $T^{-1}$ is Lipschitz with Lipschitz constant $\lambda^{-1}$. Let $\mu = m\mid_K$ be the restriction of Lebesgue measure $m$ to $K$, i.e. $\mu(E) = m(E\cap K)$, and let $\nu = T\#\mu$ be the pushforward measure $\nu(E) = \mu(T^{-1}(E))$. $\nu$ is absolutely continuous w.r.t. $m$, so $d\nu = f\,dm$ for some $f\in L^1(m)$. The Radon-Nikodym theorem and the general change of variable formula tell us that $$ \int_U g\circ T~dm = \int_{T(U)}g~d(T\#\mu) = \int_{T(U)}gf~dm. $$ The general change of variable formula is hard to use in its usual form, but if we can obtain a formula for $f$ then we can get something much easier to work with. In fact, under our current assumptions, we can get a formula for $f$. We use the Lebesgue differentiation theorem to compute: $$ f(x) = \lim_{r\to 0}\frac{1}{m(B_r(x))}\int_{B_r(x)}f\,dm = \lim_{r\to 0}\frac{\nu(B_r(x))}{m(B_r(x))} = \lim_{r\to 0}\frac{m(T^{-1}(B_r(x))\cap K)}{m(B_r(x))}. $$ The last limit essentially measures the volume distortion factor of $T^{-1}$, and can therefore be shown to be $|\det(DT^{-1})(x)| = |\det(DT)(x)|^{-1}$, where $DT$ is the Jacobian matrix. This is well defined because of the condition $|T(x)-T(y)|>\lambda|x-y|$, which ensures that $DT(x)$ is nonsingular for all $x$. Consequently the change of variable formula is given by $$ \int_U g\circ T~dm = \int_{T(U)}g(x)|\det(DT)(x)|^{-1}~dm(x).
$$ Notice that we haven't invoked compactness or $T\in C^1$ yet. Since $T$ is $C^1$ and $K$ is compact (which is a very common scenario for integration, hence not a huge restriction), the derivative of $T$ is bounded on $K$ and hence $T$ is also Lipschitz. Write $S=T^{-1}$, $W=T(U)$; then our formula becomes $$ \int_W g~dm = \int_W g\circ T\circ S~dm = \int_{S(W)}(g\circ T)|\det(DS)|^{-1}~dm = \int_{S(W)}(g\circ T)|\det(DT)|~dm, $$ that is, $$ \int_{T(U)} g~dm = \int_U(g\circ T)|\det(DT)|~dm, $$ the familiar version of the change of variable formula from third semester calculus.
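The $|\det DT|$ factor can be sanity-checked numerically with the familiar polar-coordinate map $T(r,\theta)=(r\cos\theta, r\sin\theta)$, for which $|\det DT| = r$. The integrand $g(x,y)=x^2+y^2$ and the grid sizes below are arbitrary choices:

```python
import numpy as np

# Compare the integral of g(x,y) = x^2 + y^2 over the unit disk computed
# (a) directly on an (x, y) grid and (b) in polar coordinates with the
# Jacobian factor |det DT| = r. Exact value: 2*pi * int_0^1 r^3 dr = pi/2.
n = 2001
xs = np.linspace(-1, 1, n)
X, Y = np.meshgrid(xs, xs)
inside = X**2 + Y**2 <= 1
dA = (xs[1] - xs[0]) ** 2
direct = np.sum((X**2 + Y**2)[inside]) * dA

r = (np.arange(n) + 0.5) / n                       # midpoint grid on (0, 1)
polar = 2 * np.pi * np.sum(r**2 * r) * (1.0 / n)   # the theta integral gives 2*pi

assert abs(direct - np.pi / 2) < 1e-2
assert abs(polar - np.pi / 2) < 1e-4
```

The polar computation converges much faster because the Jacobian absorbs the geometry of the disk, which is exactly the practical payoff of the formula above.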
How to make an animation of the following gif in Mathematica? And how to make a 3D analogue? I tried the first few steps:

line = Graphics[Line[{{1, 1}, {2, 2}}]]
Manipulate[
 Show[line, line /. l : Line[pts_] :> Rotate[l, n, Mean[pts]]],
 {n, 0, Pi}]

I'd like to expand on Quantum_Oli's answer to give an intuitive explanation for what's happening, because there's a neat geometric interpretation. At one point in the animation it looks like there is a circle of colored dots moving about the center; this is a special case of so-called hypocycloids known as Cardano circles. A hypocycloid is a curve generated by a point on a circle that moves along the inside of a larger circle. It is closely related to the epicycloid, for which I have previously written some code. Here's a hypocycloid generated with code modified from that answer: The parametric equations for a hypocycloid are (as on Wikipedia) $$ x (\theta) = (R - r) \cos \theta + r \cos \left( \frac{R - r}{r} \theta \right) $$ $$ y (\theta) = (R - r) \sin \theta - r \sin \left( \frac{R - r}{r} \theta \right), $$ where $r$ is the radius of the smaller circle and $R$ is the radius of the larger circle. In a Cardano circle all points on the smaller circle move in straight lines; the relationship that characterizes a Cardano circle is $R = 2 r$. The question is, how does this relate to Quantum_Oli's answer?
The equation that he gives for his points is {x, y} = Sin[ω t + φ] {Cos[φ], Sin[φ]}; we can rewrite this with TrigReduce:

TrigReduce[Sin[ω t + φ] {Cos[φ], Sin[φ]}]

{1/2 (Sin[t ω] + Sin[2 φ + t ω]), 1/2 (Cos[t ω] - Cos[2 φ + t ω])}

That's neat; the form of this expression is the same as the form of the expression for a hypocycloid on Wikipedia. Identifying parameters between the formulae we find that $$ R - r = 1,\quad \frac{R-r}{r} = 1 \implies r = 1, R = 2 $$ thus proving that it's the formula for a Cardano circle, since the radii satisfy the condition $R = 2 r$. Obviously, though, the points aren't stationary on the circle the way that they are in my example above. The animation is created by moving the points about; we can see in the expression above that Quantum_Oli solved this by introducing a phase offset $2\varphi$, and then changing it differently for different points in a certain way that he came up with. I extracted the part that generates the phase offset:

phases[t_] := Table[t + Pi i, {i, 0, 1, 1/(3 π - Abs[9.43 - t])}]

Plugging the phase offset into the equations for the hypocycloid and using the plot-generating code from above, we then get the animation. This is the code that was used to generate it:

fx[θ_, phase_: 0, r_: 1, k_: 2] :=
 r (k - 1) Cos[θ] + r Cos[(k - 1) θ + 2 phase Degree]
fy[θ_, phase_: 0, r_: 1, k_: 2] :=
 r (k - 1) Sin[θ] - r Sin[(k - 1) θ + 2 phase Degree]
center[θ_, r_, k_] := {r (k - 1) Cos[θ], r (k - 1) Sin[θ]}
gridlines = Table[{x, GrayLevel[0.9]}, {x, -6, 6, 0.5}];
epilog[θ_, phases_, r_: 1, k_: 2] := {
  Thick, LightGray, Circle[{0, 0}, k r],
  LightGray, Circle[center[θ, r, k], r],
  MapIndexed[{
     Black, PointSize[0.03], Point[{fx[θ, #], fy[θ, #]}],
     Hue[First[#2]/10], PointSize[0.02], Point[{fx[θ, #], fy[θ, #]}]
     } &, phases]
  }
plot[max_, phases_] := ParametricPlot[
  Evaluate[Table[{fx[θ, phase], fy[θ, phase]}, {phase, phases}]],
  {θ, 0, 2 Pi},
  PlotStyle -> MapIndexed[Directive[Hue[First[#2]/10], Thickness[0.01]] &, phases],
  Epilog -> epilog[max, phases],
  GridLines -> {gridlines, gridlines},
  PlotRange -> {-3, 3},
  Axes -> False
  ]
phases[t_] := Table[t + Pi i, {i, 0, 1, 1/(3 π - Abs[9.43 - t])}]/Degree
Manipulate[plot[t, phases[t]], {t, 0, 6 Pi}]

Edit: Added the reversal and some refinements

ω = 1;
posP[t_, φ_] := Sin[ω t + φ] {Cos[φ], Sin[φ]}
posL[φ_] := {-#, #} &@{Cos[φ], Sin[φ]}
Animate[
 Graphics[{PointSize[0.02],
   Table[{Black, Line[posL[π i]], Hue[i], Point[posP[t, π i]]},
    {i, 0, 1, 1/(3 π - Abs[9.43 - t])}]
   }, PlotRange -> {{-1.5, 1.5}, {-1.5, 1.5}}
 ], {t, 0, 6 π, 0.2}
]
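The TrigReduce identity at the heart of this answer, and the Cardano-circle property that each dot moves along a straight diameter, can both be verified numerically (random samples of $t$ and $\varphi$, done in Python rather than Mathematica for a self-contained check):

```python
import numpy as np

rng = np.random.default_rng(3)
t = rng.uniform(0, 2 * np.pi, 1000)
phi = rng.uniform(0, 2 * np.pi, 1000)

# Quantum_Oli's point: {x, y} = sin(t + phi) * (cos phi, sin phi)   (omega = 1)
x = np.sin(t + phi) * np.cos(phi)
y = np.sin(t + phi) * np.sin(phi)

# TrigReduce'd / hypocycloid form with r = 1, R = 2:
x2 = 0.5 * (np.sin(t) + np.sin(2 * phi + t))
y2 = 0.5 * (np.cos(t) - np.cos(2 * phi + t))

assert np.allclose(x, x2) and np.allclose(y, y2)

# Cardano-circle property: each point stays on the line through the origin
# at angle phi, i.e. x*sin(phi) - y*cos(phi) == 0 for all t.
assert np.allclose(x * np.sin(phi) - y * np.cos(phi), 0, atol=1e-12)
```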
Sequence of real numbers. A sequence of real numbers (or a real sequence) is defined as a function $ f: \mathbb{N} \to \mathbb{R}$, where $ \mathbb{N}$ is the set of natural numbers and $ \mathbb{R}$ is the set of real numbers. Thus, $ f(n)=r_n, \ n \in \mathbb{N}, \ r_n \in \mathbb{R}$ is a function which produces a sequence… This mathematical fallacy is due to a simple assumption, that $ -1=\dfrac{-1}{1}=\dfrac{1}{-1}$. Proceeding with $ \dfrac{-1}{1}=\dfrac{1}{-1}$ and taking square roots of both sides, we get: $ \dfrac{\sqrt{-1}}{\sqrt{1}}=\dfrac{\sqrt{1}}{\sqrt{-1}}$. Now, as the imaginary unit $ i= \sqrt{-1}$ and $ \sqrt{1}=1$, we can have $ \dfrac{i}{1}=\dfrac{1}{i} \ldots \{1 \}$ $ \Rightarrow i^2=1 \ldots \{2 \}$. This is a complete contradiction of the… Sets. In mathematics, a set is a well-defined collection of distinct objects. The theory of sets as a mathematical discipline arose with Georg Cantor, the German mathematician, when he was working on some problems in trigonometric series and series of real numbers, after he recognized the importance of certain distinct collections and intervals. Cantor defined a set as a 'plurality… Last year, I managed to successfully finish Metric Spaces, Basic Topology and other Analysis topics. Starting from the next semester I'll be learning more pure mathematical topics, like Functional Analysis, Combinatorics and more. The plan is to lead myself to Combinatorics by majoring in Functional Analysis and Topology. But before all that, I'll be studying measure theory and probability this July – August. Probability… "Irrational numbers are those real numbers which are not rational numbers!" Def. 1: Rational Number. A rational number is a real number which can be expressed in the form $\frac{a}{b}$, where $ a$ and $ b$ are both integers relatively prime to each other and $ b$ is non-zero. The following two statements are equivalent to definition 1. 1.
$ x=\frac{a}{b}$… If you are aware of elementary facts of geometry, then you might know that the area of a disk with radius $ R$ is $ \pi R^2$. The radius is actually the measure (length) of a line joining the center of the disk and any point on the circumference of the disk or any other circular lamina. The radius of a disk… The triangle inequality takes its name from the geometrical fact that the length of one side of a triangle can never be greater than the sum of the lengths of the other two sides of the triangle. If $ a$, $ b$ and $ c$ are the three sides of a triangle, then neither $ a$ can be greater than $… Ramanujan (1887-1920) discovered some formulas on algebraic nested radicals. This article is based on one of those formulas. The main aim of this article is to discuss and derive them intuitively. Nested radicals have many applications in Number Theory as well as in Numerical Methods. The simple binomial theorem of degree 2 can be written as: $ {(x+a)}^2=x^2+2xa+a^2 \…$ If mathematics were a language, logic would be the grammar and numbers the alphabet. There are many types of numbers we use in mathematics, but at a broader level we may categorize them into two categories: 1. Countable Numbers 2. Uncountable Numbers. The numbers which can be counted in nature are called Countable Numbers and the numbers which can… Multiplication is probably the most important elementary operation in mathematics; even more important than usual addition. Every math person has their own style of multiplying numbers. But have you ever tried multiplying this way? Exercise: $ 88 \times 45$ = ? Ans: as usual, 3960, but I got this using a particular way: 88 45… Weierstrass had drawn attention to the fact that there exist functions which are continuous for every value of $ x$ but do not possess a derivative for any value. We now consider the celebrated function given by Weierstrass to show this fact.
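The nested-radical teaser above is presumably about identities like Ramanujan's classic $3=\sqrt{1+2\sqrt{1+3\sqrt{1+4\sqrt{\cdots}}}}$, which converges fast enough to check by truncating and evaluating from the inside out (the depth of 40 below is an arbitrary choice):

```python
import math

def ramanujan_radical(n):
    """Evaluate sqrt(1 + 2*sqrt(1 + 3*sqrt(... 1 + n*1 ...))) inside-out."""
    v = 1.0
    for k in range(n, 1, -1):   # k = n, n-1, ..., 2
        v = math.sqrt(1 + k * v)
    return v

# The truncation error shrinks roughly by half per level, so depth 40 is plenty.
assert abs(ramanujan_radical(40) - 3.0) < 1e-9
```

The identity follows from $x+1=\sqrt{1+x(x+2)}$ applied repeatedly, which is exactly the binomial-square observation the excerpt starts from.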
It will be shown that if $ f(x)= \displaystyle{\sum_{n=0}^{\infty} } b^n \cos (a^n \pi x) \ \ldots (1)… Once I listed books on Algebra and Related Mathematics in this article. Since then I have been receiving emails asking for a few more related articles. I have tried to list almost all freely available Calculus texts. Here we go: Elementary Calculus: An Approach Using Infinitesimals by H. J. Keisler; Multivariable Calculus by Jim Herod and George Cain; Calculus by Gilbert Strang… Intro. Let $ \mathbf{Q}$ be the set of rational numbers. It is well known that $ \mathbf{Q}$ is an ordered field, and also that the set $ \mathbf{Q}$ is equipped with a relation called "less than", which is an order relation. Between two rational numbers there exists an infinite number of elements of $ \mathbf{Q}$. Thus, the system of rational numbers seems… Statement. A series $ \sum {u_n}$ of positive terms is convergent if, from and after some fixed term, $ \dfrac {u_{n+1}} {u_n} < r < {1} $, where $r$ is a fixed number. The series is divergent if $ \dfrac{u_{n+1}} {u_n} > 1$ from and after some fixed term. D'Alembert's Test is also known as the ratio test… Topic: Beta & Gamma functions. Statement of Dirichlet's Theorem: $ \iiint_{V} x^{l-1} y^{m-1} z^{n-1} \,dx \,dy \,dz = \frac { \Gamma {(l)} \Gamma {(m)} \Gamma {(n)} }{ \Gamma{(l+m+n+1)} } $, where $V$ is the region given by $ x \ge 0,\ y \ge 0,\ z \ge 0,\ x+y+z \le 1 $. Brief Theory on Gamma and… We all know that the derivative of $x^2$ is $2x$. But what if someone proves it to be just $x$? Consider a sequence of functions as follows: $ f_1 (x) = \sqrt {1+\sqrt {x} } $, $ f_2 (x) = \sqrt{1+ \sqrt {1+2 \sqrt {x} } } $, $ f_3 (x) = \sqrt {1+ \sqrt {1+2 \sqrt {1+3 \sqrt {x} } } } $ … and so on to $ f_n (x) = \sqrt {1+\sqrt{1+2 \sqrt {1+3 \sqrt {\ldots \sqrt {1+n… Real analysis is the branch of Mathematics in which we study the development of the set of real numbers.
We arrive at the real numbers through a series of successive extensions and generalizations starting from the natural numbers. In fact, starting from the set of natural numbers, we pass on successively to the set of integers, the set of rational numbers…
Does anyone here understand why he set the velocity of the center of mass to 0 here? He keeps setting the velocity of the center of mass, and the acceleration of the center of mass (in other questions), to zero, which I don't understand. @amanuel2 Yes, this is a conservation of momentum question. The initial momentum is zero, and since there are no external forces, after she throws the 1st wrench the sum of her momentum plus the momentum of the thrown wrench is zero, and the centre of mass is still at the origin. I was just reading a sci-fi novel where physics "breaks down". While of course fiction is fiction and I don't expect this to happen in real life, when I tried to contemplate the concept I found that I cannot even imagine what it would mean for physics to break down. Is my imagination too limited o... The phase-space formulation of quantum mechanics places the position and momentum variables on equal footing, in phase space. In contrast, the Schrödinger picture uses the position or momentum representations (see also position and momentum space). The two key features of the phase-space formulation are that the quantum state is described by a quasiprobability distribution (instead of a wave function, state vector, or density matrix) and operator multiplication is replaced by a star product. The theory was fully developed by Hilbrand Groenewold in 1946 in his PhD thesis, and independently by Joe...
not exactly identical however Also typo: Wavefunction does not really have an energy, it is the quantum state that has a spectrum of energy eigenvalues Since Hamilton's equation of motion in classical physics is $$\frac{d}{dt} \begin{pmatrix} x \\ p \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \nabla H(x,p) \, ,$$ why does everyone make a big deal about Schrodinger's equation, which is $$\frac{d}{dt} \begin{pmatrix} \text{Re}\Psi \\ \text{Im}\Psi \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \hat H \begin{pmatrix} \text{Re}\Psi \\ \text{Im}\Psi \end{pmatrix} \, ?$$ Oh by the way, the Hamiltonian is a stupid quantity. We should always work with $H / \hbar$, which has dimensions of frequency. @DanielSank I think you should post that question. I don't recall many looked at the two Hamilton equations together in this matrix form before, which really highlight the similarities between them (even though technically speaking the schroedinger equation is based on quantising Hamiltonian mechanics) and yes you are correct about the $\nabla^2$ thing. I got too used to the position basis @DanielSank The big deal is not the equation itself, but the meaning of the variables. The form of the equation itself just says "the Hamiltonian is the generator of time translation", but surely you'll agree that classical position and momentum evolving in time are a rather different notion than the wavefunction of QM evolving in time. If you want to make the similarity really obvious, just write the evolution equations for the observables. The classical equation is literally Heisenberg's evolution equation with the Poisson bracket instead of the commutator, no pesky additional $\nabla$ or what not The big deal many introductory quantum texts make about the Schrödinger equation is due to the fact that their target audience are usually people who are not expected to be trained in classical Hamiltonian mechanics. No time remotely soon, as far as things seem. 
Just the amount of material required for an undertaking like that would be exceptional. It doesn't even seem like we're remotely near the advancement required to take advantage of such a project, let alone organize one. I'd be honestly skeptical of humans ever reaching that point. It's cool to think about, but so much would have to change that trying to estimate it would be pointless currently (lol) talk about raping the planet(s)... re Dyson sphere, solar energy is a simplified version, right? which is advancing. What about orbiting solar-energy harvesting? Maybe not as far away. Kurzgesagt also has a video on a space elevator; it's very hard, but expect that to be built decades earlier, and if it doesn't show up, maybe no hope for a Dyson sphere... o_O BTW @DanielSank Do you know where I can go to wash off my karma? I just wrote a rather negative (though well-deserved, and as thorough and impartial as I could make it) referee report. And I'd rather it not come back to bite me on my next go-round as an author o.o
My university is participating in the implementation of a library borrowing management system at the Richelieu National Library in France. I received the order to formulate the query "find all users having borrowed every book" in relational algebra, in relational calculus and in SQL (which would probably not happen; probably the librarians want to test the limits of the database). The database has the following schema (the primary keys are in bold): Borrowing( People, Book, DateBorrowing, ExpectedReturnDate, EffectiveReturnDate) Lateness( People, Book, DateBorrowing, LatenessFee) I tried $$\Pi_{People}(Borrowing)\div\Pi_{People}(\sigma_{Book} (Borrowing))$$ But it seemed to be wrong, since for $r\div s$ to be defined, $S\subseteq R$ is needed, which seems not to be the case here - but why? I'm still talking about people, aren't I? I then tried the following relational calculus formula: $$\{t.People|Borrowing(t)\wedge(\forall u Borrowing(u)\Rightarrow t.DateBorrowing)\}$$ to find every book that has a borrowing date. I know this calculation is false but I don't know how to do better... Then in SQL: SELECT People FROM Borrowing WHERE FORALL Books EXISTS DateBorrowing That is what I tried, and I know that is not the right way to "find all users having borrowed every book". Can you help me express such a query correctly?
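For reference, one standard way to express relational division in SQL (SQL has no FORALL) is a double NOT EXISTS: "people for whom there is no book they have not borrowed". A runnable sketch using an in-memory SQLite database; the table and column names follow the question, but the sample rows are invented for illustration:

```python
import sqlite3

# Relational division via double NOT EXISTS: keep a person p if there is
# no book in the full book list that p has not borrowed.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Borrowing (People TEXT, Book TEXT, DateBorrowing TEXT)")
con.executemany("INSERT INTO Borrowing VALUES (?, ?, ?)", [
    ("Alice", "B1", "2020-01-01"),   # made-up sample data
    ("Alice", "B2", "2020-01-02"),
    ("Bob",   "B1", "2020-01-03"),
])

query = """
SELECT DISTINCT p.People
FROM Borrowing AS p
WHERE NOT EXISTS (
    SELECT 1
    FROM (SELECT DISTINCT Book FROM Borrowing) AS allbooks
    WHERE NOT EXISTS (
        SELECT 1 FROM Borrowing AS b
        WHERE b.People = p.People AND b.Book = allbooks.Book
    )
)
"""
result = [row[0] for row in con.execute(query)]
# Alice borrowed both B1 and B2; Bob only borrowed B1.
```

The inner subquery plays the role of $\Pi_{Book}(Borrowing)$ in the algebraic division: the relational-algebra version would be $\Pi_{People,Book}(Borrowing)\div\Pi_{Book}(Borrowing)$, which also shows why the divisor must project onto *Book*, not *People*.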
“Irrational numbers are those real numbers which are not rational numbers!” Def.1: Rational Number A rational number is a real number which can be expressed in the form of where $ a$ and $ b$ are both integers relatively prime to each other and $ b$ being non-zero. The following two statements are equivalent to Definition 1. 1. $ x=\frac{a}{b}$… If you are aware of elementary facts of geometry, then you might know that the area of a disk with radius $ R$ is $ \pi R^2$ . The radius is actually the measure (length) of a line joining the center of the disk and any point on the circumference of the disk or any other circular lamina. Radius for a disk… The triangle inequality gets its name from the geometrical fact that the length of one side of a triangle can never be greater than the sum of the lengths of the other two sides of the triangle. If $ a$ , $ b$ and $ c$ are the three sides of a triangle, then neither $ a$ can be greater than $… Ramanujan (1887-1920) discovered some formulas on algebraic nested radicals. This article is based on one of those formulas. The main aim of this article is to discuss and derive them intuitively. Nested radicals have many applications in Number Theory as well as in Numerical Methods. The simple binomial theorem of degree 2 can be written as: $ {(x+a)}^2=x^2+2xa+a^2 \… If mathematics were a language, logic would be the grammar, and numbers would be the alphabet. There are many types of numbers we use in mathematics, but at a broader level we may put them in two categories: 1. Countable Numbers 2. Uncountable Numbers The numbers which can be counted in nature are called Countable Numbers and the numbers which can… Weierstrass had drawn attention to the fact that there exist functions which are continuous for every value of $ x$ but do not possess a derivative for any value. We now consider the celebrated function given by Weierstrass to show this fact.
It will be shown that if $ f(x)= \displaystyle{\sum_{n=0}^{\infty} } b^n \cos (a^n \pi x) \ \ldots (1)… I once listed books on Algebra and Related Mathematics in this article; since then I have been receiving emails requesting a few more related lists. I have tried to list almost all freely available Calculus texts. Here we go: Elementary Calculus : An approach using infinitesimals by H. J. Keisler Multivariable Calculus by Jim Herod and George Cain Calculus by Gilbert Strang… Statement: A series $ \sum {u_n}$ of positive terms is convergent if, from and after some fixed term, $ \dfrac {u_{n+1}} {u_n} < r < {1} $ , where $r$ is a fixed number. The series is divergent if $ \dfrac{u_{n+1}} {u_n} > 1$ from and after some fixed term. D'Alembert's Test is also known as the ratio test… Topic Beta & Gamma functions Statement of Dirichlet's Theorem $ \int \int \int_{V} x^{l-1} y^{m-1} z^{n-1} \, dx \, dy \, dz = \frac { \Gamma {(l)} \Gamma {(m)} \Gamma {(n)} }{ \Gamma{(l+m+n+1)} } $ , where V is the region given by $ x \ge 0$, $y \ge 0$, $z \ge 0$, $x+y+z \le 1 $ . Brief Theory on Gamma and… We all know that the derivative of $x^2$ is $2x$. But what if someone proves it to be just $x$? Consider a sequence of functions as follows: $ f_1 (x) = \sqrt {1+\sqrt {x} } $ $ f_2 (x) = \sqrt{1+ \sqrt {1+2 \sqrt {x} } } $ $ f_3 (x) = \sqrt {1+ \sqrt {1+2 \sqrt {1+3 \sqrt {x} } } } $ ……and so on to $ f_n (x) = \sqrt {1+\sqrt{1+2 \sqrt {1+3 \sqrt {\ldots \sqrt {1+n…
Momentum Overview Momentum is a simple strategy. Look at the past 12 months and identify which stocks went up the most (the winners) and which stocks went down the most (the losers). Suppose we look at all NYSE stocks between 1990 and 2015. Buying the top 10% of winners and shorting the worst 10% of losers yields an average monthly return of about 1% (about 13% annually), while the S&P 500 yielded an average monthly return of about 0.5% (about 6% annually) over the same period. Momentum, however, has a problem - it is susceptible to crashes. Using the guide here I was able to replicate Professor Daniel's momentum portfolios. Using his methodology, I constructed the winner-minus-loser (WML) portfolio described above. Below I've plotted the value of 100 dollars invested in WML and 100 dollars invested in the S&P 500 on January 1st 1990. Although momentum returned more on average, the excess returns were wiped out in 2009 during a large momentum crash. If you have been reading my posts on bubbles, you will also notice there was a momentum crash around the time the Nasdaq “bubble” collapsed (the near-vertical line in 2001). This makes sense. The stocks that went up the most were probably overvalued technology stocks (even on the NYSE), so when these stocks fell, momentum fell with them. Another point to make is the relationship between bubbles and momentum. As I mentioned in Part 2 of the bubbles series, the ADF test is basically a test for return persistence. Momentum relies on return persistence to make money. A more rigorous treatment of this idea will be the topic of a future post. Sharpe Ratio Before getting into the paper, I wanted to review the concept of the Sharpe ratio, as the authors use it to evaluate the performance of their trading strategy. The Sharpe ratio is designed to measure the risk-adjusted return of an asset. Higher ratios indicate higher return for a given level of volatility.
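The sort described above can be sketched in a few lines. The tickers and returns below are made up purely for illustration (a real replication would use the CRSP-style data from the guide):

```python
# A toy version of the winners-minus-losers (WML) sort: rank stocks by
# made-up 12-month past returns, go long the top decile, short the bottom.
past_returns = {
    "AAA": 0.40, "BBB": 0.25, "CCC": 0.18, "DDD": 0.10, "EEE": 0.05,
    "FFF": -0.02, "GGG": -0.08, "HHH": -0.15, "III": -0.22, "JJJ": -0.35,
}

ranked = sorted(past_returns, key=past_returns.get, reverse=True)
decile = max(1, len(ranked) // 10)   # top/bottom 10% of names
winners = ranked[:decile]            # long these
losers = ranked[-decile:]            # short these

# hypothetical next-month returns for the two extreme names
next_month = {"AAA": 0.02, "JJJ": -0.01}
wml = (sum(next_month[t] for t in winners) / decile
       - sum(next_month[t] for t in losers) / decile)
# long the winner, short the loser: the WML payoff here is 0.02 + 0.01
```

With ten names the "decile" is a single stock on each side; with a few thousand NYSE stocks the same code picks a few hundred per leg.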
Higher ratios indicate a more efficient portfolio (in the mean-variance sense) - but this will be the topic of a future post. The Sharpe ratio of asset $i$ is: \begin{equation} \frac{E(R_i)-r_f}{\sigma_i} \end{equation} where $R_i$ denotes returns, $\sigma_i$ denotes the standard deviation of returns, and $r_f$ is the risk-free rate. The Sharpe ratio is a better way to compare assets than average returns, owing to the normalization by volatility. Intuition: an asset that returns 10% a year for sure is preferable to an asset that returns 20% half the time and 0% half the time. Both assets have the same expected return, but the first has a higher Sharpe ratio. Using monthly data between 2010-2015 from FRED, and setting $r_f=0$, I computed Sharpe ratios for 3 popular assets: 1) S&P 500: $\mu$=0.009, $\sigma$=0.0287, Sharpe=0.312 2) BBB Total Return Index: $\mu$=0.005, $\sigma$=0.010, Sharpe=0.442 3) AA Total Return Index: $\mu$=0.004, $\sigma$=0.008, Sharpe=0.433 Even though the S&P average returns are double those of BBB and AA bonds, it has a lower Sharpe ratio, owing to higher volatility. The Paper Summary The authors build a model to explain the poor performance of some high past return (also called “high momentum”) stocks. In the model, agents receive a noisy signal about the value of a risky asset, but short-sale constraints prevent pessimists from selling. This drives the price above the fundamental value in the short run, but eventually, uncertainty is resolved and the asset price goes back down. Note, this has the flavor of bubbles in Barberis et al. (2015), which will be the topic of a future blog post. Empirically, going short the “overpriced” winners and long other winners generates a Sharpe ratio of 1.08 between 1990 and 2015. This is very impressive, considering the Sharpe ratio of the S&P over the same period was less than half that. More importantly, this excess return is not explained by common asset pricing factors, with a Fama-French 3-factor alpha of 2.71% per month.
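The computation itself is one line; a small sketch (using the population standard deviation and leaving the series unannualized - both choices I'm making for illustration, with made-up monthly returns):

```python
import statistics

def sharpe_ratio(returns, risk_free=0.0):
    """Mean excess return divided by the standard deviation of returns."""
    excess = [r - risk_free for r in returns]
    return statistics.mean(excess) / statistics.pstdev(returns)

# made-up monthly returns for illustration
monthly = [0.01, 0.02, 0.03]
s = sharpe_ratio(monthly)   # 0.02 / 0.00816... ≈ 2.45
```

Whether to use `pstdev` or the sample `stdev`, and whether to multiply by $\sqrt{12}$ to annualize, are conventions that vary across papers; what matters for comparing assets is using the same convention for both.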
The Model In the model, there are two types of agents: 1) Passive investors (meant to represent institutional investors) who lend out shares in a competitive market. Their demand is not sensitive to prices. 2) Speculators, who receive a signal about the value of the risky asset. The signal is uniformly distributed around the truth, so the speculators are correct on average. Note that by construction, half of the speculators are “optimists” and half are “pessimists.” Also, the speculators know that they have a different signal than other speculators, but believe their own signal is correct. To short, you have to borrow. If lending supply exceeds lending demand, borrowing is costless, but if not, borrowing is costly (the authors frame this as a search cost). There are three periods in the model: Time 0: No signals received yet, no disagreement about the price, so speculators stay out. Time 1: Speculators receive their signals and enter the market. Time 2: Uncertainty is resolved. At time 1, speculators with a high enough signal buy shares, subject to the speculators' liquidity constraints. Speculators who expect to make money net of shorting costs go short; in other words, a fraction of speculators short. To understand how many “pessimists” stay in the market, we need to understand the dynamics of the shorting cost. The cost of shorting is decreasing in institutional lending supply and increasing in (1) divergence of opinion (the bound of the uniform distribution), (2) the speculators' budget constraint, and (3) the cost of searching. The market-clearing price exceeds the “fundamental” value of 1 whenever the shorting cost is positive, and the model predicts greater overpricing when the supply of lendable shares is restricted. Empirical Work The authors want to figure out which stocks are expensive to short, but do not have data on actual shorting costs. Given the model above, institutional ownership and difference of opinion (defined below) should approximate shorting cost well.
Using data from 1989 to 2014, the authors form 5x5x5 (125 triple-sorted) portfolios based on: 1) Past returns (momentum) 2) Institutional ownership 3) Difference of opinion, measured as a simultaneous increase in short interest and price All of these are calculated over a window ending before portfolio formation, to avoid look-ahead bias. Based on the model, the stocks that are most likely to be overpriced are those with high past returns, low institutional ownership and high difference of opinion. The authors find these stocks lose about 20% of their value in the 4-5 years following portfolio formation. They use this to create a trading strategy: Based on the 5x5x5 sort, there are 25 high momentum portfolios. Go short the high momentum portfolio with the lowest institutional ownership and the highest difference of opinion, and go long the other 24 winner portfolios. To see the power of their strategy, see Figure 5 from their paper, reproduced below. Betting against winners avoids the momentum crashes that can be seen in WML (the orange line), and outperforms many well-known strategies such as Betting Against Beta (BAB - the pink line) and Value (HML - the orange line). Note - the effect of the momentum crash in WML is less dramatic in their figure because they are only forming 5 momentum portfolios, as opposed to 10, which is what I did in the Momentum overview. Discussion The results in the paper are pretty amazing. By doing a simple triple sort on publicly available data (it does not require proprietary data on shorting costs), they are able to avoid the crashes associated with momentum. That being said, there are a few points I wanted to make about the paper: 1) In the model, all traders are equally (un)informed. I think it would be interesting to see if the model predicts an equilibrium price greater than the fundamental price in the presence of some fully informed traders.
The authors claim that adding this wouldn't change the model, as informed traders would only short if it was profitable net of costs (just as speculators with low signals do). I'm not so convinced, however, as uninformed traders might “learn” from the trades of the informed guys, and put less weight on their own signal. 2) I think this raises a bigger issue with any of these types of models - why are uninformed traders in the market in the first place? In this model, all of the “optimistic” traders lose money. In fact, when you account for the distribution of signals and the cost of shorting, all traders (in expectation) lose money. In the real world, I don't think anyone would invest in an asset if they expected to lose money. Perhaps we need a more robust model of entry and exit to account for the presence of these uninformed traders. 3) Data snooping - Any time you form 125 portfolios, you are going to get some that are sparse. Sometimes their portfolios have as few as 16 stocks. I would be curious whether their result is robust to a 3x3x3 sort (only 27 portfolios), as maybe there are a few stocks driving their main result. 4) Feasibility - Their strategy involves shorting these overpriced winners, but the whole point of the paper is that these should be hard to short! Conclusion Despite the issues raised above, I think the paper still provides a compelling explanation for momentum crashes: high shorting costs prevent the market from keeping prices in line with fundamentals.
So, the rotation of a 3d body can be described with Euler's equations of motion, giving the rotational velocity in components along the principal axes of inertia. As shown in, for example, this paper, Euler Top (free asymmetric top): solution of Euler's equations in terms of elliptic integrals, Berry Groisman, Cambridge University, 2014, they can be expressed in terms of the Jacobian elliptic functions sn, cn and dn (in the case of torque-free rotation). I tried to approximate these functions using trigonometric functions, such that: $$ \operatorname{sn}(x,k)=\sin(\operatorname{am}(x,k))\approx\sin\left(\frac{\sin(2u)}{2} C(k)+u\right) $$ where $u=x\times(\pi/\text{Period of }\operatorname{sn}(x,k))$ and $C(k)$ is a function of $k$ which matches how "wide" the sine graph is at the top, and is constant for a given body, so that $u$ is the only variable. However, I cannot find a way to translate these component functions of angular velocity from along the principal axes (body frame) to the inertial frame of reference and then integrate the angular velocity functions to give the position of the body. Does anyone have an idea for a similar method which, just as this one, is not necessarily accurate after a long period of time, but gives a pretty accurate approximation, enough to approximate the free rotation of a rigid body for a relatively short period of time?
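Not the elliptic-function route, but one workable way to handle the missing body-to-inertial step is purely numerical: integrate Euler's equations together with an orientation quaternion $q$ (with $\dot q = \tfrac12\, q \otimes (0,\boldsymbol\omega_{\text{body}})$), then rotate body-frame vectors into the inertial frame via $v_{\text{inertial}} = q\,v_{\text{body}}\,q^*$. The inertia values and initial state below are made up; conservation of kinetic energy and of the inertial-frame angular momentum serves as the accuracy check:

```python
import math

# Free rigid body: Euler's equations for the body-frame angular velocity,
# plus quaternion kinematics for the orientation.  I1, I2, I3 and the
# initial state are arbitrary illustrative values.
I1, I2, I3 = 1.0, 2.0, 3.0

def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(q, v):
    # v_inertial = q (0, v_body) q*
    w, x, y, z = q
    qc = (w, -x, -y, -z)
    return qmul(qmul(q, (0.0,) + tuple(v)), qc)[1:]

def deriv(state):
    w1, w2, w3, q0, q1, q2, q3 = state
    dw1 = (I2 - I3) / I1 * w2 * w3        # torque-free Euler equations
    dw2 = (I3 - I1) / I2 * w3 * w1
    dw3 = (I1 - I2) / I3 * w1 * w2
    dq = qmul((q0, q1, q2, q3), (0.0, w1, w2, w3))   # dq/dt = q*(0,w)/2
    return (dw1, dw2, dw3, 0.5*dq[0], 0.5*dq[1], 0.5*dq[2], 0.5*dq[3])

def rk4(state, dt):
    k1 = deriv(state)
    k2 = deriv(tuple(s + 0.5*dt*k for s, k in zip(state, k1)))
    k3 = deriv(tuple(s + 0.5*dt*k for s, k in zip(state, k2)))
    k4 = deriv(tuple(s + dt*k for s, k in zip(state, k3)))
    return tuple(s + dt*(a + 2*b + 2*c + d)/6
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (1.0, 0.2, 0.1, 1.0, 0.0, 0.0, 0.0)   # omega_body then quaternion
dt = 1e-3
for _ in range(5000):
    state = rk4(state, dt)
    n = math.sqrt(sum(c*c for c in state[3:]))        # renormalize q
    state = state[:3] + tuple(c/n for c in state[3:])

w1, w2, w3 = state[:3]
energy = 0.5*(I1*w1**2 + I2*w2**2 + I3*w3**2)          # should stay 0.555
L_inertial = rotate(state[3:], (I1*w1, I2*w2, I3*w3))  # should stay (1, 0.4, 0.3)
```

Like the sine approximation in the question, this drifts over long times, but for short-time propagation it keeps the body-to-inertial bookkeeping trivial: any body-frame vector (e.g. a principal axis) is carried to the inertial frame by `rotate`.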
Keywords Eventually positive matrix, Eventually exponentially positive matrix, 2-generalized star sign pattern, Checkerboard block sign pattern Abstract A sign pattern is a matrix whose entries belong to the set $\{+, -, 0\}$. An $n$-by-$n$ sign pattern $\mathcal{A}$ is said to be potentially eventually positive if there exists at least one real matrix $A$ with the same sign pattern as $\mathcal{A}$ and a positive integer $k_{0}$ such that $A^{k}>0$ for all $k\geq k_{0}$. An $n$-by-$n$ sign pattern $\mathcal{A}$ is said to be potentially eventually exponentially positive if there exists at least one real matrix $A$ with the same sign pattern as $\mathcal{A}$ and a nonnegative integer $t_{0}$ such that $e^{tA}=\sum_{k=0}^{\infty}\frac{t^{k}A^{k}}{k!}>0$ for all $t\geq t_{0}$. Identifying necessary and sufficient conditions for an $n$-by-$n$ sign pattern to be potentially eventually positive (respectively, potentially eventually exponentially positive), and classifying these sign patterns are open problems. In this article, the potential eventual positivity of the $2$-generalized star sign patterns is investigated. All the minimal potentially eventually positive $2$-generalized star sign patterns are identified. Consequently, all the potentially eventually positive $2$-generalized star sign patterns are classified. As an application, all the minimal potentially eventually exponentially positive $2$-generalized star sign patterns are identified. Consequently, all the potentially eventually exponentially positive $2$-generalized star sign patterns are classified. Recommended Citation Ber-Lin, Yu; Huang, Ting-Zhu; and Sanzhang, Xu (2019), "Potentially Eventually Positive 2-generalized Star Sign Patterns", Electronic Journal of Linear Algebra, Volume 35, pp. 100-115. DOI: https://doi.org/10.13001/1081-3810.3876
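The defining property is easy to probe numerically. A small illustration with a matrix of my own choosing (not from the paper): $A$ has a negative entry, yet $A^k > 0$ entrywise for every $k \ge 2$, so this realization witnesses eventual positivity with $k_0 = 2$:

```python
# Eventual positivity check by direct matrix powers.  The example matrix
# is an illustrative choice, not taken from the paper.
A = [[2.0, 1.0],
     [1.0, -0.5]]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_positive(M):
    return all(entry > 0 for row in M for entry in row)

powers_positive = []
P = A
for k in range(1, 11):
    powers_positive.append(is_positive(P))   # records whether A^k > 0
    P = matmul(P, A)
# A itself is not positive (it contains -0.5), but A^2, A^3, ... are.
```

Of course a finite computation only suggests eventual positivity; the actual criterion is spectral (a simple dominant positive eigenvalue with positive left and right eigenvectors), which is what the paper's classification rests on.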
Introduction Energy and Power Basic Operations Periodic Signals Commonly encountered signals Practice Problems Motivating question Consider a simple circuit where a DC voltage source of $v$ volts is connected to a 1 Ohm resistor. How much power is dissipated by the resistor? It is clear that the power is $v^2/R = v^2$ watts or joules/sec. How much energy is dissipated in the resistor over a 1 minute time duration? Since energy is the integral of power, the energy dissipated is $v^2 \times 60 = 60 v^2$ joules. Now consider the same circuit but with a voltage source whose voltage varies with time, i.e., the voltage at time $t$ is $x(t)$. Let us now consider the question of how much energy is dissipated from the resistor over the entire time interval $(-\infty,\infty)$. At any given time $t$, the power is given by $x^2(t)$ and the overall energy is given by $\int_{-\infty}^{\infty} x^2(t) \ dt$. Notice that we said that the power dissipated at time $t$ is $x^2(t)$, but can we define one value for the power of the entire signal $\underline{x}(t)$? Definition of Energy and Power The energy and power of a CT signal $\underline{x}(t)$ and DT signal $\underline{x}[n]$ are defined as $$E_x = \int_{-\infty}^{\infty} |x(t)|^2 \, dt, \qquad P_x = \lim_{T \to \infty} \frac{1}{2T}\int_{-T}^{T} |x(t)|^2 \, dt \qquad (1)$$ and $$E_x = \sum_{n=-\infty}^{\infty} |x[n]|^2, \qquad P_x = \lim_{N \to \infty} \frac{1}{2N+1}\sum_{n=-N}^{N} |x[n]|^2.$$ These definitions apply to both real and complex signals $x(t)$ and $x[n]$. Power of periodic signals Consider a periodic CT signal $\underline{x}(t)$ with time period $T_0$ such as the example shown in the figure below. For such a signal, the energy of the signal given by $\int_{-\infty}^{\infty} |x(t)|^2 dt$ is infinite. Since the power is the average energy per unit time, the power is given by $$P_x = \frac{1}{T_0}\int_{t_0}^{t_0+T_0} |x(t)|^2 \, dt,$$ where $t_0$ is any arbitrary time instant starting from which we measure the time period. If $x[n]$ is a periodic DT signal with time period $N_0$, $$P_x = \frac{1}{N_0}\sum_{n=n_0}^{n_0+N_0-1} |x[n]|^2.$$ Energy as the strength of a signal Even though we used the circuit example as a motivation to define the energy of a signal, the definition of energy is not confined only to signals which can be interpreted as a voltage waveform.
Rather, the energy of a signal can be used as a measure of the strength of a signal. Often, we encounter situations where we would like to measure the strength of a signal or compare the strengths of two signals, and the energy of the signal provides a quantitative measure of this strength. The above definition of energy is indeed only one of many possible choices, and there are other ways to define the energy or strength of a signal. For example, one can look at the maximum value taken by the signal as one measure of strength; one can look at the sum of the absolute values of a DT signal as another choice. All these measures are meaningful, and we must choose the measure appropriate to the decision we would like to make. The energy of a signal defined as in (1) is commonly used, and in this course, this will be our default definition of energy. The definition of energy is closely related to what is known in mathematics as the norm of a vector. Energy type and power type signals $x(t)$ is an energy type signal if $0<E_x<\infty$ $x(t)$ is a power type signal if $0<P_x<\infty$ Clearly, for any nonzero periodic signal $E_x$ is not bounded and hence, periodic signals cannot be energy type signals. If the energy within one period of the signal is bounded, then the power will be bounded and hence, such signals will be power type signals. Example Problems Example 1: Consider the signal given below. Is this a power or energy type signal? Example 2: Let $x(t) = A \cos\left(\omega_0t+ \theta\right)$. Is this a power signal or energy signal? What is the power of the signal $x(t) = e^{j\omega_0t}$ (where $T_0=\frac{2\pi}{\omega_0}$)? What is the energy of the signal $x[n]$ given by
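Example 2 can be checked numerically: the power of $x(t) = A\cos(\omega_0 t + \theta)$ over one period is $A^2/2$, independent of $\omega_0$ and $\theta$. A sketch with made-up parameter values, approximating the one-period average by a Riemann sum:

```python
import math

# Numerical check: the power of x(t) = A cos(w0 t + theta) equals A^2/2.
A, w0, theta = 3.0, 2*math.pi*5, 0.7   # illustrative values
T0 = 2*math.pi / w0                    # one period

N = 100000                             # Riemann-sum resolution
dt = T0 / N
power = sum((A*math.cos(w0*k*dt + theta))**2 for k in range(N)) * dt / T0
# power ≈ A**2 / 2 = 4.5
```

Changing `w0` or `theta` leaves `power` unchanged, which is exactly why sinusoids are power type signals: the per-period average is a fixed number while the total energy diverges.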
But if you don't want to have a Google account: Chrome is really good. Much faster than FF (I can't run FF on either of the laptops here) and more reliable (it restores your previous session if it crashes, with 100% certainty). And Chrome has a Personal Blocklist extension which does what you want. : ) Of course you already have a Google account but Chrome is cool : ) Guys, I feel a little defeated in trying to understand infinitesimals. I'm sure you all think this is hilarious. But if I can't understand this, then I'm yet again stalled. How did you guys come to terms with them, later in your studies? Do you know the history? Calculus was invented based on the notion of infinitesimals. There were serious logical difficulties found in it, and a new theory developed based on limits. In modern times, using some quite deep ideas from logic, a new rigorous theory of infinitesimals was created. @QED No. This is my question as best as I can put it: I understand that $\lim_{x\to a} f(x) = f(a)$, but then to say that the gradient of the tangent curve is some value is like saying that when $x=a$, then $f(x) = f(a)$. The whole point of the limit, I thought, was to say, instead, that we don't know what $f(a)$ is, but we can say that it approaches some value. I have a problem showing that the limit of the following function$$\frac{\sqrt{\frac{3 \pi}{2n}} -\int_0^{\sqrt 6}(1-\frac{x^2}{6}+\frac{x^4}{120})^ndx}{\frac{3}{20}\frac 1n \sqrt{\frac{3 \pi}{2n}}}$$equals $1$ as $n \to \infty$. @QED When I said, "So if I'm working with function f, and f is continuous, my derivative dy/dx is by definition not continuous, since it is undefined at dx=0," I guess what I'm saying is that (f(x+h)-f(x))/h is not continuous since it's not defined at h=0. @KorganRivera There are lots of things wrong with that: dx=0 is wrong. dy/dx - what's y? "dy/dx is by definition not continuous" - it's not a function, so how can you ask whether or not it's continuous, ... etc.
In general this stuff with 'dy/dx' is supposed to help as some kind of memory aid, but since there's no rigorous mathematics behind it, all it's going to do is confuse people. In fact, there was a big controversy about it, since using it in obvious ways suggested by the notation leads to wrong results. @QED I'll work on trying to understand that the gradient of the tangent is the limit, rather than that the gradient of the tangent approaches the limit. I'll read your proof. Thanks for your help. I think I just need some sleep. O_O @NikhilBellarykar Either way, don't highlight everyone and ask them to check out some link. If you have a specific user who you think can say something in particular, feel free to highlight them; you may also address "to all", but don't highlight several people like that. @NikhilBellarykar No. I know what the link is. I have no idea why I am looking at it, what I should do about it, and frankly I have enough as it is. I use this chat to vent, not to exercise my better judgment. @QED So now it makes sense to me that the derivative is the limit. What I think I was doing in my head was saying to myself that g(x) isn't continuous at x=h, so how can I evaluate g(h)? But that's not what's happening. The derivative is the limit, not g(h). @KorganRivera, in that case you'll need to be proving $\forall \varepsilon > 0,\,\,\,\, \exists \delta,\,\,\,\, \forall x,\,\,\,\, 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon$ by picking some correct L (somehow). Hey guys, I have a short question a friend of mine asked me which I cannot answer because I have not learnt about measure theory (or whatever is needed to answer the question) yet. He asks what is wrong with $\int_0^{2 \pi} \frac{d}{dn} e^{inx} \, dx$ when he applies Lebesgue's dominated convergence theorem, because apparently, if he first integrates and then differentiates, the result is 0, but if he first differentiates and then integrates, it's not 0. Does anyone know?
Mathematical Induction Inequality Proofs Mathematical induction is often used for proving inequalities. Such proofs typically work with differences and/or size comparisons, using the assumption made at Step 2. Let's take a look at the following hand-picked examples. Practice Questions for Mathematical Induction Inequality Basic Mathematical Induction Inequality Prove \( 4^{n-1} \gt n^2 \) for \(n \ge 3\) by mathematical induction. Step 1: Show it is true for \( n=3 \). LHS \(=4^{3-1} = 16 \) RHS \(=3^2=9 \) LHS > RHS Therefore it is true for \( n=3 \). Step 2: Assume that it is true for \( n=k \). That is, \( 4^{k-1} > k^2 \). Step 3: Show it is true for \( n=k+1 \). That is, \( 4^{k} > (k+1)^2 \). \( \begin{aligned} \displaystyle \require{color} \text{LHS } &= 4^k \\ &= 4^{k-1+1} \\ &= 4^{k-1} \times 4 \\ &\gt k^2 \times 4 &\color{red} \text{by the assumption } 4^{k-1} > k^2 \\ &= k^2 + 2k^2 + k^2 \\ &\gt k^2 + 2k + 1 &\color{red} 2k^2 > 2k \text{ and } k^2 > 1 \text{ for } k \ge 3 \\ &= (k+1)^2 \\ &=\text{RHS} \\ \text{LHS } &\gt \text{ RHS} \end{aligned} \) Therefore it is true for \( n=k+1 \) assuming that it is true for \( n=k \). Therefore \( 4^{n-1} \gt n^2 \) is true for \( n \ge 3 \). Mathematical Induction Inequality using the Difference It is quite often used to prove \( A > B \) by \( A-B >0 \). Prove \( n^2 \lt 2^n \) for \( n \ge 5 \) by mathematical induction. Step 1: Show it is true for \( n=5 \). LHS \( = 5^2 = 25 \) RHS \( = 2^5 = 32 \) LHS \( \lt \) RHS It is true for \( n=5 \). Step 2: Assume that it is true for \( n=k \). That is, \( k^2 \lt 2^k \). Step 3: Show it is true for \( n=k+1 \). That is, \( (k+1)^2 \lt 2^{k+1}.
\) \( \begin{aligned} \displaystyle \require{color} \text{RHS } - \text{ LHS } &= 2^{k+1} - (k+1)^2 \\ &= 2 \times 2^k - (k^2+2k+1) \\ &\gt 2 \times k^2 - (k^2+2k+1) &\color{red} \text{ by the assumption from Step 2} \\ &= k^2 -2k -1 \\ &= (k-1)^2 -2 \\ &\gt 0 &\color{red} \text{, since } k \ge 5 \text{ and so } (k-1)^2 \ge 16 \\ 2^{k+1} - (k+1)^2 &\gt 0 \\ (k+1)^2 &\lt 2^{k+1} \\ \end{aligned} \) Therefore it is true for \( n=k+1 \) assuming it is true for \( n=k \). Therefore \( n^2 \lt 2^n \) is true for \( n \ge 5 \). Related Topics Best Examples of Mathematical Induction Divisibility Mathematical Induction Fundamentals Mathematical Induction Inequality Proof with Factorials Mathematical Induction Inequality Proof with Two Initials
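Both inequalities can also be spot-checked numerically over a finite range; this of course illustrates, but does not replace, the induction proofs:

```python
# Spot-check of the two inequalities proved above, from their base cases
# up to an arbitrary finite bound.
first_holds = all(4**(n-1) > n**2 for n in range(3, 200))   # 4^(n-1) > n^2
second_holds = all(n**2 < 2**n for n in range(5, 200))      # n^2 < 2^n
```

Note that the base cases matter: the second inequality fails at \( n = 4 \) (where \( 16 = 16 \)), which is why its induction starts at \( n = 5 \).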
Let $(x_n)_{n\in\mathbb N}$ be a recursively defined sequence with $x_1=9$ and $$x_{n+1}=\frac{x_n}{2}+\frac{1}{x_n}\text{ for }n\geq 1.$$ Show that $x_n\geq\sqrt{2}$ for all $n$. Because $x_n\geq 0$ one can easily prove inductively that $$x_n^2\geq 2\Leftrightarrow x_n^2-2\geq 0\Leftrightarrow\left(\frac{x_{n-1}}{2}-\frac{1}{x_{n-1}}\right)^2\geq 0,$$ hence $x_n\geq\sqrt{2}$. However, I have seen another approach which I was very curious about, because I don't have the feeling that this can be done without further justification beyond the induction hypothesis: $$x_{n+1}=\frac{x_n}{2}+\frac{1}{x_n}\overset{(*)}{\geq}\frac{\sqrt{2}}{2}+\frac{1}{\sqrt{2}}=\sqrt{2}$$ For $(*)$ it is assumed that $x_n\geq\sqrt{2}$ holds for an $n\in\mathbb N$. Are there any objections to this consideration?
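A quick numeric iteration (illustrative only, not a proof) is consistent with the claim. It also shows why the bound is sharp: the recursion is exactly Newton's method for $x^2 = 2$, since $x - \frac{x^2-2}{2x} = \frac{x}{2} + \frac{1}{x}$, so from $x_1 = 9$ the sequence decreases toward $\sqrt 2$ without ever crossing it:

```python
import math

# Iterate x_{n+1} = x_n/2 + 1/x_n from x_1 = 9 and record the terms.
x = 9.0
terms = [x]
for _ in range(60):
    x = x/2 + 1/x
    terms.append(x)
# every term stays >= sqrt(2), and the tail converges to sqrt(2)
```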
I'm trying to find the asymptotes of $f(x) = \arcsin(\frac{2x}{1+x^2})$. I've found that this function has no vertical asymptote, since $f$ is bounded between $[-\pi/2 , \pi/2 ]$, and since $\arcsin x$ is continuous where it is defined - for every $x_0 \in \mathbb{R}$, $\lim_{x\to x_0^+}|f(x)| = |f(x_0)| \neq \infty $. Hopefully this one is correct; please correct me if it isn't. I think I'm wrong in the calculation of the horizontal asymptotes: if $y=ax+b$ is an asymptote at $\infty$, then $a = \lim_{x\to\infty}\frac{f(x)}{x} = 0$. Now, $b= \lim_{x\to\infty}(f(x)-ax) = \lim_{x\to\infty}f(x) = 0$. So I'm getting that this function has no vertical asymptotes, which I guess is correct, but I also get $y=0$ as a horizontal asymptote, which I'm pretty sure is wrong... Where is my mistake?
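A quick numeric check (illustrative, not a proof) actually supports the computation: as $x$ grows, $\frac{2x}{1+x^2} \to 0$, so $f(x) \to \arcsin(0) = 0$, and $y = 0$ really does look like the horizontal asymptote at $+\infty$:

```python
import math

# Evaluate f(x) = arcsin(2x / (1 + x^2)) at growing x.
def f(x):
    return math.asin(2*x / (1 + x*x))

values = [f(10.0**k) for k in range(1, 7)]   # x = 10, 100, ..., 1e6
# the values decrease monotonically toward 0 from above
```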
I have a backward parabolic equation of the form: \begin{equation} W_{\eta} + aW_{xx} - bW = 0 \end{equation} s.t. \begin{equation} \lim_{\eta \rightarrow \infty} W(x,\eta) = g(x) \end{equation} where $x \in \mathbb{R}$, $\eta \geqslant 0$, and $a,b$ are positive constants. Applying the following transformations: \begin{align} W(x,\eta) &= U(x,t)e^{b\eta} \\ t &= a\eta \end{align} we would get the backward heat equation below \begin{equation} U_{t} = - U_{xx} \end{equation} However, the transversality condition becomes a problem, since as $\eta \rightarrow \infty$, $e^{b\eta} \rightarrow \infty$. Usually, if the terminal condition is of the form \begin{equation} W(x,H) = g(x) \end{equation} with $H$ finite, we could "reverse" it; that is, we could apply the following transformation: \begin{equation} \nu = H - \eta \end{equation} to obtain \begin{equation} -W_{\nu} + aW_{xx} - bW = 0 \end{equation} s.t. \begin{equation} W(x,0) = g(x) \end{equation} which we can solve the traditional way (Fourier transform). However, as my terminal condition happens only at infinity, I can't apply the reverse transformation above, and thus I don't know how to overcome this problem. Any hint or reference?
I have this limit: $$\lim_{x\to \infty} (e^x+x)^{\frac{1}{x}}$$ At first I was stumped, but then decided to use L'Hôpital's rule and logs, so it turns into: $$\lim_{x\to \infty} \frac{\ln(e^x+x)}{x}$$ Then differentiating it twice turns it into: $$\lim_{x\to \infty} \frac{e^x}{e^x+1}$$ But then this means $\lim_{x\to \infty} \frac{e^x}{e^x+1}=1$, but I know from trying values on my calculator that it should be equal to $e$. Am I wrong, or am I getting mixed up with L'Hôpital's rule? Thank you!
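A numeric check makes the resolution visible: the value 1 is the limit of the *logarithm*, so the original limit is $e^1 = e$, which is what the calculator was showing:

```python
import math

# g(x) = ln(e^x + x) / x is the log of the original expression.
def g(x):
    return math.log(math.exp(x) + x) / x

log_values = [g(x) for x in (10.0, 50.0, 200.0)]   # tends to 1
limit_estimate = math.exp(g(200.0))                # tends to e
```

In other words, there is no contradiction between the L'Hôpital computation and the calculator; the final exponentiation step was simply missing.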
Taiwanese Journal of Mathematics Taiwanese J. Math. Volume 22, Number 1 (2018), 225-244. Quantitative Recurrence Properties for Systems with Non-uniform Structure Abstract Let $X$ be a subshift with non-uniform structure, and $\sigma \colon X \to X$ be a shift map. Further, define \[ R(\psi) := \{x \in X: d(\sigma^{n}x,x) \lt \psi(n) \textrm{ for infinitely many } n\} \] and \[ R(f) := \left\{ x \in X: d(\sigma^{n}x,x) \lt e^{-S_{n} f(x)} \textrm{ for infinitely many } n \right\}, \] where $\psi \colon \mathbb{N} \to \mathbb{R}^{+}$ is a nonincreasing and positive function and $f \colon X \to \mathbb{R}^{+}$ is a continuous positive function. In this paper, we give quantitative estimates of the above sets, that is, $\dim_{H} R(\psi)$ can be expressed by $\psi$ and $\dim_{H} R(f)$ is the solution of the Bowen equation of topological pressure. These results can be applied to a large class of symbolic systems, including $\beta$-shifts, $S$-gap shifts, and their factors. Article information Source Taiwanese J. Math., Volume 22, Number 1 (2018), 225-244. Dates Received: 9 March 2017 Revised: 6 April 2017 Accepted: 11 April 2017 First available in Project Euclid: 17 August 2017 Permanent link to this document https://projecteuclid.org/euclid.twjm/1502935241 Digital Object Identifier doi:10.11650/tjm/8071 Mathematical Reviews number (MathSciNet) MR3749362 Zentralblatt MATH identifier 06965367 Citation Zhao, Cao; Chen, Ercai. Quantitative Recurrence Properties for Systems with Non-uniform Structure. Taiwanese J. Math. 22 (2018), no. 1, 225--244. doi:10.11650/tjm/8071. https://projecteuclid.org/euclid.twjm/1502935241
Author Archives: eas5828 At the beginning of the semester, I believed in climate change but was not fully aware of how serious it is or the ways people are trying to prevent it from getting worse. Between the lessons taught in class and … Continue reading In the article “Federal Agencies Deliver Blunt Report on Human-Caused Climate Change”, a recent report by the federally funded U.S. Global Change Research Program is summarized which contradicts what President Trump and his administration support. The report claims that the … Continue reading With all the technology and modern advances available in our society today, it is no surprise that “phantom” or “vampire” devices are a supposed issue. These are technological devices that still use energy while being plugged in but not used. … Continue reading A new possible renewable energy source has been discovered by a biophysicist at Columbia University. Ozgur Sahin has been studying evaporation and its link to electric generation. In an interview with Yale Environment 360, he talks about his recent work. … Continue reading \(x^3-3x^2-10x = 0 \) \((1+r)^n \) \((5.7\times 10^{-8})\times (1.6\times 10^{12}) = 9.12\times 10^4 \) \( \pi L (1- \alpha) R^2 = 4\pi\sigma T^4 R^2 \) \( 12\text{km}\times\frac{0.6\text{mile}}{1\text{km}}\approx 7.2\text{mile} \) \[12\text {km}\times\frac{0.6\text{mile}}{1\text{km}}\approx 7.2\text{mile} \] \( 4173445346.50\approx 4200000000=4.2\times 10^9 \) \[ 50\text{m}\times\frac{3.4\times … Continue reading
Introduction Energy and Power Basic Operations Periodic Signals Commonly encountered signals Practice Problems What is a signal? The word 'signal' has been used in different contexts in the English language and it has several different meanings. In this class, we will use the term signal to mean a function of an independent variable that carries some information or describes some physical phenomenon. Often (not always) the independent variable will be time, and the signals will describe phenomena that change with time. Such a signal can be denoted by ${x}(t)$, where $t$ is the independent variable and ${x}(t)$ denotes the function of $t$. Notice that this is slightly in contrast to the notation that you may be used to from your calculus courses. There, you may have used $y=f(x)$ to denote a function of $x$, where $x$ is the independent variable and $y$ is the dependent variable. In this course, since signals will be referred to as ${x}(t)$, ${x}$ typically refers to the dependent variable. Here are two examples of such signals. We will also encounter signals that describe some phenomena that change with frequency. Such signals will be denoted by $X(\omega)$, where $\omega$ is the independent variable and the dependent variable $X$ changes with frequency. Here is an example. This notation, even though fairly standard in the literature, is potentially confusing since $x(t)$ is used to refer to two related but different things. Consider the sentence "A recording of John's speech will be denoted by $x(t)$ and a recording of Adele's music will be denoted by $y(t)$". Here $x(t)$ and $y(t)$ refer to the entire signals, i.e., the audio waveforms. However, if you consider the sentence "find all values of $t$ for which $x(t) < 2$", here $x(t)$ refers to the value taken by the signal at time $t$. To elaborate further, it is the function $x$ evaluated at time $t$. In Example 2 above, $x(\pi)= \pi \cos\pi = -\pi$.
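The signal-versus-value distinction can be made concrete in code. This is an illustrative sketch, assuming the Example 2 signal is $x(t) = t\cos t$, as the evaluation $x(\pi) = \pi\cos\pi$ in the text suggests:

```python
import math

# The signal is the function itself; x(t0) is its *value* at one instant.
# Assumed signal from Example 2: x(t) = t * cos(t).
def x(t):
    return t * math.cos(t)

value_at_pi = x(math.pi)   # the value of the signal at t = pi
print(value_at_pi)         # approximately -pi
```

Here the name `x` denotes the whole signal, while `x(math.pi)` is a single number, mirroring the two uses of $x(t)$ discussed above.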
This terminology is fairly standard in all textbooks, but in my opinion this leads to confusion. Therefore, we will use underlined variables to denote signals, and variables without underlines will refer to values of the signals. With this notation, the signal will be denoted by $\underline{x}(t)$ and the value taken by this signal at time $t$ will be denoted by $x(t)$. Continuous-time (CT) and Discrete-time (DT) signals We will encounter two classes of signals in this course. The first class consists of signals for which the independent variable changes in a continuous manner or, equivalently, the signal $\underline{x}(t)$ is defined for every real value (or a continuum of values) of $t$ in the range $(a,b)$ ($a$ can be $-\infty$ and $b$ can be $\infty$). The two examples considered above are examples of CT signals. In contrast, we will also be interested in signals which are defined only for integer values of the independent variable. These signals are called discrete-time (DT) signals and will be denoted by $\underline{x}[n]$. Such signals arise in two situations: (i) the phenomenon that is being modeled is naturally one for which the independent variable takes only integer values, or (ii) we obtain a DT signal from a CT signal by 'sampling' the CT signal. For example, we can choose to keep only the values of the signal $\underline{x}(t)$ at time instants $nT_s, \forall n$, for a fixed sampling interval $T_s$. From the sampled values we can construct a DT signal $\underline{x}[n]$ by assigning $x[n] = x(nT_s)$. The following two examples elaborate on these two methods. How to specify or describe signals There are two ways in which we will specify or describe signals in this course. The first way is to provide an explicit mathematical description of the signals, such as $x(t) = \sin(200\pi t)$ or $x(t) = e^{-t}$. Sometimes, these signals may have to be described piecewise.
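Sampling can be sketched directly from the definition $x[n] = x(nT_s)$. A minimal Python illustration, using the $\sin(200\pi t)$ signal mentioned above and an arbitrarily chosen sampling interval $T_s = 1/1000$ s:

```python
import math

# Sample the CT signal x(t) = sin(200*pi*t) every Ts seconds
# to obtain the DT signal x[n] = x(n*Ts).
Ts = 1.0 / 1000.0   # arbitrary sampling interval for this sketch

def x_ct(t):
    return math.sin(200 * math.pi * t)

def x_dt(n):
    return x_ct(n * Ts)

samples = [x_dt(n) for n in range(10)]
```

With this choice, $x[n] = \sin(0.2\pi n)$, so the DT signal repeats every 10 samples.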
Often, it will be easier to describe signals by sketching the function described by the signals or "drawing a picture of the signal". One of the skills that a student should develop from this part of the course is to be able to write a mathematical description for a signal defined pictorially, and vice versa. The following examples illustrate these ideas. Practical Examples Signals are everywhere in modern life. Here are a few examples. MATLAB Exercises Exercise 1 - Create your own audio file that is at least 4 seconds long (the exact time duration is not really important, but do not make the file too long). Use the wavrecord command in MATLAB and a sampling frequency of 10000 Hz. You can also try to find a wav file online. Here is one that I like: samplewavfile. Use the sound command in MATLAB to play the sound. Make sure the recording is fine. Exercise 2 - Using the wavread command, read the signal into a vector called x. Also read the sampling frequency into a variable called Fs. Make sure you understand what this sampling frequency means. Plot the received signal as a function of time. Your time axis must have units in seconds. Exercise 3 - Using the stem command in MATLAB, plot the signal. What is the difference between this plot and the plot in Example 2?
Griffiths's Introduction to Electrodynamics states $$\mathcal E = \oint \mathbf f \cdot d\mathbf l$$ in which $$\mathbf f = \mathbf f_s + \mathbf E$$ where Griffiths describes the two terms as the source, $\mathbf f_s$, which is ordinarily confined to one portion of the loop (a battery, say), and an electrostatic force, which serves to smooth out the flow and communicate the influence of the source to distant parts of the circuit. Is $\mathbf E$ the force that prevents charge from clumping up and producing a buildup of charge in a part of the circuit (which would produce a non-steady current)? Anyway, in magnetostatic situations, I agree with the following: $$\oint \mathbf f \cdot d\mathbf l = \oint \mathbf f_s \cdot d\mathbf l$$ However, suppose $\frac{\partial \mathbf B}{\partial t} \ne 0$. This would imply: $$\mathcal E = \oint \mathbf f \cdot d\mathbf l = \oint (\mathbf f_s + \mathbf E) \cdot d\mathbf l = \oint \mathbf f_s \cdot d\mathbf l + \oint \mathbf E \cdot d\mathbf l$$ which, by Faraday's law $\oint \mathbf E \cdot d\mathbf l = -\frac{d\Phi_B}{dt}$ (with $\Phi_B$ the magnetic flux through the loop), implies $$\mathcal E = \oint \mathbf f_s \cdot d\mathbf l - \frac{d\Phi_B}{dt}$$ I don't know what this means though, if it's significant. What does this mean, if what I did was sensible? In addition, is my mistake that, since the sole purpose of $\mathbf E$ in the equation for $\mathbf f$ was to prevent charge clumping up, for non-steady currents this $\mathbf E$ term cannot exist and thus in that case $\mathbf f = \mathbf f_s$?
Stein's Example shows that the maximum likelihood estimate of $n$ normally distributed variables with means $\mu_1,\ldots,\mu_n$ and variances $1$ is inadmissible (under a square loss function) iff $n\ge 3$. For a neat proof, see the first chapter of Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction by Bradley Efron. This was highly surprising to me at first, but there is some intuition behind why one might expect the standard estimate to be inadmissible (most notably, if $x \sim \mathcal N(\mu,1)$, then $\mathbb{E}\|x\|^2\approx \|\mu\|^2+n$, as outlined in Stein's original paper, linked to below). My question is rather: What property of $n$-dimensional space (for $n\ge 3$) does $\mathbb{R}^2$ lack which facilitates Stein's example? Possible answers could be about the curvature of the $n$-sphere, or something completely different. In other words, why is the MLE admissible in $\mathbb{R}^2$? Edit 1: In response to @mpiktas's concern about 1.31 following from 1.30: $$E_\mu\left(\|z-\hat{\mu}\|^2\right)=E_\mu\left(S\left(\frac{N-2}{S}\right)^2\right)=E_\mu\left(\frac{(N-2)^2}{S}\right).$$ $$\hat{\mu_i} = \left(1-\frac{N-2}{S}\right)z_i$$ so $$E_\mu\left(\frac{\partial\hat{\mu_i}}{\partial z_i} \right)=E_\mu\left( 1-\frac{N-2}{S}+2(N-2)\frac{z_i^2}{S^2}\right).$$ Therefore we have: $$2\sum_{i=1}^N E_\mu\left(\frac{\partial\hat{\mu_i}}{\partial z_i} \right)=2N-2E_\mu\left(\frac{N(N-2)}{S}\right)+4E_\mu\left(\frac{(N-2)}{S}\right)\\=2N-E_\mu\frac{2(N-2)^2}{S}.$$ Edit 2: In this paper, Stein proves that the MLE is admissible for $N=2$.
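The inadmissibility is easy to see empirically. The sketch below is my own Monte Carlo illustration (not from Efron's book): it compares the squared-error risk of the MLE ($\hat\mu = z$) with the James-Stein estimator $\hat\mu = (1 - (N-2)/\|z\|^2)\,z$ at $\mu = 0$ in dimension $N = 10$, where the gap is largest.

```python
import random

# Monte Carlo risk comparison at mu = 0, N = 10.
# Theory: MLE risk = N = 10; James-Stein risk = N - (N-2)^2 E[1/S] = 2.
random.seed(0)
N, trials = 10, 2000
mle_risk = js_risk = 0.0
for _ in range(trials):
    z = [random.gauss(0.0, 1.0) for _ in range(N)]
    S = sum(v * v for v in z)
    shrink = 1.0 - (N - 2) / S          # James-Stein shrinkage factor
    mle_risk += S                        # ||z - 0||^2
    js_risk += sum((shrink * v) ** 2 for v in z)
mle_risk /= trials
js_risk /= trials
```

The estimated JS risk comes out near 2, far below the MLE's risk of about 10; for $\mu$ far from the origin the two risks become comparable, but JS never does worse.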
In this paper, we prove the degenerations of Schubert varieties in a minuscule G/P, as well as the class of Kempf varieties in the flag variety SL(n)/B, to (normal) toric varieties. Well known wonderful G-varieties are those of rank zero, namely the generalized flag varieties G/P, those of rank one, classified in [A], and certain complete symmetric varieties described in [DP], such as the famous space of complete conics. In this paper we compute the cohomology with trivial coefficients for the Lie superalgebras psl(n, n), p(n) and q(2n); we show that the cohomology ring of q(2n+1) is of Krull dimension 1, and we calculate the ring for q(3) and q(5). As a corollary we obtain a f·g·p·d·f subgroup of SLn(?) (n ≧ 3). More generally, we prove that if Γ is an irreducible arithmetic non-cocompact lattice in a higher rank group, then Γ contains f·g·p·d·f groups. In the last section we give an exposition of results, communicated to us by J.-P. We prove that the moduli space of mathematical instanton bundles on P3 with c2 = 5 is smooth. We compute the ring of ${\mbox{\rm SL}}(2,{\mbox{\bf R}})$-invariants in the ring of polynomial functions, ${\mathcal P}$, on ${\mathcal A}$. We show that the absolute invariants (i.e., the ${\mbox{\rm GL}}(2, {\mbox{\bf R}})$-invariants in the field of fractions of ${\mathcal P}$) distinguish the isomorphism classes of 2-dimensional non-associative real division algebras. Let G be a simple algebraic group over the algebraically closed field k of characteristic p ≥ 0. In case p > 0, assume G is defined and split over the finite field of p elements Fp. Let q be a power of p and let G(q) be the finite group of Fq-rational points of G. Assume B is F-stable, so that U is also F-stable and U(q) is a Sylow p-subgroup of G(q).
It is proved that for any prime $p\geqslant 5$ the group $G_2(p)$ is a quotient of $(2,3,7;2p) = \langle X,Y: X^2=Y^3=(XY)^7 =[X,Y]^{2p}=1 \rangle.$ Given integers n,d,e with $1 \leqslant e < \frac{d}{2},$ let $X \subseteq {\Bbb P}^{\binom{d+n}{d}-1}$ denote the locus of degree d hypersurfaces in ${\Bbb P}^n$ which are supported on two hyperplanes with multiplicities d-e and e. For a finite-dimensional representation $\rho: G \rightarrow \mathrm{GL}(M)$ of a group G, the diagonal action of G on $M^p,$ p-tuples of elements of M, is usually poorly understood. Let k be an algebraically closed field of characteristic p ≥ 0. This result is not true when char k = p > 0, even in the case where H is a torus. However, we show that the algebra of invariants is always the p-root closure of the algebra of polarized invariants. Let p be a prime and let V be a finite-dimensional vector space over the field $\mathbb{F}_p$.
I'm looking for cases like $$\lim_{x \to 0} \frac {1-\cos(x)}{x^2}$$ that will not give you the answer the first time you use L'Hôpital's rule on them. For example in this case it will result in a number $\frac{1}{2}$ the second time you use L'Hôpital's rule. I want examples of limits like $\lim_{x \to c} \frac {f(x)}{g(x)}$ so that you have to use L'Hôpital's rule $5$ times, $18$ times, or say $n$ times on them to get an answer. Another question is about the case in which you use L'Hôpital's rule as many times as you want but you always end with $\lim_{x \to 0} \frac {0}{0}$. Does this case exist? Sure. Do you want $18$ times? Then consider the limit$$\lim_{x\to0}\frac{x^{18}}{x^{18}}$$or the non-trivial example$$\lim_{x\to0}\frac{\sin(x^{18})}{1-\cos(x^9)}.$$For the case in which you always get $\frac00$, consider the function$$\begin{array}{rccc}f\colon&\mathbb{R}&\longrightarrow&\mathbb{R}\\&x&\mapsto&\begin{cases}e^{-1/x^2}&\text{ if }x\neq0\\0&\text{ if }x=0\end{cases}\end{array}$$and the limit$$\lim_{x\to0}\frac{f(x)}{f(x)}$$or the non-trivial example$$\lim_{x\to0}\frac{f(x)}{f(x^2)}.$$ A couple of rather famous limits that each require 7 applications of L’Hôpital’s rule (unless evaluated by another method) are $$ \lim_{x \rightarrow 0} \,\frac{\tan{(\sin x)} \; - \; \sin{(\tan x)}}{x^7} \;\;\; \text{and} \;\;\; \lim_{x \rightarrow 0} \, \frac{\tan{(\sin x)} \; - \; \sin{(\tan x)}}{\arctan{(\arcsin x)} \; - \; \arcsin{(\arctan x)}} \;\; $$ These two limits are discussed in the chronologically listed references below, with [11] being a generalization of the tan/sin and arctan/arcsin version. (Both [10] and [11] were brought to my attention by user21820.) 
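A quick numerical check of the non-trivial 18-application example (my own sketch, not from the answer): since $1-\cos u \sim u^2/2$, the denominator behaves like $x^{18}/2$, so the ratio should approach 2.

```python
import math

# Numerically probe lim_{x->0} sin(x^18)/(1 - cos(x^9)).
# sin(x^18) ~ x^18 and 1 - cos(x^9) ~ x^18/2, so the limit is 2.
def ratio(x):
    return math.sin(x**18) / (1.0 - math.cos(x**9))

print(ratio(0.5))
```

Evaluating at a moderately small x (not too small, to avoid catastrophic cancellation in `1 - cos`) already agrees with 2 to several digits.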
Another limit that requires 7 applications of L'Hôpital's rule is the following, which I mentioned (in an incorrect way, however) at the end of [6]: $$ \lim_{x \rightarrow 0} \,\frac{\tan x \; - \; 24\tan \frac{x}{2} \; - \; 4\sin x \; + \; 15x}{x^7} $$ [1] sci.math, 13 February 2000 [2] sci.math, 16 April 2000 [3] sci.math, 11 July 2000 [4] sci.math, 13 August 2001 [5] sci.math, 12 February 2005 [6] sci.math, 27 December 2007 [7] sci.math, 7 October 2008 [8] A question regarding a claim of V. I. Arnold, mathoverflow, 8 April 2010. [9] How find this limit $\lim_{x\to 0^{+}}\dfrac{\sin{(\tan{x})}-\tan{(\sin{x})}}{x^7}$, Mathematics Stack Exchange, 2 November 2013. [10] Limit of $\dfrac{\tan^{-1}(\sin^{-1}(x))-\sin^{-1}(\tan^{-1}(x))}{\tan(\sin(x))-\sin(\tan(x))}$ as $x \rightarrow 0$, Mathematics Stack Exchange, 26 May 2014. [11] $\lim_{x \to 0} \dfrac{f(x)-g(x)}{g^{-1}(x)-f^{-1}(x)} = 1$ for any $f,g \in C^1$ that are tangent to $\text{id}$ at $0$ with some simple condition, Mathematics Stack Exchange, 26 May 2014. To me, the simplest (nontrivial) way to do this is to exploit functions' representations as power series. For instance, begin with: $$e^x = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{24} + \dots = \sum_{i=0}^{\infty} \frac{x^i}{i!}.$$ To cook up an interesting L'Hospital problem, subtract off the first few terms of this series expansion from $e^x$ and divide by an appropriate term. All the following are classical Calculus I examples which are inspired by the above series expansion: \begin{align*}\lim_{x \to 0} \frac{e^x - 1}{x} &\qquad \text{(requires 1 use of L'H)} \\ \lim_{x \to 0} \frac{e^x - 1- x}{x^2} &\qquad \text{(requires 2 uses of L'H)} \\ \lim_{x \to 0} \frac{e^x - 1 - x - \frac{x^2}{2}}{x^3} &\qquad \text{(requires 3 uses of L'H)} \end{align*} and so forth. You can pick any function you like in place of $e^x$, of course, so long as it has enough derivatives to play with.
You can also use this approach to cook up slightly more interesting examples. For instance, we could subtract off the appropriate terms from $e^x$ and $\cos(x)$ to get their series expansions to be $Cx^2 + [\text{higher-order terms}]$. Specifically, $$\lim_{x \to 0} \frac{e^x - 1 - x}{\cos(x)- 1 }$$ has a nonzero limit and requires two uses of L'Hospital's rule. If we wanted four, we could have subtracted out the $x^2$ and $x^3$ terms from the $e^x$ expansion and the $x^2$ term from the $\cos(x)$ expansion. What I like about this approach: The examples are nontrivial, in the sense that no elementary algebraic techniques will save you from having to use L'Hospital's rule. You can immediately tell how many uses of L'Hospital's rule will be required. I think it conveys something important both about Taylor series representations of functions and about how L'Hospital's rule works. For the "$\frac{\infty}{\infty}$" case, if you only use L'Hôpital's rule and don't change your fraction between its successive applications, then one of the simplest nontrivial never-ending examples from many textbooks is $$\lim_{x\to0+}\frac{\ln x}{\cot x}.$$ You can construct simple (boring) examples quite easily using polynomials. A very trivial example is $\frac{x^n}{x^n}$. For the case where it never terminates, replace $x^n$ with the interesting function $e^{\frac{-1}{x^2}}$.
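The series-built examples above can be spot-checked numerically (a sketch of my own, not from the answers): each predicted limit should appear when the ratio is evaluated at a small x.

```python
import math

# Spot checks of the series-built limits:
#   (e^x - 1)/x -> 1
#   (e^x - 1 - x)/x^2 -> 1/2
#   (e^x - 1 - x)/(cos x - 1) -> -1   (both expansions start at x^2)
x = 1e-3
l1 = (math.exp(x) - 1) / x
l2 = (math.exp(x) - 1 - x) / x**2
l3 = (math.exp(x) - 1 - x) / (math.cos(x) - 1)
print(l1, l2, l3)
```

The step size 1e-3 is small enough to expose the limit but large enough to avoid serious floating-point cancellation in the numerators.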
I created a function by first considering some well known limits: $$\lim_{n\to\infty } \frac{n}{2}\sin\left ( \frac{2\pi}{n} \right )=\pi $$ $$\lim_{n\to\infty } \left ( 1+\frac{1}{n} \right )^{n}=e$$ Now, recalling Euler's identity: $$e^{i \pi }=-1$$ The following limit can be created, which is a combination of Euler's identity and the above two limits; $$\lim_{n\to\infty } {\left ( 1+\frac{1}{n} \right )^{i \frac{n^{2}}{2}\sin\left ( \frac{2\pi}{n} \right ) }}=-1$$ This led me to the idea of creating the following function: $$h(x)=\left | \left ( 1+\frac{1}{x} \right )^{\frac{\ i}{2}x^{2}\sin \left (\frac{2\pi }{x} \right )} \right |$$ With $ x \in \mathbb{R}$ This function oscillates with a decaying amplitude and I find it very interesting. Now it was a while ago that I was messing around with this function, and I wanted to find a way to express it such that it wasn't the Absolute value of a complex function... I can't remember quite how I found it but I think the following function is equivalent on the interval $\left [ -1,0 \right ]$ $$g(x)=e^{-\frac{\pi }{2}x^{2}\sin \left (\frac{2\pi }{x} \right )}$$ Alas, to my question, are these functions equivalent on this interval (they appear to be when they are plotted), and how might I go about proving this? I suspect I could take advantage of the series expansion for the natural log but not sure if that's the best way about it... Could someone point me in the right direction? Thanks for reading.
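One possible direction (a sketch, assuming the principal branch of the logarithm is used to define the complex power): for real $c$ and $z \neq 0$,

$$\left|z^{\,ic}\right| = \left|e^{\,ic \operatorname{Log} z}\right| = \left|e^{\,ic\left(\ln|z| + i\operatorname{Arg} z\right)}\right| = e^{-c \operatorname{Arg} z}.$$

For $x \in (-1,0)$ the base $1+\frac{1}{x}$ is negative, so $\operatorname{Arg}\!\left(1+\frac{1}{x}\right) = \pi$, and taking $c = \frac{1}{2}x^{2}\sin\!\left(\frac{2\pi}{x}\right)$ gives

$$h(x) = e^{-\frac{\pi}{2}x^{2}\sin\left(\frac{2\pi}{x}\right)} = g(x)$$

on that interval, with no series expansion needed.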
Wikipedia said The description of most manifolds requires more than one chart (a single chart is adequate for only the simplest manifolds). A specific collection of charts which covers a manifold is called an atlas. An atlas is not unique as all manifolds can be covered multiple ways using different combinations of charts. Two atlases are said to be equivalent if their union is also an atlas. I think it means: Let $X$ be a topological space and $\Phi_1=\{\phi_\alpha:U_{\alpha}\to\Bbb R^{n_\alpha}\mid\alpha\in A\}$, $\Phi_2=\{\phi_\beta:U_{\beta}\to\Bbb R^{n_\beta}\mid\beta\in B\}$ be two of its (i.e., $X$'s) topological atlases. Then $\Phi_1$ and $\Phi_2$ are equivalent iff $\Phi_1\cup\Phi_2$ is also a topological atlas of $X$. My question is: if $\Phi_1$ and $\Phi_2$ are atlases of $X$, which means they both can "cover" $X$, then it is definitely true that $\Phi_1\cup\Phi_2$ can "cover" $X$, isn't it? I can't see the meaning of such a definition of equivalence. Where did I make the mistake?
And Stephen Hawking died today. He will leave a great, black hole in modern science. I saw him lecture in London not long after A Brief History of Time came out. It was one of the events that inspired me along my path to science. I recall he got more laughs than a lot of stand-ups I've seen. But I can't really get behind 3/14. The weird American way of writing dates, mixed-endian style, really irks me. As a result, I have previously boycotted Pi Day, instead celebrating it on 31/4, aka 31 April, aka 1 May. Admittedly, this takes the edge off the whole experience a bit, so I've decided to go full big-endian and adopt ISO-8601 from now on, which means Pi Day is on 3141-5-9. Expect an epic blog post that day. Transcendence Anyway, I will transcend the bickering over dates (pausing only to reject 22/7 and 6/28 entirely so don't even start) to get back to pi. It so happens that Pi Day is of great interest in our house this year because my middle child, Evie (10), is a bit obsessed with pi at the moment. Obsessed enough to be writing a book about it (she writes a lot of books; some previous topics: zebras, Switzerland, octopuses, and Settlers of Catan fan fiction, if that's even a thing). I helped her find some ways to generate pi numerically. My favourite one uses Riemann's zeta function, which we'd recently watched a Numberphile video about. It's the sum over the natural numbers of their reciprocals, each raised to the power $s$: $$\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}$$ Since $\zeta(2) = \pi^2/6$, we can recover pi as $\sqrt{6\,\zeta(2)}$:

def zeta(s, terms=1000):
    z = 0
    for t in range(1, int(terms)):
        z += 1 / t**s
    return z

(6 * zeta(2, terms=1e7))**0.5

Which returns pi, correct to 6 places: 3.141592558095893

>>> from mpmath import *
>>> mp.dps = 50
>>> mp.pretty = True
>>>
>>> sqrt(6*zeta(2))
3.1415926535897932384626433832795028841971693993751068

...which is correct to 50 decimal places. Here's the bit of Evie's book where she explains a bit about transcendental numbers.
I was interested in this, because while I 'knew' that pi is transcendental, I couldn't really articulate what that really meant, and why (say) √2, which is also irrational, is not also transcendental. Succinctly, transcendental means 'non-algebraic': not a root of any nonzero polynomial with rational coefficients. (Every transcendental number is therefore non-constructible, though the converse fails: the cube root of 2 is non-constructible yet algebraic.) Since √2 is obviously a solution of \(x^2 - 2 = 0\), it is algebraic and therefore not transcendental. Have a transcendental pi day! The xkcd comic is by Randall Munroe and licensed CC-BY-NC.
How to prove this for positive real numbers? $$a+b+c\leq\frac{a^3}{bc}+\frac{b^3}{ca}+\frac{c^3}{ab}$$ I tried the AM-GM and CS inequalities, but all failed. Using the Cauchy-Schwarz inequality twice: $a^4 + b^4 +c^4 \geq a^2b^2 +b^2c^2 +c^2a^2 \geq ab^2c +ba^2c +ac^2b = abc(a+b+c)$ I have come up with an answer myself. Using the CS inequality $$(a^4+b^4+c^4)(1+1+1)\geq(a^2+b^2+c^2)^2$$ $$(a^2+b^2+c^2)(1+1+1)\geq(a+b+c)^2$$ Hence we have $$a^4+b^4+c^4\geq\frac{(a+b+c)^4}{27}=(a+b+c)\left(\frac{a+b+c}{3}\right)^3\geq abc(a+b+c)$$ The other two answers here used the Cauchy-Schwarz inequality. I am giving a simple $AM\ge GM$ proof. You asked, $$\frac{a^3}{bc}+\frac{b^3}{ca}+\frac{c^3}{ab}\ge a+b+c\\\implies a^4+b^4+c^4\ge a^2bc+b^2ca+c^2ab$$ Now, from $AM\ge GM$, we have $$\frac {a^4+ a^4+b^4+c^4}4\ge \left(a^4\cdot a^4\cdot b^4\cdot c^4\right)^{1/4}=a^2bc\tag 1$$ Similarly, $$\frac {a^4+ b^4+b^4+c^4}4\ge \left(a^4\cdot b^4\cdot b^4\cdot c^4\right)^{1/4}=ab^2c\tag 2$$ and also, $$\frac {a^4+ b^4+c^4+c^4}4\ge \left(a^4\cdot b^4\cdot c^4\cdot c^4\right)^{1/4}=abc^2\tag 3$$ Now, summing up $(1),(2),(3)$, we have $a^4+b^4+c^4\ge a^2bc+b^2ca+c^2ab$, that is, $$\frac{a^3}{bc}+\frac{b^3}{ca}+\frac{c^3}{ab}\ge a+b+c$$ By Holder, $$\sum_{cyc}\frac{a^3}{bc}\geq\frac{(a+b+c)^3}{3(ab+ac+bc)}=\frac{(a+b+c)\cdot(a+b+c)^2}{3(ab+ac+bc)}\geq a+b+c$$
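As a quick randomized sanity check of the inequality (my own sketch, no substitute for the proofs above):

```python
import random

# Check a^3/(bc) + b^3/(ca) + c^3/(ab) >= a + b + c on random positive
# triples; this is equivalent to a^4 + b^4 + c^4 >= abc(a + b + c).
def holds(a, b, c, tol=1e-9):
    lhs = a**3 / (b * c) + b**3 / (c * a) + c**3 / (a * b)
    return lhs >= a + b + c - tol

random.seed(1)
ok = all(holds(*[random.uniform(0.01, 10.0) for _ in range(3)])
         for _ in range(1000))
```

Equality holds exactly when a = b = c, which random sampling will brush past but never violate.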
Frequency Response of Mechanical Systems In this follow-up to a previous blog post on damping in structural dynamics, we take a detailed look at the harmonic response of damped mechanical systems. We also demonstrate different ways of setting up a frequency-response analysis in the COMSOL Multiphysics® software as well as how to interpret the results. What Is Frequency Response? In a general sense, the frequency response of a system shows how some property of the system responds to an input as a function of excitation frequency. When talking about frequency response in COMSOL Multiphysics, we usually mean the linear (or linearized) response to a harmonic excitation. In order to produce a frequency response curve, we need to perform a frequency sweep; that is, solve for a number of different frequencies. A frequency response curve will, in general, exhibit a number of distinct peaks located at the natural frequencies of the system. A typical frequency response curve. There are two natural frequencies at 13 Hz and 31 Hz in the plotted range. The Single-DOF System, Revisited Various aspects of the dynamics of a single-DOF system with viscous damping were discussed in the previous blog post. One result is that the damped natural frequency is \omega_d = \omega_0\sqrt{1-\zeta^2} \approx \omega_0 \left ( 1 - \frac{\zeta^2}{2} \right ) This is the frequency at which the system will vibrate (with a decaying amplitude) if released from a deformed state, when there is no other external excitation. An interesting question arises: “Which excitation frequency will give the maximum amplitude of the response?” You would expect it to be exactly the damped natural frequency, but as we will show below, this is not the case. A single-DOF system. Since we are dealing with harmonic motion, it is convenient to use a complex notation, factoring out the common harmonic multiplier e^{i \omega t}.
The equation of motion is then \left (-\omega^2m +ic\omega +k \right) u = f The phase angle of the load f can be taken as reference so that f is real-valued. A normalized form can be obtained by dividing by the stiffness k: \left (1-\left (\frac{\omega}{\omega_0} \right) ^2 +2i\zeta \left (\frac{\omega}{\omega_0} \right) \right) u = \frac{f}{k} The right-hand side is now exactly the static displacement. Thus, the ratio between the dynamic and static solutions is \displaystyle H(\omega) = \left (1-\left (\frac{\omega}{\omega_0} \right) ^2 +2i\zeta \left (\frac{\omega}{\omega_0} \right) \right)^{-1} =\frac{1}{1-\beta ^2 +2i\zeta \beta} The function H is sometimes called the transfer function. Here, β is used to denote the ratio between the excitation frequency and the undamped natural frequency. The magnitude of the transfer function is \displaystyle \left | \frac{1}{1-\beta ^2 +2i\zeta \beta} \right | = \frac{1}{\sqrt {(1-\beta ^2)^2 +4\zeta^2 \beta^2}} This function is shown in the graph below. Using standard calculus, the frequency giving maximum amplitude can be determined by finding the minimum of the (squared) denominator {(1-\beta ^2)^2 +4\zeta^2 \beta^2}. The result is \beta = \sqrt{1-2 \zeta^2} Thus, the excitation frequency giving the maximum response is \omega_{\mathrm {max}} = \omega_0\sqrt{1-2\zeta^2} \approx \omega_0 \left ( 1 - \zeta^2 \right ) which is lower than the damped natural frequency. Actually, the frequency shift is twice as large. The fact that the excitation frequency that causes maximum amplification does not coincide with the frequency of free vibration may seem like a paradox. This can be attributed to the phase shift between force and displacement caused by the damping. Without damping, the load and displacement flip from being perfectly in phase below the natural frequency to being 180° out-of-phase above the natural frequency. With damping, the transition in phase shift is smooth, as shown in the graph below.
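The peak location can be verified numerically. The sketch below (my own Python illustration of the formulas above, not COMSOL) grid-searches the amplification factor for an arbitrarily chosen damping ratio zeta = 0.2 and compares the peak with the closed-form result beta = sqrt(1 - 2*zeta^2):

```python
import math

# |H(beta)| = 1/sqrt((1 - beta^2)^2 + 4 zeta^2 beta^2).
# Locate its maximum by brute-force grid search and compare with theory.
zeta = 0.2

def amp(beta):
    return 1.0 / math.sqrt((1 - beta**2)**2 + 4 * zeta**2 * beta**2)

betas = [i * 1e-4 for i in range(1, 20000)]   # 0 < beta < 2
beta_peak = max(betas, key=amp)
beta_theory = math.sqrt(1 - 2 * zeta**2)
```

For zeta = 0.2 the peak sits near beta = 0.959, noticeably below both the undamped (beta = 1) and the damped (beta = sqrt(1 - zeta^2) = 0.980) natural frequencies.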
Irrespective of the damping level, the phase shift at the undamped natural frequency is always 90°. Phase shift of the displacement as function of frequency. The fact that the force and displacement are slightly out-of-phase when there is damping affects the possibility of the force to supply energy to the system. Loss Factor Damping Let’s repeat the analysis for a single-DOF system with loss factor damping. In this case, the equation of motion is \left (-\omega^2m +k(1+i\eta ) \right) u = f and the damped natural frequency can be shown to be \displaystyle \omega_d = \omega_0 \sqrt {\left( \frac{1}{2} \left( 1 + \sqrt{1+\eta^2} \right ) \right ) } \approx \omega_0 \left (1 + \frac{\eta^2}{8} \right ) It may come as a surprise that the effect of adding damping in this case is to increase, rather than decrease, the natural frequency. The explanation is that this form of loss factor damping representation actually also increases the stiffness. The absolute value of the complex-valued stiffness is |\tilde k| = k \sqrt {1 + \eta^2} \approx k \left ( 1+ \frac{\eta^2}{2} \right ) With this loss factor damping, the transfer function is \displaystyle \frac{1}{1-\beta ^2 +i\eta } and its magnitude is \displaystyle \left | \frac{1}{1-\beta ^2 +i\eta} \right | = \frac{1}{\sqrt {(1-\beta ^2)^2 +\eta^2}} It can be immediately seen that the maximum amplitude occurs at β = 1; that is, at the undamped natural frequency. Again, maximum amplification occurs at a frequency that is lower than the damped natural frequency. The alternative definition of loss factor damping mentioned in the previous blog post has the property that the absolute value of the complex stiffness is independent of the damping level. 
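The loss-factor results can be checked the same way (again a standalone Python sketch of the formulas, with eta = 0.1 chosen arbitrarily): the amplitude peak should land exactly at beta = 1, and the damped-frequency ratio should match its small-eta expansion 1 + eta^2/8.

```python
import math

# |H(beta)| = 1/sqrt((1 - beta^2)^2 + eta^2): denominator minimized at beta = 1.
eta = 0.1

def amp(beta):
    return 1.0 / math.sqrt((1 - beta**2)**2 + eta**2)

betas = [i * 1e-4 for i in range(1, 20000)]
beta_peak = max(betas, key=amp)

# Damped natural frequency ratio and its second-order approximation.
ratio_exact = math.sqrt(0.5 * (1 + math.sqrt(1 + eta**2)))
ratio_approx = 1 + eta**2 / 8
```

This confirms both claims in the text: the response peaks at the undamped natural frequency, while the damped natural frequency itself is slightly above it.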
This is obtained by using a definition that normalizes the complex stiffness so that a pure rotation in the complex plane is obtained, \tilde k = \displaystyle \frac{k(1+i \eta)}{\sqrt{1+ \eta^2}} Such a formulation leads to a natural frequency that decreases with damping: \displaystyle \omega_d = \omega_0 \sqrt { \frac {\frac{1}{2} \left( 1 + \sqrt{1+\eta^2} \right )}{1+ \eta^2} } \approx \omega_0 \left (1 - \frac{3\eta^2}{8} \right ) An analysis that is omitted here will show a corresponding drop in the excitation frequency that will give the maximum amplification so that it is still lower than the damped natural frequency. The phase shift between excitation and response when using loss factor damping is particularly interesting: Even at very low excitation frequencies, there is still a phase shift. Its asymptotic value is arctan(η). Phase shift of the displacement as a function of frequency when using loss factor damping. Low-frequency asymptotes are indicated by the dotted lines. A Note About Friction When friction between two surfaces supplies the damping mechanism, the response to a harmonic input is no longer harmonic because of the nonlinearity in the system. There may still be a periodic, but anharmonic, response. Such problems cannot be solved by the frequency-domain methods, in which the assumption is that the input-output relation is linear. Modeling Frequency Response in COMSOL Multiphysics® Setting Up the Study After adding a structural mechanics physics interface in the Model Wizard, you will be presented with a number of study types, four of which can be used for computing frequency response: Frequency Domain Frequency Domain, Prestressed Frequency Domain, Modal Frequency Domain, Prestressed, Modal Available study types for a Solid Mechanics interface. Two of the studies use a direct solution approach and two use the mode superposition approach.
In the prestressed types of analysis, the change in stiffness from a stationary preload is taken into account. Mode superposition is very well suited for frequency-domain analysis, since it is easy to select the appropriate eigenmodes based on the given frequencies. In either case, you perform a frequency sweep by providing a list of frequencies in the study settings for which the response is computed. Often, you want to cluster the frequencies around the natural frequencies of the structure. Entering frequencies for a frequency sweep. Note that without damping, the response exactly at a natural frequency tends toward infinity. This means that it is not possible to solve an undamped frequency response problem at, or close to, a natural frequency. The numerical formulation will give a singular, or at least ill-conditioned, system matrix. Perturbation or Not? There is a very important setting in the Stationary node in the solver sequence for a frequency-domain study: Linearity. Selecting the Linearity property. In principle, any frequency-domain analysis can be considered to be a small perturbation, so using Linear perturbation is never wrong. The most common case, however, is that the vibrations are centered around zero. In that case, it does not really matter whether the problem is considered as Linear or Linear Perturbation. The setting does, however, always fundamentally change the interpretation of loads. A load can be tagged as Harmonic Perturbation. Such a load is only taken into account if Linearity is set to Linear perturbation. All loads not marked as Harmonic Perturbation are ignored in such a study. Conversely, if Linearity is not Linear perturbation, then all loads marked as Harmonic Perturbation are ignored, and other loads are considered as harmonic. An edge load, designated as Harmonic Perturbation.
The purpose of this setting is to be able to discriminate between loads causing a possible prestress state and the harmonic excitation acting on top of that. When you add a standard Frequency Domain study, the study is, by default, not set as perturbation. Thus, the Harmonic Perturbation tag should not be used for the loads in this case, unless you change the Linearity setting. When you add a Frequency Domain, Prestressed study, the frequency response study step is set up for perturbation analysis. If the study is of a mode superposition type, then the study is always of a linear perturbation type. Interpreting the Results The results of a frequency-domain analysis are complex-valued and the harmonic variation is implicit. The phase angle of the complex number describes the phase shift with respect to the reference phase (which can be chosen arbitrarily, but is often taken as the phase of the main load). It also provides information about the phase shift between different points in the structure. Note that since the displacement components within a single finite element can have different phase angles, it is also quite possible that the components of the stress tensor are not in phase with each other. This can be of importance in, for example, fatigue analysis. In many cases, like in a color plot, it is only possible to display a real number. The convention during all results presentation is as follows: If you request a complex-valued variable v in a context where a real value is expected, then the real part is used. \displaystyle v = \Re(\tilde v e^{i \phi}) The phase angle Φ is a property of the dataset that you can modify. Adjusting the phase angle in the dataset. In most frequency-response analyses, you are interested in the amplitude of a result quantity, v, as function of frequency. This means that you should investigate abs(v) rather than v itself. The difference between the two is shown in the figure below. Example of a frequency response graph. 
Note that the graph of “u” is identical to “real(u)”. In order to see what happens in more detail, we can add the imaginary part and argument of the result quantity to the graph: Frequency response including phase shift. For low frequencies, the real part is close to the absolute value. In the vicinity of the natural frequency, the imaginary part is dominant instead. This means that the response is almost out of phase with the excitation. Now, let’s investigate what happens if we change the phase angle in the dataset to 45°. Frequency response when the phase angle in the dataset is 45°. As expected, the amplitude graph does not change. However, the individual values of the real and imaginary parts do. The phase angle curve shifts π/4 upward. Actually, this is the same exact graph that we would obtain if a 45° phase angle was added to the load. Adding a phase angle to a load. Instead of using the phase angle input, you can equivalently enter the load directly using complex notation: Complex representation of the same load as above. The possibility to prescribe the phase angle is important when not all loads are in phase with each other. A rotating unbalanced mass can, for example, be described conveniently by giving the load in the y direction a 90° phase shift with respect to the load in the x direction. Results from a Perturbation Study If the study is of the perturbation type, there will actually be two sets of results: the prestress solution and the perturbation solution. In this case, you will, in the various result presentation features, get access to an extra selection: Expression evaluated for. Selecting the evaluation type for a perturbation analysis. Here, you can choose to study the perturbation solution, the prestress solution, or combinations thereof. For the perturbation solution, you also get one more option: the Compute differential check box. Selecting Compute differential . This setting affects how nonlinear expressions are treated. 
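The observation that changing the dataset phase angle leaves the amplitude unchanged, while shifting the real and imaginary parts, can be reproduced in a few lines of Python; the complex amplitude 3 − 4i below is a hypothetical value chosen only for illustration:

```python
import cmath

v_tilde = 3.0 - 4.0j        # hypothetical complex-valued result at one frequency
phi = cmath.pi / 4          # 45 degree phase angle set in the dataset

rotated = v_tilde * cmath.exp(1j * phi)

# Real and imaginary parts change with phi, the amplitude does not,
# and the phase angle shifts by exactly phi
print(v_tilde.real, rotated.real)
print(abs(v_tilde), abs(rotated))
```

This mirrors the figures above: the amplitude graph is invariant, while the real part, imaginary part, and phase curves all move with the chosen phase angle.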
When Compute differential is not selected, then a nonlinear quantity is taken at face value. For example, the expression u^2 will simply take the square of the variable u from the perturbation solution. Since u is, in general, complex-valued, this will usually be a nonsensical operation. When Compute differential is selected, then the nonlinear quantity will be linearized around the prestressed state. The expression u^2 will evaluate to 2*u0*u, where u0 is the value at the linearization point. Converting Frequency-Response Results to the Time Domain There are some situations in which you may want to actually visualize the harmonic response from a frequency-domain analysis in the time domain. In particular, this is true if you have multiple excitation frequencies. Response to the excitation of two loads with different frequencies. You can transform frequency-response results to the time domain using the Frequency to Time FFT study step. Study sequence for transforming results from the frequency domain to the time domain. This technique is used in a number of tutorial models. Concluding Remarks Frequency-domain analysis is a powerful tool for analyzing linear systems subjected to harmonic excitation. Actually, by performing an initial Fourier transform of the loading, any type of periodic excitation can be studied using frequency-response analysis. There are many more examples of mechanical frequency-response analyses available in the Application Gallery.
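The superposition underlying such a time-domain reconstruction is simple to sketch in Python; the two complex amplitudes below are hypothetical, not taken from any model:

```python
import cmath

# Hypothetical complex amplitudes at two excitation frequencies (in Hz)
responses = {2.0: 1.0 + 0.0j, 3.0: 0.5 * cmath.exp(1j * cmath.pi / 6)}

def u_time(t):
    # The harmonic variation is implicit in frequency-domain results;
    # make it explicit by summing Re(u_k * exp(i * 2*pi * f_k * t))
    return sum((uk * cmath.exp(2j * cmath.pi * f * t)).real
               for f, uk in responses.items())

# Sample one second of the combined periodic signal
samples = [u_time(n / 100.0) for n in range(100)]
print(samples[0])
```

With two incommensurate amplitudes and phases, the summed signal is periodic but no longer a single harmonic, which is exactly the situation where the time-domain view is most useful.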
Newform invariants Coefficients of the \(q\)-expansion are expressed in terms of a basis \(1,\beta_1,\beta_2\) for the coefficient ring described below. We also show the integral \(q\)-expansion of the trace form. Basis of the coefficient ring in terms of a root \(\nu\) of \(x^{3} - x^{2} - 14982256920 x + 433388802120300\): \(\beta_{0} = 1\), \(\beta_{1} = 12 \nu - 4\), \(\beta_{2} = (72 \nu^{2} + 3124008 \nu - 719149373520)/5\). Conversely, \(1 = \beta_0\), \(\nu = (\beta_{1} + 4)/12\), \(\nu^{2} = (5 \beta_{2} - 260334 \beta_{1} + 719148332184)/72\). For each embedding \(\iota_m\) of the coefficient field, the values \(\iota_m(a_n)\) are shown below. For more information on an embedded modular form you can click on its label. This newform does not admit any (nontrivial) inner twists. This newform can be constructed as the kernel of the linear operator \(T_{2}^{3} + 289380 T_{2}^{2} - 21\!\cdots\!28\, T_{2} - 95\!\cdots\!32\) acting on \(S_{42}^{\mathrm{new}}(\Gamma_0(3))\).
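As a numerical sanity check, the stated basis relations are mutually consistent: substituting the expressions for β₁ and β₂ into the formula for ν² returns ν² identically, which a short Python script confirms for the roots of the cubic:

```python
import numpy as np

# Roots of x^3 - x^2 - 14982256920*x + 433388802120300
roots = np.roots([1, -1, -14982256920, 433388802120300])

for nu in roots:
    beta1 = 12 * nu - 4
    beta2 = (72 * nu ** 2 + 3124008 * nu - 719149373520) / 5
    # Stated inversion: nu^2 = (5*beta2 - 260334*beta1 + 719148332184)/72
    lhs = nu ** 2
    rhs = (5 * beta2 - 260334 * beta1 + 719148332184) / 72
    assert abs(lhs - rhs) < 1.0
print(roots)
```

The identity in fact holds for every ν, not only the roots, since the constants cancel exactly.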
One may describe the waves in terms of $\Delta x$, the deviation of the position of the string's atoms from their equilibrium locations. Because they're attached, $\Delta x(\sigma)=0$ for $\sigma$, the coordinate along the string, equal to either of the end point values, $\sigma=0$ and $\sigma=L$. But $\Delta x$ obeys a wave equation, so the eigenstates of the frequency have to depend on $\sigma$ as sines and cosines:$$\Delta x(\sigma,0)=A\sin(k\sigma)+B\cos(k\sigma)$$I wrote the solution at $t=0$, a moment when $\Delta x$ is nonzero (or maximized). The condition $\Delta x=0$ at $\sigma=0$ says $B=0$, so we only have sines, and $\Delta x =0$ at $\sigma=L$ implies $kL=n\pi$ because the sine vanishes when the argument is a multiple of $\pi$, i.e. $L=n\lambda /2$ because $k=2\pi/\lambda$. The calculation works both for longitudinal and transverse oscillations $\Delta x$ – but they are usually transverse. (In string theory, where this exact same calculation is important for the basics as well, the longitudinal oscillations are absolutely unphysical.) I reduced the spatial dependence to a sine. The sine is a standing wave – much like a cosine, just a shifted sine – and it may also be understood as an equal mixture of the right-going and left-going complex waves $\exp(\pm ik\sigma)$ (the two signs).
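The boundary conditions and the resulting quantization $kL = n\pi$ can be verified with a trivial Python check (string length L = 1 chosen arbitrarily):

```python
import math

L = 1.0   # string length, chosen arbitrarily

def mode(sigma, n, A=1.0):
    # Standing-wave profile with B = 0 and k = n*pi/L
    k = n * math.pi / L
    return A * math.sin(k * sigma)

for n in range(1, 6):
    # Fixed ends: Delta x vanishes at sigma = 0 and sigma = L
    assert abs(mode(0.0, n)) < 1e-9
    assert abs(mode(L, n)) < 1e-9
    # Equivalently L = n * lambda / 2 with lambda = 2*pi/k
    lam = 2 * math.pi / (n * math.pi / L)
    assert abs(L - n * lam / 2) < 1e-12
print("all modes satisfy the boundary conditions")
```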
Given a time series $u_i$ of returns (where $i=1,\dotsc,t$), $\sigma_i$ is calculated from GARCH(1,1) as $$ \sigma_i^2=\omega+\alpha u_{i-1}^2 +\beta \sigma_{i-1}^2. $$ What is the mathematical basis to say that $u_i^2/\sigma_i^2$ will exhibit little autocorrelation in the series? Hull's book "Options, Futures and Other Derivatives" is an excellent reference. In 6th ed. p. 470, "How Good is the Model?", he states that If a GARCH model is working well, it should remove the autocorrelation. We can test whether it has done so by considering the autocorrelation structure for the variables $u_i^2/\sigma_i^2$. If these show very little autocorrelation our model for $\sigma_i$ has succeeded in explaining autocorrelation in the $u_i^2$. Maximum likelihood estimation for the variance ends with maximizing $$\sum_{i=1}^{t}\left(-\ln(v_i) - u_i^2/v_i\right)$$ where $v_i$ is the variance $\sigma_i^2$. Maximizing this function does not simply mean that $u_i^2/v_i$ is minimized: as $v_i$ gets smaller, $-\ln(v_i)$ grows while $u_i^2/v_i$ also grows, so the two terms trade off against each other. However, it makes intuitive sense that dividing the return $u_i$ by its (instant or regime) volatility explains away the volatility-related component of the time series. I am looking for a mathematical or logical explanation of this. I think Hull is not very accurate here as the time series may have trends etc.; also, there are better approaches to finding i.i.d. from the time series than using $u_i^2/\sigma_i^2$ alone. I particularly like Filtering Historical Simulation - Backtest Analysis by Barone-Adesi (2000).
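A simulation illustrates Hull's point: squared returns from a GARCH(1,1) process are autocorrelated, while the squared standardized residuals $u_i^2/\sigma_i^2$ are (by construction, in a simulation) close to white noise. The parameters below are hypothetical; a pure-Python sketch:

```python
import math, random

random.seed(42)
omega, alpha, beta = 0.1, 0.1, 0.8      # hypothetical GARCH(1,1) parameters

n = 20000
sigma2 = omega / (1 - alpha - beta)     # start at the unconditional variance
u2, z2 = [], []
for _ in range(n):
    u = math.sqrt(sigma2) * random.gauss(0.0, 1.0)
    u2.append(u * u)
    z2.append(u * u / sigma2)           # squared standardized residual
    sigma2 = omega + alpha * u * u + beta * sigma2

def lag1_autocorr(x):
    m = sum(x) / len(x)
    num = sum((a - m) * (b - m) for a, b in zip(x[:-1], x[1:]))
    den = sum((a - m) ** 2 for a in x)
    return num / den

print(lag1_autocorr(u2), lag1_autocorr(z2))
```

For these parameters the lag-1 autocorrelation of $u_i^2$ is sizable, while that of $u_i^2/\sigma_i^2$ is statistically indistinguishable from zero, which is the empirical diagnostic Hull describes.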
Yes, by linearizing $f$ at a fixed point. More explicitly, let $x_0\in M$ be a fixed point of $f$. Let $U$ be an open neighborhood of $x_0\in M$ for which there is a diffeomorphism with an open subset $V$ of $\mathbf R^n$. Replacing $U$ by $f^{-1}(U)\cap U$ if necessary, we may assume that $f(U)\subseteq U$. Then $f(U)=U$ since $f^2=\mathrm{id}$. Therefore, we may assume that $M=V$ is an open subset of $\mathbf R^n$. We may as well assume that $x_0$ is the origin in $\mathbf R^n$. Now, consider the map $$g\colon V\rightarrow \mathbf R^n$$ defined by $$g(x)=x+D_0f(f^{-1}(x)),$$ where $D_0f$ is the differential of $f$ at $0$. The map $g$ is of course differentiable. Moreover, one has $$g(f(x))=f(x)+D_0f(x)=D_0f(x+D_0f^{-1}(f(x)))=D_0f(x+D_0f(f^{-1}(x)))=D_0f(g(x))$$ since $f^{-1}=f$. Observe that $$D_0g=\mathrm{id}+D_0f\circ D_0f^{-1}=2\mathrm{id}$$ on $\mathbf R^n$. By the inverse function theorem, there is an open neighborhood $W$ of the origin in $\mathbf R^n$, contained in $V$, such that $$g\colon W\rightarrow \mathbf R^n$$ is a diffeomorphism onto its image. Replacing $W$ by $f^{-1}(W)\cap W$ if necessary, we may assume that $f(W)=W$ as before. Since $$g(f(x))=D_0f(g(x)),$$ for all $x\in W$, we may assume that the action of $f$ on $V$ is the restriction of a linear map $L\colon\mathbf R^n\rightarrow\mathbf R^n$. Since $f^2=\mathrm{id}$, one has $L^2=\mathrm{id}$. This means that we can diagonalize $L$ over $\mathbf R$, and we may assume that the matrix of $L$ in the standard basis is the diagonal matrix $\mathrm{diag}(1,\ldots,1,-1,\ldots,-1)$, say with the last $m$ diagonal entries equal to $-1$. Then, the set of fixed points of $f$ on $V$ is equal to $V\cap\mathbf R^{n-m}$, where $\mathbf R^{n-m}$ is identified with the subset $\mathbf R^{n-m}\times\{0\}^{m}$ of $\mathbf R^n$. This proves that the set of fixed points of $f$ is a smooth submanifold of $M$.
The argument applies more generally to any finite group action on $M$, or even to any action of a compact group on $M$, adapting the diagonalizability part a little bit. The corresponding statement for topological manifolds is false. Note also that it is crucial here not to ask for a smooth manifold to be connected or even nonempty!
You use the quantile regression estimator $$\hat \beta(\tau) := \arg \min_{\theta \in \mathbb R^K} \sum_{i=1}^N \rho_\tau(y_i - \mathbf x_i^\top \theta),$$ where $\tau \in (0,1)$ is a constant chosen according to which quantile needs to be estimated and the function $\rho_\tau(\cdot)$ is defined as $$\rho_\tau(r) = r(\tau - I(r<0)).$$ To see the purpose of $\rho_\tau(\cdot)$, note first that it takes the residuals as arguments, where these are defined as $\epsilon_i = y_i - \mathbf x_i^\top \theta$. The sum in the minimization problem can therefore be rewritten as $$\sum_{i=1}^N \rho_\tau(\epsilon_i) =\sum_{i=1}^N \tau \lvert \epsilon_i \rvert I[\epsilon_i \geq 0] + (1-\tau) \lvert \epsilon_i \rvert I[\epsilon_i < 0],$$ such that positive residuals associated with observations $y_i$ above the suggested quantile regression line $\mathbf x_i^\top \theta$ are given the weight $\tau$, while negative residuals associated with observations $y_i$ below the suggested quantile regression line $\mathbf x_i^\top \theta$ are weighted with $(1-\tau)$. Intuitively: with $\tau=0.5$, positive and negative residuals are "punished" with the same weight, and an equal number of observations lie above and below the "line" in the optimum, so the line $\mathbf x_i^\top \hat \beta$ is the median regression "line". When $\tau=0.9$, each positive residual is weighted 9 times that of a negative residual with weight $1-\tau= 0.1$, and so in the optimum, for every observation above the "line" $\mathbf x_i^\top \hat \beta$, approximately 9 will be placed below the line. Hence the "line" represents the 0.9-quantile. (For an exact statement of this see Thm. 2.2 and Corollary 2.1 in Koenker (2005), "Quantile Regression".) The two cases are illustrated in these plots. Left panel: $\tau=0.5$; right panel: $\tau=0.9$.
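The claim that minimizing the check-function loss yields the τ-quantile can be illustrated in the simplest case, a regression on a constant; a Python sketch (brute-force over the data points, which suffices since an optimum is always attained at one of them):

```python
def rho(r, tau):
    # Check function: residual r gets weight tau if positive, (1 - tau) if negative
    return r * (tau - (1 if r < 0 else 0))

def quantile_by_check_loss(y, tau):
    # Minimize the check-function loss over constant fits
    return min(y, key=lambda q: sum(rho(yi - q, tau) for yi in y))

y = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
print(quantile_by_check_loss(y, 0.5), quantile_by_check_loss(y, 0.9))
```

For τ = 0.5 the minimizer is a median of the sample, and for τ = 0.9 it sits at the 0.9-quantile (for finite samples the minimizer can be an interval, so ties between adjacent order statistics are expected).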
Linear programs are predominantly analyzed and solved using the standard form $$(1) \ \ \min_z \ \ c^\top z \ \ \mbox{subject to } A z = b , z \geq 0$$ To arrive at a linear program in standard form, the first problem is that in such a program (1) all variables $z$ over which minimization is performed must be nonnegative. To achieve this, the residuals are decomposed into positive and negative parts using slack variables: $$\epsilon_i = u_i - v_i$$ where $u_i = \max(0,\epsilon_i) = \lvert \epsilon_i \rvert I[\epsilon_i \geq 0]$ is the positive part and $v_i = \max(0,-\epsilon_i) =\lvert \epsilon_i \rvert I[\epsilon_i < 0]$ is the negative part. The sum of residuals assigned weights by the check function is then seen to be $$\sum_{i=1}^N \rho_\tau(\epsilon_i) = \sum_{i=1}^N \tau u_i + (1-\tau) v_i = \tau \mathbf 1_N^\top u + (1-\tau)\mathbf 1_N^\top v,$$ where $u = (u_1,...,u_N)^\top$ and $v=(v_1,...,v_N)^\top$ and $\mathbf 1_N$ is the $N \times 1$ vector with all coordinates equal to $1$. The residuals must satisfy the $N$ constraints $$y_i - \mathbf x_i^\top\theta = \epsilon_i = u_i - v_i.$$ This results in the formulation as a linear program $$\min_{\theta \in \mathbb R^K,u\in \mathbb R_+^N,v\in \mathbb R_+^N}\{ \tau \mathbf 1_N^\top u + (1-\tau)\mathbf 1_N^\top v \,\lvert\, y_i= \mathbf x_i^\top\theta + u_i - v_i, i=1,...,N\},$$ as stated in Koenker (2005), "Quantile Regression", page 10, equation (1.20). However, it is noticeable that $\theta\in \mathbb R^K$ is still not restricted to be nonnegative as required in the linear program in standard form (1). Hence, again, a decomposition into positive and negative parts is used: $$\theta = \theta^+ - \theta^-$$ where again $\theta^+=\max(0,\theta)$ is the (componentwise) positive part and $\theta^- = \max(0,-\theta)$ is the negative part.
The $N$ constraints can then be written as $$\mathbf y:= \begin{bmatrix} y_1 \\ \vdots \\ y_N\end{bmatrix} = \begin{bmatrix} \mathbf x_1^\top \\ \vdots \\ \mathbf x_N^\top \end{bmatrix}(\theta^+ - \theta^-) + \mathbf I_Nu - \mathbf I_Nv ,$$ where $\mathbf I_N = \mathrm{diag}\{\mathbf 1_N\}$. Next define $b:=\mathbf y$ and the design matrix $\mathbf X$ storing data on independent variables as $$ \mathbf X := \begin{bmatrix} \mathbf x_1^\top \\ \vdots \\ \mathbf x_N^\top \end{bmatrix} $$ To rewrite the constraint: $$b= \mathbf X(\theta^+ - \theta^-) + \mathbf I_N u- \mathbf I_N v= [\mathbf X , -\mathbf X , \mathbf I_N , - \mathbf I_N] \begin{bmatrix} \theta^+ \\ \theta^- \\ u \\ v\end{bmatrix}$$ Define the $N \times (2K + 2N)$ matrix $$A := [\mathbf X , -\mathbf X , \mathbf I_N , - \mathbf I_N]$$ and introduce $\theta^+$ and $\theta^-$ as variables over which to minimize, so they are part of $z$, to get $$b = A \begin{bmatrix} \theta^+ \\ \theta^- \\ u \\ v\end{bmatrix} = Az$$ Because $\theta^+$ and $\theta^-$ only affect the minimization problem through the constraint, a $\mathbf 0$ of dimension $2K\times 1$ must be introduced as part of the coefficient vector $c$, which can then appropriately be defined as $$ c = \begin{bmatrix}\mathbf 0 \\ \tau \mathbf 1_N \\ (1-\tau) \mathbf 1_N \end{bmatrix},$$ thus ensuring that $c^\top z = \underbrace{\mathbf 0^\top(\theta^+ - \theta^-)}_{=0}+\tau \mathbf 1_N^\top u + (1-\tau)\mathbf 1_N^\top v = \sum_{i=1}^N \rho_\tau(\epsilon_i).$ Hence $c$, $A$ and $b$ are then defined and the program as given in $(1)$ is completely specified. This is probably best digested using an example. To solve this in R, use the package quantreg by Roger Koenker.
Here is also an illustration of how to set up the linear program and solve it with a solver for linear programs:

base = read.table("http://freakonometrics.free.fr/rent98_00.txt", header=TRUE)
attach(base)
library(quantreg)
library(lpSolve)
tau <- 0.3

# Problem (1) only one covariate
X <- cbind(1, base$area)
K <- ncol(X)
N <- nrow(X)
A <- cbind(X, -X, diag(N), -diag(N))
c <- c(rep(0, 2*ncol(X)), tau*rep(1, N), (1-tau)*rep(1, N))
b <- base$rent_euro
const_type <- rep("=", N)
linprog <- lp("min", c, A, const_type, b)
beta <- linprog$sol[1:K] - linprog$sol[(1:K + K)]
beta
rq(rent_euro ~ area, tau=tau, data=base)

# Problem (2) with 2 covariates
X <- cbind(1, base$area, base$yearc)
K <- ncol(X)
N <- nrow(X)
A <- cbind(X, -X, diag(N), -diag(N))
c <- c(rep(0, 2*ncol(X)), tau*rep(1, N), (1-tau)*rep(1, N))
b <- base$rent_euro
const_type <- rep("=", N)
linprog <- lp("min", c, A, const_type, b)
beta <- linprog$sol[1:K] - linprog$sol[(1:K + K)]
beta
rq(rent_euro ~ area + yearc, tau=tau, data=base)
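For comparison, the same c, A, b construction can be mirrored in Python with scipy.optimize.linprog; the simulated data below are only illustrative:

```python
import numpy as np
from scipy.optimize import linprog

# Simulated data, for illustration only
rng = np.random.default_rng(0)
N, K, tau = 60, 2, 0.3
X = np.column_stack([np.ones(N), rng.uniform(0, 10, N)])   # design with intercept
y = 1.0 + 2.0 * X[:, 1] + rng.normal(0, 1, N)

# c, A and b exactly as constructed in the text:
# z = (theta_plus, theta_minus, u, v), all nonnegative
c = np.concatenate([np.zeros(2 * K), tau * np.ones(N), (1 - tau) * np.ones(N)])
A = np.hstack([X, -X, np.eye(N), -np.eye(N)])
b = y

res = linprog(c, A_eq=A, b_eq=b, bounds=(0, None), method="highs")
theta = res.x[:K] - res.x[K:2 * K]
print(theta)
```

At the optimum the LP objective equals the check-function loss of the fitted residuals, so the recovered coefficients agree with what a quantile regression routine such as rq would return on the same data.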
I have a smooth structure on $M$ given by the atlas $\{(U_\alpha,\varphi_\alpha)\}$, and on another manifold $N$ we define $\{(V_\alpha,\mu_\alpha)\}$ with $V_\alpha = \pi(U_\alpha)$ and $\mu_\alpha = \pi_2\circ\varphi_\alpha\circ \sigma$, where $\pi:M\rightarrow N$ is a smooth submersion, $\pi_2:\mathbb{R}^{n+k}\rightarrow \mathbb{R}^n$ the projection onto the last $n$ coordinates, and $\sigma:N\rightarrow M$ a homeomorphism. What I want to show is that $\{(V_\alpha,\mu_\alpha)\}$ is indeed a smooth structure. In the proof https://www.mathi.uni-heidelberg.de/~lee/StephanSS16.pdf, I don't understand the reasoning, so I decided to make my own version and would like to know if there are any mistakes in it. So here's what I do: I suppose $V_\alpha\cap V_\beta \neq \varnothing$ and so need to show that $\mu_\alpha\circ\mu_\beta^{-1}:\mu_\beta(V_\alpha\cap V_\beta)\rightarrow \mu_\alpha(V_\alpha\cap V_\beta)$ is smooth. $$\mu_\alpha\circ\mu_\beta^{-1} = \pi_2\circ\varphi_\alpha\circ\sigma\circ\sigma^{-1}\circ\varphi_\beta^{-1}\circ\pi_2^{-1} = \pi_2\circ\varphi_\alpha\circ\varphi_\beta^{-1}\circ\pi_2^{-1}$$ but $\varphi_\alpha\circ\varphi_\beta^{-1}$ is smooth and $\pi_2$ and $\pi_2^{-1}$ are also smooth, so the whole thing is smooth.
For a rational number $p > 1$, we know that the function $z^p$ is holomorphic on $\mathbb{C} \setminus \mathbb{R}^-$ (excluding $z = 0$). Is there an analytic continuation of the function $z^p$ at zero? Thank you. If $p\in\mathbb N$, then yes, obviously. Otherwise, the answer is negative. So, you have $p=\frac mn$, with $m,n\in\mathbb N$, $n>1$ and $\gcd(m,n)=1$. Suppose that there was an analytic continuation to a larger set containing $0$. Let $f$ be that continuation. Then $f(z)^n=z^m$. Therefore, $f(0)=0$. Let $a_1z+a_2z^2+\cdots$ be the Taylor series of $f$ at $0$ (there is no $a_0$ here, since $a_0=f(0)=0$). Then $$(a_1z+a_2z^2+\cdots)^n=z^m.$$ But if you expand $(a_1z+a_2z^2+\cdots)^n$, you get a power series which begins with ${a_1}^nz^n$, and therefore $n=m$. This cannot happen, since $n>1$ and $\gcd(m,n)=1$.
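Numerically, the discontinuity of the principal branch across the cut is easy to exhibit, consistent with the algebraic argument above (here with p = 3/2):

```python
p = 1.5          # a rational p > 1 that is not an integer (p = 3/2)
eps = 1e-9

above = (-1 + eps * 1j) ** p   # approach z = -1 from the upper half-plane
below = (-1 - eps * 1j) ** p   # approach z = -1 from the lower half-plane

# The principal branch jumps across the negative real axis:
# above is close to -i, below is close to +i
print(above, below, abs(above - below))
```

This only shows that the principal branch cannot be extended across the cut; the power-series argument above is what rules out any analytic continuation at $0$.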
The Question: The temperature $u(x,t)$ in a semi-infinite conductor occupying $x \in [0,\infty)$ satisfies the equation $$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} \qquad x,t>0$$ The temperature is $0$ at $t=0$, i.e. $u(x,0)=0$. For $t>0$, heat is supplied at $x=0$ at the constant flux $$Q=-ku_x(0,t) \qquad k,Q \; \text{are constants}$$ By applying the Laplace Transform in the $t$ direction, find $u(x,t)$ at $x=0$ for $t>0$. My Attempt: I transformed the PDE, using the fact that $u(x,0)=0$: $$p\hat u(x,p) = \frac{\partial ^2 \hat u}{\partial x^2}(x,p) \qquad p>0$$ and found the general solution: $$\hat u(x,p) = A(p)e^{\sqrt p x}+B(p)e^{-\sqrt px}$$ Next, I transformed the boundary condition (the Laplace transform of the constant $Q$ is $Q/p$): $$\frac{\partial \hat u}{\partial x}(0,p) = -\frac{Q}{kp}$$ BUT THE PROBLEM IS that there is only one boundary condition. Even if the question only asks for me to determine $u(x,t)$ at $x=0$, I still need one more boundary condition to determine both $A(p)$ and $B(p)$. Am I misunderstanding something?
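The general solution of the transformed ODE can be checked symbolically; a short sympy verification that $\hat u = A e^{\sqrt p x} + B e^{-\sqrt p x}$ satisfies $p\hat u = \hat u_{xx}$:

```python
import sympy as sp

x = sp.symbols("x", positive=True)
p = sp.symbols("p", positive=True)
A, B = sp.symbols("A B")

u_hat = A * sp.exp(sp.sqrt(p) * x) + B * sp.exp(-sp.sqrt(p) * x)

# Transformed PDE (using u(x,0) = 0): p * u_hat = u_hat_xx
residual = sp.simplify(sp.diff(u_hat, x, 2) - p * u_hat)
print(residual)
```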
The format of the 3 dimensional MB distribution is $A \cdot e^{-\frac{E}{k_BT}} \cdot g(E)$, in which $A$ can be derived using normalization (integration up to $\infty$ must be 1) and $g(E)$ being the degeneracy according to $g(E)=\frac{V\pi \cdot 2^{2.5}m^{1.5}}{h^3}\sqrt{E}$ The 3 dimensional average kinetic energy $\bar E$ of a particle system can then be calculated by multiplying this MB distribution with $E$ and integrating it over infinity, which yields: $$\bar E = \int_0^{\infty} \frac{2}{\sqrt \pi} \cdot (\frac{1}{k_BT})^{\frac{3}{2}} \cdot e^{-\frac{E}{k_BT}} \cdot \sqrt{E} \cdot E \cdot dE = \frac{3}{2}k_BT$$ The format for the 1 dimensional MB distribution (e.g. the x-coordinate) is $A\cdot e^{-\frac{E_x}{k_BT}}$ where $A$ is derived by normalizing the integration to 1, which gives $A= \frac{1}{k_BT}$ When calculating the 1 dimensional average energy $\bar E_x$, this MB distribution should also be multiplied by the energy $E_x$ and integrated up to $\infty$, which gives:$$\bar E_x=\int^{\infty}_0 \frac{1}{k_BT}\cdot e^{-\frac{E_x}{k_BT}}\cdot E_x\cdot dE_x = k_BT$$But this should be $\frac{1}{2}k_BT$ instead. The peculiar thing is that when writing $E_x$ in terms of $\frac{1}{2}mv_x^2$ within the formula $A\cdot e^{-\frac{E_x}{k_BT}}$, normalizing $A$ to that, multiplying the formula with $\frac{1}{2}mv_x^2$ and integrating it up to $\infty$, then one would indeed get $\frac{1}{2}k_BT$.$$\int^{\infty}_0\frac{\sqrt{2m}}{\sqrt{\pi k_BT}}\cdot e^{-\frac{mv_x^2}{2k_BT}}\cdot \frac{1}{2}mv_x^2 \cdot dv_x=\frac{1}{2}k_BT$$But it wasn't necessary for the 3 dimensional MB distribution to write the format down in terms of $v$ to get the correct average kinetic energy. Why does the 1 dimensional MB distribution in terms of $E_x$ give an incorrect average energy and how would one realise that this is the wrong way to do it?
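Both averages computed in the question can be reproduced numerically, working in units where $k_BT = 1$: the 3D integral gives $3/2$, while the 1D integral, exactly as written, indeed gives $1$ rather than $1/2$. A Python sketch with a simple midpoint rule:

```python
import math

def integrate(f, a, b, n=200000):
    # Midpoint rule on a truncated domain; adequate here since the
    # integrands decay like exp(-E) well before E = 50
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Work in units where k_B * T = 1

# 3D average energy: integrand (2/sqrt(pi)) * sqrt(E) * E * exp(-E)
mean_E_3d = integrate(
    lambda E: (2 / math.sqrt(math.pi)) * math.sqrt(E) * E * math.exp(-E), 0.0, 50.0)

# 1D computation exactly as written in the question: integrand E * exp(-E)
mean_E_1d = integrate(lambda E: E * math.exp(-E), 0.0, 50.0)

print(mean_E_3d, mean_E_1d)
```

This confirms the arithmetic in the question; the resolution of the apparent paradox is what the question itself asks about.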
There are several different relations between Chern-Simons/WZW models, and there are several ways to show these. A nice paper doing this in a concrete way is Elitzur et al., Nucl. Phys. B326 (1989) 108. The Chern-Simons theory on a compact spatial manifold gives rise to a finite dimensional Hilbert space (only global degrees of freedom), which turns out to be isomorphic to the space of conformal blocks of a WZW model (which is also finite dimensional, since there are a finite number of WZW primaries under the associated affine Lie algebra). If you however put the theory on a manifold with boundary, there will be local degrees of freedom near the boundary and the Hilbert space is infinite dimensional (the dynamics of the boundary degrees of freedom are controlled by a WZW model). Let me go through a simple example of the latter type; you can fill in the detailed calculations. The action is given by $$ S[a] = \frac k{4\pi}\int_\mathcal M\text{tr}\left(a\wedge\text d a + \frac 23 a\wedge a\wedge a\right).$$ One can show that for $k\in\mathbb Z$ and the boundary condition $a_0\big|_{\partial\mathcal M} = 0$, $e^{iS[a]}$ is gauge invariant and the equations of motion are well defined. Next we need to fix the gauge appropriately; let us now assume our three-manifold has the following simple form $\mathcal M = \mathbb R\times\Sigma$. Make a temporal decomposition $\text d = \partial_0\text dx^0 + \tilde{\text d}$, where $\tilde{\text d} = \partial_i\text dx^i$, and $a = \tilde a_0 + \tilde a$, where $\tilde a_0 = a_0\text dx^0$ and $\tilde a = a_i\text dx^i$ $(i=1,2)$. With this decomposition we get the following action $$S[a] = -\frac k{4\pi}\int_\mathcal M\text{tr}\left(\tilde a\wedge\partial_0\tilde a\right)\wedge\text dx^0 + \frac k{2\pi}\int_\mathcal M\text{tr}\left(\tilde a_0\wedge \tilde f\right),$$ where $\tilde f = \tilde{\text d}\tilde a +\tilde a\wedge\tilde a$.
It is clear that $\tilde a_0$ is just a Lagrange multiplier and we fix the gauge as $a_0 = 0$ (everywhere, not just on the boundary). Alternatively, integrate out $a_0$ and we get $\delta(\tilde f)$ in the path integral. We therefore have the following action and constraint $$S[\tilde a, \tilde a_0=0] = -\frac k{4\pi}\int_\mathcal M\text{tr}\left(\tilde a\wedge\partial_0\tilde a\right)\wedge\text dx^0, \qquad \tilde f = \tilde{\text d}\tilde a +\tilde a\wedge\tilde a=0.$$ Thus the phase space of the theory is the moduli space of flat connections on $\Sigma$. Whether the phase space has finite or infinite volume depends on whether $\Sigma$ has a boundary or not. For simplicity, let us restrict to the simple manifold $\mathcal M = \mathbb R\times D^2$, where $\Sigma=D^2$ is the $2$-disc. Since $\pi_1(\mathcal M)=0$, there are no non-trivial Wilson loops/holonomies (since the Wilson loop only depends on the homotopy class of a curve, for flat connections) and there are thereby no topological degrees of freedom. In this case we can solve the flat connection constraint $\tilde f = 0$ by letting the gauge field be pure gauge $$\tilde a = -\tilde{\text d}UU^{-1},$$ where $U:\mathcal M\rightarrow G$ is a single-valued, group-valued function. Here, $U$ parametrizes the local degrees of freedom (of the Chern-Simons theory) modulo gauge redundancies. The action that determines the dynamics of $U$ is found by substituting $\tilde a$ in the above action.
Using the coordinates $(t,r,\theta)$, we find \begin{align*} S_{CWZW}&[U] = S\left[\tilde a=-\tilde{\text d} UU^{-1},\tilde a_0=0\right],\\ &= \frac k{4\pi}\int_{\partial\mathcal M}\text{tr}\left(\partial_\theta U^{-1}\partial_tU\right)\text d^2x + \frac k{12\pi}\int_{\mathcal M}\text{tr}\left([\text d UU^{-1}]^3\right),\\ &= \frac k{4\pi}\int_{\partial\mathcal M}\text{tr}\left(\partial_\theta U^{-1}\partial_tU\right)\text d^2x + \frac k{12\pi}\int_{\mathcal M}\text{tr}\left(\epsilon^{\mu\nu\rho}\partial_\mu UU^{-1}\partial_\nu UU^{-1}\partial_\rho UU^{-1}\right)\text d^3 x.\end{align*} Formally, one also has to check that the path integral does not come with any Jacobian $$\int\mathcal D\tilde a\,\delta(\tilde f) = \int\mathcal DU,$$ where $\mathcal DU$ comes from the Haar measure of $G$. This shows what you were looking for: the partition function of the Chern-Simons theory is determined by a (chiral) WZW model on the boundary. For more general $\mathcal M$, one can do a similar calculation with a few extra elements. See the above reference for details. There are of course other ways to show this relation. One can for example show that the Dirac bracket on the phase space (moduli space of flat connections) reduces to the affine Lie algebra $\hat{\mathfrak g}_k$ (which is the chiral algebra of the WZW model). There are also approaches where functional equations for the wave functional are derived. One can also use canonical quantization as Witten does, exploiting the fact that the moduli space of flat connections (modulo gauge transformations) is a Kähler manifold and the symplectic form represents the first Chern class of a holomorphic line bundle. This last approach is more abstract and less direct than the one taken above.
How do I solve this trig equation? $$\cos 2x - \sin 2x = \sqrt{3} \cos 4x$$ I have tried in different ways, but I can't get to a final answer. Please help. My work is: $$\cos 2x - \sin 2x = \sqrt{3}(\cos 2x - \sin 2x)(\cos 2x + \sin 2x)$$ $$\Leftrightarrow (\cos 2x - \sin 2x)\left(1-\sqrt{3}(\cos 2x + \sin 2x)\right)= 0$$ Then, $$\tan 2x = 1 \ \ \ \text{or} \ \ \ \cos 2x + \sin 2x = \dfrac1{\sqrt{3}}$$ I don't know whether this way is correct. Can someone please help with a better solution?
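The first family of solutions, tan 2x = 1, and a member of the second family can be checked numerically in Python:

```python
import math

def lhs(x):
    return math.cos(2 * x) - math.sin(2 * x)

def rhs(x):
    return math.sqrt(3) * math.cos(4 * x)

# First family: tan(2x) = 1, i.e. x = pi/8 + n*pi/2
for n in range(4):
    x = math.pi / 8 + n * math.pi / 2
    assert abs(lhs(x) - rhs(x)) < 1e-12

# Second family: cos(2x) + sin(2x) = 1/sqrt(3),
# i.e. sqrt(2)*sin(2x + pi/4) = 1/sqrt(3)
x2 = (math.asin(1 / math.sqrt(6)) - math.pi / 4) / 2
assert abs(math.cos(2 * x2) + math.sin(2 * x2) - 1 / math.sqrt(3)) < 1e-12
assert abs(lhs(x2) - rhs(x2)) < 1e-12
print("both solution families check out")
```

This confirms the factorization is correct: whenever cos 2x + sin 2x = 1/√3, the right-hand side equals √3 · (cos 2x − sin 2x) · (1/√3), i.e. the left-hand side.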
Maximum likelihood estimates of a distribution Maximum likelihood estimation (MLE) is a method to estimate the parameters of a random population given a sample. I described what this population means and its relationship to the sample in a previous post. Before we can look into MLE, we first need to understand the difference between probability and probability density for continuous variables. Probability density can be seen as a measure of relative probability, that is, values located in areas of higher probability will have higher probability density. More precisely, probability is the integral of probability density over a range. For example, the classic “bell-shaped” curve associated to the Normal distribution is a measure of probability density, whereas probability corresponds to the area under the curve for a given range of values: If we assign a statistical model to the random population, any particular value (let’s call it \(x_i\)) sampled from the population will have a probability density according to the model (let’s call it \(f(x_i)\)). If we then assume that all the values in our sample are statistically independent (i.e. the probability of sampling a particular value does not depend on the rest of the values already sampled), then the likelihood of observing the whole sample (let’s call it \(L(x)\)) is defined as the product of the probability densities of the individual values (i.e. \(L(x) = \prod_{i=1}^{i=n}f(x_i)\) where \(n\) is the size of the sample). For example, if we assume that the data were sampled from a Normal distribution, the likelihood is defined as: \[ L(x) = \prod_{i=1}^{i=n}\frac{1}{\sqrt{2 \pi \sigma^2}}e^{-\frac{\left(x_i - \mu \right)^2}{2\sigma^2}} \] Note that \(L(x)\) does not depend on \(x\) only, but also on \(\mu\) and \(\sigma\), that is, the parameters in the statistical model describing the random population. The idea behind MLE is to find the values of the parameters in the statistical model that maximize \(L(x)\).
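The statement that probability is the integral of probability density over a range can be illustrated numerically for the standard Normal; a Python sketch using a midpoint rule (the post itself works in R, so this is just a language-agnostic check):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

def prob_between(a, b, n=10000):
    # Probability = area under the density between a and b (midpoint rule)
    h = (b - a) / n
    return sum(normal_pdf(a + (i + 0.5) * h) for i in range(n)) * h

print(prob_between(-1.0, 1.0))
```

The value for (−1, 1) is about 0.683, the familiar "one sigma" probability of the Normal distribution.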
In other words, it calculates the random population that is most likely to generate the observed data, while being constrained to a particular type of distribution. One complication of the MLE method is that, as probability densities are often smaller than 1, the value of \(L(x)\) can become very small as the sample size grows. For example, the likelihood of 100 values sampled from a standard Normal distribution is very small:

set.seed(2019)
sample = rnorm(100)
prod(dnorm(sample))

## [1] 2.23626e-58

When the variance of the distribution is small, it is also possible to have probability densities higher than one. In this case, the likelihood function will grow to very large values. For example, for a Normal distribution with standard deviation of 0.1 we get:

sample_large = rnorm(100, sd = 0.1)
prod(dnorm(sample_large, sd = 0.1))

## [1] 2.741535e+38

The reason why this is a problem is that computers have a limited capacity to store the digits of a number, so they cannot store very large or very small numbers. If you repeat the code above but using sample sizes of, say, 1000, you will get 0 or Inf instead of the actual values, because your computer will just give up. Although it is possible to increase the number of digits to be stored per number, this does not really solve the problem, as it will eventually come back with larger samples. Furthermore, in most cases we will need to use numerical optimization algorithms (see below), which will make the problem even worse. Therefore, we cannot work directly with the likelihood function. One trick is to use the natural logarithm of the likelihood function instead (\(log(L(x))\)). A nice property is that the logarithm of a product of values is the sum of the logarithms of those values, that is: \[ \text{log}(L(x)) = \sum_{i=1}^{i=n}\text{log}(f(x_i)) \] Also, the values of the log-likelihood stay within a manageable range, and the maximum occurs for the same parameter values as for the likelihood.
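The same underflow can be reproduced in Python (using a larger sample than above so that the raw product actually reaches zero in double precision, while the log-likelihood remains perfectly finite):

```python
import math, random

random.seed(2019)

def normal_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

# 2000 points (rather than 100) so that the raw product underflows
sample = [random.gauss(0.0, 1.0) for _ in range(2000)]

likelihood = 1.0
for x in sample:
    likelihood *= normal_pdf(x)    # collapses to 0.0 in double precision

log_lik = sum(math.log(normal_pdf(x)) for x in sample)   # stays representable

print(likelihood, log_lik)
```

The product silently becomes exactly 0.0, while the log-likelihood is an unremarkable negative number of a few thousand, which is precisely why all practical MLE code works on the log scale.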
For example, the likelihood of the first sample generated above, as a function of \(\mu\) (fixing \(\sigma\)) is:

whereas for the log-likelihood it becomes:

Although the shapes of the curves are different, the maximum occurs for the same value of \(\mu\). Note that there is nothing special about the natural logarithm: we could have taken the logarithm with base 10 or any other base. But it is customary to use the natural logarithm because some important probability density functions are exponential functions (e.g. the Normal distribution, see above), so taking the natural logarithm makes mathematical analyses easier.

You may have noticed that the optimal value of \(\mu\) was not exactly 0, even though the data were generated from a Normal distribution with \(\mu = 0\). This is the reason why it is called a maximum likelihood estimate. The source of such deviation is that the sample is not a perfect representation of the population, precisely because of the randomness in the sampling procedure. A nice property of MLE is that, generally, the estimator converges asymptotically to the true value in the population (i.e. as the sample size grows, the difference between the estimate and the true value decreases).

The final technical detail you need to know is that, except for trivial models, the MLE method cannot be applied analytically. One option is to try a sequence of values and look for the one that yields the maximum log-likelihood (this is known as the grid approach, and it is what I did above). However, if there are many parameters to be estimated, this approach is too inefficient. For example, if we only try 20 values per parameter and we have 5 parameters, we will need to test 3.2 million combinations. Instead, the MLE method is generally applied using algorithms known as non-linear optimizers.
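The grid approach can be sketched in a few lines (Python with NumPy shown here for illustration; the data and names are my own, not the post's):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=0.3, scale=1.0, size=500)  # sample with true mu = 0.3

# Log-likelihood of the sample for a candidate mu (sigma fixed at 1)
def log_lik(mu):
    return np.sum(-0.5 * (data - mu)**2 - 0.5 * np.log(2 * np.pi))

grid = np.linspace(-1, 1, 201)  # 201 candidate values, spacing 0.01
best_mu = grid[np.argmax([log_lik(m) for m in grid])]
```

For a Normal with known \(\sigma\) the log-likelihood peaks at the sample mean, so the scan lands on the grid point closest to `data.mean()`; the accuracy is limited by the grid spacing, and with 20 candidate values for each of 5 parameters the same scan would need 3.2 million evaluations, which is exactly why non-linear optimizers are used instead.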
You can feed these algorithms any function that takes numbers as inputs and returns a number as output, and they will calculate the input values that minimize or maximize the output. It really does not matter how complex or simple the function is, as they will treat it as a black box. By convention, non-linear optimizers minimize the function and, in some cases, we do not have the option to tell them to maximize it. Therefore, the convention is to minimize the negative log-likelihood (NLL).

Enough with the theory. Let's estimate the values of \(\mu\) and \(\sigma\) from the first sample we generated above. First, we need to create a function to calculate the NLL. It is good practice to follow some template for generating these functions. An NLL function should take two inputs: (i) a vector of parameter values that the optimization algorithm wants to test (pars) and (ii) the data for which the NLL is calculated. For the problem of estimating \(\mu\) and \(\sigma\), the function looks like this:

NLL = function(pars, data) {
  # Extract parameters from the vector
  mu = pars[1]
  sigma = pars[2]
  # Calculate negative log-likelihood
  -sum(dnorm(x = data, mean = mu, sd = sigma, log = TRUE))
}

The function dnorm returns the probability density of the data assuming a Normal distribution with a given mean and standard deviation (mean and sd). The argument log = TRUE tells R to calculate the logarithm of the probability density. Then we just need to add up all these values (that yields the log-likelihood, as shown before) and switch the sign to get the NLL.

We can now minimize the NLL using the function optim. This function needs the initial values for each parameter (par), the function calculating the NLL (fn) and arguments that will be passed to the objective function (in our example, that will be data). We can also tune some settings with the control argument. I recommend setting parscale to the absolute value of the initial values (assuming none of the initial values are 0).
This setting determines the scale of the values you expect for each parameter and it helps the algorithm find the right solution. The optim function will return an object that holds all the relevant information and, to extract the optimal values for the parameters, you need to access the field par:

mle = optim(par = c(mu = 0.2, sigma = 1.5), fn = NLL, data = sample,
            control = list(parscale = c(mu = 0.2, sigma = 1.5)))
mle$par

##          mu       sigma 
## -0.07332745  0.90086176

It turns out that this problem has an analytical solution, such that the MLE values for \(\mu\) and \(\sigma\) from the Normal distribution can also be calculated directly as:

c(mu = mean(sample), sigma = sd(sample))

##         mu      sigma 
## -0.0733340  0.9054535

There is always a bit of numerical error when using optim, but it did find values that were very close to the analytical ones. (Part of the small difference in \(\sigma\) is not numerical error: sd divides by \(n - 1\), whereas the maximum likelihood estimate divides by \(n\); multiplying 0.9054535 by \(\sqrt{99/100}\) gives 0.9009, essentially the value found by optim.) Take into account that many MLE problems (like the one in the section below) cannot be solved analytically, so in general you will need to use numerical optimization.

MLE applied to a scientific model

In this case, we have a scientific model describing a particular phenomenon and we want to estimate the parameters of this model from data using the MLE method. As an example, we will use a growth curve typical in plant ecology. Let's imagine that we have made a series of visits to a crop field during its growing season. At every visit, we record the days since the crop was sown and the fraction of ground area that is covered by the plants. This is known as ground cover (\(G\)) and it can vary from 0 (no plants present) to 1 (field completely covered by plants). An example of such data would be the following (the data belong to my colleague Ali El-Hakeem):

data = data.frame(t = c(0, 16, 22, 29, 36, 58),
                  G = c(0, 0.12, 0.32, 0.6, 0.79, 1))
plot(data, las = 1, xlab = "Days after sowing", ylab = "Ground cover")

Our first intuition would be to use the classic logistic growth function (see here) to describe this data.
However, this function does not guarantee that \(G\) is 0 at \(t = 0\). Therefore, we will use a modified version of the logistic function that guarantees \(G = 0\) at \(t = 0\) (I skip the derivation):

\[ G = \frac{\Delta G}{1 + e^{-k \left(t - t_{h} \right)}} - G_{o} \]

where \(k\) is a parameter that determines the shape of the curve, \(t_{h}\) is the time at which \(G\) is equal to half of its maximum value, and \(\Delta G\) and \(G_o\) are parameters that ensure \(G = 0\) at \(t = 0\) and that \(G\) reaches a maximum value of \(G_{max}\) asymptotically. The values of \(\Delta G\) and \(G_o\) can be calculated as:

\[ \begin{align} G_o &= \frac{\Delta G}{1 + e^{k \cdot t_{h}}} \\ \Delta G &= \frac{G_{max}}{1 - 1/\left(1 + e^{k \cdot t_h}\right)} \end{align} \]

(You can check that these definitions give \(G(0) = G_o - G_o = 0\) and, for \(k > 0\), \(G \to \Delta G - G_o = G_{max}\) as \(t \to \infty\).) Note that the new function still depends on only 3 parameters: \(G_{max}\), \(t_h\) and \(k\). The R implementation as a function is straightforward:

G = function(pars, t) {
  # Extract parameters of the model
  Gmax = pars[1]
  k = pars[2]
  th = pars[3]
  # Prediction of the model
  DG = Gmax/(1 - 1/(1 + exp(k*th)))
  Go = DG/(1 + exp(k*th))
  DG/(1 + exp(-k*(t - th))) - Go
}

Note that rather than passing the 3 parameters of the curve as separate arguments, I packed them into a vector called pars. This follows the same template as for the NLL function described above. Non-linear optimization algorithms always require some initial values for the parameters being optimized. For simple models such as this one, we can just try out different values and plot them on top of the data. For this model, \(G_{max}\) is very easy to estimate as you can just read it from the data. \(t_h\) is a bit more difficult, but you can eyeball it by checking where \(G\) is around half of \(G_{max}\). Finally, the \(k\) parameter has no intuitive interpretation, so you just need to try a couple of values until the curve looks reasonable.
This is what I got after a couple of tries:

plot(data, las = 1, xlab = "Days after sowing", ylab = "Ground cover")
curve(G(c(Gmax = 1, k = 0.15, th = 30), x), 0, 60, add = TRUE)

If we want to estimate the values of \(G_{max}\), \(k\) and \(t_h\) according to the MLE method, we need to construct a function in R that calculates the NLL given a statistical model and a choice of parameter values. This means that we need to decide on a distribution to represent deviations between the model and the data. The canonical way to do this is to assume a Normal distribution, where \(\mu\) is computed by the scientific model of interest, letting \(\sigma\) represent the degree of scatter of the data around the mean trend. To keep things simple, I will follow this approach now (but take a look at the final remarks at the end of the article). The NLL function looks similar to the one before, but now the mean is set to the predictions of the model:

NLL = function(pars, data) {
  # Values predicted by the model
  Gpred = G(pars, data$t)
  # Negative log-likelihood
  -sum(dnorm(x = data$G, mean = Gpred, sd = pars[4], log = TRUE))
}

We can now calculate the optimal values using optim and the "eyeballed" initial values (of course, we also need an initial estimate for \(\sigma\)):

par0 = c(Gmax = 1.0, k = 0.15, th = 30, sd = 0.01)
fit = optim(par = par0, fn = NLL, data = data,
            control = list(parscale = abs(par0)), hessian = TRUE)
fit$par

##       Gmax          k         th         sd 
## 0.99926603 0.15879585 26.70700004 0.01482376

Notice that eyeballing the initial values already got us pretty close to the optimal solution. Of course, for complicated models your initial estimates will not be as good, but it always pays off to play around with the model before going into optimization.
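For readers who do not use R, the same fit can be sketched with SciPy's general-purpose minimizer (a translation under my own assumptions: the function names, the choice of Nelder-Mead and the initial sd of 0.05 are mine, not the post's):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Observed days after sowing and ground cover (same data as above)
t = np.array([0.0, 16.0, 22.0, 29.0, 36.0, 58.0])
G_obs = np.array([0.0, 0.12, 0.32, 0.6, 0.79, 1.0])

def G(pars, t):
    # Same modified logistic curve as the R function G above
    Gmax, k, th = pars
    DG = Gmax / (1 - 1 / (1 + np.exp(k * th)))
    Go = DG / (1 + np.exp(k * th))
    return DG / (1 + np.exp(-k * (t - th))) - Go

def nll(pars):
    # Normal NLL with the model prediction as the mean
    sd = pars[3]
    if sd <= 0:
        return np.inf  # guard: the optimizer may propose invalid sd values
    return -np.sum(norm.logpdf(G_obs, loc=G(pars[:3], t), scale=sd))

par0 = np.array([1.0, 0.15, 30.0, 0.05])  # eyeballed initial values
fit = minimize(nll, par0, method="Nelder-Mead")
```

The guard for non-positive sd matters in practice: a derivative-free simplex step can wander into invalid parameter values, and returning infinity pushes it back. The fitted curve should track the observations closely, as in the optim results reported above.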
Finally, we can compare the predictions of the model with the data:

plot(data, las = 1, xlab = "Days after sowing", ylab = "Ground cover")
curve(G(fit$par, x), 0, 60, add = TRUE)

Final remarks

The model above could have been fitted using the method of ordinary least squares (OLS) with the R function nls. Actually, unless something went wrong in the optimization, you should obtain the same results as with the method described here. The reason is that OLS is equivalent to MLE with a Normal distribution and constant standard deviation. However, I believe it is worthwhile to learn MLE because:

You do not have to restrict yourself to the Normal distribution. In some cases (e.g. when modelling count data) it does not make sense to assume a Normal distribution. Actually, in the ground cover model, since the values of \(G\) are constrained to be between 0 and 1, it would have been more correct to use another distribution, such as the Beta distribution (however, for this particular data you will get very similar results, so I decided to keep things simple and familiar).

You do not have to restrict yourself to modelling the mean of the distribution only. For example, if you have reason to believe that errors do not have a constant variance, you can also model the \(\sigma\) parameter of the Normal distribution. That is, you can model any parameter of any distribution.

If you understand MLE then it becomes much easier to understand more advanced methods such as penalized likelihood (aka regularized regression) and Bayesian approaches, as these are also based on the concept of likelihood.

You can combine the NLL of multiple datasets inside the NLL function, whereas in ordinary least squares, if you want to combine data from different experiments, you have to correct for differences in scales or units of measurement and for differences in the magnitude of the errors your model makes for different datasets.
Many methods of model selection (so-called information criteria such as AIC) are based on MLE.

Using a function to compute the NLL allows you to work with any model (as long as you can calculate a probability density) and dataset, but I am not sure this is possible or convenient with the formula interface of nls (e.g. combining multiple datasets is not easy when using a formula interface). Of course, if none of the above applies to your case, you may just use nls. But at least now you understand what is happening behind the scenes. In future posts I will discuss some of the special cases I gave in this list. Stay tuned!
Interesting question. The "third law" of thermodynamics actually states that

The entropy of a perfect crystal at absolute zero is exactly zero.

Since many systems in nature crystallize at some point when their temperature is lowered, this statement is often misinterpreted as:

The entropy of a physical system at absolute zero is exactly zero. (wrong!)

Entropy is defined as

$$S=k_B \log(\Omega)$$

where $\Omega$ is the number of microstates of the system. Since there is only one microstate corresponding to a perfect crystalline configuration, in the case of a crystal we have $\Omega=1$, and therefore $S=0$. But there are systems with a degenerate ground state, i.e. many different states of lowest energy, and also systems (like glasses) that get "stuck" and cannot reach their real ground state. Those systems won't satisfy $\Omega=1$ at $T=0$, and therefore their entropy will be different from $0$. You can find more about the correct interpretation of the "third law" here and here.

Liquid helium is a unique case: it is the only element which remains liquid at atmospheric pressure down to $T=0$ and does not crystallize. Its phase diagram is shown in the picture below:

Quoting from Wikipedia:

Below the $\lambda$-line the liquid can be described by the so-called two-fluid model. It behaves as if it consists of two components: a normal component, which behaves like a normal fluid, and a superfluid component with zero viscosity and zero entropy. [...] By lowering the temperature, the fraction of the superfluid density increases from zero at $T_\lambda$ to one at zero kelvin. Below 1 K the helium is almost completely superfluid.

So, at $T=0$ $^4$He is completely superfluid and it has zero entropy. Why is this? Why is the entropy of a liquid $0$? Well, it's because liquid helium is no ordinary liquid. In the superfluid state, all the atoms are in the same quantum state. Moreover, they are identical.
This means that there is in fact only one ground state, because if we exchange two atoms nothing changes. Therefore, the number of microstates is $\Omega=1$ and $$S=0$$
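For contrast, here is a standard back-of-the-envelope illustration of the degenerate case mentioned earlier (my example, not part of the original answer): if each of $N$ frozen-in units can independently occupy one of two lowest-energy states, then

$$\Omega = 2^N \quad\Rightarrow\quad S = k_B \log\left(2^N\right) = N k_B \log 2,$$

a residual entropy that grows with system size instead of vanishing at $T=0$.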
Quadratic equation

An algebraic equation of the second degree. The general form of a quadratic equation is \begin{equation}\label{eq:1} ax^2+bx+c=0,\quad a\ne0. \end{equation} In the field of complex numbers a quadratic equation has two solutions, expressed by radicals in the coefficients of the equation: \begin{equation}\label{eq:2} x_{1,2} = \frac{-b \pm\sqrt{b^2-4ac}}{2a}. \end{equation} When $b^2>4ac$ both solutions are real and distinct, when $b^2<4ac$, they are complex (complex-conjugate) numbers, when $b^2=4ac$ the equation has the double root $x_1=x_2=-b/2a$. For the reduced quadratic equation \begin{equation} x^2+px+q=0 \end{equation} formula \eqref{eq:2} has the form \begin{equation} x_{1,2}=-\frac{p}{2}\pm\sqrt{\frac{p^2}{4}-q}. \end{equation} The roots and coefficients of a quadratic equation are related by (cf. Viète theorem): \begin{equation} x_1+x_2=-\frac{b}{a},\quad x_1x_2=\frac{c}{a}. \end{equation} The expression $b^2-4ac$ is called the discriminant of the equation. It is easily proved that $b^2-4ac=(x_1-x_2)^2$, in accordance with the fact mentioned above that the equation has a double root if and only if $b^2=4ac$. Formula \eqref{eq:2} holds also if the coefficients belong to a field with characteristic different from $2$. Formula \eqref{eq:2} follows from writing the left-hand side of the equation as $a(x+b/2a)^2+(c-b^2/4a)$ (splitting of the square).

References

[a1] K. Rektorys (ed.), Applicable mathematics, Iliffe (1969) Sect. 1.20

Comments

Over a field of characteristic 2 (cf. Characteristic of a field), the solution by completing the square is no longer available. Instead, by a change of variable, the equation may be written either as $$X^2 + c = 0$$ or in Artin--Schreier form $$X^2 + X + c = 0 \ .$$ In the first case, the equation has a double root $c^{1/2}$. In the Artin--Schreier case, the map $A:X \mapsto X^2+X$ is two-to-one, since $A(X+1) = A(X)$. If $\alpha$ is a root of the equation, so is $\alpha+1$. See Artin-Schreier theorem.
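Returning to characteristic $\ne 2$, the splitting of the square mentioned above yields formula \eqref{eq:2} in two short steps:

\begin{equation} a\left(x+\frac{b}{2a}\right)^2 = \frac{b^2-4ac}{4a} \quad\Longleftrightarrow\quad \left(x+\frac{b}{2a}\right)^2 = \frac{b^2-4ac}{4a^2}, \end{equation}

so that $x+\dfrac{b}{2a} = \pm\dfrac{\sqrt{b^2-4ac}}{2a}$, which is formula \eqref{eq:2}.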
References

[a1] R. Lidl, H. Niederreiter, "Finite fields", Addison-Wesley (1983); second edition Cambridge University Press (1996) Zbl 0866.11069

How to Cite This Entry: Quadratic equation. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Quadratic_equation&oldid=35677
Algebraic Geometry Seminar Fall 2016

The seminar meets on Fridays at 2:25 pm in Van Vleck B305. Here is the schedule for the previous semester.

Algebraic Geometry Mailing List

Please join the AGS Mailing List to hear about upcoming seminars, lunches, and other algebraic geometry events in the department (it is possible you must be on a math department computer to use this link).

Fall 2016 Schedule

September 16: Alexander Pavlov (Wisconsin), "Betti Tables of MCM Modules over the Cones of Plane Cubics" (host: local)
September 23: PhilSang Yoo (Northwestern), "Classical Field Theories for Quantum Geometric Langlands" (host: Dima)
October 7: Botong Wang (Wisconsin), "Enumeration of points, lines, planes, etc." (host: local)
October 14: Luke Oeding (Auburn), "Border ranks of monomials" (host: Steven)
October 28: Adam Boocher (Utah), "Bounds for Betti Numbers of Graded Algebras" (host: Daniel)
November 4: Lukas Katthaen, "Finding binomials in polynomial ideals" (host: Daniel)
November 11: Daniel Litt (Columbia), TBA (host: Jordan)
November 18: David Stapleton (Stony Brook), TBA (host: Daniel)
December 2: Rohini Ramadas (Michigan), TBA (hosts: Daniel and Jordan)
December 9: Robert Walker (Michigan), TBA (host: Daniel)

Abstracts

Alexander Pavlov: Betti Tables of MCM Modules over the Cones of Plane Cubics

Graded Betti numbers are classical invariants of finitely generated modules over graded rings describing the shape of a minimal free resolution. We show that for maximal Cohen-Macaulay (MCM) modules over the homogeneous coordinate ring of a smooth Calabi-Yau variety X, the computation of Betti numbers can be reduced to computations of dimensions of certain Hom groups in the bounded derived category D(X). In the simplest case of a smooth elliptic curve embedded into the projective plane as a cubic, we use our formula to get explicit answers for Betti numbers. In this case we show that there are only four possible shapes of the Betti tables up to shifts in internal degree, and two possible shapes up to a shift in internal degree and taking syzygies.
PhilSang Yoo: Classical Field Theories for Quantum Geometric Langlands

One can study a class of classical field theories in a purely algebraic manner, thanks to the recent development of derived symplectic geometry. After reviewing the basics of derived symplectic geometry, I will discuss some interesting examples of classical field theories, including the B-model, Chern-Simons theory, and Kapustin-Witten theory. Time permitting, I will make a proposal to understand quantum geometric Langlands and other related Langlands dualities in a unified way from the perspective of field theory.

Botong Wang: Enumeration of points, lines, planes, etc.

It is a theorem of de Bruijn and Erdős that n points in the plane determine at least n lines, unless all the points lie on a line. This is one of the earliest results in enumerative combinatorial geometry. We will present a higher dimensional generalization of this theorem. Let E be a generating subset of a d-dimensional vector space. Let [math]W_k[/math] be the number of k-dimensional subspaces that are generated by a subset of E. We show that [math]W_k\leq W_{d-k}[/math], when [math]k\leq d/2[/math]. This confirms a "top-heavy" conjecture of Dowling and Wilson from 1974 for all matroids realizable over some field. The main ingredients of the proof are the hard Lefschetz theorem and the decomposition theorem. I will also talk about a proof of Welsh and Mason's log-concave conjecture on the number of k-element independent sets. These are joint works with June Huh.

Luke Oeding: Border ranks of monomials

What is the minimal number of terms needed to write a monomial as a sum of powers? What if you allow limits?
Here are some minimal examples: [math]4xy = (x+y)^2 - (x-y)^2[/math] [math]24xyz = (x+y+z)^3 + (x-y-z)^3 + (-x-y+z)^3 + (-x+y-z)^3[/math] [math]192xyzw = (x+y+z+w)^4 - (-x+y+z+w)^4 - (x-y+z+w)^4 - (x+y-z+w)^4 - (x+y+z-w)^4 + (-x-y+z+w)^4 + (-x+y-z+w)^4 + (-x+y+z-w)^4[/math] The monomial [math]x^2y[/math] has a minimal expression as a sum of 3 cubes: [math]6x^2y = (x+y)^3 + (-x+y)^3 -2y^3[/math] But you can use only 2 cubes if you allow a limit: [math]6x^2y = \lim_{\epsilon \to 0} \frac{(x^3 - (x-\epsilon y)^3)}{\epsilon}[/math] Can you do something similar with xyzw? Previously it wasn't known whether the minimal number of powers in a limiting expression for xyzw was 7 or 8. I will answer this and the analogous question for all monomials. The polynomial Waring problem is to write a polynomial as linear combination of powers of linear forms in the minimal possible way. The minimal number of summands is called the rank of the polynomial. The solution in the case of monomials was given in 2012 by Carlini--Catalisano--Geramita, and independently shortly thereafter by Buczynska--Buczynski--Teitler. In this talk I will address the problem of finding the border rank of each monomial. Upper bounds on border rank were known since Landsberg-Teitler, 2010 and earlier. We use symmetry-enhanced linear algebra to provide polynomial certificates of lower bounds (which agree with the upper bounds). This work builds on the idea of Young flattenings, which were introduced by Landsberg and Ottaviani, and give determinantal equations for secant varieties and provide lower bounds for border ranks of tensors. We find special monomial-optimal Young flattenings that provide the best possible lower bound for all monomials up to degree 6. For degree 7 and higher these flattenings no longer suffice for all monomials. 
To overcome this problem, we introduce partial Young flattenings and use them to give a lower bound on the border rank of monomials which agrees with Landsberg and Teitler's upper bound. I will also show how to implement Young flattenings and partial Young flattenings in Macaulay2 using Steven Sam's PieriMaps package.

Adam Boocher: Bounds for Betti Numbers of Graded Algebras

Let R be a standard graded algebra over a field. The set of graded Betti numbers of R provides some measure of the complexity of the defining equations for R and their syzygies. Recent breakthroughs (e.g. Boij-Soederberg theory, the structure of asymptotic syzygies, Stillman's Conjecture) have provided new insights about these numbers and we have made good progress toward understanding many homological properties of R. However, many basic questions remain. In this talk I'll discuss some conjectured upper and lower bounds for the total Betti numbers for different classes of rings. Surprisingly, little is known in even the simplest cases.
NTS Abstracts Spring 2019

Jan 23: Yunqing Tang, Reductions of abelian surfaces over global function fields

Abstract: For a non-isotrivial ordinary abelian surface $A$ over a global function field, under mild assumptions, we prove that there are infinitely many places modulo which $A$ is geometrically isogenous to the product of two elliptic curves. This result can be viewed as a generalization of a theorem of Chai and Oort. This is joint work with Davesh Maulik and Ananth Shankar.

Jan 24: Hassan-Mao-Smith-Zhu, The diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$

Abstract: Assume a polynomial-time algorithm for factoring integers, Conjecture~\ref{conj}, $d\geq 3$, and $q$ and $p$ prime numbers, where $p\leq q^A$ for some $A>0$. We develop a polynomial-time algorithm in $\log(q)$ that lifts every $\mathbb{Z}/q\mathbb{Z}$ point of $S^{d-2}\subset S^{d}$ to a $\mathbb{Z}[1/p]$ point of $S^d$ with the minimum height. We implement our algorithm for $d=3 \text{ and } 4$. Based on our numerical results, we formulate a conjecture which can be checked in polynomial time and gives the optimal bound on the diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$.

Jan 31: Kyle Pratt, Breaking the $\frac{1}{2}$-barrier for the twisted second moment of Dirichlet $L$-functions

Abstract: I will discuss recent work, joint with Bui, Robles, and Zaharescu, on a moment problem for Dirichlet $L$-functions. By way of motivation I will spend some time discussing the Lindelöf Hypothesis, and work of Bettin, Chandee, and Radziwiłł. The talk will be accessible, as I will give lots of background information and will not dwell on technicalities.

Feb 7: Shamgar Gurevich, Harmonic Analysis on $GL_n$ over finite fields

Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters.
For evaluating or estimating these sums, one of the most salient quantities to understand is the {\it character ratio}: $$\mathrm{trace}(\rho(g))/\dim(\rho),$$ for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant {\it rank}. This talk will discuss the notion of rank for GLn over finite fields, and apply the results to random walks. This is joint work with Roger Howe (TAMU).

Feb 14: Tonghai Yang, The Lambda invariant and its CM values

Abstract: The Lambda invariant, which parametrizes elliptic curves with two torsions ($X_0(2)$), has some interesting properties, some similar to those of the j-invariant, and some not. For example, $\lambda(\frac{d+\sqrt d}2)$ is sometimes a unit. In this talk, I will briefly describe some of these properties. This is joint work with Hongbo Yin and Peng Yu.

Feb 28: Brian Lawrence, Diophantine problems and a p-adic period map

Abstract: I will outline a proof of Mordell's conjecture / Faltings's theorem using p-adic Hodge theory. Joint with Akshay Venkatesh.

March 7: Masoud Zargar, Sections of quadrics over the affine line

Abstract: Suppose we have a quadratic form Q(x) in d\geq 4 variables over F_q[t] and f(t) is a polynomial over F_q. We consider the affine variety X given by the equation Q(x)=f(t) as a family of varieties over the affine line A^1_{F_q}. Given finitely many closed points in distinct fibers of this family, we ask when there exists a section passing through these points. We study this problem using the circle method over F_q((1/t)). Time permitting, I will mention connections to Lubotzky-Phillips-Sarnak (LPS) Ramanujan graphs. Joint with Naser T.
Sardari.

March 14: Elena Mantovan, p-adic automorphic forms, differential operators and Galois representations

Abstract: A strategy pioneered by Serre and Katz in the 1970s yields a construction of p-adic families of modular forms via the study of Serre's weight-raising differential operator Theta. This construction is a key ingredient in Deligne-Serre's theorem associating Galois representations to modular forms of weight 1, and in the study of the weight part of Serre's conjecture. In this talk I will discuss recent progress towards generalizing this theory to automorphic forms on unitary and symplectic Shimura varieties. In particular, I will introduce certain p-adic analogues of Maass-Shimura weight-raising differential operators, and discuss their action on p-adic automorphic forms, and on the associated mod p Galois representations. In contrast with Serre's classical approach, where q-expansions play a prominent role, our approach is geometric in nature and is inspired by earlier work of Katz and Gross. This talk is based on joint work with Eishen, and also with Fintzen--Varma, and with Flander--Ghitza--McAndrew.

March 28: Adebisi Agboola, Relative K-groups and rings of integers

Abstract: Suppose that F is a number field and G is a finite group. I shall discuss a conjecture in relative algebraic K-theory (in essence, a conjectural Hasse principle applied to certain relative algebraic K-groups) that implies an affirmative answer to both the inverse Galois problem for F and G and to an analogous problem concerning the Galois module structure of rings of integers in tame extensions of F. It also implies the weak Malle conjecture on counting tame G-extensions of F according to discriminant. The K-theoretic conjecture can be proved in many cases (subject to mild technical conditions), e.g. when G is of odd order, giving a partial analogue of a classical theorem of Shafarevich in this setting.
While this approach does not, as yet, resolve any new cases of the inverse Galois problem, it does yield substantial new results concerning both the Galois module structure of rings of integers and the weak Malle conjecture.

April 4: Wei-Lun Tsai, Hecke L-functions and $\ell$-torsion in class groups

Abstract: The canonical Hecke characters in the sense of Rohrlich form a set of algebraic Hecke characters with important arithmetic properties. In this talk, we will explain how one can prove quantitative nonvanishing results for the central values of their corresponding L-functions using methods of an arithmetic-statistical flavor. In particular, the methods used rely crucially on recent work of Ellenberg, Pierce, and Wood concerning bounds for $\ell$-torsion in class groups of number fields. This is joint work with Byoung Du Kim and Riad Masri.

April 11: Taylor McAdam, Almost-prime times in horospherical flows

Abstract: Equidistribution results play an important role in dynamical systems and their applications in number theory. Often in such applications it is desirable for equidistribution to be effective (i.e. the rate of convergence is known). In this talk I will discuss some of the history of effective equidistribution results in homogeneous dynamics and give an effective result for horospherical flows on the space of lattices. I will then describe an application to studying the distribution of almost-prime times in horospherical orbits and discuss connections of this work to Sarnak's Möbius disjointness conjecture.

April 18: Ila Varma, Malle's Conjecture for octic $D_4$-fields

Abstract: We consider the family of normal octic fields with Galois group $D_4$, ordered by their discriminant. In forthcoming joint work with Arul Shankar, we verify the strong Malle conjecture for this family of number fields, obtaining the order of growth as well as the constant of proportionality.
In this talk, we will discuss and review the combination of techniques from analytic number theory and geometry-of-numbers methods used to prove these results.

April 25: Rafe Jones, Eventually stable polynomials and arboreal Galois representations

Abstract: Call a polynomial defined over a field K eventually stable if its nth iterate has a uniformly bounded number of irreducible factors (over K) as n grows. I'll discuss some far-reaching conjectures on eventual stability, and recent work on various special cases. I'll also describe some natural connections between eventual stability and arboreal Galois representations, which Nigel Boston introduced in the early 2000s.
I was reading a proof of the scaling property of the Fourier transform, and I noticed this line: $$\mathcal{F}[g(ct)] = \int_{-\infty}^{\infty} g(ct) \ e^{-i\omega t} dt \tag{1}$$ I have a few issues with this line: $1)$ Is there a missing factor of $\frac{1}{\sqrt{2\pi}}$ on the right-hand side? Because shouldn't a Fourier transform be (derived from the inversion theorem): $$\mathcal{F}[g(t)] = \frac{1}{\sqrt{2\pi}} \int_{- \infty}^{\infty} g(t) \ e^{-i \omega t} dt \tag{2}$$ $2)$ I believe that the correct expression should be: $$\mathcal{F}[g(ct)] = c\int_{- \infty}^{\infty} g(ct) \ e^{-i \omega (ct)}dt$$ Proof of (2): We let $x = ct$ and hence $dx = c\, dt$. Hence: $$\mathcal{F}[g(ct)] = \mathcal{F}[g(x)] = \frac{1}{\sqrt{2\pi}} \int_{- \infty}^{\infty} g(x) \ e^{-i \omega x} dx $$ Now we can reverse the substitution and we obtain: $$\frac{1}{\sqrt{2\pi}} \int_{- \infty}^{\infty} g(x) \ e^{-i \omega x} dx = c \ \int_{- \infty}^{\infty} g(ct) \ e^{-i \omega (ct)} dt \tag{3}$$ in which $(1) \neq (3)$. Is there something wrong with my proof?
Derivative-Free Optimization for Data Fitting

Calibrating the parameters of complex numerical models to fit real-world observations is one of the most common problems found in industry (finance, multi-physics simulations, engineering, etc.). Consider a process that is observed at times $t_i$ and measured with results $y_i$, for $i=1,2,\dots,m$. Furthermore, the process is assumed to behave according to a numerical model $\phi(t,x)$, where $x$ are the parameters of the model. Given that the measurements might be inaccurate and the process might not exactly follow the model, it is beneficial to find model parameters $x$ so that the error of the fit of the model to the measurements is minimized. This can be formulated as an optimization problem in which $x$ are the decision variables and the objective function is the sum of squared errors of the fit at each individual measurement, thus: $$\min_{x\in\mathbb{R}^n} \; \sum_{i=1}^{m} \left(\phi(t_i, x) - y_i\right)^2 \tag{1}$$ NAG introduces, at Mark 26.1, a model-based derivative-free solver (e04ff) able to exploit the structure of calibration problems. It is part of the NAG Optimization Modelling Suite, which significantly simplifies the interface of the solver and related routines.

Derivative-free Optimization for Least Squares Problems

To solve a nonlinear least squares problem (1), most standard optimization algorithms such as Gauss–Newton require derivatives of the model or estimates of them. These can be computed by:

- explicitly written derivatives
- algorithmic differentiation (see NAG AD tools)
- finite differences (bumping), $\frac{\partial \phi}{\partial x_i} \approx \frac{\phi(x+he_i) - \phi(x)}{h}$

If exact derivatives are easy to compute then using derivative-based methods is preferable. However, explicitly writing the derivatives or applying AD methods might be impossible if the model is a black box.
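The bumping formula above can be sketched in a few lines (a generic illustration, not the NAG implementation; the function and variable names here are mine):

```python
import numpy as np

def fd_gradient(phi, x, h=1e-6):
    """Forward-difference (bumping) gradient estimate: n + 1 model evaluations."""
    x = np.asarray(x, dtype=float)
    base = phi(x)
    grad = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h  # perturb only the i-th coordinate
        grad[i] = (phi(x + e) - base) / h
    return grad

# Example: phi(x) = x1^2 + 3*x2 has exact gradient (2*x1, 3).
g = fd_gradient(lambda x: x[0] ** 2 + 3 * x[1], [1.0, 2.0])
assert np.allclose(g, [2.0, 3.0], atol=1e-4)
```

Note that each call to `fd_gradient` costs $n+1$ evaluations of the model, which is exactly the expense the next section discusses.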
The alternative, estimating derivatives via finite differences, can quickly become impractical or too computationally expensive, as it presents several issues:

- Expensive: one gradient evaluation requires at least $n+1$ model evaluations;
- Inaccurate: the size of the model perturbation $h$ greatly influences the quality of the derivative estimates and is not easy to choose;
- Sensitive to noise: if the model is subject to some randomness (e.g. Monte Carlo simulations) or is computed to low accuracy to save computing time, then finite-difference estimates will be highly inaccurate;
- Poor utilization of model evaluations: each evaluation is used for only one element of one gradient, and the information is discarded as soon as that gradient is no longer useful to the solver.

These issues can greatly slow down the convergence of the optimization solver or even prevent it entirely. In such cases, using a derivative-free solver is the preferable option, as it is:

- able to reach convergence with far fewer function evaluations;
- more robust with respect to noise in the model evaluations.

Illustration on a noisy test case: Consider the following unconstrained problem, where $\epsilon$ is uniform random noise in the interval $\left[-\nu,\nu\right]$ and $r_i$ are the residuals of the Rosenbrock test function $(m = n = 2)$: \begin{equation} \min_{x\in \mathbb{R}^n} \sum_{i=1}^m{(r_i(x) + \epsilon)^2} \end{equation} Let us solve this problem with a Gauss–Newton method combined with finite differences (e04fc) and the new derivative-free solver (e04ff). The following table shows the number of model evaluations needed to reach a point within $10^{-5}$ of the actual solution for various noise levels $\nu$ (non-convergence is marked as $\infty$).
level of noise $\nu$ | 0.0e00 | 1.0e$-$10 | 1.0e$-$08 | 1.0e$-$06 | 1.0e$-$04 | 1.0e$-$02 | 1.0e$-$01
e04fc                | 89     | 92        | 221       | $\infty$  | $\infty$  | $\infty$  | $\infty$
e04ff                | 29     | 29        | 29        | 29        | 29        | 31        | $\infty$

The number of objective evaluations required to reach a point within $10^{-5}$ of the solution. On this example, the new derivative-free solver is both cheaper in terms of model evaluations and far more robust with respect to noise.

References

Powell M J D (2009) The BOBYQA algorithm for bound constrained optimization without derivatives. Report DAMTP 2009/NA06, University of Cambridge. http://www.damtp.cam.ac.uk/user/na/NA_papers/NA2009_06.pdf

Zhang H, Conn A R and Scheinberg K (2010) A derivative-free algorithm for least-squares minimization. SIAM J. Optim. 20(6), 3555–3576.
I cannot claim to be an expert on AQFT, but the parts that I'm familiar with rely on local fields quite a bit. First, a clarification. In your question, I think you may be conflating two ideas: local fields ($\phi(x)$, $F^{\mu\nu}(x)$, $\bar{\psi}\psi(x)$, etc) and unobservable local fields ($A_\mu(x)$, $g_{\mu\nu}(x)$, $\psi(x)$, etc). Local fields are certainly recognizable in AQFT, even if they are not used everywhere. In the Haag-Kastler or Brunetti-Fredenhagen-Verch framework (aka Locally Covariant Quantum Field Theory, or LCQFT), you can think of algebras assigned to spacetime regions by a functor, $U\mapsto \mathcal{A}(U)$. These could be causal diamonds in Minkowski space (Haag-Kastler) or globally hyperbolic spacetimes (LCQFT). You can also have a functor assigning smooth compactly supported test functions to spacetime regions, $U\mapsto \mathcal{D}(U)$. A local field is then a natural transformation $\Phi\colon \mathcal{D} \to \mathcal{A}$ between these two functors. Unwrapping the definition of a natural transformation, you find for every spacetime region $U$ a map $\Phi_U\colon \mathcal{D}(U)\to \mathcal{A}(U)$, such that $\Phi_U(f)$ behaves morally as a smeared field, $\int \mathrm{d}x\, f(x) \Phi(x)$ in physics notation. This notion of smeared field is certainly in use in the algebraic constructions of free fields as well as in the perturbative renormalization of interacting LCQFTs (as developed in the last decade and a half by Hollands, Wald, Brunetti, Fredenhagen, Verch, etc), where locality is certainly taken very seriously. Now, my understanding of unobservable local fields is unfortunately much murkier. But I believe that they are indeed absent from the algebras of observables that one would ideally work with. For instance, following the Haag-Kastler axioms, localized algebras of observables must commute when spacelike separated. That is impossible if you consider smeared fermionic fields as elements of your algebra.
However, I think at least the fermionic fields can be recovered via the DHR analysis of superselection sectors. The issue with unobservable fields with local gauge symmetries is much less clear (at least to me) and may not be completely settled yet (though see some speculative comments on my part here).
Now showing items 1-2 of 2

Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...

The ALICE TPC, a large 3-dimensional tracking device with fast readout for ultra-high multiplicity events (Elsevier, 2010-10-01) The design, construction, and commissioning of the ALICE Time-Projection Chamber (TPC) is described. It is the main device for pattern recognition, tracking, and identification of charged particles in the ALICE experiment ...
Let $\{x_n\}$ be a Cauchy sequence in a normed vector space $X$. Is $$y_n = \frac{x_n}{\|x_n\|}$$ another Cauchy sequence in $D = \{x\in X : \|x\| = 1\}$? Remark: The idea is to prove that if $D$ is complete, then $X$ is also complete. Thanks so much.

Consider: $$ x_n = \frac{(-1)^n}{n} $$ Then: $$ y_n = (-1)^n $$ $x_n$ is Cauchy in $\Bbb R$ as it converges to $0$. However, $y_n$ keeps alternating between $1$ and $-1$, so it is not Cauchy in $\{-1, 1\}$.

Let $f(x)=\frac{x}{\|x\|}$ and let $r >0$; then $f$ is uniformly continuous on $B(0,r)^c$. To see this, take $x, y \in B(0,r)^c$ with $\|x-y\| < \delta$. Then $|\|x\|-\|y\|| \leq \|x-y\| < \delta$ as well, and we have: \begin{eqnarray} \|f(x)-f(y)\|&=& \left\Vert \frac{x}{\|x\|} - \frac{y}{\|x\|} +\frac{y}{\|x\|} - \frac{y}{\|y\|} \right\Vert \\ &\le& \frac{1}{\|x\|}\|x-y \| + \left| \frac{1}{\|x\|} - \frac{1}{\|y\|} \right| \|y \| \\ &=& \frac{1}{\|x\|}\|x-y \| + \frac{1}{\|x\|} |\|x\|-\|y\| | \\ &\le& 2 \frac{\delta}{r} \end{eqnarray} Note that uniformly continuous functions map Cauchy sequences to Cauchy sequences. Also, if $x_n$ is Cauchy, then $\mu = \lim_n \|x_n\|$ exists, since $\mathbb{R}$ is complete. (In particular, $\|x_n\|$ is bounded.) Now suppose $x_n$ is Cauchy. If $l=\liminf_n \|x_n\| >0$, then for $n$ sufficiently large, $x_n \in B(0,\frac{l}{2} )^c$, so $f(x_n)$ is Cauchy; hence $f(x_n) \to \hat{f}$ for some $\hat{f} \in D$.

Then we have \begin{eqnarray} \|x_n - \mu \hat{f}\| &=& \left\Vert \|x_n\| f(x_n) - \mu \hat{f} \right\Vert \\ &=& \left\Vert \|x_n\| f(x_n) -\mu f(x_n) + \mu f(x_n)- \mu \hat{f} \right\Vert \\ &\le& \left| \|x_n\|-\mu \right| + \mu \|f(x_n)-\hat{f}\| \end{eqnarray} and it follows that $\lim_n x_n = \mu \hat{f}$. If $l=0$, pick some element $d \in D$, let $B$ be an upper bound for $\|x_n\|$, and let $x_n'=x_n+(B+1)d$. Then $\|x_n'\| \ge 1$, and $x_n'$ is also Cauchy. By the above result, $x_n' \to \hat{y}$ for some $\hat{y}$. Hence $\lim_n x_n = \hat{y}-(B+1)d$. It follows that $X$ is complete.

I cannot see the solution clearly at all. That's why I am trying to use the fact (a hint from my professor) that if a Cauchy sequence has a convergent subsequence, then the whole Cauchy sequence converges. Thus, in the case $\lim\limits_{n\rightarrow +\infty}\|x_n\| \neq 0$, I considered a subsequence $\{x_{n_k}\}$ with $x_{n_k}\neq 0$ for all $k\in\mathbb{N}$, so that the new sequence $$y_k = \frac{x_{n_k}}{\|x_{n_k}\|}$$ is well defined. Then $\{x_{n_k}\}$ is also a Cauchy sequence and $y_k \neq 0$ for all $k\in\mathbb{N}$. The next step is to prove that $y_k$ is Cauchy in $D$; for that, I need to show that there exists $M > 0$ such that $M \leq \|x_{n_k}\|$ for all $k\in\mathbb{N}$. But I don't know how to justify the existence of $M$. Please help me. Thanks
Imagine we have a portfolio allocation vector $(x_1, ..., x_n)$ with $x_1+...+x_n=1$. We also assume that the vector has only elements $\geq 0$, so $|x_1|+...+|x_n|=1$; that is, no short selling. I want to generate such vectors $x$, and I want them to be somehow uniformly distributed on the set of possible values. In a paper I read the suggestion to generate $y_1, ... y_n \sim \text{Exp}(1)$ i.i.d. and then set $x_i=y_i/(\sum\limits_{j=1}^{n}y_j)$. Now it is obvious that the sum of the elements of the generated vector is 1, but why do I use the exponential distribution? What advantage do I get? Isn't it more reasonable to take, e.g., $y_i$ uniformly distributed on $[0,1]$ and then again set $x_i=y_i/(\sum\limits_{j=1}^{n}y_j)$?
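For reference, normalizing i.i.d. $\text{Exp}(1)$ draws is the standard construction of a Dirichlet$(1,\dots,1)$ sample, i.e. a point uniformly distributed on the simplex; a quick sketch of the construction described in the question:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4

# Normalized i.i.d. Exp(1) draws: a Dirichlet(1,...,1) sample, i.e. a
# point uniformly distributed on {x : x_i >= 0, sum_i x_i = 1}.
y = rng.exponential(scale=1.0, size=n)
x = y / y.sum()

# Normalizing Uniform[0,1] draws also sums to 1, but the resulting
# vector is NOT uniformly distributed on the simplex.
assert np.isclose(x.sum(), 1.0) and np.all(x >= 0)
```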
Category of F-algebras Collection context $F$ in ${\bf C}\longrightarrow{\bf C}$ definiendum $\mathcal{A}:\mathrm{Ob}_\mathrm{it}$ postulate $\mathcal{A}$ … $F$-algebra definiendum $\langle f\rangle:\mathrm{it}[\langle A,\alpha\rangle, \langle B,\beta\rangle]$ postulate $f\circ\alpha=\beta\circ F(f)$ Discussion The category of F-algebras and F-algebra homomorphisms. The postulate says that it doesn't matter if you perform the operation ($\alpha$ resp. $\beta$) before or after the transformation $f$. Note that $\alpha,\beta,f$ are arrows in ${\bf C}$, while $\langle f\rangle$ denotes the arrow between $F$-algebras $\langle A,\alpha\rangle$ and $\langle B,\beta\rangle$ corresponding to the homomorphism $f$. Clearly, $\langle f\rangle$ and $f$ are in bijection and one often just writes $f$ for both. Reference Wikipedia: F-algebra
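The homomorphism condition $f\circ\alpha=\beta\circ F(f)$ can be made concrete with a toy example; here is a sketch in Python for the functor $F(X) = 1 + X$, with `None` playing the role of the extra point (the particular algebras chosen are mine, for illustration only):

```python
# F(X) = 1 + X, modelled with Optional values: None is the extra point.

def F(f):
    # Functor action on an arrow f: A -> B, giving F(f): 1 + A -> 1 + B.
    return lambda fa: None if fa is None else f(fa)

# Two F-algebras with carrier int.
def alpha(fa):
    return 0 if fa is None else fa + 1      # zero / successor

def beta(fa):
    return 1 if fa is None else 2 * fa      # one / doubling

# f(k) = 2**k is an F-algebra homomorphism: f(alpha(t)) == beta(F(f)(t)).
def f(k):
    return 2 ** k

for t in [None, 0, 1, 2, 3]:
    assert f(alpha(t)) == beta(F(f)(t))
```

The loop checks the postulate on a few elements of $1 + \mathbb{N}$: applying the operation first and then $f$ agrees with applying $F(f)$ first and then the other operation.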
In Sec. 2.4 of Inside Interesting Integrals (2015), Paul J Nahin says of $$I:=\int_0^{\pi/2}\ln (a\sin x)dx=\int_0^{\pi/2}\ln (a\cos x)dx$$that: For many years it was commonly claimed in textbooks that these are quite difficult integrals to do, best tackled with the powerful techniques of contour integration. As you’ll see with the following analysis, however, that is simply not the case. The real method is$$2I=\int_0^{\pi/2}\ln(\frac{a^2}{2}\sin 2x)dx=\frac{\pi}{2}\ln\frac{a}{2}+\frac{1}{2}\int_0^\pi\ln(a\sin y) dy=\frac{\pi}{2}\ln\frac{a}{2}+I,$$so $I=\frac{\pi}{2}\ln\frac{a}{2}$. Presumably Euler's calculation, which Nahin says he did for $a=1$ in 1769, didn't use the above method, or else textbooks would never have claimed the need for contour integration. So when did the above proof first surface? And when did textbooks first claim the need for complex methods?
I saw this lovely tweet from PGS yesterday: Our basin studies team spotted this on fast-track imaging from Republic of Guinea. A 7.5 km diameter depression, with no salt or mobile shale, nor dissolution of fluid escape. We interpreted the structure as a complex meteorite impact crater. https://t.co/Z4TUOtsv54 #meteorite pic.twitter.com/hScJ31SoE3— PGS (@PGSNews) August 1, 2019 Kudos to them for sharing this. It’s always great to see seismic data and interpretations on Twitter — especially of weird things. And impact structures are just cool. I’ve interpreted them in seismic myself. Then uninterpreted them. I wish PGS were able to post a little more here, like a vertical profile, maybe a timeslice. I’m sure there would be tons of debate if we could see more. But not all things are possible when it comes to commercial seismic data. It’s crazy to say more about it without more data (one-line interpretation, yada yada). So here’s what I think. Impact craters are rare There are at least two important things to think about when considering an interpretation: How well does this match the model? (In this case, how much does it look like an impact structure?) How likely are we to see an instance of this model in this dataset? (What’s the base rate of impact structures here?) Interpreters often forget about the second part. (There’s another part too: How reliable are my interpretations? Let’s leave that for another day, but you can read Bond et al. 2007 as homework if you like.) The problem is that impact structures, or astroblemes, are pretty rare on Earth. The atmosphere takes care of most would-be meteorites, and then there’s the oceans, weather, tectonics and so on. The result is that the earth’s record of surface events is quite irregular compared to, say, the moon’s. But they certainly exist, and occasionally pop up in seismic data. In my 2011 post Reliable predictions of unlikely geology, I described how skeptical we have to be when predicting rare things (‘wotsits’). 
Bayes’ theorem tells us that we must modify our assigned probability (let’s say I’m 80% sure it’s a wotsit) with the prior probability (let’s pretend a 1% a priori chance of there being a wotsit in my dataset). Here’s the maths: \( \ \ \ P = \frac{0.8 \times 0.01}{0.8 \times 0.01\ +\ 0.2 \times 0.99} = 0.0388 \) In other words, the conditional probability of the feature being a rare wotsit, given my 80%-sure interpretation, is 0.0388 or just under 4%. As cool as it would be to find a rare wotsit, I probably need a back-up hypothesis. Now, what’s that base rate for astroblemes? (Spoiler: it’s much less than 1%.) Just how rare are astroblemes? First things first. If you’re interpreting circular structures in seismic, you need to read Simon Stewart’s paper on the subject (Stewart 1999), and his follow-up impact crater paper (Stewart 2003), which expands on the topic. Notwithstanding Stewart’s disputed interpretation of the Silverpit not-a-crater structure in the North Sea, these two papers are two of my favourites. According to Stewart, the probability P of encountering r craters of diameter d or more in an area A over a time period t years is given by: \( \ \ \ P(r) = \mathrm{e}^{-\lambda A}\frac{(\lambda A)^r}{r!} \) where \( \ \ \ \lambda = t n \) and \( \ \ \ \log n = - (11.67 \pm 0.21) - (2.01 \pm 0.13) \log d \) We can use these equations to compute the probability plot on the right. It shows the probability of encountering an astrobleme of a given diameter on a 2400 km² seismic survey spanning the Phanerozoic. (This doesn’t take into account anything to do with preservation or detection.) I’ve estimated that survey size from PGS’s tweet, and I’ve highlighted the 7.5 km diameter they mentioned. The probability is very small: about 0.00025. So Bayes tells us that an 80%-confident interpretation has a conditional probability of about 0.001. One in a thousand. So what? My point here isn’t to claim that this structure is not an astrobleme. 
I haven’t seen the data, I’ve no idea. The PGS team mentioned that they considered the possibility of influence by salt or shale, and fluid escape, and rejected these based on the evidence. My point is to remind interpreters that when your conclusion is that something is rare, you need commensurately more and better evidence to support the claim. And it’s even more important than usual to have multiple working hypotheses. Last thing: if I were PGS and this was my data (i.e. not a client’s), I’d release a little cube (anonymized, time-shifted, bit-reduced, whatever) to the community and enjoy the engagement and publicity. With a proper license, obviously. References Hughes, D (1998). The mass distribution of the crater-producing bodies. In Meteorites: Flux with Time and Impact Effects, Geological Society of London Special Publication 140, 31–42. Davis, J (1986). Statistics and Data Analysis in Geology. John Wiley & Sons, New York. Stewart, SA (1999). Seismic interpretation of circular geological structures. Petroleum Geoscience 5, 273–285. Stewart, SA (2003). How will we recognize buried impact craters in terrestrial sedimentary basins? Geology 31 (11), 929–932.
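As a footnote, the Bayes update quoted earlier in the post is easy to check numerically (a sketch; the 0.00025 base rate is the value read off the probability plot in the post):

```python
def posterior(p_interp, base_rate):
    """P(wotsit | 'I think it's a wotsit') via Bayes' theorem."""
    num = p_interp * base_rate
    return num / (num + (1 - p_interp) * (1 - base_rate))

# 80%-sure interpretation against a 1% base rate: just under 4%.
assert abs(posterior(0.8, 0.01) - 0.0388) < 0.0005

# Against the ~0.00025 astrobleme base rate: about one in a thousand.
assert abs(posterior(0.8, 0.00025) - 0.001) < 0.0002
```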
When modeling engineering systems, it can be difficult to identify the key parameters driving system behavior because they are often buried deep within the model. Analytical models can help because they describe systems using mathematical equations, showing exactly how different parameters affect system behavior. In this article, we will derive analytical models of the loads and bending moments on the wing of a small passenger aircraft to determine whether the wing design meets strength requirements. We will derive the models in the notebook interface in Symbolic Math Toolbox™. We will then use the data management and analysis tools in MATLAB ® to simulate the models for different scenarios to verify that anticipated bending moments are within design limits. While this example is specific to aircraft design, analytical models are useful in all engineering and scientific disciplines –for example, they can be used to model drug interactions in biological systems, or to model pumps, compressors, and other mechanical and electrical systems. Analytical Modeling of Wing Loads We will evaluate the three primary loads that act on the aircraft wing: aerodynamic lift, load due to wing structure weight, and load due to the weight of the fuel contained in the wing. These loads act perpendicular to the wing surface, and their magnitude varies along the length of the wing (Figures 1a, 1b, and 1c). We derive our analytical model of wing loads in the Symbolic Math Toolbox notebook interface, which offers an environment for managing and documenting symbolic calculations. The notebook interface provides direct support for the MuPAD language, which is optimized for handling and operating on symbolic math expressions. We derive equations for each load component separately and then add the individual components to obtain total load. 
Lift We assume an elliptical distribution for lift across the length of the wing, resulting in the following expression for lift profile: \[q_1(x) = ka\sqrt{L^2 - x^2}\] where \(L\) = length of wing \(x\) = position along wing \(ka\) = lift profile coefficient We can determine the total lift by integrating across the length of the wing \[\text{Lift} = \int_0^L ka \sqrt{L^2 - x^2} \mathrm{d}x\] Within the notebook interface we define \(q_1(x)\) and calculate its integral (Figure 2). We incorporate math equations, descriptive text, and images into our calculations to clearly document our work. Completed notebooks can be published in PDF or HTML format. Through integration, we find that \[\text{Lift} = \frac{\pi L^2 ka}{4}\] We determine \(ka\) by equating the lift expression that we just calculated with the lift expressed in terms of the aircraft’s load factor. In aircraft design, load factor is the ratio of lift to total aircraft weight: \[n = \frac{\text{Lift}}{W_{to}}\] Load factor equals 1 during straight and level flight, and is greater than 1 when an aircraft is climbing or during other maneuvers where lift exceeds aircraft weight. We equate our two lift expressions, \[\frac{W_{to} n}{2} = \frac{\pi L^2 ka}{4}\] and solve for the unknown \(ka\) term. Our analysis assumes that lift forces are concentrated on the two wings of the aircraft, which is why the left-hand side of the equation is divided by 2. We do not consider lift on the fuselage or other surfaces. Plugging \(ka\) into our original \(q_1(x)\) expression, we obtain an expression for lift: \[q_1(x) = \frac{2 W_{to} n \sqrt{L^2 - x^2}}{L^2 \pi}\] An analytical model like this helps us understand how various parameters affect lift. For example, we see that lift is directly proportional to load factor ( n) and that for a load factor of 1, the maximum lift \[\frac{2 W_{to}}{\pi L}\] occurs at the wing root (\(x=0\)). 
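The lift derivation above can also be reproduced outside the notebook; here is a sketch using SymPy in place of MuPAD (the symbol names are mine, chosen to mirror the article's notation):

```python
import sympy as sp

x, L, ka, W_to, n = sp.symbols('x L k_a W_to n', positive=True)

# Elliptical lift profile q1(x) = ka*sqrt(L^2 - x^2) and its span integral.
q1 = ka * sp.sqrt(L**2 - x**2)
lift = sp.integrate(q1, (x, 0, L))            # pi*L**2*ka/4

# Equate the lift carried by one wing, W_to*n/2, with the integral
# and solve for the profile coefficient ka.
ka_sol = sp.solve(sp.Eq(W_to * n / 2, lift), ka)[0]
q1_final = sp.simplify(q1.subs(ka, ka_sol))

# q1_final equals 2*W_to*n*sqrt(L**2 - x**2)/(pi*L**2), as in the article.
assert sp.simplify(q1_final - 2*W_to*n*sp.sqrt(L**2 - x**2)/(sp.pi*L**2)) == 0
```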
Weight of Wing Structure We assume that the load caused by the weight of the wing structure is proportional to chord length (the width of the wing), which is highest at the wing base (\(C_o\)) and tapers off toward the wing tip (\(C_t\)). Therefore, the load profile can be expressed as \[q_w(x) = kw\left(\frac{C_t - C_o}{L} x + C_o\right)\] We define \(q_w(x)\) and integrate it across the length of the wing to calculate the total load from the wing structure: We then equate this structural load equation with the structural load expressed in terms of load factor and weight of the wing structure (\(W_{ws}\)) \[\frac{W_{ws} n}{2} = \frac{kw L (C_o + C_t)}{2}\] and solve for \(kw\). Plugging kw into our original \(q_w(x)\) expression, we obtain an analytical expression for load due to weight of the wing structure. \[q_w(x) = - \frac{W_{ws} n \left(C_o - \frac{x(C_o - C_t)}{L}\right)}{L(C_o + C_t)}\] Weight of Fuel Stored in Wing We define the load from the weight of the fuel stored in the wing as a piecewise function where load is zero when \(x > L_f\). We assume that this load is proportional to the width of the fuel tank, which is at its maximum at the base of the wing (\(C_{of}\)) and tapers off as we approach the tip of the fuel storage tank (\(C_{tf}\)). We derive \(q_f(x)\) in the same way that we derived \(q_w(x)\), resulting in an equation of the same form: \[q_f(x) = \begin{cases} 0 & \text{if } L_f < x\\ -\frac{W_f n \left(C_{of} - \frac{x(C_{of} - C_{tf})}{L_f}\right)}{L_f (C_{of} + C_{tf})} & \text{if } x \leq L_f\end{cases}\] Total Load We calculate total load by adding the three individual load components. This analytical model gives a clear view of how aircraft weight and geometry parameters affect total load. 
\[q_t(x) = \begin{cases} \begin{split} &\frac{n}{L^2 \pi}\left(2 W_{to}\sqrt{L^2 - x^2}\right) \\ &\quad + \frac{n\left( -\pi C_o L W_{ws} + \pi C_o W_{ws} x - \pi C_t W_{ws} x\right)}{L^2 \pi (C_o + C_t)} \end{split} & \text{if } L_f < x\\ \begin{split} &\frac{2 W_{to} n \sqrt{L^2 - x^2}}{L^2 \pi} - \frac{W_{ws} n (C_t x - C_o x +C_o L)}{L^2 (C_o + C_t)} \\ &\quad - \frac{W_f n (C_{tf} x - C_{of} x + C_{of} L_f)}{L_f^2(C_{of} + C_{tf})} \end{split} & \text{if } x \leq L_f\end{cases}\] Defining Model Parameters and Visualizing Wing Loads We now have an analytical model for wing loads that we can use to evaluate aircraft with various wing dimensions and weights. The small passenger aircraft that we are modeling has the following parameters: \(W_{to}\) = 4800 kg (total aircraft weight) \(W_{ws}\) = 630 kg (weight of wing structure) \(W_f\) = 675 kg (weight of fuel stored in wing) \(L\) = 7 m (length of wing) \(L_f\) = 4.8 m (length of fuel tank within wing) \(C_o\) = 1.8 m (chord length at wing root) \(C_t\) = 1.4 m (chord length at wing tip) \(C_{of}\) = 1.1 m (width of fuel tank at wing root) \(C_{tf}\) = 0.85 m (width of fuel tank at \(L_f\)) To evaluate load during the climb phase, we assume a load factor of 1.5, then plot the individual load components and total load (Figure 3). We see that lift is the largest contributor to total load and that the maximum load of 545 N/m occurs at the end of the fuel tank. Fuel load also contributes significantly to the total load, while the weight of the wing is the smallest contributor. While it is useful to visualize wing loads, what really concerns us are the shear force and bending moments resulting from these loads. We need to determine whether worst-case bending moments experienced by the wing are within design limits. Deriving a Bending Moment Model We can use the expression that we derived for load on the wing to calculate bending moment.
We start by integrating total load to determine shear force: \[V(x) = - \int q_t(x) \mathrm{d}x\] Bending moment can then be calculated by integrating shear force: \[M(x) = \int V(x) \mathrm{d}x\] We write a custom function in the MuPAD language, CalcMoment.mu. This function accepts load profile and returns the bending moment along the length of the wing (Figure 4). Symbolic Math Toolbox includes an editor, debugger, and other programming utilities for authoring custom symbolic functions in the MuPAD language. We use this function with the aircraft parameters that we defined previously to obtain an expression for bending moment as a function of length along wing ( x) and load factor ( n) \[\begin{cases} \begin{split} &0.27 n x^3 - 2085.0 n x - 25.31 n x^2 - 1056.56 n \\ &\quad + 10695.21 n \left(0.14 x \arcsin(0.14x) + \sqrt{1.0 - 0.02 x^2}\right) \\ &\quad - 10.39 n \left(49.0 - 1.0 x^2\right)^{\frac{3}{2}}\end{split} & \text{if } 2.4 < x \\ \begin{split} &2.77 n x^3 - 1747.5 n x - 104.64 n x^2 - 1444.25 n \\ &\quad + 10695.21 n \left(0.14 x \arcsin(0.14x) + \sqrt{1.0 - 0.02 x^2}\right) \\ &\quad - 10.39 n \left(49.0 - 1.0 x^2\right)^{\frac{3}{2}}\end{split} & \text{if } x \leq 2.4 \end{cases}\] As with wing loads, we plot bending moment assuming a load factor of 1.5 (Figure 5). As expected, the bending moment is highest at the wing root with a value of 8.5 kN*m. The wing is designed to handle bending moments up to 40 kN*m at the wing root, but since regulations require a safety factor of 1.5, bending moments exceeding 26.7 kN*m are unacceptable. We will simulate bending moments for various operating conditions, including conditions where the load factor is greater than 1.5, to ensure that we are not in danger of exceeding the 26.7 kN*m limit. Simulating Bending Moment in MATLAB The bending moment equation is saved in our notebook as moment. 
Using the getVar command in Symbolic Math Toolbox, we import this variable into the MATLAB workspace as a sym object, which can be operated on using the MATLAB symbolic functions included in Symbolic Math Toolbox. bendingMoment = getVar(nb,'moment'); Since we want to numerically evaluate bending moments for various conditions, we convert our sym object to its equivalent numeric function using the matlabFunction command. h_MOMENT = matlabFunction(bendingMoment) h_MOMENT is a MATLAB function that accepts load factor (\(n\)) and length along wing (\(x\)) as inputs. Because we're evaluating bending moments at the wing root (\(x=0\)), load factor becomes the only variable that affects bending moments. As mentioned earlier, load factor is equal to Lift / \(W_{to}\). Using the standard lift equation, and assuming the aircraft is not banking, load factor can be expressed as \[n = \frac{\rho A C_L V^2}{2 W_{to}}\] where \(\rho\) = air density — 1.2 kg/m^3 \(A\) = planform area (approximately equal to total surface area of wings) — 23 m^2 \(C_L\) = lift coefficient (varies with aircraft angle of attack, which ranges from 3 to 12 degrees) — 0.75 to 1.5 \(V\) = net aircraft velocity (accounts for aircraft speed and external wind conditions) — 40 m/s to 88 m/s We define these variables in MATLAB and evaluate h_MOMENT for the range of lift coefficients and aircraft velocities listed above. We store the results in a dataset array (Figure 6), available in Statistics and Machine Learning Toolbox™. Dataset arrays provide a convenient way to manage data in MATLAB, enabling us to filter the data to view and analyze the subsets we are most interested in. Since we want to determine whether bending moments ever exceed the 26.7 kN*m threshold, we only need the bending moment data where the load factor is greater than 1.5. We filter the dataset and plot the data for these conditions (Figure 7).
moment_filt = moment(moment.loadFactor>1.5,:) x = moment_filt.netVel; y = moment_filt.CL; z = moment_filt.maxMoment; [X,Y] = meshgrid(x,y); Z = griddata(x,y,z,X,Y); surf(X,Y,Z) The plot shows that the peak bending moment, 19.3 kN*m, occurs when net aircraft velocity and lift coefficient are at their maximum values of 88 m/s and 1.5, respectively. This result confirms that bending moments will be safely below the 26.7 kN*m limit, even for worst-case conditions. The Value of Analytical Modeling Our analytical models gave us a clear view into how different aircraft parameters and operating conditions affect loads and bending moments on the aircraft wing. They enabled us to verify that the wing is able to withstand worst-case loading conditions that it could encounter during the climb phase of flight. The models discussed in this article were used only for high-level proof-of-concept analysis, but analytical models could also be used for more detailed system modeling tasks—for example, to model the airflow near the leading edge or tip of the wing.
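As a rough cross-check of the load-factor range explored above, the formula $n = \rho A C_L V^2 / (2 W_{to})$ can be evaluated over the stated $C_L$ and $V$ ranges in a few lines (a sketch in Python rather than MATLAB; I convert the 4800 kg mass to weight in newtons, since $n$ is a ratio of forces):

```python
import numpy as np

g = 9.81                      # gravitational acceleration, m/s^2
rho, A = 1.2, 23.0            # air density (kg/m^3), planform area (m^2)
W_to = 4800 * g               # takeoff weight, N (4800 kg mass)

# Load factor n = rho*A*C_L*V^2 / (2*W_to) over the stated ranges.
C_L = np.linspace(0.75, 1.5, 16)
V = np.linspace(40.0, 88.0, 25)
CL, VV = np.meshgrid(C_L, V)
n = rho * A * CL * VV**2 / (2 * W_to)

# The largest load factor occurs at maximum C_L and maximum velocity,
# comfortably above the 1.5 used in the climb-phase plots.
print(round(float(n.max()), 2))  # 3.4
```

This confirms that the grid of operating conditions does include load factors well above 1.5, which is why the filtered worst-case check in the article is meaningful.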
Suppose that we have an IID random sample $\mathbf{x} = (x_1, \dots, x_n)$ from a given distribution with the following PDF: $$\theta (1 - e^{-x})^{\theta -1}e^{-x}, \, x > 0, \, \theta > 0$$ Also, let's say that we have a non-informative prior distribution such that $\pi(\theta) \propto \frac{1}{\theta^c}$ where $c$ is a given constant. I'd like to calculate the posterior distribution, but to do so, I have to calculate: $$m(\mathbf{x}) = \int f(\mathbf{x} \mid \theta) \pi(\theta) d\theta = \int_0^{\infty} \left( \prod_{i=1}^n \theta (1-e^{-x_i})^{\theta-1}e^{-x_i} \right)\left(\frac{1}{\theta^c} \right) d \theta$$ I don't see an easy way to integrate the above. The examples I have seen are simple ones such as Poisson with Gamma prior and Binomial with Beta prior. Is there another approach to analytically writing the posterior PDF?
There is a formula for the hypergeometric function ($_2F_1$) that expresses it as a power series whose coefficients are ratios of Pochhammer symbols: $$_2F_1(a,b;c;x) = \sum_{i=0}^{\infty} \frac{(a)_i (b)_i}{(c)_i} \frac{x^i}{i!}$$ This is a result that I easily verified.

Result1 = Series[Hypergeometric2F1[a22, b22, c22, y], {y, 0, 10}];
Result2 = Sum[(Pochhammer[a22, w] Pochhammer[b22, w])/Pochhammer[c22, w]*y^w/w!, {w, 0, 10}];
Normal[Result1/Result2 // Factor // Simplify] // Timing

{1.04688, 1}

In what I need to write down I have products of hypergeometric functions, but the arguments have the same structure, and for each hypergeometric the value of $n$ is fixed; see below for the exact form. For now I am only interested in clarifying the following. This is one of the hypergeometric functions that I have.

Hypergeometric2F1[1/2 (Δ + l) + n, 1/2 (Δ + l) + n, Δ + l + 2 n, x]

And this is how I re-wrote it

Sum[(Pochhammer[1/2 (Δ + l) + n, i] Pochhammer[1/2 (Δ + l) + n, i])/Pochhammer[(Δ + l + 2 n), i]*x^i/i!, {i, 0, 10}]

For my purposes, I need to sum over values of $\Delta, l$, and this is where the problem starts. Some standard results for the Pochhammer symbol:

Pochhammer[0, 0]  (* 1 *)
Pochhammer[0, 1]  (* 0 *)
Pochhammer[0, 2]  (* 0 *)

This means that for certain combinations of $\Delta, l, n$ the denominator can be zero. The good thing is that the numerator comes with two zeros, and there is an exact cancellation; actually we would get zero. Mathematica does not want to handle a fraction like $\frac{0*0}{0}$, so I need to put in by hand the following: if $\Delta + l + 2n \neq 0$, give me all the terms in the sum; otherwise, if $\Delta + l + 2n = 0$ and $i=0$, the corresponding term in the sum is $1$, all the other terms are $0$, and I want to sum this. An example with numbers just to clarify a bit more: for $\Delta = 4, l=0, n=1$, the sum runs smoothly.
With $\Delta = 4, l = 0, n = -2$ I would like to get back $1$ for $i = 0$ and $0$ for the other values of $i$, so that summing gives $1$ in the end.

My attempts. First one:

ftest[x_, z_, Δ_, l_, n_] :=
 If[1/2 (Δ + l) + n != 0,
  Sum[(Pochhammer[1/2 (Δ + l) + n, i] Pochhammer[1/2 (Δ + l) + n, i])/Pochhammer[Δ + l + 2 n, i]*x^i/i!, {i, 0, 10}],
  If[(1/2 (Δ + l) + n == 0) ∧ (i == 0),
   Sum[(Pochhammer[1/2 (Δ + l) + n, i] Pochhammer[1/2 (Δ + l) + n, i])/Pochhammer[Δ + l + 2 n, i]*x^i/i!, {i, 0, 10}],
   0]]

Second one:

ftest[x_, z_, Δ_, l_, n_] :=
 If[(1/2 (Δ + l) + n != 0) ∨ ((1/2 (Δ + l) + n == 0) ∧ (i == 0)),
  Sum[(Pochhammer[1/2 (Δ + l) + n, i] Pochhammer[1/2 (Δ + l) + n, i])/Pochhammer[Δ + l + 2 n, i]*x^i/i!, {i, 0, 10}],
  0]

The problem I have: in the example with specific numbers above there is no trouble. These are the results using the expression from the second attempt:

ftest[x, z, 4, 0, 1]

1 + (3 x)/2 + (12 x^2)/7 + (25 x^3)/14 + (25 x^4)/14 + (7 x^5)/4 + (56 x^6)/33 + (18 x^7)/11 + (225 x^8)/143 + (275 x^9)/182 + (132 x^10)/91

For the other example, with the vanishing Pochhammer symbol ($\Delta = 4, l = 0, n = -2$), I get the following unevaluated expression instead:

ftest[x, z, 4, 0, -2]

If[i == 0, \!\(\*UnderoverscriptBox[\(\[Sum]\), \(i = 0\), \(10\)]\*FractionBox[\(\((Pochhammer[\*FractionBox[\(4 + 0\), \(2\)] - 2, i]\ Pochhammer[\*FractionBox[\(4 + 0\), \(2\)] - 2, i])\)\ \*SuperscriptBox[\(x\), \(i\)]\), \(Pochhammer[4 + 0 + 2\ \((\(-2\))\), i]\ \(i!\)\)]\), 0]

Thank you in advance for your help.
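For reference, here is a minimal, language-agnostic sketch of the cancellation logic in Python, using exact rational arithmetic. The helper names (`pochhammer`, `series_coeff`, `truncated_2f1`) are mine, and the code assumes the question's configuration $c = 2a = 2b$, where a vanishing denominator Pochhammer is always accompanied by a double zero in the numerator:

```python
import math
from fractions import Fraction

def pochhammer(a, i):
    """Rising factorial (a)_i = a (a+1) ... (a+i-1); by convention (a)_0 = 1."""
    p = Fraction(1)
    for k in range(i):
        p *= Fraction(a) + k
    return p

def series_coeff(a, b, c, i):
    """Coefficient of x^i in the 2F1 series, resolving the 0*0/0 case.

    When (c)_i = 0 and the numerator (a)_i (b)_i also vanishes (the double
    zero described in the question), the term is taken to be exactly 0."""
    num = pochhammer(a, i) * pochhammer(b, i)
    den = pochhammer(c, i) * math.factorial(i)
    if den == 0:
        if num == 0:
            return Fraction(0)   # exact cancellation: the term vanishes
        raise ZeroDivisionError("genuinely singular term, no cancellation")
    return num / den

def truncated_2f1(a, b, c, x, order=10):
    """Truncated series sum_{i=0}^{order} series_coeff * x^i, exact rationals."""
    return sum(series_coeff(a, b, c, i) * Fraction(x) ** i
               for i in range(order + 1))
```

With $\Delta = 4, l = 0, n = 1$ (so $a = b = 3$, $c = 6$) the coefficients reproduce the ftest output above ($3/2$, $12/7$, ...), and with $\Delta = 4, l = 0, n = -2$ (so $a = b = c = 0$) the truncated sum returns $1$ as desired.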
A) $2.5\times 10^{10}$
B) $2.5\times 10^{11}$
C) $2.5\times 10^{12}$
D) $2.5\times 10^{9}$

Correct Answer: A

Solution: $\lambda = 5000\ \mathrm{\mathring{A}}$; energy received per second $= 10^{-8}\ \mathrm{J/s}$.

Energy of one photon: $$E = h\nu = \frac{hc}{\lambda} = \frac{6.63\times 10^{-34}\times 3\times 10^{8}}{5000\times 10^{-10}} = 3.98\times 10^{-19}\ \mathrm{J}$$

Number of photons received per second: $$\frac{\text{energy received per second}}{\text{energy of one photon}} = \frac{10^{-8}}{3.98\times 10^{-19}} = 2.5\times 10^{10}$$
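The arithmetic can be double-checked in a few lines, using the same constants as the solution:

```python
h = 6.63e-34        # Planck constant, J s (value used in the solution)
c = 3.0e8           # speed of light, m/s
lam = 5000e-10      # wavelength: 5000 angstrom in metres
P = 1e-8            # energy received per second, J/s

E_photon = h * c / lam    # energy of one photon, about 3.98e-19 J
N = P / E_photon          # photons per second, about 2.5e10
```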
Which of the highlighted C–H bonds in the following compounds is weakest? (R = generic alkyl group) I personally thought that the answer was 1, but the answer key gives 4, and I don't understand why.

I believe hyperconjugation is a possible cause of this phenomenon. In general, the greater the electron donation into the antibonding MO of a bond, the weaker that bond. Note also that $\ce{C-C}$ $\sigma$ bonding MOs are widely accepted to be better hyperconjugative donors than $\ce{C-H}$ $\sigma$ bonding MOs. In methane there is no hyperconjugation at all, so we would naturally predict that it has the strongest of the $\ce{C-H}$ bonds. The $\ce{C-H}$ bond to the tertiary carbon receives the greatest number of hyperconjugative donations of electron density into its $\sigma$ antibonding MO, because it has the most adjacent $\ce{\alpha C - \beta C}$ and $\ce{\alpha C - \beta H}$ bonds acting as hyperconjugative donors.

$\ce{X–H}$ bond dissociation energies usually correspond inversely to $\ce{X^.}$ radical stability, so another way of looking at it is to recall that radical stability increases in the order methyl < primary < secondary < tertiary. The root cause of this is hyperconjugation (from $\sigma_\ce{C–H}$ into the singly occupied $\mathrm{p}$ orbital on carbon), analogous to the stabilisation of carbocations. It is really just the other side of the same coin.

Consider ethane. The $\ce{C-H}$ bond length is $\pu{110 pm}$ and the $\ce{H-C-H}$ bond angle is $109.6^\circ$. For methane, the $\ce{C-H}$ bond length is $\pu{109 pm}$ and the $\ce{H-C-H}$ angle is $109.5^\circ$. Though small, the angle has increased by about $0.1^\circ$ from methane to ethane.
The bond length has increased as well. These changes are attributed to the steric effect of the methyl group in ethane, and they correspond to a decrease in bond energy. Also, in a 3° alkane the presence of three alkyl groups reduces the (already small) electrophilic character of the carbon, increasing its share of the electrons in the $\ce{C-H}$ bond and thereby weakening it (I emphasise that this effect contributes very little).