From https://stats.stackexchange.com/a/236297/22199, I quote: A mixture distribution combines different component distributions with weights that typically sum to one (or can be renormalized). A Gaussian mixture is the special case where the components are Gaussians. Does this mean the following is a regression equation for a Gaussian mixture model? $$\hat{y} = p(\alpha_1 + \boldsymbol{\beta}_1\cdot\mathbf{x}_1) + (1- p)(\alpha_0 + \boldsymbol{\beta}_0\cdot\mathbf{x}_0)$$ Here, $p = p(\mathbf{x}_M)$ is a function of covariates $\mathbf{x}_M$ that outputs the probability of being in the first Gaussian distribution, versus the second. That is, $$\begin{split}p(\mathbf{x}_M) &= \operatorname{expit}(\alpha_M + \boldsymbol{\beta}_M \cdot \mathbf{x}_M)\\ &= \frac{1}{1+ e^{-(\alpha_M + \boldsymbol{\beta}_M \cdot \mathbf{x}_M)}} \end{split},$$ where the $\alpha$'s are the constant terms, the $\boldsymbol{\beta}$'s denote vectors of coefficients, and the $\mathbf{x}$'s are vectors of covariates. Estimation of the constants and coefficients is done in the usual way, by minimising the squared error $(\hat{y} - y)^2$.
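To make the prediction equation concrete, here is a minimal Python sketch of the two-component prediction $\hat{y}$ (not from the linked answer; the parameter layout and values are made up for illustration, and for simplicity the same covariate vector feeds all three linear parts):

```python
import math

def expit(z):
    # logistic function: 1 / (1 + e^{-z})
    return 1.0 / (1.0 + math.exp(-z))

def dot(beta, x):
    return sum(b * xi for b, xi in zip(beta, x))

def mixture_predict(x, prm):
    # prm is a dict of intercepts (alpha*) and coefficient lists (beta*),
    # a hypothetical layout chosen just for this sketch
    p = expit(prm["alphaM"] + dot(prm["betaM"], x))   # mixing probability p(x_M)
    mean1 = prm["alpha1"] + dot(prm["beta1"], x)       # first component mean
    mean0 = prm["alpha0"] + dot(prm["beta0"], x)       # second component mean
    return p * mean1 + (1 - p) * mean0

prm = {"alpha1": 1.0, "beta1": [2.0],
       "alpha0": -1.0, "beta0": [0.5],
       "alphaM": 0.0, "betaM": [10.0]}
yhat = mixture_predict([5.0], prm)   # here p is essentially 1, so yhat is about 1 + 2*5 = 11
```

With a large mixing coefficient the prediction collapses onto one component, which is the behaviour the regression equation encodes.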
Regarding question 2: FQHEs are examples of topological phases of matter, whose effective field theories are topological quantum field theories, whose quasi-particle excitations are (essentially) described by modular tensor categories (MTCs). Since what we're usually interested in for constructing models of topological quantum computing (and related things) is the algebraic data of these quasi-particles, the problem of classifying topological phases is very closely related to the problem of classifying modular tensor categories.

Since we're talking about categories, what we really want to do is classify them up to something, which in this case is (braided) monoidal equivalence. It turns out, though, that for every equivalence class of MTCs, representatives of that equivalence class can be constructed from solutions to certain polynomial equations called the pentagon, hexagon, and pivotal equations. The collection of these solutions defines an algebraic set $X$.

For a given $X$ there exists an algebraic group $G$ which acts on $X$ such that for points $F\text{ and }F' \in X$, $F$ and $F'$ give rise to monoidally equivalent categories if and only if there exists $g \in G$ such that $g \cdot F = F'$. Thus the orbits of $G$ in $X$ are in 1-1 correspondence with equivalence classes of categories, and so now we can consider the problem of classifying orbits of $G$.

It turns out that in doing this we have almost the nicest possible situation imaginable: $G$ is reductive, and all orbits have the same dimension and are in fact the irreducible components of $X$. This then implies that we can construct another algebraic set $Y$ which is an orbit space for $X$ - that is to say, the points of $Y$ are in 1-1 correspondence with orbits of $G$ in $X$, and the regular functions on $Y$ are those regular functions on $X$ which are invariant under the action of $G$. All of this allows us to classify orbits (i.e. MTCs) by looking at the evaluations of $G$-invariant functions on $X$.
Picking these functions is a really hard problem in general, but for MTCs we have a set of generic candidates: every MTC gives you a pair of matrices $(S,T)$ which are the so-called modular data of the category. They are called this because they specify a representation of the modular group $SL(2,\mathbb Z)$. It is conjectured that MTCs are classified by their modular data. Bringing things back to physics, the $(S,T)$ matrices have physical meaning: the entries of the $S$-matrix encode the mutual statistics between particle types and the $T$-matrix encodes the self-statistics. Additionally, the entries of $S$ and $T$ are given by the evaluations of regular functions on $Y$, which is to say that they are given by the evaluations of $G$-invariant regular functions on $X$. That this has physical meaning can be seen by noting that $G$ is (essentially) the gauge group for our TQFT. Given two quasi-particles $a$ and $b$, the state space $V_{a b}$ for their composite system is finite dimensional and decomposes into subspaces $V_{a b}^c$, where $c$ is another quasi-particle type (including the vacuum) and whose dimension is the number of fusion channels from $a\otimes b$ to $c$. $G$ is the direct product of the groups of basis transformations on the $V_{ab}^c$ spaces.

The information in paragraphs 1, 6, and 7 is pretty standard and can basically be found in these lecture notes. The details for paragraph 2 can be found in arXiv:1305.2229 and for paragraphs 3 and 4 in arXiv:1509.03275.

This post imported from StackExchange MathOverflow at 2016-11-17 18:49 (UTC), posted by SE-user Matthew Titsworth
Proof that for $a,b \in \mathbb{R}$ with $a < b$ there is an irrational number $r$ so that $a < r < b$. Basically, a proof that between any two real numbers there is an irrational number $r$. I'm sure there are already many ways out there how to do it; however, I have trouble proving it in the following way: (1) For every $x,y \in \mathbb{R}$ there is a bijective function between $[0,1]$ and $[x,y]$ (already proven). (2) $\frac{\sqrt{2}}{2} \in ]0,1[$. (3) Now when mapping $[0,1]$ onto $[x,y]$, $\frac{\sqrt{2}}{2}$ will also be mapped into the new interval; therefore there has to be an irrational number in $[x,y]$. Now the problem I see is that, for example, $\frac{\sqrt{2}}{2}$ could be mapped onto a rational number, and therefore I'd have to prove that there is a different irrational in $[x,y]$. It'd be nice if you could help me complete the proof.
Integer Addition is Commutative

Theorem

$\forall x, y \in \Z: x + y = y + x$

Proof

Let $x = \eqclass {a, b} {}$ and $y = \eqclass {c, d} {}$ for some $x, y \in \Z$. Then:

$\begin{aligned} x + y &= \eqclass {a, b} {} + \eqclass {c, d} {} && \text{Definition of Integer} \\ &= \eqclass {a + c, b + d} {} && \text{Definition of Integer Addition} \\ &= \eqclass {c + a, d + b} {} && \text{Natural Number Addition is Commutative} \\ &= \eqclass {c, d} {} + \eqclass {a, b} {} && \text{Definition of Integer Addition} \\ &= y + x && \text{Definition of Integer} \end{aligned}$

$\blacksquare$

Sources

1951: Nathan Jacobson: Lectures in Abstract Algebra: I. Basic Concepts: Introduction, $\S 5$: The system of integers
1982: P.M. Cohn: Algebra Volume 1 (2nd ed.): Chapter $2$: Integers and natural numbers, $\S 2.1$: The integers: $\mathbf Z. \, 2$
2008: Paul Halmos and Steven Givant: Introduction to Boolean Algebras: $\S 1$
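As an informal sanity check (not part of the ProofWiki proof), one can model integers as equivalence classes of pairs of natural numbers in Python and verify commutativity of the componentwise addition on a few representatives:

```python
def add(p, q):
    # [[a, b]] + [[c, d]] = [[a + c, b + d]]  (Definition of Integer Addition)
    (a, b), (c, d) = p, q
    return (a + c, b + d)

def equivalent(p, q):
    # [[a, b]] = [[c, d]]  iff  a + d = b + c
    (a, b), (c, d) = p, q
    return a + d == b + c

pairs = [(0, 0), (3, 1), (1, 4), (7, 2)]   # representatives of 0, 2, -3, 5
commutes = all(equivalent(add(x, y), add(y, x)) for x in pairs for y in pairs)
```

Commutativity of the class addition reduces to commutativity of natural number addition in each component, exactly as in the proof above.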
Consider an undirected network $G = (V,E)$ in which edge $e \in E$ fails after (deterministic) time $t(e) > 0$. Network failure occurs at the first instant in which $G$ is no longer connected. Let $m = |E|$ and assume the values $\{t(e): e \in E\}$ are distinct. You wish to determine the instant $\tau$ at which the network fails.

1- Suppose you solve this problem via an intuitive algorithm in which you first sort the edges according to $t(e)$ and then remove the edges, one at a time, to determine if the network has failed. Establish the complexity of this algorithm. My Answer: We can sort the edges in $O(m\log m)$ time; after each removal we must check whether the network is still connected, which takes $O(m)$, and this may be repeated up to $m$ times. So the complexity is $O(m^2)$.

2- Show that a variation of your intuitive algorithm from part a can determine $\tau$ in a reduced time complexity in which a factor of $m$ is changed to $\log m$. My expectation: I think we might use a heap for storing edges, but I am not sure how to implement it.

3- Using what you know about spanning trees, can you write a new algorithm to improve upon (or equal) the complexity of your algorithm from part b? (Hint: The network will have failed as soon as every spanning tree contains a failed edge.) My expectation: I believe we need to find all spanning trees, but I am not sure if this will help reduce complexity.

Can anyone help me solve the given problem or direct me towards a reasonable solution?
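One standard approach for the improved algorithm (a sketch, not necessarily the intended solution) is to run time backwards: insert edges in decreasing order of $t(e)$ into a union-find structure; the edge whose insertion first makes the graph connected is, in forward time, the edge whose failure disconnects it. Sorting costs $O(m\log m)$ and the unions are nearly linear:

```python
def failure_time(n, edges):
    """Return tau, the failure time of the edge whose removal disconnects G.

    n: number of vertices (labelled 0..n-1)
    edges: list of (u, v, t) with distinct failure times t

    Edges are added in decreasing t; going backwards in time, the edge
    that first makes the graph connected is the one whose failure,
    going forwards, first disconnects the network.
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    components = n
    for u, v, t in sorted(edges, key=lambda e: -e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
            if components == 1:
                return t
    return None  # the graph is never connected

# Triangle: removing the t=1 edge leaves 0-1-2 connected; removing
# the t=3 edge then isolates vertex 2, so tau = 3.
tau = failure_time(3, [(0, 1, 5.0), (1, 2, 3.0), (0, 2, 1.0)])
```

This also matches the spanning-tree hint: the last edge merging two components going backwards is exactly the edge that every remaining spanning tree must use.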
You must guard against division by zero. When you "cancel out $t^2$", you are dividing both sides of your equation by $t^2$. Consequently, the correct form of inference is $$\begin{align*} 2t^2+t^3 &= t^4 \\ 2 + t &= t^2 \qquad \text{ or } \qquad t^2 = 0.\end{align*}$$ The resulting left choice has the solution set $\{-1,2\}$ and the resulting right choice has the solution set $\{0\}$; together these form the solution set of the first equation. Note that the same thing happens in reverse. Multiplying both sides of an equation by zero can result in craziness. We can agree that $1 \neq 2$, but this does not mean $0 \cdot 1 \neq 0 \cdot 2$, because both sides of this are zero. It can be harder to see when one is doing this when using a more complicated expression that is only sometimes zero. For instance, $$\begin{align*} 2 + t &= t^2 \\ t^2(2 + t) &= t^4 \qquad \text{ and } \qquad t^2 \neq 0 \\ 2t^2+t^3 &= t^4 \qquad \text{ and } \qquad t^2 \neq 0\end{align*}$$ The left equation has solution set $\{-1,0,2\}$ and the right condition excludes $\{0\}$. The (set) difference of these is $\{-1,2\}$, the solution set of the first equation. None of this is hard to see when multiplying or dividing by constants. Either the constant is nonzero and everything works, or the constant is zero and everyone can see that something bogus is going on. However, when we're not just using constants, a little more care is needed.
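A quick brute-force check of the solution sets over small integer candidates (a standalone sketch, not part of the answer):

```python
def roots(f, candidates):
    # collect the candidate values at which f vanishes
    return sorted(t for t in candidates if f(t) == 0)

candidates = range(-5, 6)
full = roots(lambda t: 2*t**2 + t**3 - t**4, candidates)   # original equation
cancelled = roots(lambda t: 2 + t - t**2, candidates)      # after dividing by t^2
```

The cancelled equation loses the root $t = 0$, which is exactly the solution discarded by the division.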
I'm playing a bit with category theory, and I've noticed that it is actually not rare for an object $C$ of a category $\mathcal{C}$ to be isomorphic to its square $C \times C$. An object $C$ of a category $\mathcal{C}$ is therefore called idempotent whenever it is isomorphic to its (categorical) product with itself. Trivial examples are given by the terminal object $1$ of any category, or infinite sets in Sets. One can think of slightly less trivial examples in Grp (say, $C = \displaystyle \bigoplus_{n \in \mathbb{N}} \mathbb{Z}$) or in Rings (say, $\mathbb{R}^\omega$ or even $R^\omega$ for any ring $R$). More interesting examples are given in Fields, where for any prime $p$, one can easily show that $\mathbb{Z}_p$ is idempotent. Call coidempotent the dual of the previous notion. Again, one can easily think of trivial examples. What I'm actually interested in is non-trivial examples of bi-idempotent objects (both idempotent and coidempotent), in some category where the notions of product and coproduct don't coincide. Of course, a strict initial object $0$ is always a bi-idempotent object, and the same holds for any idempotent object in a category where finite products and finite coproducts coincide (as in any abelian category). Does anyone have any non-trivial examples of such objects?
Is $$ f(n,x,y)=\sum^{n-1}_{k=1}{n\choose k}x^{n-k}y^k,\qquad\qquad\forall~n>0~\text{and}~x,y\in\mathbb{Z}$$ always divisible by $2$?

(Hint) An odd number raised to any power is odd, and an even number raised to any power is even. In particular, $$ (x + y)^n \equiv (x + y) \pmod 2 $$ Using this along with the binomial formula, you should be able to prove the result.

In fact, Goos and Norbert have given the answer. (And you should also assume $n\in \mathbb{N}$.) $$ f(n,x,y) = (x+y)^n - x^n -y^n $$ If both $x$ and $y$ are even: even - even - even = even; if both $x$ and $y$ are odd: even - odd - odd = even; if $x$ is even and $y$ is odd: odd - even - odd = even; if $x$ is odd and $y$ is even: just like the case above. So, $f(n,x,y)$ is always even.

Hint: Recall the binomial formula $$(x+y)^n=\sum\limits_{k=0}^n{n\choose k} x^{n-k} y^k$$ Try setting $y=-x+a$ and expand the powers.
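A small Python check of both the identity $f(n,x,y) = (x+y)^n - x^n - y^n$ and the parity claim over a range of values (illustration only, not a proof):

```python
from math import comb

def f(n, x, y):
    # sum_{k=1}^{n-1} C(n,k) x^(n-k) y^k, the cross terms of (x+y)^n
    return sum(comb(n, k) * x**(n - k) * y**k for k in range(1, n))

# f(n,x,y) equals (x+y)^n - x^n - y^n, and is always even
identity_holds = all(f(n, x, y) == (x + y)**n - x**n - y**n
                     for n in range(1, 8) for x in range(-4, 5) for y in range(-4, 5))
always_even = all(f(n, x, y) % 2 == 0
                  for n in range(1, 8) for x in range(-4, 5) for y in range(-4, 5))
```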
Here's a question on a passage from this paper I'm reading. Here's the quote:

Given the vector of portfolio weights $w$, and the estimate of the conditional variance, $\Sigma_{t,k}$, the predicted portfolio variance is $\hat{\sigma}_{p,t,k} = w'\Sigma_{t,k}w$. The VaR at the $1\%$ and $5\%$ level is computed for each portfolio using the predicted portfolio variance as $$ \text{VaR}_{p,t-1,k}(\alpha) = \sqrt{\hat{\sigma}_{p,t,k}}F^{-1}(\alpha) $$ where $F^{-1}(\alpha)$ is the $\alpha$-th percentile of the cumulative one-step-ahead distribution assumed for portfolio returns.

Question 1: is there a name for this calculation strategy here? Something I can google would be nice.

Question 2: When they say "percentile of the cumulative one-step-ahead distribution assumed for portfolio returns", do they mean a distribution for the scale-free random variable? Say $y_{t}$ is the return, and say it can be written as $\sqrt{\sigma_{t,k}}z_{t}$. Is $F$ the CDF of $z_t$? It gets a little weird with Student's $t$ random variables, because the scale factor isn't the standard deviation. Here's why I think this: $P[y_{tp} < \text{VaR}_{p,t-1,k}(\alpha)] = P[y_{tp} < \sqrt{\hat{\sigma}_{p,t,k}}F^{-1}(\alpha)] \approx P[z_{tp} < F^{-1}(\alpha)] = F[F^{-1}(\alpha)] = \alpha$. I ask because it seems like it would be strange to have a parametric model, strip out the variance predictions, and then make up another probability distribution to calculate this.

Question 3: why do they use the word "cumulative"? What is cumulative about this?
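As a concrete illustration of the quoted formula (assuming a standard normal for $F$, and with a made-up variance value), using Python's stdlib statistics.NormalDist:

```python
from statistics import NormalDist

def value_at_risk(predicted_variance, alpha, dist=NormalDist()):
    # VaR(alpha) = sqrt(sigma_hat) * F^{-1}(alpha), where F is the CDF assumed
    # for the standardized (scale-free) one-step-ahead return z_t
    return predicted_variance ** 0.5 * dist.inv_cdf(alpha)

var_5pct = value_at_risk(predicted_variance=0.0004, alpha=0.05)   # variance value is made up
```

With a daily volatility of 2%, this gives roughly -3.3% as the 5% VaR threshold; for a Student's $t$ assumption one would swap in the appropriate standardized quantile function instead.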
When I write

\[\left \lVert \overrightarrow{\nabla} \right \rVert\]

I get way too much extra space under the baseline. Why is this happening, and how can I fix it?

The fences are symmetric with respect to the formula axis (the imaginary line where fraction lines sit). In the case of \overrightarrow{\nabla}, the size chosen is the same as for \Bigg, which extends way below the formula axis. There's no need for the fences to cover the whole construction, in particular the arrow. Here's a visual sample, where I use the handy \DeclarePairedDelimiter command provided by mathtools (which loads amsmath). I also use smaller arrows as defined in https://tex.stackexchange.com/a/248297/4427

\documentclass{article}
\usepackage{mathtools}

\DeclarePairedDelimiter{\norm}{\lVert}{\rVert}

\makeatletter
\newcommand{\overrightsmallarrow}{\mathpalette{\overarrowsmall@\rightarrowfill@}}
\newcommand{\overarrowsmall@}[3]{%
  \vbox{%
    \ialign{%
      ##\crcr
      #1{\smaller@style{#2}}\crcr
      \noalign{\nointerlineskip\vskip1pt}%
      $\m@th\hfil#2#3\hfil$\crcr
    }%
  }%
}
\def\smaller@style#1{%
  \ifx#1\displaystyle\scriptstyle\else
    \ifx#1\textstyle\scriptstyle\else
      \scriptscriptstyle
    \fi
  \fi
}
\makeatother

\begin{document}

\begin{gather*}
\norm{\overrightsmallarrow{\nabla}}\quad
\norm[\big]{\overrightsmallarrow{\nabla}}\quad
\norm[\Big]{\overrightsmallarrow{\nabla}}\quad
\norm*{\overrightsmallarrow{\nabla}}\\
\norm{\overrightsmallarrow{x}}\quad
\norm[\big]{\overrightsmallarrow{x}}\quad
\norm[\Big]{\overrightsmallarrow{x}}\quad
\norm*{\overrightsmallarrow{x}}\\
\norm{\overrightsmallarrow{X}}\quad
\norm[\big]{\overrightsmallarrow{X}}\quad
\norm[\Big]{\overrightsmallarrow{X}}\quad
\norm*{\overrightsmallarrow{X}}
\end{gather*}

\end{document}

I have no doubt that the normal version is the right one.
Damping in Structural Dynamics: Theory and Sources

If you strike a bowl made of glass or metal, you hear a tone with an intensity that decays with time. In a world without damping, the tone would linger forever. In reality, there are several physical processes through which the kinetic and elastic energy in the bowl dissipate into other energy forms. In this blog post, we will discuss how damping can be represented, and the physical phenomena that cause damping in vibrating structures.

How Is Damping Quantified?

There are several ways by which damping can be described from a mathematical point of view. Some of the more popular descriptions are summarized below.

One of the most obvious manifestations of damping is the amplitude decay during free vibrations, as in the case of a singing bowl. The rate of the decay depends on how large the damping is. It is most common that the vibration amplitude decreases exponentially with time. This is the case when the energy lost during a cycle is proportional to the amplitude of the cycle itself.

Let's start out with the equation of motion for a system with a single degree of freedom (DOF) with viscous damping and no external loads,

m\ddot{u} + c\dot{u} + ku = 0

After division by the mass, m, we get a normalized form, usually written as

\ddot{u} + 2\zeta\omega_0\dot{u} + \omega_0^2 u = 0

Here, \omega_0 is the undamped natural frequency and \zeta is called the damping ratio. In order for the motion to be periodic, the damping ratio must be limited to the range 0 \le \zeta < 1. The amplitude of the free vibration in this system will decay with the factor

e^{-2\pi\zeta t/T_0}

where T_0 is the period of the undamped vibration.

Decay of a free vibration for three different values of the damping ratio.

Another measure in use is the logarithmic decrement, δ. This is the logarithm of the ratio between the amplitudes of two subsequent peaks,

\delta = \ln\frac{u(t)}{u(t+T)}

where T is the period.
The relation between the logarithmic decrement and the damping ratio is

\delta = \frac{2\pi\zeta}{\sqrt{1-\zeta^2}}

Another case in which the effect of damping has a prominent role is when a structure is subjected to a harmonic excitation at a frequency that is close to a natural frequency. Exactly at resonance, the vibration amplitude tends to infinity, unless there is some damping in the system. The actual amplitude at resonance is controlled solely by the amount of damping.

Amplification for a single-DOF system for different frequencies and damping ratios.

In some systems, like resonators, the aim is to get as much amplification as possible. This leads to another popular damping measure: the quality factor or Q factor. It is defined as the amplification at resonance. The Q factor is related to the damping ratio by

Q = \frac{1}{2\zeta}

Another starting point for the damping description is to assume that there is a certain phase shift between the applied force and resulting displacement, or between stress and strain. Talking about phase shifts is only meaningful for a steady-state harmonic vibration. If you plot the stress vs. strain for a complete period, you will see an ellipse describing a hysteresis loop.

Stress-strain history.

You can think of the material properties as being complex-valued. Thus, for uniaxial linear elasticity, the complex-valued stress-strain relation can be written as

\sigma = (E' + iE'')\varepsilon

Here, the real part of Young's modulus is called the storage modulus, and the imaginary part is called the loss modulus. Often, the loss modulus is described by a loss factor, η, so that

\sigma = E(1 + i\eta)\varepsilon

Here, E can be identified as the storage modulus E'. You may also encounter another definition, in which E is the ratio between the stress amplitude and strain amplitude, thus

E = \sqrt{E'^2 + E''^2}

in which case

E' = \frac{E}{\sqrt{1+\eta^2}}

The distinction is important only for high values of the loss factor. An equivalent measure for loss factor damping is the loss tangent, defined as

\tan\delta = \frac{E''}{E'}

The loss angle δ is the phase shift between stress and strain.
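As a numerical sanity check (a standalone Python sketch, not from the blog post), the logarithmic decrement measured from two subsequent peaks of a damped single-DOF free vibration matches 2\pi\zeta/\sqrt{1-\zeta^2}:

```python
import math

zeta = 0.05                                  # damping ratio (assumed value)
omega0 = 2 * math.pi                         # undamped natural frequency, rad/s (assumed)
omega_d = omega0 * math.sqrt(1 - zeta**2)    # damped natural frequency
T = 2 * math.pi / omega_d                    # period between subsequent peaks

def u(t):
    # free vibration of the damped single-DOF system (unit initial amplitude)
    return math.exp(-zeta * omega0 * t) * math.cos(omega_d * t)

delta = math.log(u(0.0) / u(T))              # logarithmic decrement from two peaks
predicted = 2 * math.pi * zeta / math.sqrt(1 - zeta**2)
```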
Damping defined by a loss factor behaves somewhat differently from viscous damping. Loss factor damping is proportional to the displacement amplitude, whereas viscous damping is proportional to the velocity. Thus, it is not possible to directly convert one number into the other. In the figure below, the response of a single-DOF system is compared for the two damping models. It can be seen that viscous damping predicts higher damping than loss factor damping above the resonance and lower damping below it.

Comparison of dynamic response for viscous damping (solid lines) and loss factor damping (dashed lines).

Usually, the conversion between the damping ratio and loss factor damping is considered at a resonant frequency, and then \eta \approx 2 \zeta. However, this is only true at a single frequency. In the figure below, a two-DOF system is considered. The damping values have been matched at the first resonance, and it is clear that the predictions at the second resonance differ significantly.

Comparison of dynamic response for viscous damping and loss factor damping for a two-DOF system.

The loss factor concept can be generalized by defining the loss factor in terms of energy. It can be shown that for the material model described above, the energy dissipated during a load cycle is

W_d = \pi\eta E\varepsilon_a^2

where \varepsilon_a is the strain amplitude. Similarly, the maximum elastic energy during the cycle is

W_{max} = \frac{1}{2}E\varepsilon_a^2

The loss factor can thus be written in terms of energy as

\eta = \frac{W_d}{2\pi W_{max}}

This definition in terms of dissipated energy can be used irrespective of whether the hysteresis loop actually is a perfect ellipse or not, as long as the two energy quantities can be determined.

Sources of Damping

From the physical point of view, there are many possible sources of damping. Nature has a tendency to always find a way to dissipate energy.

Internal Losses in the Material

All real materials will dissipate some energy when strained. You can think of it as a kind of internal friction.
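The energy definition of the loss factor can be checked numerically by integrating the hysteresis loop for a harmonic strain cycle (a sketch with made-up material values, not from the blog post):

```python
import math

E, eta, eps_a = 210e9, 0.01, 1e-3   # storage modulus (Pa), loss factor, strain amplitude; assumed values
N = 20000                           # integration points around one cycle

# Strain cycle eps(theta) = eps_a*sin(theta); with loss factor eta the stress
# lags the strain: sigma(theta) = E*eps_a*(sin(theta) + eta*cos(theta))
W_d = 0.0
for i in range(N):
    t0 = 2 * math.pi * i / N
    t1 = 2 * math.pi * (i + 1) / N
    tm = 0.5 * (t0 + t1)                                    # midpoint rule
    sigma = E * eps_a * (math.sin(tm) + eta * math.cos(tm))
    W_d += sigma * eps_a * (math.sin(t1) - math.sin(t0))    # sigma * d(eps)

W_max = 0.5 * E * eps_a**2            # peak elastic energy during the cycle
eta_recovered = W_d / (2 * math.pi * W_max)
```

Integrating the loop recovers the loss factor that generated it, which is the point of the energy-based definition: it works for any measured loop, elliptical or not.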
If you look at a stress-strain curve for a complete load cycle, it will not trace a perfect straight line. Rather, you will see something that is more like a thin ellipse. Often, loss factor damping is considered a suitable representation for material damping, since experience shows that the energy loss per cycle tends to have rather weak dependencies on frequency and amplitude. However, since the mathematical foundation for loss factor damping is based on complex-valued quantities, the underlying assumption is harmonic vibration. Thus, this damping model can only be used for frequency-domain analyses.

The loss factor for a material can have quite a large variation, depending on its detailed composition and which sources you consult. In the table below, some rough estimates are provided.

Material   Loss Factor, η
Aluminum   0.0001–0.02
Concrete   0.02–0.05
Copper     0.001–0.05
Glass      0.0001–0.005
Rubber     0.05–2
Steel      0.0001–0.01

Loss factors and similar damping descriptions are mainly used when the exact physics of the damping in the material is not known or not important. In several material models, such as viscoelasticity, the dissipation is an inherent property of the model.

Friction in Joints

It is common that structures are joined by, for example, bolts or rivets. If the joined surfaces are sliding relative to each other during the vibration, energy is dissipated through friction. As long as the value of the friction force itself does not change during the cycle, the energy lost per cycle is more or less frequency independent. In this sense, the friction is similar to internal losses in the material. Bolted joints are common in mechanical engineering. The amount of dissipation that will be experienced in bolted joints can vary quite a lot, depending on the design. If low damping is important, then the bolts should be closely spaced and well-tightened so that macroscopic slip between the joined surfaces is avoided.
Sound Emission

A vibrating surface will displace the surrounding air (or other surrounding medium) so that sound waves are emitted. These sound waves carry away some energy, which results in an energy loss from the point of view of the structure.

A plot of the sound emission in a Tonpilz transducer.

Anchor Losses

Often, a small component is attached to a larger structure that is not part of the simulation. When the component vibrates, some waves will be induced in the supporting structure and carried away. This phenomenon is often called anchor losses, particularly in the context of MEMS.

Thermoelastic Damping

Even with pure elastic deformation without dissipation, straining a material will change its temperature slightly. Local stretching leads to a temperature decrease, while compression implies a local heating. Fundamentally, this is a reversible process, so the temperature will return to the original value if the stress is released. Usually, however, there are gradients in the stress field with associated gradients in the temperature distribution. This will cause a heat flux from warmer to cooler regions. When the stress is removed during a later part of the load cycle, the temperature distribution is no longer the same as the one caused by the loading. Thus, it is not possible to locally return to the original state. This becomes a source of dissipation. The thermoelastic damping effect is mostly important when working with small length scales and high-frequency vibrations. For MEMS resonators, thermoelastic damping may give a significant decrease of the Q factor.

Dashpots

Sometimes, a structure contains intentional discrete dampers, like the shock absorbers in a wheel suspension. Such components obviously have a large influence on the total damping in a structure, at least with respect to some vibration modes.

Seismic Dampers

A particular case where much effort is spent on damping is in civil engineering structures in seismically active areas.
It is of the utmost importance to reduce the vibration levels in buildings if hit by an earthquake. The purpose of such dampers can be both to isolate a structure from its foundation and to provide dissipation.

Further Reading

Read the follow-up to this blog post here: How to Model Different Types of Damping in COMSOL Multiphysics®
SemiCircleContour

class SemiCircleContour(integral_lower_bound=None, circle_eccentricity=None, logarithmic_bunching=None, circle_points=None, fermi_line_points=None, fermi_function_poles=None)

An equilibrium contour using the semi-circle contour points and weights defined in [gBMO+02].

Parameters:

integral_lower_bound (PhysicalQuantity of type energy) – The distance between the lowest Fermi level and the lowest-energy circle contour point. Default: An energy determined from the chosen pseudopotentials, or 1.5 Hartree for the semi-empirical calculators.

circle_eccentricity (float) – The eccentricity of the circle contour. This should be a float between 0 and 1: 0 is a circle, 1 is a line. Default: 0.3

logarithmic_bunching (float) – Logarithmic bunching of the circle contour around the Fermi level. This should be a float between 0 and 1: 0 means no bunching (equidistant points), 1 means all points bunched, centred on the Fermi level. Default: 0.3

circle_points (int > 2) – The number of points on the circle contour. Default: 30

fermi_line_points (int) – The number of points on the straight line from the Fermi energy level up to infinity. This should be an integer in the range 1 to 11 (inclusive). Default: 10

fermi_function_poles (int > 0) – The number of poles of the Fermi function to include. Determines the imaginary shift of the straight line from the Fermi level to infinity. Default: 8

circleEccentricity()
Returns: The eccentricity of the circle contour.
Return type: float

circlePoints()
Returns: The number of circle points.
Return type: int

fermiFunctionPoles()
Returns: The number of poles for the Fermi function.
Return type: int

fermiLinePoints()
Returns: The number of points on the line from the Fermi energy level up to infinity.
Return type: int

integralLowerBound()
Returns: The distance between the lowest Fermi level and the lowest-energy circle contour point.
Return type: PhysicalQuantity of type energy

logarithmicBunching()
Returns: The logarithmic bunching.
Return type: float

Usage Example

One can use the SemiCircleContour by defining it as an equilibrium contour:

equilibrium_contour = SemiCircleContour()

which constructs a SemiCircleContour with all defaults. Alternatively, more parameters that alter the accuracy of the approximation can be specified, e.g.

equilibrium_contour = SemiCircleContour(
    circle_eccentricity=0.1,
    logarithmic_bunching=0.2,
    circle_points=100,
    fermi_line_points=11,
    fermi_function_poles=10)

To use it in a calculation of the equilibrium density matrix, set it on the contour parameters,

contour_parameters = ContourParameters(equilibrium_contour=equilibrium_contour)

and save these on the calculator:

device_calculator = DeviceLCAOCalculator(contour_parameters=contour_parameters)

Notes

The SemiCircleContour is a method to calculate the equilibrium density matrix \(D\) by performing an integration of the Green's function \(G\),

\[D = -\frac{1}{\pi} \int_{-\infty}^{\infty} \mathrm{Im}\, G(E)\, f(E)\, \mathrm{d}E,\]

in which \(f(E)\) is the Fermi-Dirac distribution and \(\mu\) the Fermi level. This integral can be solved by using the residue theorem,

\[\oint G(z)\, f(z)\, \mathrm{d}z = 2\pi i \sum_{z_\nu} \operatorname{Res}\big[G(z)\, f(z)\big]_{z=z_\nu},\]

in which the sum on the right-hand side runs over the poles of the integrand included in the contour. The SemiCircleContour defines the contour in the upper half of the complex plane; it is comprised of a semicircle, a semi-infinite line segment, and a finite number of Fermi poles. The semicircle \(C\) starts from the lower bound \(E_B\), controlled by integral_lower_bound. The parameter circle_eccentricity defines the eccentricity of the semicircle, while adjusting logarithmic_bunching alters the distribution of the semicircle's contour points around the Fermi level. The end point of the semicircle is defined at a distance \(\gamma\) below the Fermi energy \(\mu\) and \(\Delta\) above the real axis. The line segment \(L\) runs from the semicircle's end point to \(+\infty\). The distance \(\Delta\) between the line segment and the real axis is determined by the number of fermi_function_poles included in the contour.
The precision of the contour integration improves as circle_points (the number of contour points on the semicircle), fermi_line_points (the number of contour points on the line segment), or fermi_function_poles increases. More information about this approach can be found in [gBMO+02].

[gBMO+02] (1, 2) M. Brandbyge, J.-L. Mozos, P. Ordejón, J. Taylor, and K. Stokbro. Density-functional method for nonequilibrium electron transport. Phys. Rev. B, 65:165401, Mar 2002. doi:10.1103/PhysRevB.65.165401.
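The Fermi poles entering the residue sum are the Matsubara poles of the Fermi-Dirac distribution, z_nu = mu + i*pi*kB*T*(2*nu + 1), each with residue -kB*T. The following standalone Python sketch (not the QuantumATK API; the numerical values are assumed) illustrates this ingredient of the contour construction:

```python
import cmath
import math

kB_T = 0.025  # thermal energy k_B*T in eV (assumed, roughly room temperature)
mu = 0.0      # Fermi level in eV (assumed)

def fermi(z):
    # Fermi-Dirac distribution f(z) = 1 / (1 + exp((z - mu)/kB_T)), complex-capable
    return 1.0 / (1.0 + cmath.exp((z - mu) / kB_T))

# Poles of f in the upper half plane sit at z_nu = mu + i*pi*kB_T*(2*nu + 1)
poles = [complex(mu, math.pi * kB_T * (2 * nu + 1)) for nu in range(8)]

# The residue of f at each pole is -kB_T; check numerically at the first pole
eps = 1e-6
residue = eps * fermi(poles[0] + eps)   # approximates Res[f] at z_0
```

Because every pole carries the same residue, including more of them (fermi_function_poles) simply pushes the line segment further from the real axis, where the Green's function is smoother and easier to integrate.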
Studying relation properties. My definition of a transitive relation is as follows: A relation is transitive if and only if $\forall a,b,c \in A [aRb \land bRc \implies aRc]$. My question is: if $aRb \land bRc$ never occurs in the first place, is the relation considered transitive? I ask this because when I read $\forall a,b,c \in A [aRb \land bRc \implies aRc]$ I read it as follows: "when $a$ relates to $b$ and $b$ relates to $c$, $a$ relates to $c$". But since this "when" never happens, there is no condition to evaluate to decide if it is transitive or not. The same issue occurs with antisymmetry, where $\forall a,b \in A[aRb \land bRa \implies a = b]$ - what if $aRb \land bRa$ never occurs in the first place? EDIT I just remembered that an implication with a false antecedent is always true (vacuous truth)... I guess this answers my question... kinda.
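The vacuous-truth reading can be checked by brute force: a relation in which the premise $aRb \land bRc$ never holds passes the transitivity test (a small Python sketch, with made-up example relations):

```python
from itertools import product

def is_transitive(A, R):
    # for all a, b, c in A: (aRb and bRc) implies aRc
    return all((a, c) in R
               for a, b, c in product(A, repeat=3)
               if (a, b) in R and (b, c) in R)

A = {1, 2, 3}
R_no_premise = {(1, 2), (3, 2)}   # aRb and bRc never both hold
R_broken = {(1, 2), (2, 3)}       # premise holds for (1, 2, 3) but (1, 3) is missing

vacuous = is_transitive(A, R_no_premise)   # True: vacuously transitive
broken = is_transitive(A, R_broken)        # False
```

When the generator inside all() produces no cases at all, all() returns True, mirroring the logical convention that a universally quantified implication with no satisfied antecedent is true.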
@Mathphile I found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet, and neither a reason why the expression cannot be prime for odd $n$, although there are far more even cases without a known factor than odd cases.

@TheSimpliFire That's what I'm thinking about; I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it. It is really "too elementary", but I like surprises, if they're good.

It is in fact difficult, I did not understand all the details either. But the ECM method is analogous to the $p-1$ method, which works well when there is a factor $p$ such that $p-1$ is smooth (has only small prime factors).

Brocard's problem is a problem in mathematics that asks to find integer values of $n$ and $m$ for which $$n!+1=m^2,$$ where $n!$ is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan. Brown numbers: Pairs of the numbers $(n, m)$ that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers: (4,5), (5,11...

$\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur when $n$ satisfies either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474.

Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, $m\ne 10^k\pm1$ for $n>10$.
If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function. The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation} Thus $n!$ has $k$ zeroes for some $n\in(4k,\infty)$. Since $m=2\cdot5^{2^k}-1\pmod{10^k}$ and $2\cdot16^{5^k}-1\pmod{10^k}$ have at most $k$ digits, $m^2-1$ has at most $2k$ digits by the conditions in the Corollary. The Corollary therefore follows if $n!$ has more than $2k$ digits for $n>10$. From equation $(4)$, $n!$ has at least the same number of digits as $(4k)!$.
Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\end{equation} Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$ where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation} Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)^2}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation} Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Then after considerable algebra it is found that \begin{align*}f'(x)&=2x(8\pi x)^{\frac1{4x}}+\frac14(8\pi x)^{\frac1{4x}}(1-\ln(8\pi x))\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x+\frac18\left(1-\ln(8\pi x)\right)\right)>0\end{align*} for $x>0$, as $\min\left\{x+\frac18\left(1-\ln(8\pi x)\right)\right\}=\frac18(2-\ln\pi)>0$ on the domain. Thus $f$ is monotonically increasing in $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the inequality in equation $(8)$ holds. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$ We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction, which is better). @TheSimpliFire Hey! With $4\pmod {10}$ and $0\pmod 4$ this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have $5m_1=2(m_2-m_1)$, which means $m_1$ is even.
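The trailing-zero count used in the proof above is easy to check numerically; a quick sketch of Legendre's formula (function name mine):

```python
import math

def trailing_zeros(n):
    # Legendre: the number of trailing zeros of n! is sum over i of
    # floor(n / 5**i); factors of 2 always outnumber factors of 5.
    k, p = 0, 5
    while p <= n:
        k += n // p
        p *= 5
    return k

# Sanity checks against the actual factorial and the bound k < n/4.
assert str(math.factorial(25)).endswith("0" * trailing_zeros(25))
assert all(trailing_zeros(n) < n / 4 for n in range(1, 2000))
```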
We get $4\pmod {20}$ now :P Yet again a conjecture! Motivated by Catalan's conjecture and a recent question of mine, I conjecture that: for distinct, positive integers $a,b$, the only solution to the equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5)$. I anticipate that there will be far fewer solutions for incr...
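Returning to the search mentioned in the chat above for primes of the form $n^{n+1}+(n+1)^{n+2}$: it can be reproduced for small $n$ with a few lines. The Miller-Rabin test below is my choice of primality check, not necessarily what the chat participants used:

```python
def is_probable_prime(n):
    # Miller-Rabin with fixed bases; deterministic for n < 3.3 * 10**24.
    small_primes = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    if n < 2:
        return False
    for p in small_primes:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small_primes:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

# n for which n^(n+1) + (n+1)^(n+2) is prime, small n only
hits = [n for n in range(1, 13)
        if is_probable_prime(n**(n + 1) + (n + 1)**(n + 2))]
```

For example $n=2$ gives $2^3+3^4=89$, which is prime, so `2` appears in `hits`.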
If $\lambda_1,\dots,\lambda_n$ are distinct positive real numbers, then $$\sum_{i=1}^n \prod_{j\neq i} {\lambda_j\over \lambda_j-\lambda_i}=1.$$This identity follows from a probability calculation that you can find at the top of page 311 in the 10th edition of Introduction to Probability Models by Sheldon Ross. Is there a slick or obvious explanation for this identity? This question is sort of similar to my previous problem; clearly algebra is not my strong suit!
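A quick numerical check of the identity (the particular $\lambda$ values are arbitrary):

```python
import numpy as np

lam = np.array([0.7, 1.3, 2.9, 5.1, 11.0])  # distinct positive reals
n = len(lam)

# sum over i of prod over j != i of lambda_j / (lambda_j - lambda_i)
total = sum(
    np.prod([lam[j] / (lam[j] - lam[i]) for j in range(n) if j != i])
    for i in range(n)
)
```

The sum is $1$ to machine precision. One way to see it: the $i$-th product is exactly the Lagrange basis polynomial $\ell_i$ for the nodes $\lambda_1,\dots,\lambda_n$ evaluated at $x=0$, and the $\ell_i$ sum to the constant function $1$.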
Gradient Boosting Seen as Gradient Descent Mathematics · tldr The short version of this whole thing is that we can see the gradient boosting algorithm as just gradient descent on black-box models. Typically gradient descent involves parameters and a closed-form solution to the gradient of your learner as a way to update those parameters. In the case of trees, for instance, there are no parameters you want to update 1, and somewhat more importantly they aren't even continuous predictors, much less differentiable! But gradient boosting can be seen as gradient descent on black-box models resulting in an additive black-box model. GBM The naive algorithm A description of the algorithm I'm talking about can be found on wikipedia, but I'll go over the algorithm somewhat quickly here just to add another phrasing of the thing. Let's assume we have a feature-set $X$, a response variable $y$, a loss function $L(\hat y, y)$, and a learning rate $\lambda\in\mathbb{R}_+$ such that $\lambda \leq 1$. The algorithm is as follows: First we define $f_0(X) = 0$. Now if $i \geq 1$, we let: $$r_i = -\frac{\partial L(f_{i-1}(X), y)}{\partial f_{i-1}(X)}.$$ We then train a new set of trees (or whatever learner we want) $h_i$ such that: $$h_i(X) \approx r_i.$$ We then find the real number scalar that minimizes our loss in the following equation (AKA a line-search): $$\gamma_i = \arg\min_\gamma L\big(f_{i-1}(X) + \gamma\, h_i(X),\, y\big).$$ Finally we define: $$f_i(X) = f_{i-1}(X) + \lambda\,\gamma_i\, h_i(X).$$ So that $$f_N(X) = \sum_{i=1}^N \lambda\,\gamma_i\, h_i(X).$$ In the special case where our loss function is mean squared error (or $L2$), our gradients are just the residuals. Gradient Descent The typical scenario we generally talk about in gradient descent is fitting linear weights in some type of model (whether that be a linear model, logistic regression, neural network, whatever). So let's stick to this paradigm and consider the simplest case (a linear model), and we'll show how the algorithm above is actually the same algorithm, just slightly more general. So we're modeling $y = X\beta + \epsilon$; in the parlance of GBM: $f = \beta$ or $f(X) = X\beta$. To solve this, we define $\beta_0 = 0$.
Then for integer $i \geq 1$, $$\beta_i = \beta_{i-1} - \lambda\,\gamma_i\,\frac{\partial L}{\partial \beta_{i-1}},$$ where $L, \lambda, \gamma_i$ are exactly the same as in the gradient boosting algorithm 2. So that was the standard gradient descent algorithm. Now let's show how these two are really the same beast. Similarities We'll show that the update formula is the same (mutatis mutandis) in both, and then as a consequence the final models will be constructed in the same way (because they're both aggregates of their updates) 3. To make the notation easier, we define $f_i(X) = X\beta_i$. Update Formula For the update formula, we first need to note two things: gradients are linear by definition, so $X\frac{\partial L}{\partial \beta_{i-1}}$ can be seen as the result of modeling the gradient $\frac{\partial L}{\partial f_{i-1}(X)}$ by the application of a linear function (although it is normally unwise to do so); and multiplying the parameter update by $X$ gives $$f_i(X) = X\beta_i = X\beta_{i-1} - \lambda\,\gamma_i\, X\frac{\partial L}{\partial \beta_{i-1}},$$ which is the same formula we have in gradient boosting. Observations Since we're modeling the gradient with an arbitrary (potentially black-box) learner, we don't have the option to find the gradient with respect to the parameters, so the scale might not decrease as desired. To exemplify this, let's consider an $L_1$ objective (Mean-Absolute-Error), and a black-box learner. The gradient at each point is either 1, -1, or np.nan (because the absolute value function is $f(x) = \pm x$ depending on $x$). The magnitude of the gradients will never change. In a linear model we have that extra $\frac{\partial f}{\partial \beta}$ which adds scale to our gradient, but in trees we have no such thing. So in order to add scale, we tend to fit the line search and then add a learning rate to avoid over-fitting. One can also sub-sample (as is a parameter in popular packages like LightGBM). Sub-sampling is the black-box model version of the familiar Stochastic Gradient Descent. Summary I realize this might be obvious to some, but it was pretty cool when I first realized this. I hope you found something useful and/or interesting here.
Technically the split leaves in a tree define an indicator function on your data and the average value within a leaf (the prediction for that leaf) can be seen as the parameters of a tree, but this is kind of ridiculous because these are not tuned in the learning of the tree and there’s really no reason to do so (as far as I can tell). ↩ In practice we generally don’t include the line search and just have a decreasing $\lambda$ – and sometimes we don’t even do that. We can get away with these shortcuts because the magnitude of the gradients will decrease as you get closer to the optima, and the derivative with respect to $\beta$ is always continuous. The same cannot be said about the gradient boosting algorithm. ↩ If we call our final linear model: $X\hat\beta$, then $X\hat\beta = X\left(\sum\limits_{i=0}^N\lambda\gamma_i\alpha_i\right) = \sum\limits_{i=0}^N\lambda\gamma_iX\alpha_i $. So we can see that linear regression has always been constructed as a sum. ↩
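To make the naive algorithm concrete, here is a minimal sketch with squared-error loss and a one-feature regression stump as the black-box learner. All names and the stump learner are my own illustrative choices, and, as the footnote above describes, the line search is replaced by the learning rate alone:

```python
import numpy as np

def fit_stump(X, r):
    # Weakest useful black-box learner: a depth-1 stump on a single feature.
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            left = X[:, j] <= t
            lv, rv = r[left].mean(), r[~left].mean()
            err = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
            if err < best_err:
                best, best_err = (j, t, lv, rv), err
    j, t, lv, rv = best
    return lambda Z: np.where(Z[:, j] <= t, lv, rv)

def gbm_fit(X, y, n_rounds=100, lam=0.1):
    # f_0 = 0; each round fits the negative gradient, which for
    # squared-error loss is just the residual y - f.
    pred = np.zeros(len(y))
    learners = []
    for _ in range(n_rounds):
        h = fit_stump(X, y - pred)
        pred += lam * h(X)
        learners.append(h)
    return lambda Z: sum(lam * h(Z) for h in learners)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0])
model = gbm_fit(X, y)
train_mse = float(np.mean((model(X) - y) ** 2))
```

The final model is exactly the additive black-box model described above: a sum of shrunken learners, each fit to the previous round's residuals.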
OpenCV 3.4.3 Open Source Computer Vision

Classes:
- class cv::line_descriptor::BinaryDescriptor: implements both detection of lines and computation of their binary descriptor.
- class cv::line_descriptor::BinaryDescriptorMatcher: furnishes all functionalities for querying a dataset, either provided by the user or internal to the class (which the user must, in any case, populate), on the model of Descriptor Matchers.
- struct cv::line_descriptor::DrawLinesMatchesFlags
- struct cv::line_descriptor::KeyLine: a class to represent a line.
- class cv::line_descriptor::LSDDetector

Functions:
- void cv::line_descriptor::drawKeylines (const Mat &image, const std::vector< KeyLine > &keylines, Mat &outImage, const Scalar &color=Scalar::all(-1), int flags=DrawLinesMatchesFlags::DEFAULT): draws keylines.
- void cv::line_descriptor::drawLineMatches (const Mat &img1, const std::vector< KeyLine > &keylines1, const Mat &img2, const std::vector< KeyLine > &keylines2, const std::vector< DMatch > &matches1to2, Mat &outImg, const Scalar &matchColor=Scalar::all(-1), const Scalar &singleLineColor=Scalar::all(-1), const std::vector< char > &matchesMask=std::vector< char >(), int flags=DrawLinesMatchesFlags::DEFAULT): draws the found matches of keylines from two images.

One of the most challenging activities in computer vision is the extraction of useful information from a given image. Such information usually comes in the form of points that preserve some kind of property (for instance, they are scale-invariant) and are actually representative of the input image. The goal of this module is to seek a new kind of representative information inside an image and to provide the functionalities for its extraction and representation. In particular, differently from previous methods for the detection of relevant elements inside an image, lines are extracted in place of points; a new class is defined ad hoc to summarize a line's properties, for reuse and plotting purposes.
To obtain a binary descriptor representing a certain line detected from a certain octave of an image, we first compute a non-binary descriptor as described in [220]. This algorithm works on lines extracted using the EDLine detector, as explained in [201]. Given a line, we consider a rectangular region centered at it, called the line support region (LSR). This region is divided into a set of bands \(\{B_1, B_2, ..., B_m\}\), whose length equals that of the line. If we denote by \(\bf{d}_L\) the direction of the line, the direction \(\bf{d}_{\perp}\) orthogonal and clockwise to the line can be determined; these two directions are used to construct a reference frame centered at the middle point of the line. The gradient \(\bf{g}\) of each pixel inside the LSR can be projected onto the newly determined frame, obtaining its local equivalent \(\bf{g'} = (\bf{g}^T \cdot \bf{d}_{\perp}, \bf{g}^T \cdot \bf{d}_L)^T \triangleq (\bf{g'}_{d_{\perp}}, \bf{g'}_{d_L})^T\). Later on, a Gaussian function is applied to all the LSR's pixels along the \(\bf{d}_\perp\) direction: first, we assign a global weighting coefficient \(f_g(i) = (1/\sqrt{2\pi}\sigma_g)e^{-d^2_i/2\sigma^2_g}\) to the \(i\)-th row in the LSR, where \(d_i\) is the distance of the \(i\)-th row from the center row in the LSR, \(\sigma_g = 0.5(m \cdot w - 1)\) and \(w\) is the width of the bands (the same for every band). Secondly, considering a band \(B_j\) and its neighbor bands \(B_{j-1}, B_{j+1}\), we assign a local weighting coefficient \(f_l(k) = (1/\sqrt{2\pi}\sigma_l)e^{-d'^2_k/2\sigma_l^2}\), where \(d'_k\) is the distance of the \(k\)-th row from the center row in \(B_j\) and \(\sigma_l = w\). Using the global and local weights we obtain, at the same time, a reduction of the role played by gradients far from the line and of the boundary effect, respectively. Each band \(B_j\) in the LSR has an associated band descriptor (BD), which is computed considering the previous and next band (the top and bottom bands are ignored when computing the descriptor for the first and last band).
Once each band has been assigned its BD, the LBD descriptor of the line is simply given by \[LBD = (BD_1^T, BD_2^T, ... , BD^T_m)^T.\] To compute the band descriptor of \(B_j\), each \(k\)-th row in it is considered and the gradients in that row are accumulated: \[\begin{matrix} \bf{V1}^k_j = \lambda \sum\limits_{\bf{g}'_{d_\perp}>0}\bf{g}'_{d_\perp}, & \bf{V2}^k_j = \lambda \sum\limits_{\bf{g}'_{d_\perp}<0} -\bf{g}'_{d_\perp}, \\ \bf{V3}^k_j = \lambda \sum\limits_{\bf{g}'_{d_L}>0}\bf{g}'_{d_L}, & \bf{V4}^k_j = \lambda \sum\limits_{\bf{g}'_{d_L}<0} -\bf{g}'_{d_L}\end{matrix}\] with \(\lambda = f_g(k)f_l(k)\). By stacking the previous results, we obtain the band description matrix (BDM) \[BDM_j = \left(\begin{matrix} \bf{V1}_j^1 & \bf{V1}_j^2 & \ldots & \bf{V1}_j^n \\ \bf{V2}_j^1 & \bf{V2}_j^2 & \ldots & \bf{V2}_j^n \\ \bf{V3}_j^1 & \bf{V3}_j^2 & \ldots & \bf{V3}_j^n \\ \bf{V4}_j^1 & \bf{V4}_j^2 & \ldots & \bf{V4}_j^n \end{matrix} \right) \in \mathbb{R}^{4\times n},\] with \(n\) the number of rows in band \(B_j\): \[n = \begin{cases} 2w, & j \in \{1, m\}; \\ 3w, & \mbox{else}. \end{cases}\] Each \(BD_j\) can be obtained from the standard deviation vector \(S_j\) and the mean vector \(M_j\) of \(BDM_j\). Thus, finally: \[LBD = (M_1^T, S_1^T, M_2^T, S_2^T, \ldots, M_m^T, S_m^T)^T \in \mathbb{R}^{8m}\] Once the LBD has been obtained, it must be converted into a binary form. For this purpose, we consider 32 possible pairs of BDs inside it; each pair is compared component by component, and each comparison generates an 8-bit string. Concatenating the 32 comparison strings, we get the 256-bit final binary representation of a single LBD. void cv::line_descriptor::drawKeylines ( const Mat & image, const std::vector< KeyLine > & keylines, Mat & outImage, const Scalar & color = Scalar::all(-1), int flags = DrawLinesMatchesFlags::DEFAULT ) Draws keylines.
Parameters:
- image: input image
- keylines: keylines to be drawn
- outImage: output image to draw on
- color: color of lines to be drawn (if set to default value, color is chosen randomly)
- flags: drawing flags

void cv::line_descriptor::drawLineMatches ( const Mat & img1, const std::vector< KeyLine > & keylines1, const Mat & img2, const std::vector< KeyLine > & keylines2, const std::vector< DMatch > & matches1to2, Mat & outImg, const Scalar & matchColor = Scalar::all(-1), const Scalar & singleLineColor = Scalar::all(-1), const std::vector< char > & matchesMask = std::vector< char >(), int flags = DrawLinesMatchesFlags::DEFAULT ) Draws the found matches of keylines from two images.

Parameters:
- img1: first image
- keylines1: keylines extracted from first image
- img2: second image
- keylines2: keylines extracted from second image
- matches1to2: vector of matches
- outImg: output matrix to draw on
- matchColor: drawing color for matches (chosen randomly in case of default value)
- singleLineColor: drawing color for keylines (chosen randomly in case of default value)
- matchesMask: mask to indicate which matches must be drawn
- flags: drawing flags, see DrawLinesMatchesFlags
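The assembly of the (pre-binarization) LBD from the band description matrices can be sketched shape-wise. This is a toy illustration of the formulas above, not OpenCV code: the function name is mine, and the "gradient" data are random placeholders rather than a real image:

```python
import numpy as np

def lbd_from_bdms(bdms):
    # LBD = (M_1, S_1, ..., M_m, S_m): columnwise mean and standard
    # deviation vectors of each 4 x n band description matrix.
    parts = []
    for bdm in bdms:
        parts.append(bdm.mean(axis=1))  # M_j, 4 entries
        parts.append(bdm.std(axis=1))   # S_j, 4 entries
    return np.concatenate(parts)

m, w = 9, 7   # number of bands and band width (illustrative values)
rng = np.random.default_rng(0)
# first and last band have 2w rows, interior bands 3w rows
bdms = [rng.random((4, 2 * w if j in (0, m - 1) else 3 * w))
        for j in range(m)]
lbd = lbd_from_bdms(bdms)
```

The result has $8m$ entries, matching $LBD \in \mathbb{R}^{8m}$ above.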
As soon as you get into pretty complex derivatives, you will need to generate correlated assets for pricing purposes. Examples of such derivatives are: Basket options, Rainbow options, Mountain ranges (created by Société Générale). The most complex among these derivatives cannot be priced using closed-form formulae; Monte Carlo simulations are … Continue reading How to Generate Correlated Assets and Why? Today I will try to benchmark the execution speed of several programming languages on a Monte Carlo example. This benchmark involves VBA, C++, C#, Python, Cython and NumPy vectorization. I will try to progressively add other programming languages so that this article becomes more thorough. Execution environment All the chunks of code have been … Continue reading Speed Execution Benchmark on Monte Carlo In this article, I will introduce what implied volatility is and several methods to find it. Here are the points I will try to cover: What is implied volatility? The dichotomy (bisection) method The Newton-Raphson method An example in Python with a set of option prices Models Conclusion Implied Volatility Historical volatility and implied volatility, what is the … Continue reading How to get Implied Volatility? In this short article, I will apply Monte Carlo to barrier option pricing. Here are the points I am going to tackle: A quick reminder on barrier options Pros and cons of Monte Carlo for pricing Steps for Monte Carlo pricing An up-and-out call pricing example Conclusion and ideas for better performance Barrier options Before entering into pricing … Continue reading Barrier option pricing with Monte Carlo In this article, I will introduce a way to backtest trading strategies in Python. All you need for this is a Python interpreter, a trading strategy and, last but not least, a dataset.
A complete and clean dataset of OHLC (Open High Low Close) candlesticks is pretty hard to find, even more so if you are … Continue reading Backtest a trading strategy in Python Introduction This first and basic article will show how to simulate a security following the Black & Scholes dynamics: $\frac{dS_t}{S_t} = \mu\, dt + \sigma\, dB_t$. When solving this stochastic differential equation with Itô, you finally obtain: $S_T = S_0 e^{(\mu - \frac{\sigma ^2}{2})T + \sigma B_T}$. The Brownian motion $B_T$ … Continue reading Monte Carlo Simulations of an asset with Black & Scholes dynamics
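The simulation described in that last post can be sketched in a few lines (parameter values are arbitrary examples, not from the posts):

```python
import numpy as np

def simulate_terminal_prices(S0, mu, sigma, T, n_paths, seed=0):
    # S_T = S0 * exp((mu - sigma^2/2) T + sigma B_T), with B_T ~ N(0, T)
    rng = np.random.default_rng(seed)
    B_T = rng.standard_normal(n_paths) * np.sqrt(T)
    return S0 * np.exp((mu - 0.5 * sigma ** 2) * T + sigma * B_T)

S_T = simulate_terminal_prices(S0=100.0, mu=0.05, sigma=0.2, T=1.0,
                               n_paths=200_000)
```

By construction $\mathbb{E}[S_T] = S_0 e^{\mu T}$, which the Monte Carlo average should reproduce to within sampling error.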
I'm making some graphs and I have to label the axes. I want to be extra careful and put the units in even though the meaning of $\text{pH}$ is well known. But I have a problem (though a simple one): $\text{pH}$ is a minus logarithm (base 10) of concentration of hydrogen ions (or rather their activity). What is the unit then, is it $[-\log(\text{mol}/\text{L})]$? What should I write, could you help me? The real definition of the $\text{pH}$ is not in terms of concentration but in terms of the activity of a proton, \begin{equation} \text{pH} = - \log a_{\ce{H+}} \ , \end{equation} and the activity is a dimensionless quantity. You can think of the activity as a generalization of the mole fraction that takes into account deviations from the ideal behaviour in real solutions. By introducing the (dimensionless) activity coefficient $\gamma_{\ce{H+}}$, which represents the effect of the deviations from the ideal behaviour on the concentration, you can link the activity to the concentration via \begin{equation} a_{\ce{H+}} = \frac{\gamma_{\ce{H+}} c_{\ce{H+}}}{c^0} \ , \end{equation} where $c^0$ is the standard concentration of $1 \, \text{mol}/\text{L}$. If you ignore the non-ideal contributions you can approximately express the $\text{pH}$ in terms of the normalized proton concentration \begin{equation} \text{pH} \approx - \log \frac{c_{\ce{H+}}}{c^0} \ . \end{equation} In general, there can be no logarithm of a quantity bearing a unit. If however you encounter such a case it is usually due to sloppy notation: either the argument of the logarithm is implicitly understood to be normalized and thus becomes unitless or the units in the logarithm's argument originate from using the mathematical properties of logarithms to divide the logarithm of a product which is by itself unitless into a sum of logarithms: $\log(a \cdot b) = \log(a) + \log(b)$. Unless you have very good reason to do otherwise, treat pH as dimensionless.
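Numerically, the approximate relation is just the following (the concentration is an arbitrary example value):

```python
import math

c0 = 1.0        # standard concentration, 1 mol/L
c_H = 2.5e-5    # proton concentration in mol/L (example value)

# The logarithm's argument c_H / c0 is dimensionless, as argued above;
# the activity coefficient is taken as 1 in this approximation.
pH = -math.log10(c_H / c0)
```

This gives pH of roughly 4.6 here, a dimensionless number, which is why no unit belongs on the axis.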
For the purpose of keeping to essentials and being consistent with my source material, I have taken the liberty of replacing the OP's original equation by the following:$$\frac{dc}{dt}=K\Delta c \quad \rightarrow \quad\frac{\partial T}{\partial t} = \frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2}$$This means that $c \rightarrow T$ and the physical constant is taken equal to unity, so $K=1$. Linear Tetrahedron Let's consider at first the simplest non-trivial finite element shape in 3-D, which is a linear tetrahedron. A piece of theory for the linear tetrahedron has been developed here: The following formula is recalled from that MSE reference:$$T - T_0 = A.(x - x_0) + B.(y - y_0) + C.(z - z_0)$$What kind of terms can be discretized on the domain of a linear tetrahedron? In the first place, the function $T(x,y,z)$ itself, of course. But one may also try the first-order partial derivatives $\partial T/\partial(x,y,z)$. From the above definition of $(A,B,C)$ we have:$$\frac{\partial T}{\partial x} = A \quad ; \quad\frac{\partial T}{\partial y} = B \quad ; \quad\frac{\partial T}{\partial z} = C$$Using the matrix expression which was found in the reference for $(A,B,C)$:$$\begin{bmatrix} \partial T / \partial x \\ \partial T / \partial y \\ \partial T / \partial z \end{bmatrix}= \begin{bmatrix} x_1-x_0 & y_1-y_0 & z_1-z_0 \\ x_2-x_0 & y_2-y_0 & z_2-z_0 \\ x_3-x_0 & y_3-y_0 & z_3-z_0 \end{bmatrix}^{-1} \begin{bmatrix} T_1-T_0 \\ T_2-T_0 \\ T_3-T_0 \end{bmatrix}$$ Space-time elements In 3-D space-time, the $z$-coordinate is replaced by time, the $t$-coordinate.
This may be done as follows:$$x_3 = x_0 \; ; \; y_3 = y_0 \; ; \; z_0 = z_1 = z_2 = t_0 \; ; \; z_3 = t_3$$Consequently we have:$$\begin{bmatrix} \partial T / \partial x \\ \partial T / \partial y \\ \partial T / \partial t \end{bmatrix}= \begin{bmatrix} x_1-x_0 & y_1-y_0 & 0 \\ x_2-x_0 & y_2-y_0 & 0 \\ 0 & 0 & t_3 - t_0 \end{bmatrix}^{-1} \begin{bmatrix} T_1-T_0 \\ T_2-T_0 \\ T_3-T_0 \end{bmatrix}$$From which it follows that:$$\begin{bmatrix} \partial T / \partial x \\ \partial T / \partial y \end{bmatrix}= \begin{bmatrix} x_1-x_0 & y_1-y_0 \\ x_2-x_0 & y_2-y_0 \end{bmatrix}^{-1}\begin{bmatrix} T_1-T_0 \\ T_2-T_0 \end{bmatrix} \quad \Longrightarrow \\\begin{bmatrix} \partial T / \partial x \\ \partial T / \partial y \end{bmatrix}= \begin{bmatrix} y_2-y_0 & -(y_1-y_0) \\ -(x_2-x_0) & x_1-x_0 \end{bmatrix}/\Delta\begin{bmatrix} T_1-T_0 \\ T_2-T_0 \end{bmatrix} \\$$with $\Delta = (x_1-x_0)(y_2-y_0)-(y_1-y_0)(x_2-x_0)$. And:$$\frac{\partial T}{\partial t} = \frac{T_3-T_0}{t_3-t_0}$$This effectively means that the space-time tetrahedron splits up into a triangle in space and a time-step. Two other space-time elements are defined by:$$x_4 = x_1 \; ; \; y_4 = y_1 \; ; \; z_0 = z_1 = z_2 = t_1 \; ; \; z_4 = t_4 \\x_5 = x_2 \; ; \; y_5 = y_2 \; ; \; z_0 = z_1 = z_2 = t_2 \; ; \; z_5 = t_5$$These give the same triangle in space, but different tetrahedrons in space-time, so we also have:$$\frac{\partial T}{\partial t} = \frac{T_4-T_1}{t_4-t_1} \\\frac{\partial T}{\partial t} = \frac{T_5-T_2}{t_5-t_2}$$But the time steps themselves are equal: $t_3-t_0 = t_4-t_1 = t_5-t_2 = \mbox{dt}$.
Diffusion term For the linear triangle there are two other references at MSE that might be useful. The latter reference is about the diffusion term (which is $K\Delta c$ in your question). Search for differentiation matrix in the latter reference and find the following formula:$$\Delta \left[ \begin{array}{c} \partial f / \partial x \\ \partial f / \partial y \end{array} \right] =\left[ \begin{array}{ccc} +(y_2 - y_3) & +(y_3 - y_1) & +(y_1 - y_2) \\ -(x_2 - x_3) & -(x_3 - x_1) & -(x_1 - x_2) \end{array}\right] \left[ \begin{array}{c} f_1 \\ f_2 \\ f_3 \end{array} \right]$$In a more abstract (operator) form this reads:$$\begin{bmatrix} \partial/\partial x \\ \partial/\partial y \end{bmatrix} =\begin{bmatrix} +(y_2 - y_3) & +(y_3 - y_1) & +(y_1 - y_2) \\ -(x_2 - x_3) & -(x_3 - x_1) & -(x_1 - x_2) \end{bmatrix} / \Delta$$The rest of the reference is about the diffusion term. Begin of quotes. When using a Finite Element method, the differential equation may first be multiplied by an arbitrary (test) function. Subsequently the PDE is integrated over the domain of interest. Let the test function be called $f$; then:$$\iint f . \left[ \frac{\partial Q_x}{\partial x} + \frac{\partial Q_y}{\partial y} \right] \, dx dy = 0$$Partial integration, or applying Green's theorem (which is the same), results in an expression with [zero] line integrals over the boundaries and an area integral over the bulk field. The latter is given by:$$- \iint \left[ \frac{\partial f}{\partial x}.Q_x + \frac{\partial f}{\partial y}.Q_y \right] \, dx dy$$Mind the minus sign. End of quotes.
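Since the interpolation is linear, the differentiation matrix above must reproduce the gradient of any linear field exactly; a quick numerical check (the triangle coordinates and coefficients are arbitrary):

```python
import numpy as np

# Vertices 1, 2, 3 of an arbitrary non-degenerate triangle
x = np.array([0.0, 2.0, 0.5])
y = np.array([0.0, 0.3, 1.7])

# Linear field f = a + b*x + c*y, whose exact gradient is (b, c)
a, b, c = 1.0, 2.0, -3.0
f = a + b * x + c * y

# Differentiation matrix; Delta is twice the (signed) triangle area
Delta = (x[1] - x[0]) * (y[2] - y[0]) - (y[1] - y[0]) * (x[2] - x[0])
D = np.array([
    [  y[1] - y[2],    y[2] - y[0],    y[0] - y[1]],
    [-(x[1] - x[2]), -(x[2] - x[0]), -(x[0] - x[1])],
]) / Delta

grad = D @ f   # equals (b, c) up to rounding
```

This is exactly the check the Pascal program below relies on when it assembles the element matrix from `ddg`.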
With diffusion of heat, we have the following expressions for $(Q_x,Q_y)$:$$Q_x = \frac{\partial T}{\partial x} \quad ; \quad Q_y = \frac{\partial T}{\partial y}$$Now I hope it is not difficult to see that the diffusion operator ends up as:$$-(\nabla\cdot\nabla) = - \begin{bmatrix} \partial/\partial x & \partial/\partial y \end{bmatrix} \begin{bmatrix} \partial/\partial x \\ \partial/\partial y \end{bmatrix}$$And the differentiation matrix may be employed to get the numerical equivalent. Programming All the theoretical ingredients are essentially there and we are ready to program:

program Galt;
type
  vektor = array of double;
  matrix = array of array of double;
var
  x, y, T : vektor;
  k : integer;

procedure element(x, y : vektor; var E : matrix);
{ Finite Element for Diffusion }
var
  ddg : array[1..2, 1..3] of double;
  i, j, m : byte;
  x32, x21, x13, y23, y12, y31, h, DET : double;
begin
  x32 := x[3]-x[2]; y23 := y[2]-y[3];
  x13 := x[1]-x[3]; y31 := y[3]-y[1];
  x21 := x[2]-x[1]; y12 := y[1]-y[2];
  { Partial differentiation d/dx, d/dy at a triangle can be conveniently
    represented by the so-called Differentiation Matrix: }
  DET := x21*y31 - x13*y12;
  if DET = 0 then begin Writeln('Element: null triangle'); Halt; end;
  ddg[1,1] := y23/DET; ddg[2,1] := x32/DET;
  ddg[1,2] := y31/DET; ddg[2,2] := x13/DET;
  ddg[1,3] := y12/DET; ddg[2,3] := x21/DET;
  SetLength(E, 4, 4);
  { Laplace equation: }
  for i := 1 to 3 do
    for j := 1 to 3 do
    begin
      h := 0;
      for m := 1 to 2 do h := h + ddg[m,i]*ddg[m,j];
      E[i,j] := h;
    end;
end;

procedure stap(x, y : vektor; dt : double; var T : vektor);
{ Single time-step }
const
  D : double = 1;
var
  E : matrix;
  f : vektor;
  k, i : integer;
  bij : double;
begin
  element(x, y, E);
  SetLength(f, 4);
  for k := 1 to 3 do Write(T[k]); Writeln;
  for k := 1 to 3 do
  begin
    bij := 0;
    for i := 1 to 3 do bij := bij + E[k,i]*T[i];
    f[k] := T[k] - D*bij*dt;
  end;
  for k := 1 to 3 do T[k] := f[k];
end;

BEGIN
  SetLength(x, 4); SetLength(y, 4); SetLength(T, 4);
  Random; Random;
  Random; Random; Random; Random;
  for k := 1 to 3 do
  begin
    x[k] := Random; y[k] := Random;
  end;
  T[1] := 10; T[2] := 0; T[3] := 0;
  while true do
  begin
    stap(x, y, 0.01, T);
    Readln;
  end;
END.

Output:

1.00000000000000E+0001 0.00000000000000E+0000 0.00000000000000E+0000
8.61920953541079E+0000 1.33969080728511E+0000 4.10996573041039E-0002
7.60872336562710E+0000 2.17022299029973E+0000 2.21053644073171E-0001
6.84976940041965E+0000 2.68162359364645E+0000 4.68607005933898E-0001
6.26514037958233E+0000 2.99343622963065E+0000 7.41423390787021E-0001
5.80413089466921E+0000 3.18080816165291E+0000 1.01506094367787E+0000
5.43300384629370E+0000 3.29092935946325E+0000 1.27606679424305E+0000
5.12894722761363E+0000 3.35339081179384E+0000 1.51766196059253E+0000
4.87623530811445E+0000 3.38671052578895E+0000 1.73705416609660E+0000
4.66378311538872E+0000 3.40244436336416E+0000 1.93377252124711E+0000
4.48358247230717E+0000 3.40777490861061E+0000 2.10864261908222E+0000
4.32969660054942E+0000 3.40714131220967E+0000 2.26316208724091E+0000
4.19760933061891E+0000 3.40326490578317E+0000 2.39912576359792E+0000
4.08380003241469E+0000 3.39779418306767E+0000 2.51840578451765E+0000
3.98546274268985E+0000 3.39171005488102E+0000 2.62282720242914E+0000
...
3.33333333333369E+0000 3.33333333333338E+0000 3.33333333333296E+0000

Notes.
1. Use has been made of the fact that our Finite Element matrix is actually three pieces of an equivalent finite difference system of linear equations, each equation belonging to a node of the triangle. So we must have three time-steps as well.
2. Random choices have been tried until the element matrix has positive entries on the main diagonal and negative entries off-diagonal. This (definitely) means that the accompanying triangle has no obtuse angles.
3. Choosing the time-step dt is an important issue that has not been elaborated here. For our purpose, it has been determined experimentally, in such a way that no instabilities are observed in the output.

UPDATE.
With hindsight, instead of three overlapping tetrahedrons, it is more convenient to consider just one space-time finite element, namely a triangular prism, like this one: it is an element with a triangle $(0,1,2)$ at the bottom and a triangle $(3,4,5)$ at the top. The edges $(0,3),(1,4),(2,5)$ are perpendicular to both triangles, and:$$x_0 = x_3 \; , \; y_0 = y_3 \; , \; x_1 = x_4 \; , \; y_1 = y_4 \; , \; x_2 = x_5 \; , \; y_2 = y_5 \\t_0 = t_1 = t_2 \; , \; t_3 = t_4 = t_5$$The F.E. interpolation on the element is, in local coordinates, with $(\xi,\eta) = $ local 2-D space and $\zeta$ = local time:$$T = (1-\zeta)\left[(1-\xi-\eta).T_0+\xi.T_1+\eta.T_2\right]+\zeta\left[(1-\xi-\eta).T_3+\xi.T_4+\eta.T_5\right]$$Likewise for the global coordinates $T \rightarrow x,y,t$ (isoparametric mapping). Then the rest follows.
Discussion on Bangladesh Mathematical Olympiad (BdMO) National

Moon (Site Admin): Problem: In triangle $ABC$, medians $AD$ and $CF$ intersect at point $G$. $P$ is an arbitrary point on $AC$. $PQ$ and $PR$ are parallel to $AD$ and $CF$ respectively. $PQ$ intersects $BC$ at $Q$ and $PR$ intersects $AB$ at $R$. If $QR$ intersects $AD$ at $M$ and $CF$ at $N$, then prove that the area of triangle $GMN$ is $\frac{(A)}{8}$, where $(A)$ is the area enclosed by $PQ, PR, AD, CF$.

photon: That area is a parallelogram. With similarity we can show $2RM=MQ$ and $2NQ=RN$, which implies $RM=MN=NQ$. So let $PR$ and $PQ$ intersect $AD$ and $CF$ at $R'$ and $Q'$ respectively. Then $RR'M$, $QQ'N$, $GMN$ are congruent, so $GM=MR'$ and $GN=NQ'$. Thus we get that fraction dividing the area...

nafistiham: \[DG=\frac {AG}{2}\] \[FG=\frac {CG}{2}\] the only thing that is needed...

Labib: Tiham, I wanted a detailed solution.

*Mahi*: Let $PR \cap GC =X$ and $AG \cap QN =Y$. Then try to prove $\triangle RXN,\triangle GNM,\triangle MQY$ are homothetic. Then it follows that for some $x,y$: $xRX+yRX=2RX$ and $2QY=\frac 1yQY +\frac xyQY$, and the only nonzero solution for $x,y$ is $(1,1)$.
asif e elahi: Let $PQ\cap CF=S$ and $PR\cap AD=T$, so $PSGT$ is a parallelogram (its sides lie along $PQ\parallel AD$ and $PR\parallel CF$). By similar triangles, $\frac{QM}{MR}=2$ and $\frac{RN}{NQ}=2$, so $QM=2MR$ and $RN=2NQ$; together these imply $RM=MN=NQ$. Moreover $GM=MT$ and $GN=NS$, so $M$ is the midpoint of $GT$ and $N$ is the midpoint of $GS$. So $(GMN)=\frac{(GST)}{4}=\frac{(PSGT)}{8}=\frac{(A)}{8}$.
This answer uses a simplified method, which avoids the complications of having to deal with internal forces between the link and cylinders. The motion of the two cylinders is identical, so they can be treated as one cylinder of mass $2M$. The link is then a point mass $m$ located at distance $r_0$ from the centre O. We can treat the link as a point mass because its orientation does not change throughout the motion. Like a point mass, it has only translational KE and no rotational KE. The moment of inertia of the double-cylinder about the point of contact with the ground P is $$I=2(\frac12Mr^2+Mr^2)=3Mr^2$$ The moment of inertia of the double-cylinder and link about P is therefore $$I_P=3Mr^2+mr_1^2 \approx 3Mr^2+m(r-r_0)^2$$ where $r_1$ is the distance LP, and for small oscillations $r_1 \approx r-r_0$. The torque about the axis at P is $\tau\approx mgr_0\theta$ in the small-angle approximation. This acts to decrease $\theta$, so the equation of motion is $$\tau=-I_P \ddot \theta$$ $$mgr_0\theta \approx -[3Mr^2+m(r-r_0)^2]\ddot\theta$$ $$\ddot\theta+\frac{mgr_0}{3Mr^2+m(r-r_0)^2}\theta \approx0$$ The natural frequency is therefore $$f_n\approx \frac{1}{2\pi} \sqrt{\frac{mgr_0}{3Mr^2+m(r-r_0)^2}}$$
In Jackson's text he says that Faraday's law is actually $$\oint_{\partial \Sigma} \mathbf{E} \cdot \mathrm{d}\boldsymbol{\ell} = -k\iint_{\Sigma} \frac{\partial \mathbf B}{\partial t} \cdot \mathrm{d}\mathbf{S}$$ where $k$ is a constant to be determined (page 210, third ed.). He claims that $k$ is not an independent empirical constant that must be measured from experiment, but an inherent constant which, for each system of units, can be determined from Galilean invariance and the Lorentz force law. He writes Faraday's law in two frames, the lab frame and a frame moving with velocity $\mathbf{v}$, assuming that the electric field in one frame is $\mathbf{E}'$ and in the other is $\mathbf{E}$ (so they are different), but that the magnetic field is $\mathbf{B}$ in both frames. Requiring Galilean invariance, i.e. that $$\iint_{\Sigma} \frac{\partial \mathbf B}{\partial t} \cdot \mathrm{d}\mathbf{S}$$ be equal in the two frames, he deduces that $k=1$ and also that the electric field in the moving reference frame is $$\mathbf{E}' = \mathbf{E} + \mathbf{v} \times\mathbf{B}.$$ I know that this electric field ($\mathbf{E}'$, in the moving frame) is only an approximation, and that the real $\mathbf{E}'$ can be obtained using Lorentz transformations. Now the question is: how do Galilean transformations, which are wrong (only approximately correct), give the correct answer for $k$? And why should we assume that there are two electric fields, one in the lab frame and one in the other, but just one magnetic field in both frames?
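For reference, the intermediate steps of Jackson's argument can be sketched as follows (my reconstruction, using the conventions of the law quoted above). For a circuit $\partial\Sigma$ whose elements move with velocity $\mathbf v$, and using $\nabla\cdot\mathbf B=0$, the total time derivative of the flux is $$\frac{\mathrm{d}}{\mathrm{d}t}\iint_{\Sigma}\mathbf B\cdot\mathrm{d}\mathbf S=\iint_{\Sigma}\frac{\partial\mathbf B}{\partial t}\cdot\mathrm{d}\mathbf S+\oint_{\partial\Sigma}(\mathbf B\times\mathbf v)\cdot\mathrm{d}\boldsymbol\ell.$$ Writing the law with the total derivative for the moving circuit (field $\mathbf E'$ in the rest frame of each circuit element) and with the partial derivative for the same instantaneous contour at rest (field $\mathbf E$), and subtracting, the flux term cancels, leaving $$\oint_{\partial\Sigma}\left(\mathbf E'-\mathbf E-k\,\mathbf v\times\mathbf B\right)\cdot\mathrm{d}\boldsymbol\ell=0\quad\Longrightarrow\quad \mathbf E'=\mathbf E+k\,\mathbf v\times\mathbf B$$ (up to an irrelevant gradient). Since the force on a charge $q$ at rest in the moving frame is $q\mathbf E'$, comparison with the Lorentz force $\mathbf F=q(\mathbf E+\mathbf v\times\mathbf B)$ fixes $k=1$.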
I know that this question has been submitted several times (especially see How are anyons possible?), even as a byproduct of other questions. Since I did not find any completely satisfactory answer, here I submit another version of the question, stated in a very precise form using only very elementary general assumptions of quantum physics. In particular I will not use any operator (indicated by $P$ in other versions) representing the swap of particles. Assume we deal with a system of a couple of identical particles, each moving in $R^2$. Neglecting for the moment the fact that the particles are indistinguishable, we start from the Hilbert space $L^2(R^2)\otimes L^2(R^2)$, which is isomorphic to $L^2(R^2\times R^2)$. Now I divide the rest of my issue into several elementary steps. (1) Every element $\psi \in L^2(R^2\times R^2)$ with $||\psi||=1$ defines a state of the system, where $|| \cdot||$ is the $L^2$ norm. (2) Each element of the class $\{e^{i\alpha}\psi \:|\: \alpha \in R\}$, for $\psi \in L^2(R^2\times R^2)$ with $||\psi||=1$, defines the same state, and a state is such a set of vectors. (3) Each $\psi$ as above can be seen as a complex valued function defined, up to zero (Lebesgue) measure sets, on $R^2\times R^2$. (4) Now consider the "swapped state" defined (due to (1)) by $\psi' \in L^2(R^2\times R^2)$ by the function (up to a zero measure set): $$\psi'(x,y) := \psi(y,x)\:,\quad (x,y) \in R^2\times R^2$$ (5) The physical meaning of the state represented by $\psi'$ is that of a state obtained from $\psi$ with the roles of the two particles interchanged. (6) As the particles are identical, the state represented by $\psi'$ must be the same as that represented by $\psi$. (7) In view of (1) and (2) it must be: $$\psi' = e^{i a} \psi\quad \mbox{for some constant $a\in R$.}$$ Here physics stops. I will use only mathematics henceforth.
(8) In view of (3) one can equivalently re-write the identity above as $$\psi(y,x) = e^{ia}\psi(x,y) \quad \mbox{almost everywhere for $(x,y)\in R^2\times R^2$}\quad [1]\:.$$ (9) Since $(x,y)$ in [1] is every pair of points up to a zero-measure set, I am allowed to change their names, obtaining $$\psi(x,y) = e^{ia}\psi(y,x) \quad \mbox{almost everywhere for $(x,y)\in R^2\times R^2$}\quad [2]$$ (Notice that the zero measure set where the identity fails remains a zero measure set under the reflection $(x,y) \mapsto (y,x)$, since it is an isometry of $R^4$ and Lebesgue measure is invariant under isometries.) (10) Since, again, [2] holds almost everywhere for every pair $(x,y)$, I am allowed to use again [1] in the right-hand side of [2], obtaining: $$\psi(x,y) = e^{ia}e^{ia}\psi(x,y) \quad \mbox{almost everywhere for $(x,y)\in R^2\times R^2$}\:.$$ (This certainly holds true outside the union of the zero measure set $A$ where [1] fails and that obtained by the reflection $(x,y) \mapsto (y,x)$ of $A$ itself.) (11) Conclusion: $$[e^{2ia} -1] \psi(x,y)=0 \qquad\mbox{almost everywhere for $(x,y)\in R^2\times R^2$}\quad [3]$$ Since $||\psi|| \neq 0$, $\psi$ cannot vanish almost everywhere on $R^2\times R^2$. If $\psi(x_0,y_0) \neq 0$ at a point where [3] holds, then $[e^{2ia} -1] \psi(x_0,y_0)=0$ implies $e^{2ia} =1$ and so: $$e^{ia} = \pm 1\:.$$ And thus, apparently, anyons are not permitted. Where is the mistake? ADDED REMARK. (10) is a completely mathematical result. Here is another way to obtain it. (8) can be written down as $\psi(a,b) = e^{ic} \psi(b,a)$ for some fixed $c \in R$ and all $(a,b) \in R^2 \times R^2$ (I disregard the issue of negligible sets). Choosing first $(a,b)=(x,y)$ and then $(a,b)=(y,x)$ we obtain resp. $\psi(x,y) = e^{ic} \psi(y,x)$ and $\psi(y,x) = e^{ic} \psi(x,y)$. They immediately produce [3]: $\psi(x,y) = e^{i2c} \psi(x,y)$. So the physical argument (4)-(7), that we have permuted the particles again and thus a further new phase may appear, does not apply here. 2nd ADDED REMARK.
It is clear that as soon as one is allowed to write $\psi(x,y) = \lambda \psi(y,x)$ for a constant $\lambda\in U(1)$ and all $(x,y) \in R^2\times R^2$, the game is over: $\lambda$ turns out to be $\pm 1$ and anyons are forbidden. This is just mathematics, however. My guess for a way out is that the true configuration space is not $R^2\times R^2$ but some other space of which $R^2 \times R^2$ is the universal covering. An idea (quite rough) could be the following. One should assume that particles are indistinguishable from scratch, already in defining the configuration space, which is something like $Q := R^2\times R^2/\sim$ where $(x',y')\sim (x,y)$ iff $x'=y$ and $y'=x$. Or perhaps subtracting the set $\{(z,z)\:|\: z \in R^2\}$ from $R^2\times R^2$ before taking the quotient, to say that particles cannot stay at the same place. Assume the former case for the sake of simplicity. There is a (double?) covering map $\pi : R^2 \times R^2 \to Q$. My guess is the following. If one defines wavefunctions $\Psi$ on $R^2 \times R^2$, he automatically defines many-valued wavefunctions on $Q$. I mean $\psi:= \Psi \circ \pi^{-1}$. The problem of many values physically does not matter if the difference of the two values (assuming the covering is a double one) is just a phase, and this could be written, in view of the identification $\sim$ used to construct $Q$ out of $R^2 \times R^2$: $$\psi(x,y)= e^{ia}\psi(y,x)\:.$$ Notice that the identity cannot be interpreted literally, because $(x,y)$ and $(y,x)$ are the same point in $Q$, so my trick for proving $e^{ia}=\pm 1$ cannot be implemented. The situation is similar to that of QM on $S^1$, inducing many-valued wavefunctions from its universal covering $R$. In that case one writes $\psi(\theta)= e^{ia}\psi(\theta + 2\pi)$. 3rd ADDED REMARK I think I solved the problem I posted, focusing on the model of a couple of anyons discussed on p.225 of this paper matwbn.icm.edu.pl/ksiazki/bcp/bcp42/bcp42116.pdf suggested by Trimok.
The model is simply this one: $$\psi(x,y):= e^{i\alpha \theta(x,y)} \varphi(x,y)$$ where $\alpha \in R$ is a constant, $\varphi(x,y)= \varphi(y,x)$, $(x,y) \in R^2 \times R^2$, and $\theta(x,y)$ is the angle of the segment $xy$ with respect to some fixed axis. One can pass to coordinates $(X,r)$, where $X$ describes the center of mass and $r:= y-x$. Swapping the particles means $r\to -r$. Without paying attention to mathematical details, one sees that, in fact: $$\psi(X,-r)= e^{i \alpha \pi} \psi(X,r)\quad \mbox{i.e.,}\quad \psi(x,y)= e^{i \alpha \pi} \psi(y,x)\quad (A)$$ for an anticlockwise rotation. (For clockwise rotations a sign $-$ appears in the phase, describing the other element of the braid group $Z_2$. Also notice that, for $\alpha \pi \neq 0, 2\pi$, the function vanishes for $r=0$, namely $x=y$, and this corresponds to the fact that we removed the set $C$ of coincidence points $x=y$ from the space of configurations.) However, closer scrutiny shows that the situation is more complicated: the angle $\theta(r)$ is not well defined without fixing a reference axis where $\theta =0$. Afterwards one may assume, for instance, $\theta \in (0,2\pi)$; otherwise $\psi$ must be considered multi-valued. With the choice $\theta(r) \in (0,2\pi)$, (A) does not hold everywhere. Consider an anticlockwise rotation of $r$. If $\theta(r) \in (0,\pi)$ then (A) holds in the form $$\psi(X,-r)= e^{+ i \alpha \pi} \psi(X,r)\quad \mbox{i.e.,}\quad \psi(x,y)= e^{+ i \alpha \pi} \psi(y,x)\quad (A1)$$ but for $\theta(r) \in (\pi, 2\pi)$, and again for an anticlockwise rotation, one finds $$\psi(X,-r)= e^{-i \alpha \pi} \psi(X,r)\quad \mbox{i.e.,}\quad \psi(x,y)= e^{- i \alpha \pi} \psi(y,x)\quad (A2)\:.$$ Different results arise with different conventions. In any case it is evident that the phase due to the swap process is a function of $(x,y)$ (even if locally constant) and not a constant.
This invalidates my "no-go proof", but also shows that the notion of anyon statistics is deeply different from the standard one based on the groups of permutations, where the phase due to the swap of particles is constant in $(x,y)$. As a consequence the swapped state is different from the initial one, differently from what happens for bosons or fermions, and against the idea that anyons are indistinguishable particles. [Notice also that, in the considered model, swapping the initial pair of bosons means $\varphi(x,y) \to \varphi(y,x)= \varphi(x,y)$, that is $\psi(x,y)\to \psi(x,y)$. That is, swapping anyons does not mean swapping the associated bosons, and this is correct, as it is another physical operation on different physical subjects.] Alternatively one may think of the anyon wavefunction $\psi(x,y)$ as a multi-valued one, again differently from what I assumed in my "no-go proof" and differently from the standard assumptions in QM. This produces a truly constant phase in (A). However, it is not clear to me if, with this interpretation, the swapped state of anyons is the same as the initial one, since I never seriously considered things like (if any) Hilbert spaces of multi-valued functions and I do not understand what happens to the ray-representation of states. This picture is physically convenient, however, since it leads to a tenable interpretation of (A), and the action of the braid group turns out to be explicit and natural. Actually a last possibility appears. One could deal with (standard complex valued) wavefunctions defined on $(R^2 \times R^2 - C)/\sim$ as we know (see above; $C$ is the set of pairs $(x,y)$ with $x=y$) and define the swap operation in terms of phases only (so that my "no-go proof" cannot be applied and the transformations do not change the states): $$\psi([(x,y)]) \to e^{g i\alpha \pi}\psi([(x,y)])$$ where $g \in Z_2$. This can be extended to many particles by passing to the braid group of many particles.
Maybe it is convenient mathematically, but it is not very physically expressive. In the model discussed in the paper I mentioned, it is however evident that, up to a unitary transformation, the Hilbert space of the theory is nothing but a standard bosonic Hilbert space, since the considered wavefunctions are obtained from those of that space by means of a unitary map associated with a singular gauge transformation, and just that singularity gives rise to all the interesting structure! However, in the initial bosonic system the singularity was pre-existent: the magnetic field was a sum of Dirac deltas. I do not know if it makes sense to think of anyons independently from their dynamics. And I do not know if this result is general. I guess that moving the singularity from the statistics to the interaction and vice versa is just what happens in the path integral formulation when moving the external phase to the internal action; see Tengen's answer. This post imported from StackExchange Physics at 2014-04-11 15:20 (UCT), posted by SE-user V. Moretti
The set of lambda calculus expressions $Expr$ is generated by the grammar $$ Expr \ni e ::= x \mid \lambda x\ldotp e \mid e_1 e_2 $$ We can define an interpreter without explicit substitution by using environments and closures: $$ \begin{align*} Env &= Var \rightharpoonup Cl &\text{($\rightharpoonup$ is partial function space)}\\ Cl &= Env \times Expr \end{align*}$$ The evaluation function $eval : Env \times Expr \to Cl$ sends each expression in a given environment to an evaluated closure: $$ \begin{align*} eval_\rho(x) =& \rho(x) \\ eval_\rho(\lambda x\ldotp e) =& (\rho, \lambda x\ldotp e) \\ eval_\rho(e_1 e_2) =& \mathrm{let~}eval_\rho(e_1) = (\rho', \lambda x\ldotp e_1') \\ & \mathrm{in~} eval_{\rho'[x \mapsto eval_\rho(e_2)]}(e_1') \end{align*} $$ Using substitution, a closure $(\rho, e) \in Cl$ can be expanded to yield the represented expression: $$F([x_1 \mapsto c_1, ..., x_n \mapsto c_n], e) = e[F(c_1)/x_1, ..., F(c_n)/x_n]$$ This, however, forgets all sharing. E.g., $F([x \mapsto ([], f)], x x) = f f$ duplicates the expression $f$, which might be very large. An alternative expansion retains sharing: $$ G([x_1 \mapsto e_1, ..., x_n \mapsto e_n],e) = (\lambda x_1 \ldotp \cdots (\lambda x_n \ldotp e) \cdots) ~G(e_1)~\cdots ~ G(e_n) $$ This on the other hand repeats definitions in nested scopes, e.g.: $$ G([x\mapsto ([],f), y \mapsto ([x \mapsto ([],f)], g)], x y) \\ = (\lambda x\ldotp \lambda y\ldotp xy)~(f)~((\lambda x\ldotp g)~f) $$ An equivalent but more economical expansion that doesn't duplicate $g$ would be $$ (\lambda x\ldotp (\lambda y\ldotp xy) g)~f $$ For practical purposes, if one wants to "render" a closure as a plain expression, both $F$ and $G$ are unsatisfactory: the closure environments can be efficiently represented by pointer data structures that take advantage of sharing, but most of that sharing seems to be lost with either approach. Do you know of any efficient data structures and algorithms related to this problem?
Is this problem called something more specific in the literature? I tried searching for "lambda calculus closure expansion", but without much success.
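For concreteness, the evaluator and the forgetful expansion $F$ described above can be sketched in Python (my own illustration, not part of the question; `eval_expr` assumes call-by-value, and the naive substitution in `F` assumes bound names are distinct from environment names):

```python
# Expressions: ('var', x) | ('lam', x, body) | ('app', e1, e2)
# A closure is a pair (env, lambda-expression); env maps names to closures.

def eval_expr(env, e):
    tag = e[0]
    if tag == 'var':                  # eval_rho(x) = rho(x)
        return env[e[1]]
    if tag == 'lam':                  # eval_rho(lam x. e) = (rho, lam x. e)
        return (env, e)
    if tag == 'app':                  # call-by-value application
        cenv, lam = eval_expr(env, e[1])
        arg = eval_expr(env, e[2])
        new_env = dict(cenv)          # extend the *closure's* environment
        new_env[lam[1]] = arg
        return eval_expr(new_env, lam[2])
    raise ValueError(tag)

def F(cl):
    """Forgetful expansion of a closure into a plain expression (loses sharing).
    Naive substitution: assumes bound names do not shadow environment names."""
    env, e = cl
    def go(e):
        if e[0] == 'var':
            return F(env[e[1]]) if e[1] in env else e
        if e[0] == 'lam':
            return ('lam', e[1], go(e[2]))
        return ('app', go(e[1]), go(e[2]))
    return go(e)

I = ('lam', 'x', ('var', 'x'))
K = ('lam', 'x', ('lam', 'y', ('var', 'x')))
cl = eval_expr({}, ('app', K, I))   # a closure capturing x -> I in its env
print(F(cl))                        # → ('lam', 'y', ('lam', 'x', ('var', 'x')))
```

Here the closure `cl` is exactly the kind of object whose rendering is at issue: its environment entry for `x` may be shared with many other closures, and `F` duplicates it at every use.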
Is this word problem solvable with a lack of information?

June 15th, 2015, 12:48 AM # 1: Two gears engage with each other. Gear 1 has 6 teeth. Gear 2 has 8 teeth. As Gear 1 turns, it causes Gear 2 to turn at a different rate. Gear 1 is rotated until the two gears are back to the starting position. What is the minimum number of rotations Gear 1 requires to return to this starting position? I have no idea how to do it. Could someone please explain it? Thanks.

June 15th, 2015, 02:44 AM # 2: You need to assume the teeth of gear 1 engage directly with those of gear 2, so that each revolution of gear 1 causes 3/4 of a revolution of gear 2. The gears then first return to their starting positions after 4 revolutions of gear 1 (which have caused 3 revolutions of gear 2).

June 15th, 2015, 05:30 AM # 3: This is a lowest common multiple question. Prime factor decomposition: $\displaystyle 6 = 2 \times 3$ and $\displaystyle 8 = 2 \times 2 \times 2$. The highest common factor is 2, therefore the lowest common multiple is $\displaystyle (2 \times 3) \times (2 \times 2 \times \cancel{2}) = 2 \times 2 \times 2 \times 3 = 24$. So 24 teeth must engage for the gears to return to their starting position. The number of rotations for gear 1 is $\displaystyle \frac{24}{6} = 4$ and the number of rotations for gear 2 is $\displaystyle \frac{24}{8} = 3$.
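The lowest-common-multiple argument above generalizes to any tooth counts; a quick sketch (mine, not from the thread):

```python
from math import gcd

def gear_rotations(t1, t2):
    """Minimum whole rotations of each gear before both are back at start.

    The same number of teeth pass the contact point on both gears, so the
    starting configuration first repeats after lcm(t1, t2) teeth have engaged.
    """
    lcm = t1 * t2 // gcd(t1, t2)
    return lcm // t1, lcm // t2

print(gear_rotations(6, 8))  # → (4, 3): 4 turns of gear 1, 3 of gear 2
```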
I've formulated two conjectures that seem to imply a strong result when combined with well-known equivalences of the Riemann hypothesis, and I would like to know how to get a disproof of these statements. Let $S(n)=\sum_{k=1}^n (n \bmod k)$ be the sum-of-remainders function; it is known that for each $n>1$ $$\sigma(n)+S(n)=S(n-1)+2n-1,$$ where $\sigma(n)$ is the sum-of-divisors function, and let $H_n=1+1/2+\ldots+1/n$ be the $n$th harmonic number. Conjecture 1. The following asymptotic equivalence holds: $$\frac{S(n)}{e^{H_n}}\sim\frac{n}{10}$$ as $n\to\infty$. Assuming Conjecture 1, Robin's criterion for the Riemann hypothesis (or Lagarias's equivalent criterion; the computations are the same), namely that for all $n>5040$ $$\sigma(n)<e^\gamma n\log\log n,$$ can be evaluated as $$\frac{S(n-1)}{e^{\frac{1}{n}}e^{H_{n-1}}}+\frac{2n}{ne^\gamma e^{O(\frac{1}{n})}}<\frac{1}{e^{H_{n}}}+\frac{S(n)}{e^{H_{n}}}+\frac{e^\gamma n\log\log n}{ne^\gamma e^{O(\frac{1}{n})}},$$ since $H_n=\log n+\gamma+O(\frac{1}{n})$, where $\gamma$ is Euler's constant. Thus $LHS\sim\frac{n-1}{10}+\frac{2}{e^\gamma}$, and by comparison with $RHS\sim\frac{n}{10}+\log\log n$ one concludes that this equivalent of the Riemann hypothesis holds asymptotically. On the other hand, let $M(n)=\sum_{k=1}^n\mu(k)$ be the Mertens function, where $\mu(k)$ is the Möbius function. Conjecture 2. The following holds: $$(M(n))^2=o\left(e^{H_n}\right)$$ as $n\to\infty$.
Thus, comparing with the equivalence of the Riemann hypothesis stated as $M(x)=O(x^{\frac{1}{2}+\epsilon})$ for all $\epsilon>0$, on the assumption of Conjecture 2 one gets a contradiction, since there is a constant $C>0$ such that $\lim_{n\to\infty}\frac{(M(n))^2}{e^{H_n}}$ is computed as $$\lim_{n\to\infty}\frac{C\cdot n^{2(\frac{1}{2}+\epsilon)}}{ne^\gamma e^{O(\frac{1}{n})}}=\lim_{n\to\infty}\frac{Cn^{2\epsilon}}{e^\gamma},$$ which is infinite for any $\epsilon>0$: a contradiction with the assumption that $(M(n))^2=o\left(e^{H_n}\right)$, which means $\lim_{n\to\infty}\frac{(M(n))^2}{e^{H_n}}=0$. I made graphs of these ratios, but I assume that perhaps there are mistakes in my computations or facts that I don't understand, since my reasoning is very soft for such a great unsolved problem. I would like to ask how we can refute these conjectures. Question. Can you refute either of these conjectures? Then I can learn how to give a mathematical argument for my question. Thanks in advance.
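For what it's worth, Conjecture 1 can be probed numerically (my own sketch, not from the question). Since $S(n)=\left(1-\frac{\pi^2}{12}\right)n^2+O(n\log n)$ and $e^{H_n}\sim e^\gamma n$, one expects $S(n)/e^{H_n}\sim cn$ with $c=(1-\pi^2/12)/e^\gamma\approx 0.0997$, suspiciously close to, but presumably not exactly, $1/10$:

```python
from math import exp, pi

def S(n):
    """Sum of remainders: sum of (n mod k) for k = 1..n."""
    return sum(n % k for k in range(1, n + 1))

def sigma(n):
    """Sum of divisors of n (naive)."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def H(n):
    """n-th harmonic number."""
    return sum(1.0 / k for k in range(1, n + 1))

# sanity check of the identity quoted above: sigma(n) + S(n) = S(n-1) + 2n - 1
assert sigma(10) + S(10) == S(9) + 2 * 10 - 1

c = (1 - pi**2 / 12) / exp(0.5772156649015329)  # ≈ 0.0997, gamma = Euler's constant
for n in (500, 1000, 2000):
    print(n, S(n) / exp(H(n)) / n)              # ratios hover near c, i.e. near 1/10
```

So the "$n/10$" in Conjecture 1 is plausibly an artifact of $c$ being numerically close to $0.1$, which already suggests where the conjecture, taken literally, should fail.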
For this univariate linear regression model $$y_i = \beta_0 + \beta_1x_i+\epsilon_i$$ given the data set $D=\{(x_1,y_1),...,(x_n,y_n)\}$, the coefficient estimates are $$\hat\beta_1=\frac{\sum_ix_iy_i-n\bar x\bar y}{\sum_ix_i^2-n\bar x^2}$$ $$\hat\beta_0=\bar y - \hat\beta_1\bar x$$ Here is my question: according to the book and Wikipedia, the standard error of $\hat\beta_1$ is $$s_{\hat\beta_1}=\sqrt{\frac{\sum_i\hat\epsilon_i^2}{(n-2)\sum_i(x_i-\bar x)^2}}$$ How and why? 3rd comment above: I already understand how it comes about. But still a question: in my post, the standard error has $(n-2)$, whereas according to your answer, it doesn't. Why? In my post, it is found that $$ \widehat{\text{se}}(\hat{b}) = \sqrt{\frac{n \hat{\sigma}^2}{n\sum x_i^2 - (\sum x_i)^2}}. $$ The denominator can be written as $$ n \sum_i (x_i - \bar{x})^2 $$ Thus, $$ \widehat{\text{se}}(\hat{b}) = \sqrt{\frac{\hat{\sigma}^2}{\sum_i (x_i - \bar{x})^2}} $$ With $$ \hat{\sigma}^2 = \frac{1}{n-2} \sum_i \hat{\epsilon}_i^2, $$ i.e. the mean squared error (MSE) in the ANOVA table, we end up with your expression for $\widehat{\text{se}}(\hat{b})$. The $n-2$ term accounts for the loss of 2 degrees of freedom in the estimation of the intercept and the slope. Another way of thinking about the $n-2$ df is that it arises because we use 2 means in estimating the slope coefficient (the means of $Y$ and $X$). On degrees of freedom, from Wikipedia: "...In general, the degrees of freedom of an estimate of a parameter are equal to the number of independent scores that go into the estimate minus the number of parameters used as intermediate steps in the estimation of the parameter itself."
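To see the pieces fit together numerically, here is a small self-contained check (my own sketch): it verifies the denominator identity $n\sum_i x_i^2-(\sum_i x_i)^2=n\sum_i(x_i-\bar x)^2$ and computes $s_{\hat\beta_1}$ with the $(n-2)$ divisor on synthetic data with a known slope:

```python
import math
import random

random.seed(0)
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
y = [2.0 + 3.0 * xi + random.gauss(0, 0.5) for xi in x]  # true slope = 3

xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
b0 = ybar - b1 * xbar

# the two forms of the denominator agree: n*sum(x^2) - (sum x)^2 = n*Sxx
lhs = n * sum(xi ** 2 for xi in x) - sum(x) ** 2
assert abs(lhs - n * sxx) < 1e-8 * abs(lhs)

# standard error of the slope, with the (n - 2) degrees-of-freedom divisor
resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
sigma2_hat = sum(e ** 2 for e in resid) / (n - 2)  # the MSE
se_b1 = math.sqrt(sigma2_hat / sxx)
print(b1, se_b1)  # slope estimate close to 3; SE of order 0.5 / sqrt(Sxx)
```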
It is maybe simpler to consider all the generators as representations of $SL(2,C)$, so, using spinor indices, you will have: $M^{\alpha \dot \alpha \beta \dot \beta}, P^{\beta \dot \beta}, Q_\alpha, \bar Q^{\dot\beta}$. Indices are raised and lowered with the Levi-Civita symbols $\epsilon_{\alpha \beta}, \epsilon^{\alpha \beta},\epsilon_{\dot \alpha \dot \beta},\epsilon^{\dot \alpha \dot \beta}$. Now, what is $[P^{\beta \dot \beta}, Q_\alpha]$? We see that there is no generator of the form $G^{\beta \dot \beta}_\alpha$. The Levi-Civita symbols are not useful either, because they have $2$ lower or upper indices of the same kind, so we cannot write something like $[P^{\beta \dot \beta}, Q_\alpha] = \epsilon_{\alpha \beta}Q^{\dot\beta}$ (there would be an obvious problem with the $_\beta$ index). So the only solution is a contraction on the indices $\alpha$ and $\beta$, that is: $[P^{\beta \dot \beta}, Q_\alpha] = \delta_{\alpha}^{\beta} \bar Q^{\dot\beta}$. With $P^\mu = \sigma^\mu_{\beta \dot \beta}P^{\beta \dot \beta}$ (which means simply that the $(\frac{1}{2}, \frac{1}{2})$ representation of $SL(2,C)$ is equivalent to the fundamental representation of $SO(3,1)$), we get finally: $[P^\mu, Q_\alpha] = \sigma^\mu_{\beta \dot \beta}\delta_{\alpha}^{\beta} \bar Q^{\dot\beta} = \sigma^\mu_{\alpha \dot \beta} \bar Q^{\dot\beta}$. This post imported from StackExchange Physics at 2014-08-12 09:38 (UCT), posted by SE-user Trimok
joriki: I don't know why people complain about copy-pasted homework. I wish people would copy-paste their homework. What gets me is when they paraphrase it incorrectly or copy it sloppily by hand and a lot more work is spent on fixing the problem statement and figuring out which errors were in the original and which are due to the copying process than is spent on actually answering a question. Berlin, Germany. Member for 8 years, 7 months.
Let $S=K[X_1,\dots,X_{n-1}]$, and consider the polynomial extension $S\subset S[X]$. The question becomes: why does a maximal ideal of $S[X]$ lie over a maximal ideal of $S$? This holds in the more general frame of finitely generated algebras over a field, and a proof can be found here. (However, one cannot extend the property too much, since even for $S$ a noetherian UFD this is not true; for a counterexample see here.) In general, if $N=M\cap S$ then it's easy to see that $NS[X]\subsetneq M$, for the simple reason that $NS[X]$ is not maximal ($S[X]/NS[X]\simeq (S/N)[X]$).
I received this question during an onsite interview for a quant job and I'm still scratching my head over how to solve it. Any help would be appreciated. Mr Quant thinks that there is a linear relationship between past and future intraday returns, so he would like to test this idea. For convenience, he decided to parameterize returns in his data set using a regular time grid $(d,t)$, where $d=0, \ldots, D-1$ labels the date and $t=0, \ldots, T-1$ the intraday time period. For example, if we split the day into 10-minute intervals then $T = 1440 / 10$. His model written on this time grid has the following form: $$y_{d,t} = \beta_t\, x_{d,t} + \epsilon_{d,t}$$ where $y_{d,t}$ is the return over the time interval $(t,t+1)$ and $x_{d,t}$ is the return over the previous time interval, $(t-1,t)$, on a given day $d$. In other words, he thinks that the previous 10-minute return predicts the next 10-minute return, but the coefficient between them might change intraday. Of course, to fit $\beta_t$ he can use $T$ ordinary least squares regressions, one for each "$t$", but: (a) his data set is fairly small ($D$=300, $T$=100); (b) he thinks that the signal is very small; at best it has a correlation with the target of 5%. He hopes that some machine learning method that can combine regressions from nearby intraday times can help. How would you solve this problem? The data provided is an $x$ matrix of predictors of size $300\times100$ and a $y$ matrix of targets of size $300\times100$.
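One natural answer (a sketch of my own, not a known "official" solution) is to pool the $T$ separate regressions with a kernel over intraday time, shrinking each $\beta_t$ toward its neighbours; the bandwidth trades variance against bias and can be chosen by cross-validation over days. A minimal version, assuming the $300\times100$ matrices described above:

```python
import numpy as np

def smooth_betas(x, y, h=4.0):
    """Kernel-weighted least squares: the estimate of beta_t pools the
    per-time OLS sufficient statistics from nearby intraday times s,
    with Gaussian weights exp(-(t - s)^2 / (2 h^2)). h -> 0 recovers
    T independent OLS fits; larger h trades variance for bias."""
    sxy = (x * y).sum(axis=0)      # per-time sum over days of x * y
    sxx = (x * x).sum(axis=0)      # per-time sum over days of x^2
    t = np.arange(x.shape[1], dtype=float)
    w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)  # (T, T) kernel
    return (w @ sxy) / (w @ sxx)

# synthetic check that a smoothly varying beta_t is recovered
rng = np.random.default_rng(0)
D, T = 300, 100
beta_true = 0.5 * np.cos(2 * np.pi * np.arange(T) / T)
x = rng.normal(size=(D, T))
y = beta_true * x + 0.5 * rng.normal(size=(D, T))
beta_hat = smooth_betas(x, y, h=4.0)
# beta_hat tracks beta_true closely, unlike the 100 separate noisy OLS fits
```

Equivalent framings include ridge regression on a spline or Fourier basis for $\beta_t$, or a Bayesian model with a smooth Gaussian-process prior on $t \mapsto \beta_t$; all implement the same bias-variance trade.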
Fernholz and Karatzas have published various papers about so-called stochastic portfolio theory. Basically they say that the return to be expected from a portfolio in the long run is the growth rate $$ \gamma = \mu - \frac12 \sigma^2 $$ rather than $\mu$, where $\mu$ is the drift coefficient of the price process $S_t$, which solves the following SDE: $$ dS_t = \mu S_t dt + \sigma S_t dB_t. $$ One can argue with Ito's lemma, with the geometric mean of a lognormal random variable, and the like; but what is the intuition behind this? As references, see Stochastic Portfolio Theory and Stock Market Equilibrium by Fernholz and Shay for the first paper on this, and Does a Low Volatility Portfolio Need a "Low Volatility Anomaly?" by Meidan as a more recent reference. If I am not mistaken, the above SDE would look like this: $$ dS_t = (\mu-\sigma^2/2) S_t dt + \sigma S_t \circ dB_t $$ in Stratonovich form, and one sees the "correct" growth rate... which is another link. But what is the big picture of all this?
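The cleanest intuition is that a single long-run path is governed by $\log S_T$, not by $E[S_T]$: almost surely $\log S_T/T \to \mu-\sigma^2/2$, while $E[S_T]=S_0e^{\mu T}$ grows at rate $\mu$ only because it is dominated by ever rarer, ever larger paths. A quick simulation sketch (my own, with arbitrary parameter values):

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 0.10, 0.30        # illustrative drift and volatility
T, n_paths = 1000.0, 2000     # horizon in "years", number of independent paths
S0 = 1.0

# exact solution of the SDE: log S_T = log S_0 + (mu - sigma^2/2) T + sigma B_T
log_ST = (np.log(S0) + (mu - 0.5 * sigma**2) * T
          + sigma * np.sqrt(T) * rng.normal(size=n_paths))
growth = log_ST / T           # realized exponential growth rate of each path

print(growth.mean())          # ≈ mu - sigma^2/2 = 0.055, not mu = 0.10
print(growth.std())           # shrinks like sigma / sqrt(T): every path converges
```

Every simulated path's realized growth rate clusters tightly around $\mu-\sigma^2/2$, which is why Fernholz and Karatzas call $\gamma$, not $\mu$, the relevant long-run return.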
Overview This solution is a simpler alternative to that of gIS. It makes use only of the standard functions of Mathematica: non-commutative multiplication (NCM) and replacement. Due to the uncommon features of the NCM, some care must be taken with linear combinations. In the first part we study the expression $\frac{1}{1-D f}$, which requires the powers of $(f D)$. The second part is then devoted to the more general expression of the OP. Part 1 We use the standard operation NonCommutativeMultiply[] as d and f do not commute (we write d instead of D to comply with Mathematica rules). The shift operator will be implemented as the following replacement r = d ** f[u_] -> f[u + a] ** d; Now we have for the first few powers (notice that we have to use ReplaceRepeated[]) (f[x] ** d) //. r (* Out[1111]= f[x] ** d *) (f[x] ** d) ** (f[x] ** d) //. r (* Out[1112]= f[x] ** f[a + x] ** d ** d *) (f[x] ** d) ** (f[x] ** d) ** (f[x] ** d) //. r (* Out[1113]= f[x] ** f[a + x] ** f[2 a + x] ** d ** d ** d *) So the shift operator does what it should do. We can call the product where all d's are pushed through to the right "normal". Hence we know how to generate the normal product of the powers of (f[x] d). Now the general power can be generated as p[n_] := NonCommutativeMultiply @@ Table[f[x] ** d, {n}] and the normal product is given by pn[n_] := p[n] //. r pn[3] (* Out[1114]= f[x] ** f[a + x] ** f[2 a + x] ** d ** d ** d *) We can generate the normal product of any power of the form (f[x] d). This completes the first part. Part 2 The second part is not difficult. The only additional expression is the product g[x] d (f d)^n For example (n=3) q = g[x] ** d ** pn[3] //. r (* Out[1138]= g[x] ** f[a + x] ** f[2 a + x] ** f[3 a + x] ** d ** d ** d ** d *) To finalize all expressions we use this second replacement rf = {d -> 1, NonCommutativeMultiply -> Times}; For example q //.
rf (* Out[1139]= f[a + x] f[2 a + x] f[3 a + x] g[x] *) or q1 = pn[2] + pn[3] (* Out[1140]= f[x] ** f[a + x] ** d ** d + f[x] ** f[a + x] ** f[2 a + x] ** d ** d ** d *) q1 /. rf (* Out[1141]= f[x] f[a + x] + f[x] f[a + x] f[2 a + x] *) Some care has still to be taken in linear combinations: we need to apply the function Distribute[] and have to take (-1) as an expression to appear as a factor of the NCM. The complete expression including g is then (in "finalized" form) gf[n_] := Distribute[(1 + (-1) ** g[x] ** d) ** pn[n]] //. r //. rf Example gf[3] (* Out[1187]= f[x] f[a + x] f[2 a + x] - f[a + x] f[2 a + x] f[3 a + x] g[x] *) Discussion 1) The expression $$\text{ff}=\frac{1}{1-D f}$$ can be written explicitly as ff := 1 + Sum[Product[f[x + k a], {k, 0, n - 1}], {n, 1, \[Infinity]}] $$\text{ff}\text{=}\sum _{n=1}^{\infty } \prod _{k=0}^{n-1} f(a k+x)+1$$ This expression can then be studied for further simplification depending on the function f. 2) In a comment, Jens pointed out that the term "shift operator" is reserved in standard literature as e.g. in quantum mechanics text books, and defined there as D f(x) = f(x+a) instead of D f(x) = f(x+a) D as in the OP. Considering a typical expression of the OP, w = (D f)(D f), we see the difference Standard use: (D f)(D f) = f(x+a) D f = f(x+a) f(x+a) = f(x+a)^2 OP: (D f)(D f) = f(x+a) D D f = f(x+a) D f(x+a) D = f(x+a) f(x+2a) D^2 I have adopted the understanding of the OP. In standard use the problem is trivial. 3) Example Example With $$f(x)=x$$ the exponential operator gives $$\text{fe}=\exp (f(x) d)=(1-a)^{-\frac{x}{a}}$$
Every node is equivalent to every other node. To see this, draw the graph of this Markov chain: Each of these four figures is a graph of the chain--all are equally good representations of it. The nodes are consistently represented by color and the transitions by edge style: the solid edges are, say, the $\alpha$ transitions and the dashed edges are therefore the $\beta$ transitions. (The graphs are related by graph automorphisms: the second and fourth are obtained by the geometric equivalent of a mirror image of the first and third, respectively, while the third is obtained from the first as a horizontal mirror image. The two reflections generate a group isomorphic to the Dihedral group $D_2$.) Because the graph can be drawn in these four ways, all of which are identical except for the node coloring, it is apparent that the blue node is the same as the node in the upper left, upper right, lower left, and lower right corners of the first graph. That shows all nodes are equivalent. (Intuitively, each node "sees" the same environment within the graph. If the nodes were not labeled and you were standing on the graph at one of them, you could not tell which of them it happened to be.) Assuming neither of $\alpha$ nor $\beta$ is zero, all nodes will be connected, whence there is a stationary state. Because all nodes are equivalent, they must have the same weights $\pi_i$ in this stationary state. (If they did not have the same weights, making additional transitions would result in new distributions that are weighted averages of the original distribution. The extreme--maximum and minimum--values among the $\pi_i$ would thereby be altered unless they already were equal.) Of course, once you have seen this, it's simple to check by plugging in $1/4=\pi_i$ to the four simultaneous linear equations and noting they are satisfied. 
Note that if $\alpha=0$ or $\beta=0$ the conclusion is false (there are stationary states in which some of the $\pi_i$ differ and in fact there are initial states that reach no limiting state). It's worthwhile reviewing the argument to see at which points it breaks down in such cases.
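To make the symmetry argument concrete, here is a numerical sketch in Python. The figures are not reproduced above, so the transition structure below (two perfect matchings serving as the α and β edge sets) is an assumed reconstruction; any vertex-transitive wiring gives the same uniform answer.

```python
import numpy as np

# Hypothetical reconstruction (the figures are not shown here): assume each
# state follows its solid "alpha" edge to one partner and its dashed "beta"
# edge to another, with beta = 1 - alpha.
alpha = 0.3
P_alpha = np.array([[0, 1, 0, 0],
                    [1, 0, 0, 0],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]], dtype=float)   # 0<->1, 2<->3
P_beta = np.array([[0, 0, 1, 0],
                   [0, 0, 0, 1],
                   [1, 0, 0, 0],
                   [0, 1, 0, 0]], dtype=float)    # 0<->2, 1<->3
P = alpha * P_alpha + (1 - alpha) * P_beta        # row-stochastic

# The uniform vector solves the four stationarity equations pi P = pi:
pi = np.full(4, 0.25)
print(np.allclose(pi @ P, pi))                    # True

# Solving pi P = pi together with sum(pi) = 1 recovers the same answer:
A = np.vstack([P.T - np.eye(4), np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi_solved, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(pi_solved, 6))                     # [0.25 0.25 0.25 0.25]

# With alpha = 0 the chain decouples into {0,2} and {1,3}, and non-uniform
# stationary states such as (1/2, 0, 1/2, 0) appear:
pi_bad = np.array([0.5, 0.0, 0.5, 0.0])
print(np.allclose(pi_bad @ P_beta, pi_bad))       # True
```

The last check illustrates the caveat above: when one edge type vanishes, the graph is no longer connected and the stationary state is no longer unique.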
Answer 5.4 percent Work Step by Step We use the margin of error equation to find: $$\text{margin of error} = 100\times \frac{1}{\sqrt{n}} = 100 \times \frac{1}{\sqrt{340}} \approx 5.4\%$$
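The arithmetic can be sketched in a couple of lines of Python, using the quick $100/\sqrt{n}$ approximation from the answer:

```python
from math import sqrt

# Quick margin-of-error approximation (in percentage points) for sample size n.
def margin_of_error(n):
    return 100 / sqrt(n)

print(round(margin_of_error(340), 1))  # 5.4
```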
I asked this question before about whether I can take a component of angular velocity along another axis and say that the body spins about that axis with that component. Now I have another doubt: Consider a rigid body having an inertia $I_0$ and angular velocity $\omega_0$ about some axis. So according to the answer to my question above, I can say that the object has an angular velocity $$\omega_0\cos\theta$$ about an axis inclined at $\theta$. And I can also say that the angular momentum about that axis will be $$I_0\omega_0\cos\theta$$ by taking the component of the angular momentum about the original axis, $I_0\omega_0$ along the axis at $\theta$. So why can't I say that the inertia about that axis will be $$I = \frac L{\omega}=\frac{I_0\omega_0\cos\theta}{\omega_0\cos\theta} = I_0$$ Where is the problem in this?
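A numerical sketch of the computation described in the question may help locate the issue. The inertia tensor below is hypothetical; the point it illustrates is that the ratio of the projected components always reproduces $I_0$, while the actual moment of inertia about the tilted axis, $\hat n \cdot I \cdot \hat n$, is generally a different number.

```python
import numpy as np

# Hypothetical rigid body: diagonal inertia tensor with distinct entries.
I = np.diag([2.0, 2.0, 1.0])        # Ixx, Iyy, Izz in arbitrary units
I0 = I[2, 2]                        # inertia about the spin (z) axis
w0 = 3.0
omega = np.array([0.0, 0.0, w0])    # body spins about z

theta = np.deg2rad(40)
n = np.array([np.sin(theta), 0.0, np.cos(theta)])   # tilted unit axis

L = I @ omega                       # angular momentum vector
w_n = omega @ n                     # component w0*cos(theta)
L_n = L @ n                         # component I0*w0*cos(theta)

print(L_n / w_n)                    # reproduces I0, exactly as in the question
print(n @ I @ n)                    # the true moment of inertia about n differs
```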
Do you like data? Data about rocks? Open, accessible data that you can use for any purpose without asking? Read on. The server I run the wiki on — legacy Amazon AWS infrastructure — crashed, and my backup strategy turned out to be <cough> flawed. It's now running on state-of-the-art Amazon servers. So my earlier efforts were mostly wiped out... Leaving the road clear for a new experiment! I came across an amazing resource called Mudrock Anisotropy, or — more appealingly — Mr Anisotropy. Compiled by Steve Horne, it contains over 1000 records of rocks, gathered from the literature. It is also public domain and carries only a disclaimer. But it's a spreadsheet, and emailing a spreadsheet around is not sustainable. The Common Ground database that was built by John A. Scales, Hans Ecke and Mike Batzle at Colorado School of Mines in the late 1990s, is now defunct and has been officially discontinued, as of about two weeks ago. It contains over 4000 records, and is public domain. The trouble is, you have to restore a SQLite database to use it. All this was pointing towards a new experiment. I give you: the Rock Property Catalog again! This time it contains not 66 rocks, but 5095 rocks. Most of them have \(V_\mathrm{P}\), \(V_\mathrm{S}\) and \(\rho\). Many of them have Thomsen's parameters too. Most have a lithology, and they all have a reference. Looking for Cretaceous shales in North America to use as analogs on your crossplots? There's a rock for that. As before, you can query the catalog in various ways, either via the wiki or via the web API. Let's say we want to find shales with a velocity over 5000 m/s. You have a few options: Go to the semantic search form on the wiki and type [[lithology::shale]][[vp::>5000]] Make a so-called inline query on your own wiki page (you need an account for this).
Make a query via the web API with a rather long URL: http://www.subsurfwiki.org/api.php?action=ask&query=[[RPC:%2B]][[lithology::shale]][[Vp::>5000]]|%3FVp|%3FVs|%3FRho&format=jsonfm I updated the Jupyter Notebook I published last time with a new query. It's pretty hacky. I'll work on this to produce a more robust method, with some error handling and cleaner code — stay tuned. The database supports lots of properties, including: Citation and reference Description, lithology, colour (you can have pictures if you want!) Location, lat/lon, basin, age, depth Vp, Vs, \(\rho\), as well as \(\rho_\mathrm{dry}\) and \(\rho_\mathrm{grain}\) Thomsen's \(\epsilon\), \(\delta\), and \(\gamma\) Static and dynamic Young's modulus and Poisson ratio Confining pressure, pore pressure, effective stress, axial stress Frequency Fluid, saturation type, saturation Porosity, permeability, temperature Composition There is more from the Common Ground data to add, especially photographs. But for now, I'd love some feedback: is this the right set of properties? Do we need more? I want this to be useful — what kind of data and metadata would you like to see? I'll end with the usual appeal — I'm open to any kind of suggestions or help with this. Perhaps you can contribute new rocks, or a paper containing data? Or maybe you have some wiki skills, or can help write bots to improve the data? What can you bring?
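If you would rather build that long URL programmatically than paste it, a sketch like the following may help. It uses only the endpoint and property names mentioned above and is untested against the live wiki:

```python
from urllib.parse import urlencode

# Sketch of building the Semantic MediaWiki "ask" query URL shown above.
# Endpoint and property names are taken from the post; adjust as needed.
BASE = "http://www.subsurfwiki.org/api.php"

params = {
    "action": "ask",
    "query": "[[RPC:+]][[lithology::shale]][[Vp::>5000]]|?Vp|?Vs|?Rho",
    "format": "json",
}
url = BASE + "?" + urlencode(params)
print(url)
# The response could then be fetched with urllib.request or requests.
```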
Generalizing the entanglement entropy of singular regions in conformal field theories Abstract We study the structure of divergences and universal terms of the entanglement and Rényi entropies for singular regions. First, we show that for (3 + 1)-dimensional free conformal field theories (CFTs), entangling regions emanating from vertices give rise to a universal contribution \( {S}_n^{\mathrm{univ}}=-\frac{1}{8\pi }{f}_b(n){\int}_{\gamma }{k}^2{\log}^2\left(R/\delta \right) \), where γ is the curve formed by the intersection of the entangling surface with a unit sphere centered at the vertex, and k the trace of its extrinsic curvature. While for circular and elliptic cones this term reproduces the general-CFT result, it vanishes for polyhedral corners. For those, we argue that the universal contribution, which is logarithmic, is not controlled by a local integral, but rather depends on details of the CFT in a complicated way. We also study the angle dependence of the entanglement entropy of wedge singularities in 3 + 1 dimensions. This is done for general CFTs in the smooth limit, and using free and holographic CFTs at generic angles. In the latter case, we show that the wedge contribution is not proportional to the entanglement entropy of a corner region in the (2 + 1)-dimensional holographic CFT. Finally, we show that the mutual information of two regions that touch at a point is not necessarily divergent, as long as the contact is through a sufficiently sharp corner. Similarly, we provide examples of singular entangling regions which do not modify the structure of divergences of the entanglement entropy compared with smooth surfaces.
Keywords Conformal Field Theory AdS-CFT Correspondence Notes Open Access This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
Quantum fidelity between two density operators, $\hat{\rho}$ and $\hat{\sigma}$, is given by $F(\hat{\rho},\hat{\sigma})=\left(\operatorname{Tr}\sqrt{\sqrt{\hat{\rho}}\hat{\sigma}\sqrt{\hat{\rho}}}\right)^2$, where $\operatorname{Tr}$ represents the trace. If both density operators represent pure states, $\hat{\rho}=|\psi\rangle\langle\psi|$ and $\hat{\sigma}=|\phi\rangle\langle\phi|$, then this becomes $|\langle\psi|\phi\rangle|^2$. In terms of the Wigner functions, $|\langle\psi|\phi\rangle|^2 = h\int W_{\psi}(x,p)W_{\phi}(x,p)\,dx\,dp$ (from Eqn. (19) in Case, W. B. (2008). Wigner functions and Weyl transforms for pedestrians. American Journal of Physics, 76(10), 937-946). Do we have a similar expression in terms of the quasi-probability distributions P and Q instead of the density operators when the states are pure? Is there also a general expression for the case of mixed states?
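For pure states the stated equality is easy to check numerically. A minimal sketch, assuming only NumPy and implementing the positive-semidefinite matrix square root by eigendecomposition:

```python
import numpy as np

def psd_sqrt(M):
    # Matrix square root of a Hermitian positive semidefinite matrix.
    w, V = np.linalg.eigh(M)
    w = np.clip(w, 0.0, None)
    return (V * np.sqrt(w)) @ V.conj().T

def fidelity(rho, sigma):
    # F(rho, sigma) = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2
    s = psd_sqrt(rho)
    return np.trace(psd_sqrt(s @ sigma @ s)).real ** 2

# Two pure states of a qubit: |0> and |+>.
psi = np.array([1.0, 0.0], complex)
phi = np.array([1.0, 1.0], complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
sigma = np.outer(phi, phi.conj())

# For pure states, fidelity reduces to the overlap |<psi|phi>|^2 = 1/2:
print(np.isclose(fidelity(rho, sigma), abs(psi.conj() @ phi) ** 2))  # True
```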
Volume 58, № 12, 2006 Order reduction for a system of stochastic differential equations with a small parameter in the coefficient of the leading derivative. Estimate for the rate of convergence Ukr. Mat. Zh. - 2006. - 58, № 12. - pp. 1587–1601 In the metric $\rho(X, Y) = (\sup\limits_{0 \leq t \leq T} M|X(t) - Y(t)|^2)^{1/2}$ for an ordinary stochastic differential equation of order $p \geq 2$ with a small parameter multiplying the highest derivative, we establish an estimate of the rate of convergence of its solution to a solution of the stochastic equation of order $p - 1$. Ukr. Mat. Zh. - 2006. - 58, № 12. - pp. 1602–1613 Some properties of Jacobi fields on a manifold of nonpositive curvature are considered. As a result, we obtain relations for derivatives of one class of functions on the manifold. Ukr. Mat. Zh. - 2006. - 58, № 12. - pp. 1614–1623 We study the structure of the spectrum and the completeness and basis property of a system of eigenvectors. Problems for partial differential equations with nonlocal conditions. Metric approach to the problem of small denominators Ukr. Mat. Zh. - 2006. - 58, № 12. - pp. 1624–1650 A survey of works of the authors and their disciples devoted to the investigation of problems with nonlocal conditions with respect to a selected variable in cylindrical domains is presented. These problems are considered for linear equations and systems of partial differential equations that, in general, are ill posed in the Hadamard sense and whose solvability in certain scales of functional spaces is established for almost all (with respect to Lebesgue measure) vectors composed of the coefficients of the problem and the parameters of the domain. Jacobi matrices associated with the inverse eigenvalue problem in the theory of singular perturbations of self-adjoint operators Ukr. Mat. Zh. - 2006. - 58, № 12. - pp.
1651–1662 We establish the relationship between the inverse eigenvalue problem and Jacobi matrices within the framework of the theory of singular perturbations of unbounded self-adjoint operators. Ukr. Mat. Zh. - 2006. - 58, № 12. - pp. 1663–1673 Let $\varphi_t(x),\quad x \in \mathbb{R}_+$, be the value taken at time $t \geq 0$ by a solution of a stochastic equation with normal reflection from the hyperplane, starting at initial time from $x$. We characterize the absolutely continuous (with respect to the Lebesgue measure) component and the singular component of the stochastic measure-valued process $\mu_t = \mu \circ \varphi_t^{-1}$, which is the image of some absolutely continuous measure $\mu$ under the random mapping $\varphi_t(\cdot)$. We prove that the restriction of the Hausdorff measure $H^{d-1}$ to a support of the singular component is $\sigma$-finite. We also present sufficient conditions which guarantee that the singular component is absolutely continuous with respect to $H^{d-1}$. Best linear methods for the approximation of functions of the Bergman class by algebraic polynomials Ukr. Mat. Zh. - 2006. - 58, № 12. - pp. 1674–1685 On concentric circles $T_{\varrho} = \{z \in \mathbb{C}: |z| = \varrho\},\; 0 \leq \varrho < 1$, we determine the exact values of the quantities of the best approximation of holomorphic functions of the Bergman class $A_p$, $2 \leq p \leq \infty$, in the uniform metric by algebraic polynomials generated by linear methods of summation of Taylor series. For $1 \leq p < 2$, we establish exact order estimates for these quantities. Asymptotic normality of fluctuations of the procedure of stochastic approximation with diffusive perturbation in a Markov medium Ukr. Mat. Zh. - 2006. - 58, № 12. - pp. 1686–1692 We consider the asymptotic normality of a continuous procedure of stochastic approximation in the case where the regression function contains a singularly perturbed term depending on the external medium described by a uniformly ergodic Markov process.
Within the framework of the scheme of diffusion approximation, we formulate sufficient conditions for asymptotic normality in terms of the existence of a Lyapunov function for the corresponding averaged equation. Ukr. Mat. Zh. - 2006. - 58, № 12. - pp. 1693–1703 We investigate the close-to-convexity and l-index boundedness of entire solutions of the differential equations $z^2w'' + \beta zw' + (\gamma z^2 - \beta)w = 0$ and $zw'' + \beta w' + \gamma zw = 0$. Multilayer structures of second-order linear differential equations of Euler type and their application to nonlinear oscillations Ukr. Mat. Zh. - 2006. - 58, № 12. - pp. 1704–1714 The purpose of this paper is to present new oscillation theorems and nonoscillation theorems for the nonlinear Euler differential equation $t^2 x'' + g(x) = 0$. Here we assume that $x g(x) > 0$ if $x \neq 0$, but we do not necessarily require that $g(x)$ be monotone increasing. The obtained results are best possible in a certain sense. To establish our results, we use Sturm's comparison theorem for linear Euler differential equations and phase plane analysis for a nonlinear system of Liénard type. Ukr. Mat. Zh. - 2006. - 58, № 12. - pp. 1715–1719 We obtain new results on the maximization of the product of powers of the interior radii of pairwise disjoint domains with respect to certain systems of points in the extended complex plane. Ukr. Mat. Zh. - 2006. - 58, № 12. - pp. 1720–1724 Some properties of regular and normal bitopological spaces are established. The classes of sets inheriting the bitopological properties of regularity and normality are found. A theorem on a finite covering of pairwise normal spaces is proved. We also study the behavior of individual multivalued mappings, taking the axioms of bitopological regularity and normality into account. Ukr. Mat. Zh. - 2006. - 58, № 12. - pp. 1725-1728
Addition of Cross-Relation Equivalence Classes on Natural Numbers is Well-Defined Theorem Let $\struct {\N, +}$ be the semigroup of natural numbers under addition. Let $\struct {\N \times \N, \oplus}$ be the (external) direct product of $\struct {\N, +}$ with itself, where $\oplus$ is the operation on $\N \times \N$ induced by $+$ on $\N$. Let $\boxtimes$ be the cross-relation defined on $\N \times \N$ by: $\tuple {x_1, y_1} \boxtimes \tuple {x_2, y_2} \iff x_1 + y_2 = x_2 + y_1$ Let $\eqclass {x, y} {}$ denote the equivalence class of $\tuple {x, y}$ under $\boxtimes$. \(\displaystyle \eqclass {a_1, b_1} {}\) \(=\) \(\displaystyle \eqclass {a_2, b_2} {}\) \(\displaystyle \eqclass {c_1, d_1} {}\) \(=\) \(\displaystyle \eqclass {c_2, d_2} {}\) \(\displaystyle \leadsto \ \ \) \(\displaystyle \eqclass {a_1, b_1} {} \oplus \eqclass {c_1, d_1} {}\) \(=\) \(\displaystyle \eqclass {a_2, b_2} {} \oplus \eqclass {c_2, d_2} {}\) Proof Let $\eqclass {a_1, b_1} {}, \eqclass {a_2, b_2} {}, \eqclass {c_1, d_1} {}, \eqclass {c_2, d_2} {}$ be $\boxtimes$-equivalence classes such that $\eqclass {a_1, b_1} {} = \eqclass {a_2, b_2} {}$ and $\eqclass {c_1, d_1} {} = \eqclass {c_2, d_2} {}$. 
Then: \(\displaystyle \eqclass {a_1, b_1} {}\) \(=\) \(\displaystyle \eqclass {a_2, b_2} {}\) \(\, \displaystyle \land \, \) \(\displaystyle \eqclass {c_1, d_1} {}\) \(=\) \(\displaystyle \eqclass {c_2, d_2} {}\) \(\displaystyle \leadstoandfrom \ \ \) \(\displaystyle a_1 + b_2\) \(=\) \(\displaystyle a_2 + b_1\) Definition of Cross-Relation \(\, \displaystyle \land \, \) \(\displaystyle c_1 + d_2\) \(=\) \(\displaystyle c_2 + d_1\) Definition of Cross-Relation Then we have: \(\displaystyle \paren {a_1 + c_1} + \paren {b_2 + d_2}\) \(=\) \(\displaystyle \paren {a_1 + b_2} + \paren {c_1 + d_2}\) Commutativity and associativity of $+$ \(\displaystyle \) \(=\) \(\displaystyle \paren {a_2 + b_1} + \paren {c_2 + d_1}\) from above: $a_1 + b_2 = a_2 + b_1, c_1 + d_2 = c_2 + d_1$ \(\displaystyle \) \(=\) \(\displaystyle \paren {a_2 + c_2} + \paren {b_1 + d_1}\) Commutativity and associativity of $+$ \(\displaystyle \leadsto \ \ \) \(\displaystyle \tuple {a_1 + c_1, b_1 + d_1}\) \(\boxtimes\) \(\displaystyle \tuple {a_2 + c_2, b_2 + d_2}\) Definition of $\boxtimes$ \(\displaystyle \leadsto \ \ \) \(\displaystyle \paren {\tuple {a_1, b_1} \oplus \tuple {c_1, d_1} }\) \(\boxtimes\) \(\displaystyle \paren {\tuple {a_2, b_2} \oplus \tuple {c_2, d_2} }\) Definition of $\oplus$ $\blacksquare$
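The theorem is easy to sanity-check computationally. A small Python sketch, modelling equivalence classes by representative pairs of naturals (the specific pairs below are arbitrary):

```python
# Pairs (x, y) of naturals under the cross-relation
#   (x1, y1) ~ (x2, y2)  iff  x1 + y2 == x2 + y1
# model the integers; componentwise addition is well-defined on classes.

def related(p, q):
    (x1, y1), (x2, y2) = p, q
    return x1 + y2 == x2 + y1

def add(p, q):
    return (p[0] + q[0], p[1] + q[1])

# Two representatives of the same class (both encode the integer -1) ...
a1, a2 = (2, 3), (5, 6)
# ... and two representatives of another class (both encode 4).
c1, c2 = (4, 0), (7, 3)

assert related(a1, a2) and related(c1, c2)
# Sums of equivalent representatives are again equivalent:
print(related(add(a1, c1), add(a2, c2)))  # True
```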
Numerical studies of the optimization of the first eigenvalue of the heat diffusion in inhomogeneous media Kaname Matsue, Hisashi Naito Abstract In this paper, we study optimization of the first eigenvalue of $$-\nabla \cdot (\rho (x) \nabla u) = \lambda u$$ in a bounded domain $$\Omega \subset {\mathbb {R}}^n$$ under several constraints for the function $$\rho$$. We consider this problem in various boundary conditions and various topologies of domains. As a result, we numerically observe several common criteria for $$\rho$$ for optimizing eigenvalues in terms of corresponding eigenfunctions, which are independent of topology of domains and boundary conditions. Geometric characterizations of optimizers are also numerically observed. Original language English Pages (from-to) 489-512 Number of pages 24 Journal Japan Journal of Industrial and Applied Mathematics Volume 32 Issue number 2 DOI 10.1007/s13160-015-0177-5 Publication status Published - Jul 28 2015 Externally published Yes
Arithmetic, algebra, number theory, sequence and series, analysis, ... f:R->R is a function satisfying the following properties: (i) f(-x) = - f(x) (ii) f(x+1) = f(x) + 1 (iii) f(1/x) = f(x)/(x^2) for x not equal to 0. Determine whether or not f(x) = x for all real values of x. NOTE: R denotes the set of all real numbers a try f(x+1) = f(x) + 1 implies that f(x+2) = f((x+1)+1) = f(x+1) + 1 = f(x) + 2 and further f(x+y)=f(x)+y for all y. Thus f(x) is a linear function with a slope of 1. In f(x) = x + a, a must be zero because f(-x) = - f(x). So I see no other possibility than f(x) = x. Edit: There is another possibility: a periodic function with period 1 and antisymmetry can be added to x (e.g. g(x) = sin(2*pi*x) or h(x) = sum over all n of a_n sin(n*2*pi*x)) and the first two conditions are not violated. But the third condition will not be fulfilled with this periodic function. You have indeed shown that f(x)=x for all integers. I believe that by using the continued fraction representation you can show f(x)=x for all the rationals (as the continued fraction terminates.) If we were given that f is continuous that would be sufficient. But I can't see how to exclude that f(pi)=pi+delta.
We then get constraints on many other numbers, as f(pi + any rational) = pi + that rational + delta, f(1/pi) = (pi+delta)/pi^2, etc. Old topic, but that's an interesting puzzle. K Sengupta wrote: (i) f(-x) = - f(x) (ii) f(x+1) = f(x) + 1 (iii) f(1/x) = f(x)/(x^2) for x not equal to 0. - According to (i), f(0) = - f(0), so f(0) must be 0. - Based on (i) and (ii), f(x) definitely equals x if x is an integer. Letting x be any real number and n an integer greater than x, we can also obtain: f(x-n) = f(x) - n = - f(n-x), n - f(x) = f(n-x), n = f(x) + f(n-x). - Based on (iii), f(1/x) definitely equals 1/x if x is an integer. Combined with n = f(x) + f(n-x), it leads to: n = f[n - (1/x)] + f(1/x) Let x have the lowest absolute value satisfying the condition that f(x) - x = y ≠ 0. Then: f(x+n) - (x+n) = y (with n being an integer), f(m-x) - (m-x) = -y (with m being an integer), f[1/(x+n)] = f(x+n)/(x+n)^2, f[1/(x+n)] = (x+y+n)/(x+n)^2. By giving n sufficiently large values, values f[1/(x+n)] whose absolute values are even lower than that of f(x) and which do not equal 1/(x+n) can be obtained, which leaves us with a contradiction. What if the lower bound of $\{ |x| : f(x)-x \neq 0\}$ is 0? Rainy Monday wrote: Let x have the lowest absolute value satisfying the condition that f(x) - x = y ≠ 0. Then: Here is an inductive proof for $f_{|\mathbb{Q}}$: let $x \in \mathbb{Q}$. Because of (i) and (ii), we can suppose $0<x<1$. - If $x \in \mathbb{N}$ (in other words, if x is a rational with denominator 1) then $f(x)=x$. - Let $q$ be a fixed integer and suppose that for any rational $p/k$ with $p$ and $k$ coprime and $k \leq q$, we have $f(p/k) = p/k$. - Now take $x=p'/(q+1)$ with $p'$ coprime with $q+1$; since $x<1$, we have $p' \leq q$, and we have exactly what we want because $1/x$ can be written $n+y$ where $n \in \mathbb{N}$ and $y \in \mathbb{Q}$ has denominator $p'$, and so $f(1/x) = 1/x$ and then (iii) gives $f(x) = x$.
You probably can prove it for any periodic continued fraction using the same technique as: $\varphi = 1+1/\varphi$ $f(\varphi) = f(1+1/\varphi) = 1+f(\varphi)/\varphi^2$ $f(\varphi)(1-1/\varphi^2) = 1$ $f(\varphi) = \varphi$ But what if $x$ is transcendental? Or even just not periodic when expressed in continued fraction form? What's wrong is that you mistake a lower bound for a minimum. A bounded subset of $\mathbb{R}$ need not have a minimum. Look, I'm going to apply your reasoning to my example $A$: Let x be the smallest element of $A$. Then there exists $n$ such that $x=1/n$. But wait, 1/(n+1) is lower than $x$. That contradicts the fact that $x$ is the smallest. Then we must have $A = \emptyset$. So writing 1/2 is a mathematical mistake. Still don't see what's wrong with that kind of logic? We don't need any continuity here. Substituting $x\rightarrow-x$ in (ii), we get $f(1-x)=1-f(x)$, and from that, using (iii), $f\left(1/(1-x)\right)=(1-f(x))/(1-x)^2$. The substitution $x\rightarrow 1/(1-x)$ gives $f((x-1)/x)=(x^2-2x+f(x))/x^2$, and once more setting $x\rightarrow 1/(1-x)$ gives $f(x)=2x-f(x)$, i.e. $f(x)=x$.
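The key fact behind the last post is that the substitution $x \mapsto 1/(1-x)$ has order 3, which is why applying it twice more closes the loop back at $x$. A quick check with exact rational arithmetic (the sample point 3/7 is arbitrary):

```python
from fractions import Fraction as F

def T(x):
    # The substitution x -> 1/(1 - x) used in the last post.
    return 1 / (1 - x)

x = F(3, 7)
print(T(x))             # first application: 1/(1-x)
print(T(T(x)))          # second application: (x-1)/x
print(T(T(T(x))) == x)  # True: the substitution is a 3-cycle
```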
I can shed some light on the question, but am not sure I can answer it, as I am not sure it is really even well defined. (1) The Weak Axiom of Revealed Preference is a decision theoretic concept regarding the choices of a single agent. So I do not understand how having $N$ agents is relevant to the problem. (2) Generally speaking, if $U: X \to \mathbb{R}$ is a utility function and $\mathscr{C}$ is a choice correspondence over $X$ such that $\mathscr{C}(A) = \{x \in A \mid U(x) \geq U(y), \ \forall y \in A\}$ then $\mathscr{C}$ will satisfy WARP. This is a straightforward exercise in using the definitions, and the dimensionality of the space plays no role (it's true over an abstract $X$). (3) If the consumption space is $\mathbb{R}^l$ then preferences are defined over $\mathbb{R}^l$. How you project preferences into $\mathbb{R}^2$ matters. (4) If I interpret your question a la denesp, you ask: fix the $2 < k \leq l$ dimensions of consumption and only let dimensions 1 and 2 vary (of course, what we fix the other dimensions at still may make a difference), and assume that this restricted choice correspondence satisfies WARP; will the choice correspondence in general satisfy WARP? To answer (4): if the choices come from a utility function then yes, trivially (see point (2)). If the choices are more general, then no. This fails severely. Take as a counterexample a preference over $\mathbb{R}^3$ such that, fixing the 3rd dimension, the consumer is indifferent over all elements (i.e., the first and second dimensions are null). WARP holds as $\mathscr{C}^{x}(A) = A$ for all $A \subseteq \mathbb{R}^2 \times \{x\}$. This leaves the 3rd dimension wholly unrestricted. Letting the choice function over this last dimension be cyclic (i.e., fail WARP), we see that the original choice correspondence would fail WARP as well. (5) What if we instead interpret your question as: we see every two dimensional projection of choice.
(That is, for every $i,j \leq l$, $i \neq j$, we see the projection of choices over the dimensions $i$ and $j$, fixing the other dimensions arbitrarily.) Well, if the way we fix the other dimensions matters to the restricted choices (e.g., $\mathscr{C}^{x}$ over $\mathbb{R}^2 \times \{x\}$ is not the same as $\mathscr{C}^{y}$ over $\mathbb{R}^2 \times \{y\}$), then we are back to the same type of problem as in (4)---$\mathscr{C}$ could be cyclic when the dimensions are fixed differently. What if how we fix the dimensions doesn't affect the restricted choice (so we have separable preferences, a la Koopmans)? Then WARP will hold over $\mathscr{C}$ if it is rationalized by a preference relation. But the result is not very interesting, since from the restricted choices we know the choice out of $\{x,y\}$ for all $x,y \in \mathbb{R}^l$. It is well known that only binary choices are necessary to verify a rationalization. Without this additional constraint, I believe you could still cook up a counterexample in the spirit of (4) where cyclicality kicks in when certain elements (at least 3, pairwise distinct over 3 different dimensions) are present.
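Point (2), that maximizing a utility function yields a WARP-consistent choice correspondence, can be verified by brute force on a finite toy example (the set X and the utility U below are arbitrary):

```python
from itertools import chain, combinations

# Toy instance of point (2): the choice correspondence generated by
# maximizing a utility function satisfies WARP.
X = ['a', 'b', 'c', 'd']
U = {'a': 3, 'b': 1, 'c': 3, 'd': 0}

def C(A):
    # Choose the U-maximizers from menu A.
    m = max(U[x] for x in A)
    return {x for x in A if U[x] == m}

menus = [set(s) for s in chain.from_iterable(
    combinations(X, r) for r in range(1, len(X) + 1))]

def warp_holds():
    # WARP: if x is chosen from A, y is chosen from B, and both x and y
    # belong to A and B, then x must also be chosen from B.
    for A in menus:
        for B in menus:
            for x in C(A):
                for y in C(B):
                    if x in B and y in A and x not in C(B):
                        return False
    return True

print(warp_holds())  # True
```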
Performing algebraic operations on functions combines them into a new function, but we can also create functions by composing functions. When we wanted to compute a heating cost from a day of the year, we created a new function that takes a day as input and yields a cost as output. The process of combining functions so that the output of one function becomes the input of another is known as a composition of functions. The resulting function is known as a composite function. We represent this combination by the following notation: [latex]\left(f\circ g\right)\left(x\right)=f\left(g\left(x\right)\right)[/latex] We read the left-hand side as [latex]``f[/latex] composed with [latex]g[/latex] at [latex]x,''[/latex] and the right-hand side as [latex]``f[/latex] of [latex]g[/latex] of [latex]x.''[/latex] The two sides of the equation have the same mathematical meaning and are equal. The open circle symbol [latex]\circ [/latex] is called the composition operator. We use this operator mainly when we wish to emphasize the relationship between the functions themselves without referring to any particular input value. Composition is a binary operation that takes two functions and forms a new function, much as addition or multiplication takes two numbers and gives a new number. However, it is important not to confuse function composition with multiplication because, as we learned above, in most cases [latex]f\left(g\left(x\right)\right)\ne f\left(x\right)g\left(x\right)[/latex]. It is also important to understand the order of operations in evaluating a composite function. We follow the usual convention with parentheses by starting with the innermost parentheses first, and then working to the outside. In the equation above, the function [latex]g[/latex] takes the input [latex]x[/latex] first and yields an output [latex]g\left(x\right)[/latex]. 
Then the function [latex]f[/latex] takes [latex]g\left(x\right)[/latex] as an input and yields an output [latex]f\left(g\left(x\right)\right)[/latex]. In general, [latex]f\circ g[/latex] and [latex]g\circ f[/latex] are different functions. In other words, in many cases [latex]f\left(g\left(x\right)\right)\ne g\left(f\left(x\right)\right)[/latex] for all [latex]x[/latex]. We will also see that sometimes two functions can be composed only in one specific order. For example, if [latex]f\left(x\right)={x}^{2}[/latex] and [latex]g\left(x\right)=x+2[/latex], then [latex]\begin{cases}\text{ }f\left(g\left(x\right)\right)=f\left(x+2\right)\hfill \\ \text{ }={\left(x+2\right)}^{2}\hfill \\ \text{ }={x}^{2}+4x+4\hfill \end{cases}[/latex] but [latex]\begin{cases}\text{ }g\left(f\left(x\right)\right)=g\left({x}^{2}\right)\hfill \\ \text{ }={x}^{2}+2\hfill \end{cases}[/latex] These expressions are not equal for all values of [latex]x[/latex], so the two functions are not equal. It is irrelevant that the expressions happen to be equal for the single input value [latex]x=-\frac{1}{2}[/latex]. Note that the range of the inside function (the first function to be evaluated) needs to be within the domain of the outside function. Less formally, the composition has to make sense in terms of inputs and outputs. A General Note: Composition of Functions When the output of one function is used as the input of another, we call the entire operation a composition of functions. For any input [latex]x[/latex] and functions [latex]f[/latex] and [latex]g[/latex], this action defines a composite function, which we write as [latex]f\circ g[/latex] such that [latex]\left(f\circ g\right)\left(x\right)=f\left(g\left(x\right)\right)[/latex] The domain of the composite function [latex]f\circ g[/latex] is all [latex]x[/latex] such that [latex]x[/latex] is in the domain of [latex]g[/latex] and [latex]g\left(x\right)[/latex] is in the domain of [latex]f[/latex]. 
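The definitions above can be illustrated with a short sketch using the running example $f(x)=x^2$ and $g(x)=x+2$:

```python
# f(x) = x**2 and g(x) = x + 2 compose differently in the two orders.
def f(x):
    return x ** 2

def g(x):
    return x + 2

def compose(outer, inner):
    # (outer o inner)(x) = outer(inner(x))
    return lambda x: outer(inner(x))

fg = compose(f, g)   # (f o g)(x) = (x + 2)**2 = x**2 + 4x + 4
gf = compose(g, f)   # (g o f)(x) = x**2 + 2

print(fg(1), gf(1))        # 9 3
print(fg(-0.5), gf(-0.5))  # both 2.25: equal only at the single input x = -1/2
```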
It is important to realize that the product of functions [latex]fg[/latex] is not the same as the function composition [latex]f\left(g\left(x\right)\right)[/latex], because, in general, [latex]f\left(x\right)g\left(x\right)\ne f\left(g\left(x\right)\right)[/latex]. Example 2: Determining whether Composition of Functions is Commutative Using the functions provided, find [latex]f\left(g\left(x\right)\right)[/latex] and [latex]g\left(f\left(x\right)\right)[/latex]. Determine whether the composition of the functions is commutative. [latex]f\left(x\right)=2x+1[/latex], [latex]g\left(x\right)=3-x[/latex] Solution Let’s begin by substituting [latex]g\left(x\right)[/latex] into [latex]f\left(x\right)[/latex]. [latex]\begin{cases}f\left(g\left(x\right)\right)=2\left(3-x\right)+1\hfill \\ \text{ }=6 - 2x+1\hfill \\ \text{ }=7 - 2x\hfill \end{cases}[/latex] Now we can substitute [latex]f\left(x\right)[/latex] into [latex]g\left(x\right)[/latex]. [latex]\begin{cases}g\left(f\left(x\right)\right)=3-\left(2x+1\right)\hfill \\ \text{ }=3 - 2x - 1\hfill \\ \text{ }=-2x+2\hfill \end{cases}[/latex] We find that [latex]g\left(f\left(x\right)\right)\ne f\left(g\left(x\right)\right)[/latex], so the operation of function composition is not commutative. Example 3: Interpreting Composite Functions The function [latex]c\left(s\right)[/latex] gives the number of calories burned completing [latex]s[/latex] sit-ups, and [latex]s\left(t\right)[/latex] gives the number of sit-ups a person can complete in [latex]t[/latex] minutes. Interpret [latex]c\left(s\left(3\right)\right)[/latex]. Solution The inside expression in the composition is [latex]s\left(3\right)[/latex]. Because the input to the s-function is time, [latex]t=3[/latex] represents 3 minutes, and [latex]s\left(3\right)[/latex] is the number of sit-ups completed in 3 minutes.
Using [latex]s\left(3\right)[/latex] as the input to the function [latex]c\left(s\right)[/latex] gives us the number of calories burned during the number of sit-ups that can be completed in 3 minutes, or simply the number of calories burned in 3 minutes (by doing sit-ups). Example 4: Investigating the Order of Function Composition Suppose [latex]f\left(x\right)[/latex] gives miles that can be driven in [latex]x[/latex] hours and [latex]g\left(y\right)[/latex] gives the gallons of gas used in driving [latex]y[/latex] miles. Which of these expressions is meaningful: [latex]f\left(g\left(y\right)\right)[/latex] or [latex]g\left(f\left(x\right)\right)?[/latex] Solution The function [latex]y=f\left(x\right)[/latex] is a function whose output is the number of miles driven corresponding to the number of hours driven. [latex]\text{number of miles }=f\left(\text{number of hours}\right)[/latex] The function [latex]g\left(y\right)[/latex] is a function whose output is the number of gallons used corresponding to the number of miles driven. This means: [latex]\text{number of gallons }=g\left(\text{number of miles}\right)[/latex] The expression [latex]g\left(y\right)[/latex] takes miles as the input and a number of gallons as the output. The function [latex]f\left(x\right)[/latex] requires a number of hours as the input. Trying to input a number of gallons does not make sense. The expression [latex]f\left(g\left(y\right)\right)[/latex] is meaningless. The expression [latex]f\left(x\right)[/latex] takes hours as input and a number of miles driven as the output. The function [latex]g\left(y\right)[/latex] requires a number of miles as the input. Using [latex]f\left(x\right)[/latex] (miles driven) as an input value for [latex]g\left(y\right)[/latex], where gallons of gas depends on miles driven, does make sense. 
The expression [latex]g\left(f\left(x\right)\right)[/latex] makes sense, and will yield the number of gallons of gas used, [latex]g[/latex], driving a certain number of miles, [latex]f\left(x\right)[/latex], in [latex]x[/latex] hours. Q & A Are there any situations where [latex]f\left(g\left(y\right)\right)[/latex] and [latex]g\left(f\left(x\right)\right)[/latex] would both be meaningful or useful expressions? Yes. For many pure mathematical functions, both compositions make sense, even though they usually produce different new functions. In real-world problems, functions whose inputs and outputs have the same units also may give compositions that are meaningful in either order. Try It 2 The gravitational force on a planet a distance r from the sun is given by the function [latex]G\left(r\right)[/latex]. The acceleration of a planet subjected to any force [latex]F[/latex] is given by the function [latex]a\left(F\right)[/latex]. Form a meaningful composition of these two functions, and explain what it means.
I wouldn't call this novel but... I'm going to give an example on how to do this for a specific filter with a specific discretization. A 1st order Butterworth low-pass filter is given by: $H(s) = \frac{Y(s)}{X(s)} =\frac{\omega_c}{s + \omega_c}$ where $\omega_c$ is the cut-off frequency of the filter. The backward difference (aka backward Euler) approximation is given by: $s \approx \frac{1-z^{-1}}{T_s}$ where $T_s$ is the sampling time. By substitution, the discrete transfer function becomes: $H(z) = \frac{Y(z)}{X(z)} = \frac{\omega_c}{\frac{1-z^{-1}}{T_s} + \omega_c} = \frac{\frac{\omega_c \cdot T_s}{1+\omega_c\cdot T_s}}{1 - \frac{1}{1+\omega_c\cdot T_s}\cdot z^{-1}}$ Let $a = \frac{\omega_c \cdot T_s}{1+\omega_c\cdot T_s}$, then: $H(z) = \frac{a}{1 - (1-a)\cdot z^{-1}}$ Thus the difference equation is: $y[n] = a\cdot x[n] + (1-a)\cdot y[n-1]$ Why go through the tedious process of getting an equation for $a$ when I could have just used the above formula and tried to guess some values for $a$? Well, now if you want to change the cut-off frequency $\omega_c$ in real time, you simply have to recalculate $a$ each time you compute the signal. This, of course, assumes you have a constant sampling period. Note that this filter is an IIR filter, not FIR as you requested, but the logic is basically the same: as long as you know how your coefficients relate to the cut-off frequency, then you simply have to recalculate them every time you want a different cut-off frequency. Nothing stops you from repeating the same steps for different discretization methods, different types of filters, etc. You may even wish to work purely in the discrete-time domain, but the relevant part is always knowing how your coefficients relate to the cut-off frequency.
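As a concrete illustration, here is a minimal Python sketch of the difference equation above (function names and test values are mine). Recomputing $a$ from $\omega_c$ whenever it changes is exactly what makes the cutoff tunable in real time:

```python
import math

def one_pole_coeff(fc_hz, fs_hz):
    """a = wc*Ts / (1 + wc*Ts): backward-Euler discretization of the
    1st-order Butterworth low-pass, with wc = 2*pi*fc and Ts = 1/fs."""
    wc = 2.0 * math.pi * fc_hz
    ts = 1.0 / fs_hz
    return wc * ts / (1.0 + wc * ts)

def lowpass(x, fc_hz, fs_hz):
    """y[n] = a*x[n] + (1-a)*y[n-1]; a real-time version would move the
    coefficient computation inside the loop so fc can change per sample."""
    a = one_pole_coeff(fc_hz, fs_hz)
    y, out = 0.0, []
    for sample in x:
        y = a * sample + (1.0 - a) * y
        out.append(y)
    return out
```

For a constant (DC) input the output settles to the input value, as expected of a unity-gain low-pass.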
This is a linear recurrence relation. The methods for calculating formulas for the $n$th term are well known. In particular, we can form a vector of $k$ adjacent elements of the sequence: $$U_i = \begin{bmatrix}u_{i+k-1}\\u_{i+k-2}\\\vdots\\u_i\end{bmatrix}$$ Then we find that $$U_{i+1} = AU_i$$ where $$A = \begin{bmatrix}\frac 1k & \frac 1k & \dots & \frac 1k & \frac 1k \\ 1 & 0 & \dots & 0 & 0 \\ 0 & 1 & \dots & 0 & 0 \\\vdots & \vdots & \ddots & \vdots & \vdots\\ 0&0&\dots & 1 & 0\end{bmatrix}$$ (the first row computes the average of the previous $k$ terms; the remaining rows shift the window down by one). And more generally $U_{n+1} = A^nU_1$. If $A$ can be diagonalized, then there exist matrices $Q, Q^{-1}, D$ with $D$ having only diagonal entries (the eigenvalues of $A$) such that $A = QDQ^{-1}$. And therefore$$U_{n+1} = QD^nQ^{-1}U_1$$If $$D = \begin{bmatrix}a_k & 0 & \dots & 0 \\ 0 & a_{k-1} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots\\ 0&0&\dots & a_1\end{bmatrix}$$which I'll abbreviate to $D = [[a_k \dots a_1 ]]$, then $D^n = [[a_k^n \dots a_1^n ]]$. So $$U_{n+1} = Q[[a_k^n \dots a_1^n ]]Q^{-1}U_1$$ Now the expression $Q[[a_k^n \dots a_1^n ]]Q^{-1}U_1$ depends linearly on each of the $a_i^n$. In particular, there exist constants $B_i$ such that $$u_n = B_1a_1^n + B_2a_2^n + \dots + B_ka_k^n$$The characteristic polynomial of $A$ is $$x^k - \frac 1k\left(x^{k-1} + \dots + x + 1\right)$$so the eigenvalues $a_i$ are its roots. It is easily checked that one root is $a_1 = 1$. And a look at the derivative shows that $1$ is not a multiple root. I'll leave determining the other roots to you. If they are distinct, then $A$ is diagonalizable. Once the eigenvalues are determined, it is not necessary to figure out $Q$. Instead you can solve the system of equations formed by $$u_n = B_1a_1^n + B_2a_2^n + \dots + B_ka_k^n$$for $n = 1, \dots, k$ for the constants $B_i$. Now take the limit: $$\lim u_n = B_1\lim a_1^n + B_2\lim a_2^n + \dots + B_k\lim a_k^n,$$ provided the limits on the right-hand side exist. 
Since the eigenvalues may be complex, we have four cases: $|a_i| < 1$, in which case $\lim_n a_i^n = 0$; $a_i = 1$, in which case $\lim_n a_i^n = 1$; $|a_i| = 1$ and $a_i \ne 1$, in which case $\lim_n a_i^n$ does not converge; and $|a_i| > 1$, in which case $|a_i^n| \to \infty$, so the limit does not exist. For your supposition to be correct, the limit must converge to a finite value, which means you will need to show that $|a_i| < 1$ for $i > 1$. In this case, $$\lim u_n = B_1,$$ so you will also need to show that $B_1$ has the desired form.
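The whole argument is easy to check numerically for a small $k$. A Python sketch using NumPy (the starting values are arbitrary; for $k=3$ one can show the limit $B_1$ is the weighted average $(u_1 + 2u_2 + 3u_3)/6$):

```python
import numpy as np

k = 3
# Companion matrix of the averaging recurrence u[n] = (u[n-1]+...+u[n-k])/k:
# the first row takes the average, the rows below shift the window by one.
A = np.zeros((k, k))
A[0, :] = 1.0 / k
A[1:, :-1] = np.eye(k - 1)

# One eigenvalue is 1; the others lie strictly inside the unit circle,
# which is exactly the convergence condition discussed above.
eigvals = np.linalg.eigvals(A)
assert np.isclose(max(abs(eigvals)), 1.0)

u = [1.0, 4.0, 2.0]              # arbitrary starting values u_1, u_2, u_3
for _ in range(500):
    u.append(sum(u[-k:]) / k)
print(u[-1])
```

The sequence settles quickly because the subdominant eigenvalues have modulus well below 1.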
There are three bodies $a,b,c$ which are discs of diameter $2m,4m,$ and $6m$ respectively. Their emitted wavelengths are $300nm,400nm$ and $500nm$ respectively. Which emission power is maximum? A good place to start when you have this type of question is Hyperphysics, which is great for quick answers to general physics questions. Concerning your question in particular, the radiated power per unit wavelength is $$\left(\frac {du}{d\lambda}\right)\left(\frac c4\right) = \frac{2\pi \kappa T c }{\lambda ^4}$$ Also look at the Rayleigh–Jeans formula: $$\frac{2\pi \kappa T \nu^2 }{c^2}$$ which gives you the radiated power per unit frequency. Or visit this page: Radiated Energy as a Function of Wavelength. You could also make sure you grasp the concepts of heat radiation and radiation energy density, and revisit the Stefan–Boltzmann law. In any case there is a load of information about the subject all over the internet; Google can be a great ally. Hope this helps.
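If one assumes each disc radiates as a black body and the quoted wavelengths are the peaks of its emission spectrum (that reading of the problem is my assumption), Wien's displacement law gives each temperature and the Stefan–Boltzmann law gives each power. A short Python check:

```python
import math

SIGMA = 5.670e-8   # Stefan–Boltzmann constant, W m^-2 K^-4
B_WIEN = 2.898e-3  # Wien's displacement constant, m K

bodies = {          # name: (diameter in m, peak wavelength in m)
    "a": (2.0, 300e-9),
    "b": (4.0, 400e-9),
    "c": (6.0, 500e-9),
}

def power(diameter, lam_peak):
    t = B_WIEN / lam_peak                 # Wien: T = b / lambda_max
    area = math.pi * (diameter / 2) ** 2  # one face of the disc
    return SIGMA * area * t ** 4          # Stefan–Boltzmann: P = sigma*A*T^4

# Since T ~ 1/lambda, the power scales as d^2 / lambda^4.
ranking = sorted(bodies, key=lambda n: power(*bodies[n]), reverse=True)
print(ranking)
```

The scaling $P \propto d^2/\lambda^4$ gives relative values $1/81$, $16/256$, and $36/625$ (up to a common factor), so the ordering comes down to simple arithmetic.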
Please assume that this graph is a highly magnified section of the derivative of some function, say $F(x)$. Let's denote the derivative by $f(x)$. Let's denote the width of a sample by $h$ where $$h\rightarrow0$$ Now, for finding the area under the curve between the bounds $a ~\& ~b $ we can a... @Ultradark You can try doing a finite difference to get rid of the sum and then compare term by term. Otherwise I am terrible at anything to do with primes that I don't know the identities of $\pi (n)$ well @Silent No, take for example the prime 3. 2 is not a residue mod 3, so there is no $x\in\mathbb{Z}$ such that $x^2-2\equiv 0$ mod $3$. However, you have two cases to consider. The first where $\left(\frac{2}{p}\right)=-1$ and $\left(\frac{3}{p}\right)=-1$ (In which case what does $\left(\frac{6}{p}\right)$ equal?) and the case where one or the other of $\left(\frac{2}{p}\right)$ and $\left(\frac{3}{p}\right)$ equals 1. Also, probably something useful for congruence, if you didn't already know: If $a_1\equiv b_1 \pmod p$ and $a_2\equiv b_2 \pmod p$, then $a_1a_2\equiv b_1b_2 \pmod p$ Is there any book or article that explains the motivations of the definitions of group, ring, field, ideal etc. of abstract algebra and/or gives a geometric or visual representation to Galois theory? Jacques Charles François Sturm ForMemRS (29 September 1803 – 15 December 1855) was a French mathematician. Sturm was born in Geneva (then part of France) in 1803. The family of his father, Jean-Henri Sturm, had emigrated from Strasbourg around 1760, about 50 years before Charles-François's birth. His mother's name was Jeanne-Louise-Henriette Gremay. In 1818, he started to follow the lectures of the academy of Geneva. In 1819, the death of his father forced Sturm to give lessons to children of the rich in order to support his own family. In 1823, he became tutor to the son... I spent my career working with tensors. You have to be careful about defining multilinearity, domain, range, etc. 
Typically, tensors of type $(k,\ell)$ involve a fixed vector space, not so many letters varying. UGA definitely grants a number of masters to people wanting only that (and sometimes admitted only for that). You people at fancy places think that every university is like Chicago, MIT, and Princeton. hi there, I need to linearize a nonlinear system about a fixed point. I've computed the Jacobian matrix but one of the elements of this matrix is undefined at the fixed point. What is a better approach to solve this issue? The element is (24*x_2 + 5cos(x_1)*x_2)/abs(x_2). The fixed point is x_1=0, x_2=0 Consider the following integral: $\int \frac14\left(\frac{1}{1+(u/2)^2}\right)dx$ Why does it matter if we put the constant 1/4 in front of the integral versus keeping it inside? The solution is $1/2*\arctan{(u/2)}$. Or am I overlooking something? *it should be du instead of dx in the integral **and the solution is missing a constant C of course Is there a standard way to divide radicals by polynomials? Stuff like $\frac{\sqrt a}{1 + b^2}$? My expression happens to be in a form I can normalize to that, just the radicand happens to be a lot more complicated. In my case, I'm trying to figure out how to best simplify $\frac{x}{\sqrt{1 + x^2}}$, and so far, I've gotten to $\frac{x \sqrt{1+x^2}}{1+x^2}$, and it's pretty obvious you can move the $x$ inside the radical. My hope is that I can somehow remove the polynomial from the bottom entirely, so I can then multiply the whole thing by a square root of another algebraic fraction. Complicated, I know, but this is me trying to see if I can skip calculating Euclidean distance twice going from atan2 to something in terms of asin for a thing I'm working on. "... and it's pretty obvious you can move the $x$ inside the radical" To clarify this in advance, I didn't mean literally move it verbatim, but via $x \sqrt{y} = \text{sgn}(x) \sqrt{x^2 y}$. (Hopefully, this was obvious, but I don't want to confuse people on what I meant.) Ignore my question. 
I'm coming of the realization it's just not working how I would've hoped, so I'll just go with what I had before.
Hello, I've never ventured into chat before but cfr suggested that I ask in here about a better name for the quiz package that I am getting ready to submit to ctan (tex.stackexchange.com/questions/393309/…). Is something like latex2quiz too audacious? Also, is anyone able to answer my questions about submitting to ctan, in particular about the format of the zip file and putting a configuration file in $TEXMFLOCAL/scripts/mathquiz/mathquizrc Thanks. I'll email first but it sounds like a flat file with a TDS included is the right approach. (There are about 10 files for the package proper and the rest are for the documentation -- all of the images in the manual are auto-generated from "example" source files. The zip file is also auto generated so there's no packaging overhead...) @Bubaya I think luatex has a command to force “cramped style”, which might solve the problem. Alternatively, you can lower the exponent a bit with f^{\raisebox{-1pt}{$\scriptstyle(m)$}} (modify the -1pt if need be). @Bubaya (gotta go now, no time for followups on this one …) @egreg @DavidCarlisle I already tried to avoid ascenders. Consider this MWE: \documentclass[10pt]{scrartcl}\usepackage{lmodern}\usepackage{amsfonts}\begin{document}\noindent If all indices are even, then all $\gamma_{i,i\pm1}=1$. In this case the $\partial$-elementary symmetric polynomials specialise to those from at $\gamma_{i,i\pm1}=1$, which we recognise as the ordinary elementary symmetric polynomials $\varepsilon^{(n)}_m$. The induction formula from indeed gives \end{document} @PauloCereda -- okay. poke away. (by the way, do you know anything about glossaries? i'm having trouble forcing a "glossary" that is really an index, and should have been entered that way, into the required series style.) @JosephWright I'd forgotten all about it but every couple of months it sends me an email saying I'm missing out. 
Oddly enough Facebook and LinkedIn do the same, as did ResearchGate before I spam-filtered RG :-) @DavidCarlisle Regarding github.com/ho-tex/hyperref/issues/37, do you think that \textNFSSnoboundary would be okay as a name? I don't want to use the suggested \textPUnoboundary as there is a similar definition in pdfx/l8uenc.def. And textnoboundary isn't imho good either, as it is more or less only an internal definition and not meant for users. @UlrikeFischer I think it should be OK to use @, I just looked at puenc.def and for example \DeclareTextCompositeCommand{\b}{PU}{\@empty}{\textmacronbelow}% so @ needs to be safe @UlrikeFischer that said I'm not sure it needs to be an encoding specific command, if it is only used as \let\noboundary\zzznoboundary when you know the PU encoding is going to be in force, it could just be \def\zzznoboundary{..} couldn't it? @DavidCarlisle But puarenc.def is actually only an extension of puenc.def, so it is quite possible to do \usepackage[unicode]{hyperref}\input{puarenc.def}. And while I used a lot of @ in the chess encodings, since I saw you do \input{tuenc.def} in an example I'm not sure if it was a good idea ... @JosephWright it seems to be the day for merge commits in pull requests. Does github's "squash and merge" make it all into a single commit anyway so the multiple commits in the PR don't matter or should I be doing the cherry picking stuff (not that the git history is so important here) github.com/ho-tex/hyperref/pull/45 (@UlrikeFischer) @JosephWright I really think I should drop all the generation of README and ChangeLog in html and pdf versions. It failed there as the xslt is version 1 and I've just upgraded to a version 3 engine, and it's dropped 1.0 compatibility :-)
Terms sourced from: http://iupac.org/publications/pac/68/12/2223/ "Glossary of terms used in photochemistry (IUPAC Recommendations 1996)", Verhoeven, J.W., Pure and Applied Chemistry 1996, 68(12), 2223 absorbance \(A\)absorptance \(\alpha\)absorption absorption coefficient absorption cross-section \(\sigma\)absorptivity actinometer action spectrum adiabatic electron transfer adiabatic photoreaction annihilation antimony–xenon lamp (arc) apparent lifetime argon ion laser attenuance \(D\)attenuance filter auxochrome avoided crossing ADMR anti-Stokes shift back electron transfer bandgap energy \(E_{\text{g}}\)bandpass filter Barton reaction bathochromic shift (effect) Beer–Lambert law bioluminescence bleaching blue shift carbon dioxide laser cavity dumping charge hopping charge recombination charge separation charge shift charge-transfer transition to solvent charge-transfer (CT) state charge-transfer (CT) transition chemical laser chemiexcitation chemiluminescence chromophore Chemically Induced Dynamic Electron Polarization Chemically Induced Dynamic Nuclear Polarization Chemically Initiated Electron Exchange Luminescence coherent radiation collision complex conduction band configuration (electronic) configuration interaction conversion spectrum copper vapour laser correlation diagram correlation energy crystal field splitting CT current yield cut-off filter CW cadium–helium laser critical quenching radius \(r_{0}\) dark photochemistry (photochemistry without light) Davydov splitting deactivation delayed fluorescence depth of penetration Dexter excitation transfer diabatic electron transfer diabatic photoreaction diode laser dipolar mechanism doublet state driving force (affinity) of a reaction \(A\)dye laser DEDMR DFDMR dynamic quenching effectiveness efficiency \(\eta\)efficiency spectrum einstein electrogenerated chemiluminescence electron correlation electron exchange excitation transfer electron transfer electron transfer photosensitization electronic energy 
migration (or hopping) electronically excited state electrophotography emission emission spectrum emittance \(\varepsilon\)encounter complex encounter pair energy migration energy storage efficiency \(\eta\)energy transfer energy transfer plot enhancer excimer excimer laser exciplex excitation spectrum excitation transfer excited state exciton exitance external heavy atom effect exterplex extinction extinction coefficient electrochemiluminescence electrochromic effect electroluminescence energy pooling ESCA fnumber Fermi level \(E_{\text{F}}\)flash photolysis fluence \(F\),\(\varPsi \),\(H_{0}\)Förster cycle Förster excitation transfer Fourier transform spectrometer Franck–Condon principle free electron laser free-running laser frequency \(f\),\(\nu \)frequency doubling FWHM (Full Width at Half Maximum) factor-group splitting Franck–Condon state half-width Hammond–Herkstroeter plot harmonic frequency generation harpoon mechanism heavy atom effect helium–cadmium laser helium–neon laser Herkstroeter plot heteroexcimer high-pressure mercury lamp (arc) hole burning hole transfer hot ground state reaction hot quartz lamp hot state reaction Hush model hyperchromic effect hypochromic effect hypsochromic shift Hund rules imaging (photoimaging) incoherent radiation inner-sphere electron transfer integrating sphere intended crossing intensity interferometer internal conversion intersystem crossing intervalence charge transfer inverted region (for electron transfer) isoabsorption point isoclinic point inner filter effect Lambert law lamp Laporte rule laser lasing latent image ligand field splitting ligand to ligand charge transfer (LLCT) transition ligand to metal charge transfer (LMCT) transition light source Lorentzian band shape luminescence lumiphore Marcus inverted region (for electron transfer) Marcus–Hush relationship merry-go-round reactor metal to ligand charge transfer (MLCT) transition metal to metal charge transfer (MMCT) transition mode-locked laser multiphoton 
absorption multiphoton process multiplicity n → π* state n → π* transition n → σ* transition neodymium laser nitrogen laser non-linear optical effect non-radiative decay nonadiabatic electron transfer nonadiabatic photoreaction optically detected magnetic resonance optical density optical filter oscillator strength \(f_{ij}\)outer-sphere electron transfer oxa-di-π-methane rearrangement π – π* state π → π* transition Paterno–Büchi reaction phosphorescence photo-Fries rearrangement photoacoustic effect photoacoustic spectroscopy photoassisted catalysis photochemical hole burning photochemistry photoconductivity photocrosslinking photocurrent yield photodegradation photodetachment photodynamic effect photoelectrical effect photoelectrochemical cell photoelectrochemical etching photoelectrochemistry photoelectron spectroscopy photoexcitation photogalvanic cell photoinduced electron transfer photoinduced polymerization photoinitiation photoionization photon photon flow \(\mathit{\Phi} _{\text{p}}\)photooxidation photooxygenation photophysical processes photopolymerization photoreduction photoresist photosensitization photosensitizer photostationary state photothermal effect photothermography photovoltaic cell piezoluminescence population inversion precursor complex predissociation primary photochemical process primary (photo)process primary (photo)product Q-switched laser quantum counter quantum efficiency quantum quartet state quencher quenching constant radiant energy \(Q\)radiationless transition radiative transition radical pair radioluminescence red shift relative spectral responsivity resonance absorption technique resonance fluorescence resonance fluorescence technique resonance line resonance radiation rovibronic state ruby laser Rydberg orbital Rydberg transition π → σ* transition σ → σ* transition sacrificial acceptor sacrificial donor scintillators selection rule self-absorption self-quenching sensitization sensitizer simultaneous pair transitions singlet 
state singlet-singlet annihilation singlet-singlet energy transfer singlet-triplet energy transfer solar conversion efficiency solid state lasers solvatochromism solvent shift sonoluminescence spectral irradiance \(E_{\lambda}\)spectral overlap spectral (photon) effectiveness spectral photon exitance \(M_{\text{p}\lambda}\)spectral photon flow \(\varPhi_{\text{p}\lambda}\)spectral photon flux \(E_{\text{p}\lambda}\)spectral photon radiance \(L_{\text{p}\lambda}\)spectral radiance \(L_{\lambda}\)spectral radiant exitance \(M_{\lambda}\)spectral radiant flux spectral radiant intensity \(I_{\lambda}\)spectral radiant power \(P_{\lambda}\)spectral responsivity spectral sensitization spherical radiance spherical radiant exposure spin conservation rule spin-allowed electronic transition spin-orbit coupling spin-orbit splitting spin–spin coupling spontaneous emission Stark effect state crossing state diagram Stern–Volmer kinetic relationships stimulated emission Stokes shift superexchange interaction superradiance surface crossing thermal lensing thermally activated delayed fluorescence thermochromism thermoluminescence through-bond electron transfer through-space electron transfer TICT emission TICT state time-correlated single photon counting time-resolved spectroscopy transient spectroscopy transition polarization transmittance \(T\),\(\tau\)triboluminescence triplet state triplet-triplet annihilation triplet-triplet energy transfer triplet-triplet transitions tunnelling two-photon excitation two-photon process Vavilov rule vibrational redistribution vibrational relaxation vibronic coupling vibronic transition
Welcome back! I took a few-weeks’ blogging hiatus to focus on end-of-term craziness, but I am now resuming a regular(ish) weekly schedule throughout the summer. Let’s get back to Wythoff’s game! Here’s a brief recap: In trying to solve for optimal play in Wythoff’s game, we saw how to algorithmically find the blue “losing positions”; we observed that these seemed to lie on two lines and, assuming this fact, we computed the lines’ slopes to be \(\phi\) and \(1/\phi\); and we saw how the Fibonacci numbers were hiding all over the place. But one question lingers: why lines? We’ll answer this today. Two posts ago, we saw that if we take all of the (infinitely many) steps in the upper “V” branch and average them together, the result has slope \(\phi\). In fact, with a little more work we can compute this “average step” exactly, not just its slope: it is the vector \(v=(\phi,\phi^2)\). [1] Let’s compare these average steps, namely v, 2 v, 3 v, etc., with the actual ones: The dots and blue squares are perfectly matched! It seems that this may provide a precise way to easily locate all the blue cells in the upper “V” branch at once! And since the whole diagram is symmetric through the line \(y=x\), the lower “V” branch should be governed by vector \(w=(\phi^2,\phi)\) in the same way. Thus, a hypothesis forms: Conjecture: The losing cells in Wythoff’s game are exactly those that contain an integer multiple of vector \(v=(\phi,\phi^2)\) or vector \(w=(\phi^2,\phi)\). If we use the notation \(\lfloor x\rfloor\) for the floor function that rounds x down to the nearest integer [2], then this conjecture says that the nth blue cells on the upper and lower “V” branches have coordinates \((\lfloor n\cdot\phi\rfloor, \lfloor n\cdot\phi^2\rfloor)\) and \((\lfloor n\cdot\phi^2\rfloor, \lfloor n\cdot\phi\rfloor)\) respectively. (When \(n=0\), both formulas give (0,0).) As we will see, this conjecture is indeed correct. How could we rigorously prove this fact? 
To start, in a new grid, let’s color green all cells that fit our hypothesized formula, i.e., contain a multiple of v or w. Fill the rest of the cells with yellow. We now have two separate, a priori unrelated colorings of the grid: one with red/blue according to Wythoff’s game, and another with yellow/green according to vectors v and w. Proving the conjecture amounts to showing that these colorings are the same. We’ll accomplish this by showing that the yellow/green coloring behaves just like the red/blue one: Endgame condition: The cell (0,0) is green. Yellows win: From any yellow cell, it is possible to move to a green cell with one Wythoff’s game move. Greens lose: From a green cell, there are no other green cells accessible with a single Wythoff’s game move. We have already observed these three properties for the red/blue coloring, and we saw that they uniquely determined the red and blue cell positions. If we could show the yellow/green coloring follows the same pattern, we could conclude that the colorings are indeed identical. So we just need to check that our yellow/green formula satisfies these three conditions! Now that we have set the stage to dig into the meat of this proof, it is time to bid farewell until next week. See you then! Notes [1] Indeed, since we know that a=(1,2) and b=(2,3) appear in proportion 1 to \(\phi\), the average step is \(\frac{1}{1+\phi}a+\frac{\phi}{1+\phi}b = (\phi,\phi^2)\). This can also be computed from the Fibonacci observations in the most recent post. [↩] [2] For example, \(\lfloor 2.718\rfloor = 2\), \(\lfloor 5\rfloor = 5\), and \(\lfloor -3.14\rfloor = -4\). [↩]
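Before the proof, the conjecture can at least be tested by machine: color cells green with the floor formula, compute the true losing cells by working upward from (0,0) as in the earlier algorithmic post, and compare. A Python sketch (the grid bound is arbitrary):

```python
import math

N = 50
PHI = (1 + math.sqrt(5)) / 2

# Hypothesized losing cells: (floor(n*phi), floor(n*phi^2)) and its mirror.
green = {(0, 0)}
for n in range(1, N):
    a, b = math.floor(n * PHI), math.floor(n * PHI * PHI)
    green.add((a, b))
    green.add((b, a))
green = {c for c in green if max(c) <= N}

# Actual losing cells, by dynamic programming: a cell is losing iff no
# Wythoff move (shrink x, shrink y, or shrink both equally) reaches a
# losing cell.  Every move lowers x + y, so we process cells by that sum.
losing = set()
for s in range(0, 2 * N + 1):
    for x in range(max(0, s - N), min(s, N) + 1):
        y = s - x
        moves = [(x - d, y) for d in range(1, x + 1)]
        moves += [(x, y - d) for d in range(1, y + 1)]
        moves += [(x - d, y - d) for d in range(1, min(x, y) + 1)]
        if not any(m in losing for m in moves):
            losing.add((x, y))

print(green == losing)
```

On a 50-by-50 grid the two colorings agree exactly, which is reassuring but of course not a proof.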
You are right, this is true if and only if $n = 2$. To prove this (the only if part), observe that if $a_1, a_2, a_3$ are in arithmetic progression, then $a_2 = \dfrac{a_1 + a_3} 2$. Now, let the coefficient matrix be$$A = \begin{bmatrix}a_1&a_2&a_3\\b_1&b_2&b_3\\c_1&c_2&c_3\end{bmatrix}$$where in each row, the entries are in arithmetic progression in the given order. If $a$, $b$, and $c$ denote the three column vectors of $A$, then by our earlier observation, $b = \dfrac 1 2 (a + c)$. Thus, the three columns are linearly dependent, and the matrix is not invertible. Therefore, the system either has no solution, or has infinitely many solutions (since $\operatorname{rank} A$ is not equal to the number of unknowns). In fact, it has infinitely many solutions, if the right hand side of the system also follows the same arithmetic progression in each row. $\DeclareMathOperator{\rank}{rank}$A linear system $Ax = b$, where $A$ is an $m \times n$ matrix, has a solution if and only if $\rank A = \rank [A\ b]$, where $[A\ b]$ is the augmented matrix obtained by appending the column vector $b$ to the matrix $A$. If, further, this rank equals $n$, the number of unknowns (length of $x$), then the solution is unique. The rank can be defined in different ways, but is equal to the total number of linearly independent rows as well as the total number of linearly independent columns of the matrix. Now, suppose $A$ is a matrix whose columns are $A_1, A_2, \ldots, A_n$, and suppose that in each row of the augmented matrix $[A\ b]$, the entries are in arithmetic progression (as we move from the first column $A_1$ to the last, $b$). Then: $A_2 - A_1 = d$, where $d$ is the column vector of the respective common differences of the arithmetic progressions — i.e., $a_{i, j + 1} - a_{ij} = d_i$, $i = 1, \ldots, m$, $j = 1, \ldots, n - 1$; and $b_i - a_{in} = d_i$ as well. Therefore, the $j$th column of $A$ can be written as $A_1 + (j - 1)d = (2 - j)A_1 + (j - 1)A_2$, which is a linear combination of $A_1$ and $A_2$. 
Similarly, $b = (1 - n)A_1 + n A_2$ as well. Thus, $\rank A = \rank [A\ b] = 2$ (unless all the common differences are zero, i.e., $d = 0$, in which case the rank is $1$). Therefore, the system always has a solution. The solution is unique iff $n = 2$ (and $d \ne 0$).
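The rank collapse is easy to see numerically. A small NumPy sketch (the particular progressions are mine; each row of the augmented matrix is an arithmetic progression):

```python
import numpy as np

# Rows of [A | b] are arithmetic progressions: (1,3,5,7), (2,5,8,11), (0,4,8,12).
A = np.array([[1.0, 3.0, 5.0],
              [2.0, 5.0, 8.0],
              [0.0, 4.0, 8.0]])
b = np.array([7.0, 11.0, 12.0])

aug = np.column_stack([A, b])
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(aug))  # prints: 2 2

# rank A == rank [A b] == 2 < 3 unknowns: the system is consistent
# but has infinitely many solutions, as argued above.
```

Since the rank (2) is less than the number of unknowns (3), the system is solvable with a one-parameter family of solutions.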
I'm not entirely sure what you mean in your set-up. Typically, what one wants to do is maximize utility given some turnover constraint. I believe what you're talking about is optimizing several portfolios and minimizing the turnover across all of them. Hence, if you understand the simple case, then it should be easy to adapt it to your problem. First, consider a booksize constraint $$\sum\left|w_{i}\right|=K$$ that ensures that the absolute value of each position sums to some value $K$. Convex optimizers can't handle this directly because the absolute value is not differentiable. The trick is to re-write it with slack variables $x_{i}\geq0$ and $y_{i}\geq0$ with the constraints$$w_{i}=x_{i}-y_{i}$$$$\sum\left(x_{i}+y_{i}\right)=K$$ So for instance, if $w_{1}=-0.1$, then $x_{1}=0$ and $y_{1}=0.1$. This analysis extends easily to handle turnover constraints of the form $$\sum\left|w_{i} - w_{0}\right|=K$$ so all that is required is changing the constraint restricting $x$ and $y$ to $$w_{i} - w_{0}=x_{i}-y_{i}$$ Sometimes when dealing with transaction costs, it can also be helpful to add in the constraint $$x_{i}y_{i}=0$$ to ensure that $x$ and $y$ are identified and one of them is fixed to zero. It may not be necessary because the optimizer should force one to be zero as a result of finding the optimal portfolio, but if you see something like $w_{1}=-0.1$ with $x_{1}=0.1$ and $y_{1}=0.2$, then you would want to add it. This would require an optimizer that handles non-linear constraints, whereas the formulation above only requires linear constraints. In addition, an alternate approach would be to place a constraint on the L2-norm$$\left\Vert w_{i}-w_{0}\right\Vert \leq K$$which can be re-written$$\left(w_{i}-w_{0}\right)'\left(w_{i}-w_{0}\right)\leq K $$ and included in any optimizer that handles non-linear constraints. The downside of this is that it's a little less intuitive than a proper turnover constraint and you may have to test out different values of $K$.
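The slack-variable split can be sanity-checked without running an optimizer. In an actual QP, $x$ and $y$ would be handed to the solver as extra nonnegative decision variables; the identities themselves look like this (illustrative weights, not real data):

```python
import numpy as np

w0 = np.array([0.20, -0.10, 0.30, -0.40])   # current weights (made up)
w  = np.array([0.25, -0.05, 0.10, -0.30])   # proposed weights (made up)

# Slack-variable split of the trade: w - w0 = x - y with x, y >= 0.
x = np.maximum(w - w0, 0.0)   # buys
y = np.maximum(w0 - w, 0.0)   # sells

assert np.allclose(w - w0, x - y)
assert np.all(x * y == 0)                 # complementarity: one side per name
turnover = (x + y).sum()                  # equals sum_i |w_i - w0_i|
assert np.isclose(turnover, np.abs(w - w0).sum())
print(turnover)
```

Because `x` and `y` are built as the positive and negative parts of the trade, the complementarity condition $x_i y_i = 0$ holds by construction here; a solver only needs it imposed (or checked) when both slacks could drift upward together.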
Recall that the base of an exponential function must be a positive real number other than 1. Why do we limit the base b to positive values? To ensure that the outputs will be real numbers. Observe what happens if the base is not positive: Let b= –9 and [latex]x=\frac{1}{2}[/latex]. Then [latex]f\left(x\right)=f\left(\frac{1}{2}\right)={\left(-9\right)}^{\frac{1}{2}}=\sqrt{-9}[/latex], which is not a real number. Why do we limit the base to positive values other than 1? Because base 1 results in the constant function. Observe what happens if the base is 1: Let b= 1. Then [latex]f\left(x\right)={1}^{x}=1[/latex] for any value of x. To evaluate an exponential function with the form [latex]f\left(x\right)={b}^{x}[/latex], we simply substitute x with the given value, and calculate the resulting power. For example: Let [latex]f\left(x\right)={2}^{x}[/latex]. What is [latex]f\left(3\right)[/latex]? To evaluate an exponential function with a form other than the basic form, it is important to follow the order of operations. For example: Let [latex]f\left(x\right)=30{\left(2\right)}^{x}[/latex]. What is [latex]f\left(3\right)[/latex]? Note that if the order of operations were not followed, the result would be incorrect: Example 1: Evaluating Exponential Functions Let [latex]f\left(x\right)=5{\left(3\right)}^{x+1}[/latex]. Evaluate [latex]f\left(2\right)[/latex] without using a calculator. Solution Follow the order of operations. Be sure to pay attention to the parentheses. Try It 1 Let [latex]f\left(x\right)=8{\left(1.2\right)}^{x - 5}[/latex]. Evaluate [latex]f\left(3\right)[/latex] using a calculator. Round to four decimal places. Because the output of exponential functions increases very rapidly, the term “exponential growth” is often used in everyday language to describe anything that grows or increases rapidly. However, exponential growth can be defined more precisely in a mathematical sense. 
If the growth rate is proportional to the amount present, the function models exponential growth.

A General Note: Exponential Growth

A function that models exponential growth grows by a rate proportional to the amount present. For any real number x and any positive real numbers a and b such that [latex]b\ne 1[/latex], an exponential growth function has the form [latex]f\left(x\right)=a{\left(b\right)}^{x}[/latex], where a is the initial or starting value of the function, and b is the growth factor or growth multiplier per unit x.

In more general terms, we have an exponential function, in which a constant base is raised to a variable exponent. To differentiate between linear and exponential functions, let’s consider two companies, A and B. Company A has 100 stores and expands by opening 50 new stores a year, so its growth can be represented by the function [latex]A\left(x\right)=100+50x[/latex]. Company B has 100 stores and expands by increasing the number of stores by 50% each year, so its growth can be represented by the function [latex]B\left(x\right)=100{\left(1+0.5\right)}^{x}[/latex]. A few years of growth for these companies are illustrated in the table below.

Year, x | Stores, Company A | Stores, Company B
0 | 100 + 50(0) = 100 | 100(1 + 0.5)^0 = 100
1 | 100 + 50(1) = 150 | 100(1 + 0.5)^1 = 150
2 | 100 + 50(2) = 200 | 100(1 + 0.5)^2 = 225
3 | 100 + 50(3) = 250 | 100(1 + 0.5)^3 = 337.5
x | A(x) = 100 + 50x | B(x) = 100(1 + 0.5)^x

The graphs comparing the number of stores for each company over a five-year period are shown below. We can see that, with exponential growth, the number of stores increases much more rapidly than with linear growth. Notice that the domain for both functions is [latex]\left[0,\infty \right)[/latex], and the range for both functions is [latex]\left[100,\infty \right)[/latex]. After year 1, Company B always has more stores than Company A. Now we will turn our attention to the function representing the number of stores for Company B, [latex]B\left(x\right)=100{\left(1+0.5\right)}^{x}[/latex].
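The two growth patterns can be reproduced with a short script (a sketch; the function names are mine, not part of the text):

```python
def stores_a(x):
    """Company A: linear growth, 50 new stores per year."""
    return 100 + 50 * x

def stores_b(x):
    """Company B: exponential growth, 50% more stores per year."""
    return 100 * (1 + 0.5) ** x

for year in range(4):
    print(year, stores_a(year), stores_b(year))
# By year 2 the exponential company pulls ahead: 225 stores vs 200.
```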
In this exponential function, 100 represents the initial number of stores, 0.50 represents the growth rate, and [latex]1+0.5=1.5[/latex] represents the growth factor. Generalizing further, we can write this function as [latex]B\left(x\right)=100{\left(1.5\right)}^{x}[/latex], where 100 is the initial value, 1.5 is called the base, and x is called the exponent. Example 2: Evaluating a Real-World Exponential Model At the beginning of this section, we learned that the population of India was about 1.25 billion in the year 2013, with an annual growth rate of about 1.2%. This situation is represented by the growth function [latex]P\left(t\right)=1.25{\left(1.012\right)}^{t}[/latex], where t is the number of years since 2013. To the nearest thousandth, what will the population of India be in 2031? Solution To estimate the population in 2031, we evaluate the model for t = 18, because 2031 is 18 years after 2013. Rounding to the nearest thousandth, there will be about 1.549 billion people in India in the year 2031. Try It 2 The population of China was about 1.39 billion in the year 2013, with an annual growth rate of about 0.6%. This situation is represented by the growth function [latex]P\left(t\right)=1.39{\left(1.006\right)}^{t}[/latex], where t is the number of years since 2013. To the nearest thousandth, what will the population of China be for the year 2031? How does this compare to the population prediction we made for India in Example 2?
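Both population models can be evaluated the same way; a quick Python sketch reproducing the rounded predictions:

```python
def population(p0, rate, t):
    """Exponential growth model P(t) = p0 * (1 + rate)^t, in billions."""
    return p0 * (1 + rate) ** t

india = round(population(1.25, 0.012, 18), 3)  # 2031 is 18 years after 2013
china = round(population(1.39, 0.006, 18), 3)
print(india, china)  # 1.549 1.548 -- India edges slightly ahead by 2031
```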
List of variables

The environmental variables used throughout the website are listed below. The definitions include mathematical formulations that may intimidate users without an advanced scientific background. If you don’t understand the equations, don’t panic! Just skip them; you should still be able to understand the main and most important concepts.

Fetch: X
The fetch is the distance over which waves propagate under the wind forcing. It is measured in km.

Iribarren number: \xi_0
The Iribarren number is the ratio between the beach slope \beta and the square root of the wave steepness. It is a dimensionless parameter:
\xi_0 = \frac{\beta}{\sqrt{H_s/L_0}},
where L_0 is the deep-water wave length.

Mean directional spread: \sigma_{\theta}
The mean directional spread measures the distribution of wave energy with direction: the smaller the directional spread, the larger the amount of wave energy concentrated around the mean wave direction. It is measured in degrees.
\sigma_{\theta}= \Big\{ 2 \Big[ 1- \Big( \frac{a^2+b^2}{E^2} \Big)^{1/2} \Big] \Big\}^{1/2},
where E=\int \int S(f,\theta) \, df \, d\theta and a, b are as defined below for the mean wave direction.

Mean wave direction: \theta_m
The mean wave direction is the direction from which wave energy is coming. It is measured in degrees from North.
\theta_m = \mathrm{atan} \Big( \frac{b}{a} \Big),
a=\int \int \cos(\theta) S(f,\theta) \, df \, d\theta,
b=\int \int \sin(\theta) S(f,\theta) \, df \, d\theta.

Mean wave period: T_m
The mean wave period is the weighted average of the periods of the wave components that form the spectrum, the weighting factor being the energy of each component. It is measured in seconds.
T_m=\frac{\int \int S(f,\theta) \, df \, d\theta}{\int \int f S(f,\theta) \, df \, d\theta}.

Peak wave period: T_p
The peak wave period is the wave period of the most energetic wave component. It is measured in seconds.
Significant wave height: H_s
The significant wave height corresponds to the average of the largest third of the recorded wave heights. It is measured in m.
H_s=\frac{1}{N/3}\sum_{n=1}^{N/3}H_n,
where N is the size of the set in which waves are sorted from the largest wave, H_1, to the smallest wave, H_N.

Wave spectrum: S(f,\theta)
The wave spectrum defines the distribution of wave energy with respect to frequency and direction. It has been introduced to describe a real, irregular sea state made up of a large number of individual wave components. It is measured in \mathrm{m}^2/\mathrm{Hz}.
S(f,\theta) \, df \, d\theta=\frac{1}{2}A^2(f,\theta),
where A(f,\theta) is the amplitude of the wave component with frequency f and direction \theta.
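As an illustration of the H_s definition, a small Python sketch (not part of the website) that averages the largest third of a wave-height record:

```python
def significant_wave_height(heights):
    """H_s: mean of the largest third of the recorded wave heights."""
    ranked = sorted(heights, reverse=True)   # H_1 (largest) ... H_N (smallest)
    n = max(1, len(ranked) // 3)
    return sum(ranked[:n]) / n

record = [0.8, 1.2, 2.5, 1.9, 3.1, 1.4]    # six waves, largest third is two
print(significant_wave_height(record))      # (3.1 + 2.5) / 2 ≈ 2.8
```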
The deeper problem with this supposition is that it assumes a conceptual identity between the notions of Hamiltonian and energy, and this identity is not correct. That is, discernment needs to be applied to separate the two. Conceptually, energy is a physical quantity that is, in a sense, "nature's money" - the "currency" you have to expend to produce physical changes in the world. On a somewhat deeper level, energy is to time what momentum is to space. This can be seen across many areas, such as Noether's theorem, which relates the law of conservation of energy to the fact that the history of a system can be translated back and forth in time and still work the same way, i.e. that there is no preferred point in time in the laws of physics - and likewise the same for momentum, with the system translated around in space and still working the same way. It also occurs in relativity, in which the "four-momentum" incorporates energy as its temporal component. The Hamiltonian, on the other hand, is a mathematically modified version of the Lagrangian, obtained through what is called the Legendre transform. The Lagrangian is a way to describe how forces impact the time evolution of a physical system in terms of an optimization process, and the Hamiltonian converts this directly into an often more useful and intuitive differential-equation description. In many cases, the Hamiltonian is equal to the system's total mechanical energy $E_\mathrm{mech}$, i.e. $K + U$, but this is not always so even in classical Hamiltonian mechanics, a fact which indicates and underscores the basic conceptual separation between the two. In quantum mechanics, the "energy is to time what momentum is to space" concept manifests in that energy is the generator of temporal translation, or the generator of evolution, in the same way that momentum is the generator of spatial translation.
In particular, just as we have a "momentum operator" $$\hat{p} := -i\hbar \frac{\partial}{\partial x}$$ which translates a position-space (here using one dimension for simplicity) wave function (the mathematical representation of an agent's restricted information regarding the particle's position) $\psi$ via the somewhat-loose "infinitesimal equation" $$\psi(x - dx) = \psi(x) - \frac{i}{\hbar} \left(\hat{p} \psi\right)(x)\, dx$$ for translating it by a tiny forward nudge $dx$, likewise we would want to have an energy operator $$\hat{E} := i\hbar \frac{\partial}{\partial t}$$ which does the same but for translation with regard to time (the sign change is because we usually consider a temporal advance from $t$ to $t + dt$, as opposed to our psychological [perhaps also psycho-cultural] preference for spatial motions to be directed rightward in our descriptions of things). The problem here is that wave functions generally do not contain a time parameter, and at least non-relativistic quantum mechanics treats space and time separately, so the above cannot be a true operator on the system state space. Rather, it is more of a "pseudo-operator" that we'd "like" to have but can't "really" for this reason. One should note that this is the expression that appears on the right of the Schrödinger equation, which we could thus "better" write as $$\hat{H}[\psi(t)] = [\hat{E}\psi](t)$$ where $\psi$ is now a temporal sequence of wave functions (viz. a "curried function", which becomes an "ordinary" function when you consider the wave functions as basis-independent Hilbert vectors). The Hamiltonian operator $\hat{H}$ is a bona fide operator, which acts only on the "present" configuration information for the system. What this equation is "really" saying is that, in order for such a time series to represent a valid physical evolution, the Hamiltonian must also be able to translate it through time.
The distinction between Hamiltonian and energy manifests in that the Hamiltonian will not translate every time sequence, while the energy pseudo-operator will, just as the momentum operator will translate every spatial wave function. Moreover, many different Hamiltonians may give rise to the same energy spectrum. Because these two things are different, it makes no sense to equate them as operators, as suggested. You can, and should, have $\hat{H}[\psi(t)] = [\hat{E}\psi](t)$, but you should not have $\hat{H} = \hat{E}$!
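The claim that the momentum operator generates spatial translation can be sanity-checked numerically; a rough sketch with $\hbar$ set to 1 and a finite-difference derivative standing in for $\partial/\partial x$:

```python
import math

def psi(x):
    """Sample Gaussian wave packet."""
    return math.exp(-x * x)

def p_psi(x, h=1e-6):
    """Momentum operator p = -i d/dx, via a central finite difference."""
    return -1j * (psi(x + h) - psi(x - h)) / (2 * h)

x, dx = 0.7, 1e-4
translated = psi(x - dx)                    # wave function nudged forward by dx
generated = psi(x) - 1j * dx * p_psi(x)     # first-order action of the generator
assert abs(translated - generated) < 1e-7   # agree to first order in dx
```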
The Piecewise Constant Pairwise Interaction Point Process Model

Creates an instance of a pairwise interaction point process model with piecewise constant potential function. The model can then be fitted to point pattern data.

Usage

PairPiece(r)

Arguments

r: vector of jump points for the potential function

Details

A pairwise interaction point process in a bounded region is a stochastic point process with probability density of the form $$ f(x_1,\ldots,x_n) = \alpha \prod_i b(x_i) \prod_{i < j} h(x_i, x_j) $$ where \(x_1,\ldots,x_n\) represent the points of the pattern. The first product on the right hand side is over all points of the pattern; the second product is over all unordered pairs of points of the pattern. Thus each point \(x_i\) of the pattern contributes a factor \(b(x_i)\) to the probability density, and each pair of points \(x_i, x_j\) contributes a factor \(h(x_i,x_j)\) to the density.

The pairwise interaction term \(h(u, v)\) is called piecewise constant if it depends only on the distance between \(u\) and \(v\), say \(h(u,v) = H(||u-v||)\), where \(H\) is a piecewise constant function (a function which is constant except for jumps at a finite number of places). The use of piecewise constant interaction terms was first suggested by Takacs (1986).

The function ppm(), which fits point process models to point pattern data, requires an argument of class "interact" describing the interpoint interaction structure of the model to be fitted. The appropriate description of the piecewise constant pairwise interaction is yielded by the function PairPiece(). See the examples below.

The entries of r must be strictly increasing, positive numbers. They are interpreted as the points of discontinuity of \(H\). It is assumed that \(H(s) = 1\) for all \(s > r_{max}\), where \(r_{max}\) is the maximum value in r. Thus the model has as many regular parameters (see ppm) as there are entries in r.
The \(i\)-th regular parameter \(\theta_i\) is the logarithm of the value of the interaction function \(H\) on the interval \([r_{i-1},r_i)\).

If r is a single number, this model is similar to the Strauss process; see Strauss. The difference is that in PairPiece the interaction function is continuous on the right, while in Strauss it is continuous on the left.

The analogue of this model for multitype point processes has not yet been implemented.

Value

An object of class "interact" describing the interpoint interaction structure of a point process. The process is a pairwise interaction process, whose interaction potential is piecewise constant, with jumps at the distances given in the vector \(r\).

References

Takacs, R. (1986) Estimator for the pair potential of a Gibbsian point process. Statistics 17, 429--433.

Aliases

PairPiece

Examples

# NOT RUN {
PairPiece(c(0.1,0.2))   # prints a sensible description of itself
data(cells)
# fit a stationary piecewise constant pairwise interaction process
ppm(cells, ~1, PairPiece(r = c(0.05, 0.1, 0.2)))
# nonstationary process with log-cubic polynomial trend
ppm(cells, ~polynom(x,y,3), PairPiece(c(0.05, 0.1)))
# }

Documentation reproduced from package spatstat, version 1.59-0, License: GPL (>= 2)
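Outside of R, the parameterisation just described (interaction value \(e^{\theta_i}\) on \([r_{i-1}, r_i)\), and 1 beyond the last jump point) can be sketched as follows; `piecewise_H` is a hypothetical helper, not part of the spatstat API:

```python
import bisect
import math

def piecewise_H(r, theta):
    """Right-continuous piecewise-constant interaction function:
    H(s) = exp(theta[i]) for r[i-1] <= s < r[i] (with r[-1] taken as 0),
    and H(s) = 1 at and beyond the largest jump point in r."""
    def H(s):
        i = bisect.bisect_right(r, s)
        return 1.0 if i == len(r) else math.exp(theta[i])
    return H

H = piecewise_H([0.05, 0.1], [math.log(0.5), math.log(2.0)])
print(H(0.03), H(0.07), H(0.15))  # ≈ 0.5, 2.0, 1.0
```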
I've determined a number of probabilities pertaining to three subsets of the "magic simplex" of quantum states described in https://arxiv.org/abs/1212.5046, and certainly would like to have a pictorial representation of their interrelations--presumably, in the form of a Venn diagram (cf. Fig. 3 in cited paper). One set--$A$, call it--is composed of those states, the density matrices for which have "positive partial transposes" (PPT). Its probability is $\frac{8 \pi}{27 \sqrt{3}} \approx 0.537422$. Another set--$B$--consists of those states that pass a certain (mutually unbiased bases [MUB]) test for entanglement. Its probability is $\frac{1}{6} \approx 0.16667$. The third set--$C$--consists of states that pass another (Choi witness) test for entanglement. Its probability is also $\frac{1}{6} \approx 0.16667$. The intersections of $A$ and $B$ and of $A$ and $C$ give conceptually-important "bound-entanglement" probabilities. Both amounts are $-\frac{4}{9}+\frac{4 \pi }{27 \sqrt{3}}+\frac{\log (27)}{18} \approx 0.00736862$. $B \land C$ is $\frac{1}{9} \approx 0.11111$. $B \lor C$ is $\frac{2}{9} \approx 0.22222$. Both $\neg B \land C$ and $B \land \neg C$ are $\frac{1}{18}$. $A\land \neg B\land \neg C$ gives $\frac{1}{9} (8 - \log{27}) \approx 0.52268$. $A \land B \land C$ is void. So, I would like a (planar?) Venn-type diagram representing--as well as possible--the intersection and union relations between $A, B$ and $C$ and the larger set $D$ of probability 1, of which they are subsets. An immediate idea would be to try to represent them by circles--but, I think, there is also an approach ("Euler diagrams" https://en.wikipedia.org/wiki/Euler_diagram) in which rectangles are employed.
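For what it's worth, the stated values are mutually consistent under inclusion-exclusion, which a few lines of Python confirm (taking $A \land B \land C$ to be void, as stated):

```python
from math import pi, log, sqrt

A = 8 * pi / (27 * sqrt(3))                               # PPT probability
B = C = 1 / 6                                             # MUB / Choi test sets
AB = AC = -4/9 + 4 * pi / (27 * sqrt(3)) + log(27) / 18   # bound entanglement
BC = 1 / 9

assert abs((B + C - BC) - 2/9) < 1e-12                    # B or C
assert abs((C - BC) - 1/18) < 1e-12                       # (not B) and C
assert abs((A - AB - AC) - (8 - log(27)) / 9) < 1e-12     # A and not B and not C
```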
Harmonic Notch Filter

My basement is covered with power lines and fluorescent lights, which makes collecting ECG and EEG data rather difficult due to the 60 cycle hum. I found the following notch filter works very well at eliminating the background signal without affecting the highly amplified signals I was looking for. The notch filter is based on a transfer function of the form $$H(z)=\frac{1}{2}(1+A(z))$$ where $A(z)$ is an all pass filter. The original paper [1] describes a method to combine all the notch locations to get one transfer function. For use with a DSP, I prefer to have biquad coefficients, and this is easy to do if each notch is taken as a second order IIR. A second order all pass filter has the form $$A(z) = \frac{a_2 + a_1 z^{-1} + z^{-2}}{1 + a_1 z^{-1} + a_2 z^{-2}}$$ Note the symmetry of the coefficients. The idea is to put the notches where the all pass filter phase goes through $\theta(\omega_n)=-(2 n - 1)\pi$. The width of the notch is dealt with by setting the all pass filter to $\theta(\omega_n-\frac{BW_n}{2})=-(2 n - 1)\pi + \frac{\pi}{2}$. For the simple second order notch we can define the following variables: $$\omega_n = 2\pi j\frac{f_n}{f_s}$$ $$\omega_{BW}=2\pi\frac{f_{BW}}{f_s}$$ $$\omega_1=\omega_n - \frac{\omega_{BW}}{2}$$ $$\beta_1 = \omega_1 - \frac{\pi}{4}$$ $$\beta_2 = \omega_n - \frac{\pi}{2}$$ $$p_0 = \tan(\beta_1)$$ $$p_1 = \tan(\beta_2)$$ where $f_s$ is the sample rate, $j f_n$ is the $j$th harmonic of the notch frequency $f_n$, and $f_{BW}$ is the desired spread between the 3 dB down amplitudes at the notch.
A matrix of the form $$q = \left[\matrix{\sin(\omega_1)-p_0 \cos(\omega_1) & \sin(2 \omega_1) - p_0\cos(2 \omega_1) \\ \sin(\omega_n) - p_1 \cos(\omega_n) & \sin(2 \omega_n) - p_1\cos(2 \omega_n)}\right]$$ is then created and its inverse computed, so that the coefficients are found from $$c = q^{-1} p$$ or explicitly $$c_0=q^{-1}_{0 0} p_0 + q^{-1}_{0 1} p_1$$ $$c_1 = q^{-1}_{1 0} p_0 + q^{-1}_{1 1} p_1$$ The final filter is then given by $$H(z) = \frac{\frac{1 + c_1}{2} + c_0 z^{-1} + \frac{1 + c_1}{2}z^{-2}}{1 + c_0 z^{-1} + c_1 z^{-2}}$$ Here is an example which uses this form to generate the first 8 odd harmonic notches of 60 Hz. Obviously there are better ways to output the coefficients, but for my purposes cutting and pasting was simple enough. Also note that creating a 50 Hz harmonic notch filter only requires changing a 6 to a 5 on one line of code. In this example, the sample rate was 2 kHz.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void)
{
    int j;
    double omegan1, omegabw1;
    double omega1, omega2, theta1, theta2, beta1, beta2;
    double p[2], q[2][2], det, qinv[2][2];
    double am[2];

    for(j=1; j<16; j+=2)
    {
        omegan1 = 2.0*M_PI*60.0*j/2000.0;
        omegabw1 = 2.0*M_PI*15.0/2000.0;
        omega1 = omegan1 - omegabw1/2.0;
        omega2 = omegan1;
        theta1 = -M_PI/2.0;
        theta2 = -M_PI;
        beta1 = theta1/2.0 + omega1;
        beta2 = theta2/2.0 + omega2;
        p[0] = tan(beta1);
        p[1] = tan(beta2);
        q[0][0] = sin(omega1) - p[0]*cos(omega1);
        q[0][1] = sin(2.0*omega1) - p[0]*cos(2.0*omega1);
        q[1][0] = sin(omega2) - p[1]*cos(omega2);
        q[1][1] = sin(2.0*omega2) - p[1]*cos(2.0*omega2);
        det = q[0][0]*q[1][1] - q[0][1]*q[1][0];
        qinv[0][0] = q[1][1]/det;
        qinv[0][1] = -q[0][1]/det;
        qinv[1][0] = -q[1][0]/det;
        qinv[1][1] = q[0][0]/det;
        am[0] = qinv[0][0]*p[0] + qinv[0][1]*p[1];
        am[1] = qinv[1][0]*p[0] + qinv[1][1]*p[1];
        printf("%d coefficients are %13.10lf %13.10lf\n", j, am[0], am[1]);
    }
    return 0;
}

I then used the output of this program
with the following to create the biquad filter coefficients:

#define A11 (-1.9162329361)
#define A12 (0.9507867324)
#define A31 (-1.6490444725)
#define A32 (0.9530853152)
#define A51 (-1.1482741905)
#define A52 (0.9535607368)
#define A71 (-0.4858941845)
#define A72 (0.9538156135)
#define A91 (0.2449035617)
#define A92 (0.9540193348)
#define A111 (0.9414624597)
#define A112 (0.9542403314)
#define A131 (1.5060265873)
#define A132 (0.9545758641)
#define A151 (1.8597673176)
#define A152 (0.9554750804)
:
:
// notch filter
bcoef[0][0] = (1.0 + A12)/2.0; bcoef[0][1] = A11; bcoef[0][2] = (1.0 + A12)/2.0;
acoef[0][0] = 1.0; acoef[0][1] = A11; acoef[0][2] = A12;
bcoef[1][0] = (1.0 + A32)/2.0; bcoef[1][1] = A31; bcoef[1][2] = (1.0 + A32)/2.0;
acoef[1][0] = 1.0; acoef[1][1] = A31; acoef[1][2] = A32;
bcoef[2][0] = (1.0 + A52)/2.0; bcoef[2][1] = A51; bcoef[2][2] = (1.0 + A52)/2.0;
acoef[2][0] = 1.0; acoef[2][1] = A51; acoef[2][2] = A52;
bcoef[3][0] = (1.0 + A72)/2.0; bcoef[3][1] = A71; bcoef[3][2] = (1.0 + A72)/2.0;
acoef[3][0] = 1.0; acoef[3][1] = A71; acoef[3][2] = A72;
bcoef[4][0] = (1.0 + A92)/2.0; bcoef[4][1] = A91; bcoef[4][2] = (1.0 + A92)/2.0;
acoef[4][0] = 1.0; acoef[4][1] = A91; acoef[4][2] = A92;
bcoef[5][0] = (1.0 + A112)/2.0; bcoef[5][1] = A111; bcoef[5][2] = (1.0 + A112)/2.0;
acoef[5][0] = 1.0; acoef[5][1] = A111; acoef[5][2] = A112;
bcoef[6][0] = (1.0 + A132)/2.0; bcoef[6][1] = A131; bcoef[6][2] = (1.0 + A132)/2.0;
acoef[6][0] = 1.0; acoef[6][1] = A131; acoef[6][2] = A132;
bcoef[7][0] = (1.0 + A152)/2.0; bcoef[7][1] = A151; bcoef[7][2] = (1.0 + A152)/2.0;
acoef[7][0] = 1.0; acoef[7][1] = A151; acoef[7][2] = A152;

This "brute force" method of programming is not efficient, but for what I was doing at the time being quick and dirty got the job done with the least thinking.
I then used the coefficients on 3 sets of data collected simultaneously and filtered all the data at once:

for(k=0; k<3; k++)
    for(i=0; i<8; i++)
        for(j=0; j<3; j++)
            stage[k][i][j] = 0.0;

for(j=0; j<3; j++)  // loop over each curve
{
    stage[j][0][2] = 0.0;
    for(i=0; i<3; i++)
        stage[j][0][2] += rawin[-3*i + j]*bcoef[0][i];
    stage[j][0][2] -= acoef[0][1]*stage[j][0][1] + acoef[0][2]*stage[j][0][0];
    for(k=1; k<7; k++)  // loop over each harmonic
    {
        stage[j][k][2] = 0.0;
        for(i=0; i<3; i++)
            stage[j][k][2] += stage[j][k-1][2-i]*bcoef[k][i];
        stage[j][k][2] -= acoef[k][1]*stage[j][k][1] + acoef[k][2]*stage[j][k][0];
    }
    passptr[j] = 0.0;
    for(i=0; i<3; i++)
        passptr[j] += stage[j][6][2-i]*bcoef[7][i];
    passptr[j] -= acoef[7][1]*passptr[j-3] + acoef[7][2]*passptr[j-6];
    for(k=0; k<7; k++)
    {
        stage[j][k][0] = stage[j][k][1];
        stage[j][k][1] = stage[j][k][2];
    }
}

The "stage" variable contains the state of each biquad block as the signal passes through. These are zeroed out before any processing happens. "rawin" is the source data, and this enters the first biquad. All the subsequent biquads get data from the previous biquad, so the final signal has been filtered with multiple notches. The output is saved at "passptr", which actually gets incremented by 3 in this case because there are 3 samples for every time stamp. At the very end, each biquad is advanced in time by shifting the internal storage one step. The biquads are then ready for the next input sample. This is only one way to approach a set of notch filters. Using the method of the original paper, a complete $q$ matrix for all the notches can be created at once, and the transfer function can be computed. One can then use standard methods to break the resulting transfer function into biquads. In any event, this method of computing a notch filter is pretty slick.

[1] Soo-Chang Pei and Chien-Cheng Tseng, "IIR Multiple Notch Filter Design Based on Allpass Filter", 1996 IEEE TENCON,
Digital Signal Processing Applications, pp. 267-272.

Thanks for your interesting blog. Think of it, an all-pass filter with notches -- it's diabolical! Mike, is there any chance you can post an image of the frequency magnitude response of your final filter? (Or provide the coefficients of the individual biquads?) [-Rick-]

I don't have a plot (the original article does though) but here are the coefficients:

harmonic 1: b[0]: 0.975393 b[1]: -1.916233 b[2]: 0.975393 a[0]: 1.000000 a[1]: -1.916233 a[2]: 0.950787
harmonic 2: b[0]: 0.976543 b[1]: -1.649044 b[2]: 0.976543 a[0]: 1.000000 a[1]: -1.649044 a[2]: 0.953085
harmonic 3: b[0]: 0.976780 b[1]: -1.148274 b[2]: 0.976780 a[0]: 1.000000 a[1]: -1.148274 a[2]: 0.953561
harmonic 4: b[0]: 0.976908 b[1]: -0.485894 b[2]: 0.976908 a[0]: 1.000000 a[1]: -0.485894 a[2]: 0.953816
harmonic 5: b[0]: 0.977010 b[1]: 0.244904 b[2]: 0.977010 a[0]: 1.000000 a[1]: 0.244904 a[2]: 0.954019
harmonic 6: b[0]: 0.977120 b[1]: 0.941462 b[2]: 0.977120 a[0]: 1.000000 a[1]: 0.941462 a[2]: 0.954240
harmonic 7: b[0]: 0.977288 b[1]: 1.506027 b[2]: 0.977288 a[0]: 1.000000 a[1]: 1.506027 a[2]: 0.954576
harmonic 8: b[0]: 0.977738 b[1]: 1.859767 b[2]: 0.977738 a[0]: 1.000000 a[1]: 1.859767 a[2]: 0.955475

The idea is to set the amplitude to 0 at just one point and have it be 1 everywhere else. You can do that because the all pass goes through a pi phase shift, so its output changes from +1 to -1. For the all pass, this phase shift does not change the amplitude, but when you add 1, the final amplitude is 0. A very neat trick! Dr. mike

I 'matlabed' the design, it gives nice plots.
Cheers Detlef

clear
A11 = -1.9162329361; A12 = 0.9507867324;
A31 = -1.6490444725; A32 = 0.9530853152;
A51 = -1.1482741905; A52 = 0.9535607368;
A71 = -0.4858941845; A72 = 0.9538156135;
A91 = 0.2449035617;  A92 = 0.9540193348;
A111 = 0.9414624597; A112 = 0.9542403314;
A131 = 1.5060265873; A132 = 0.9545758641;
A151 = 1.8597673176; A152 = 0.9554750804;

fb0(0+1) = (1.0 + A12)/2.0; fb0(1+1) = A11; fb0(2+1) = (1.0 + A12)/2.0;
fa0(0+1) = 1.0; fa0(1+1) = A11; fa0(2+1) = A12;
fb1(0+1) = (1.0 + A32)/2.0; fb1(1+1) = A31; fb1(2+1) = (1.0 + A32)/2.0;
fa1(0+1) = 1.0; fa1(1+1) = A31; fa1(2+1) = A32;
fb2(0+1) = (1.0 + A52)/2.0; fb2(1+1) = A51; fb2(2+1) = (1.0 + A52)/2.0;
fa2(0+1) = 1.0; fa2(1+1) = A51; fa2(2+1) = A52;
fb3(0+1) = (1.0 + A72)/2.0; fb3(1+1) = A71; fb3(2+1) = (1.0 + A72)/2.0;
fa3(0+1) = 1.0; fa3(1+1) = A71; fa3(2+1) = A72;
fb4(0+1) = (1.0 + A92)/2.0; fb4(1+1) = A91; fb4(2+1) = (1.0 + A92)/2.0;
fa4(0+1) = 1.0; fa4(1+1) = A91; fa4(2+1) = A92;
fb5(0+1) = (1.0 + A112)/2.0; fb5(1+1) = A111; fb5(2+1) = (1.0 + A112)/2.0;
fa5(0+1) = 1.0; fa5(1+1) = A111; fa5(2+1) = A112;
fb6(0+1) = (1.0 + A132)/2.0; fb6(1+1) = A131; fb6(2+1) = (1.0 + A132)/2.0;
fa6(0+1) = 1.0; fa6(1+1) = A131; fa6(2+1) = A132;
fb7(0+1) = (1.0 + A152)/2.0; fb7(1+1) = A151; fb7(2+1) = (1.0 + A152)/2.0;
fa7(0+1) = 1.0; fa7(1+1) = A151; fa7(2+1) = A152;

n=1024;
plot(0:n-1,20*log10(abs(freqz(fb0,fa0,n))),'b.-',...
     0:n-1,20*log10(abs(freqz(fb1,fa1,n))),'b.-',...
     0:n-1,20*log10(abs(freqz(fb2,fa2,n))),'b.-',...
     0:n-1,20*log10(abs(freqz(fb3,fa3,n))),'b.-',...
     0:n-1,20*log10(abs(freqz(fb4,fa4,n))),'b.-',...
     0:n-1,20*log10(abs(freqz(fb5,fa5,n))),'b.-',...
     0:n-1,20*log10(abs(freqz(fb6,fa6,n))),'b.-',...
     0:n-1,20*log10(abs(freqz(fb7,fa7,n))),'b.-');
grid
title('Magnitude of Biquads')

clf; clg;
polar(angle(roots(fb0)),abs(roots(fb0)),'b.'); hold;
polar(angle(roots(fa0)),abs(roots(fa0)),'r.');
polar(angle(roots(fb1)),abs(roots(fb1)),'b.');
polar(angle(roots(fa1)),abs(roots(fa1)),'r.');
polar(angle(roots(fb2)),abs(roots(fb2)),'b.');
polar(angle(roots(fa2)),abs(roots(fa2)),'r.');
polar(angle(roots(fb3)),abs(roots(fb3)),'b.');
polar(angle(roots(fa3)),abs(roots(fa3)),'r.');
polar(angle(roots(fb4)),abs(roots(fb4)),'b.');
polar(angle(roots(fa4)),abs(roots(fa4)),'r.');
polar(angle(roots(fb5)),abs(roots(fb5)),'b.');
polar(angle(roots(fa5)),abs(roots(fa5)),'r.');
polar(angle(roots(fb6)),abs(roots(fb6)),'b.');
polar(angle(roots(fa6)),abs(roots(fa6)),'r.');
polar(angle(roots(fb7)),abs(roots(fb7)),'b.');
polar(angle(roots(fa7)),abs(roots(fa7)),'r.');
return

Regarding this snippet:

for(j=0; j<3; j++) // loop over each curve
{
    stage[j][0][2] = 0.0;
    for(i=0; i<3; i++)
        stage[j][0][2] += rawin[-3*i + j]*bcoef[0][i];
    .
    .
    .

I believe the coefficient arrays should be declared as: double acoef[8][3], bcoef[8][3]

rawin is a pointer into the large block of data. Going backwards while in the middle of the block is not a problem because you are not going off the ends. Being careful not to go off the ends is definitely something I struggle with, because I'll miscount by 1 or 2 and get a core dump. I will post the programs later and give a pointer. Sounds like looking at all of the code will be helpful. Patience, persistence, truth, Dr. mike

The file "notch_filter.c" generates the coefficients. These are then cut and pasted into the program "heart_notch.c". You can see how rawin += 3; is used to move the pointer one step (there are three inputs per time sample), so the index -3*i moves back in time i samples. I think this code is an example of several different philosophies. It was not "designed". It was hacked! I am sure there are better ways to do things, so feel free to improve what you find and make things better for your problem. Dr.
mike

http://www.advsolned.com/downloads/ASN15-DOC002.pdf
Please let us know what you think!!!!

Notch at the nth harmonic of 60 Hz, sampled at fs:

c = cos(2*pi*n*60/fs)
r = 0.975            (pole radius, same for all harmonics)
A = (1 + r*r)/2
b[0] = A
b[1] = -A*2*c
b[2] = A
a[0] = 1
a[1] = -A*2*c
a[2] = r*r

This guarantees that the gain is 1 at dc and fs/2, and 0 at the harmonic.
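A quick check of that claim: a Python sketch that builds the biquad from the recipe above and evaluates its magnitude response at DC, fs/2, and the notch frequency:

```python
import cmath
import math

def simple_notch(n, fs, r=0.975):
    """Biquad coefficients for a notch at the n-th harmonic of 60 Hz,
    following the recipe above (pole radius r)."""
    c = math.cos(2 * math.pi * n * 60 / fs)
    A = (1 + r * r) / 2
    b = [A, -2 * A * c, A]
    a = [1.0, -2 * A * c, r * r]
    return b, a

def gain(b, a, f, fs):
    """Magnitude response of the biquad at frequency f, sample rate fs."""
    z = cmath.exp(2j * math.pi * f / fs)
    num = sum(bk * z ** -k for k, bk in enumerate(b))
    den = sum(ak * z ** -k for k, ak in enumerate(a))
    return abs(num / den)

b, a = simple_notch(1, 2000)
assert abs(gain(b, a, 0, 2000) - 1) < 1e-12     # unity gain at DC
assert abs(gain(b, a, 1000, 2000) - 1) < 1e-12  # unity gain at fs/2
assert gain(b, a, 60, 2000) < 1e-9              # null at the 60 Hz notch
```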
Relation/Examples/Ordering on Arbitrary Sets of Integers/Not Many-to-One

Example of Relation which is not Many-to-One

Consider the following diagram, where:
$A$ runs along the top
$B$ runs down the left hand side
a relation $\mathcal R$ between $A$ and $B$ is indicated by marking with $\bullet$ every ordered pair $\tuple {a, b} \in A \times B$ which is in the truth set of $\mathcal R$

$\begin{array}{r|rrrr} A \times B & 1 & 2 & 3 & 4 \\ \hline 1 & \bullet & \bullet & \bullet & \circ \\ 2 & \bullet & \bullet & \circ & \circ \\ 3 & \bullet & \circ & \circ & \circ \\ \end{array}$

This relation $\mathcal R$ can be described as:
$\mathcal R = \set {\tuple {x, y} \in A \times B: x + y \le 4}$

$\mathcal R$ is not a many-to-one relation.

Proof

For example we have:
$\tuple {1, 1} \in \mathcal R$
and:
$\tuple {1, 2} \in \mathcal R$
Hence $\mathcal R$ is not many-to-one by definition.
$\blacksquare$
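The same conclusion can be checked mechanically; a small Python sketch building the truth set and testing the many-to-one property:

```python
A, B = {1, 2, 3, 4}, {1, 2, 3}
R = {(x, y) for x in A for y in B if x + y <= 4}

def is_many_to_one(rel):
    """A relation is many-to-one when each left element relates
    to at most one right element."""
    firsts = [x for x, _ in rel]
    return len(firsts) == len(set(firsts))

assert (1, 1) in R and (1, 2) in R   # 1 relates to two elements of B
assert not is_many_to_one(R)
```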
SINGH R P

Articles written in Pramana – Journal of Physics

Volume 87 Issue 1 July 2016 Article ID 0007 Regular

Lifetimes of excited states in the yrast band of the gamma-soft nuclei $^{131}$Ce and $^{133}$Pr have been measured using the recoil distance Doppler shift and Doppler shift attenuation methods. The yrast bands in $^{131}$Ce and $^{133}$Pr are based on odd decoupled neutron $\nu h_{11/2}$ high-$\Omega$ and proton $\pi h_{11/2}$ low-$\Omega$ orbitals, respectively. The triaxiality parameter extracted from the experimentally deduced values of transition quadrupole moments, within the framework of cranked Hartree–Fock–Bogoliubov (CHFB) and total Routhian surface (TRS) calculations, is $\gamma \approx -80^{\circ}$ for the band in $^{131}$Ce at high spins, while for the band in $^{133}$Pr, the value of $\gamma$ is close to $0^{\circ}$. This agrees well with the $\gamma$ shape polarization property of the high- and low-$\Omega$ $h_{11/2}$ orbitals in these gamma-soft nuclei.
Writing Mathematics for MathJax

Putting mathematics in a web page

To put mathematics in your web page, you can use TeX and LaTeX notation, MathML notation, AsciiMath notation, or a combination of all three within the same page; the MathJax configuration tells MathJax which you want to use, and how you plan to indicate the mathematics when you are using TeX/LaTeX or AsciiMath notation. These three formats are described in more detail below.

TeX and LaTeX input

Mathematics that is written in TeX or LaTeX format is indicated using math delimiters that surround the mathematics, telling MathJax what part of your page represents mathematics and what is normal text. There are two types of equations: ones that occur within a paragraph (in-line mathematics), and larger equations that appear separated from the rest of the text on lines by themselves (displayed mathematics).

The default math delimiters are $$...$$ and \[...\] for displayed mathematics, and \(...\) for in-line mathematics. Note in particular that the $...$ in-line delimiters are not used by default. That is because dollar signs appear too often in non-mathematical settings, which could cause some text to be treated as mathematics unexpectedly. For example, with single-dollar delimiters, “… the cost is $2.50 for the first one, and $2.00 for each additional one …” would cause the phrase “2.50 for the first one, and” to be treated as mathematics since it falls between dollar signs. See the section on TeX and LaTeX Math Delimiters for more information on using dollar signs as delimiters.

Here is a complete sample page containing TeX mathematics (see the MathJax Web Demos Repository for more).
<!DOCTYPE html>
<html>
<head>
<title>MathJax TeX Test Page</title>
<script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
<script type="text/javascript" id="MathJax-script" async
  src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-chtml.js"></script>
</head>
<body>
When \(a \ne 0\), there are two solutions to \(ax^2 + bx + c = 0\) and they are
$$x = {-b \pm \sqrt{b^2-4ac} \over 2a}.$$
</body>
</html>

Since the TeX notation is part of the text of the page, there are some caveats that you must keep in mind when you enter your mathematics. In particular, you need to be careful about the use of less-than signs, since those are what the browser uses to indicate the start of a tag in HTML. Putting a space on both sides of the less-than sign should be sufficient, but see TeX and LaTeX support for more details.

If you are using MathJax within a blog, wiki, or other content management system, the markup language used by that system may interfere with the TeX notation used by MathJax. For example, if your blog uses Markdown notation for authoring your pages, the underscores used by TeX to indicate subscripts may be confused with the use of underscores by Markdown to indicate italics, and the two uses may prevent your mathematics from being displayed. See TeX and LaTeX support for some suggestions about how to deal with the problem.

There are a number of extensions for the TeX input processor that are loaded by combined components that include the TeX input format (e.g., tex-chtml.js), and others that are loaded automatically when needed. See TeX and LaTeX Extensions for details on TeX extensions that are available.

MathML input

For mathematics written in MathML notation, you mark your mathematics using standard <math> tags, where <math display="block"> represents displayed mathematics and <math display="inline"> or just <math> represents in-line mathematics.
MathML notation will work with MathJax in HTML files, not just XHTML files, even in older browsers, and the web page need not be served with any special MIME-type. Note, however, that in HTML (as opposed to XHTML), you should not include a namespace prefix for your <math> tags; for example, you should not use <m:math> except in an XHTML file where you have tied the m namespace to the MathML DTD by adding the xmlns:m="http://www.w3.org/1998/Math/MathML" attribute to your file's <html> tag.

In order to make your MathML work in the widest range of situations, it is recommended that you include the xmlns="http://www.w3.org/1998/Math/MathML" attribute on all <math> tags in your document (and this is preferred to the use of a namespace prefix like m: above, since those are deprecated in HTML5), although this is not strictly required.

Here is a complete sample page containing MathML mathematics (see the MathJax Web Demos Repository for more).

<!DOCTYPE html>
<html>
<head>
<title>MathJax MathML Test Page</title>
<script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
<script type="text/javascript" id="MathJax-script" async
  src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/mml-chtml.js"></script>
</head>
<body>
<p>When
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mi>a</mi><mo>≠</mo><mn>0</mn>
</math>,
there are two solutions to
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mi>a</mi><msup><mi>x</mi><mn>2</mn></msup>
  <mo>+</mo> <mi>b</mi><mi>x</mi>
  <mo>+</mo> <mi>c</mi> <mo>=</mo> <mn>0</mn>
</math>
and they are
<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
  <mi>x</mi> <mo>=</mo>
  <mrow>
    <mfrac>
      <mrow>
        <mo>−</mo> <mi>b</mi>
        <mo>±</mo>
        <msqrt>
          <msup><mi>b</mi><mn>2</mn></msup>
          <mo>−</mo>
          <mn>4</mn><mi>a</mi><mi>c</mi>
        </msqrt>
      </mrow>
      <mrow> <mn>2</mn><mi>a</mi> </mrow>
    </mfrac>
  </mrow>
  <mtext>.</mtext>
</math>
</p>
</body>
</html>

When entering MathML notation in an HTML page (rather than an XHTML page), you should not use self-closing tags, as these
are not part of HTML, but should use explicit open and close tags for all your math elements. For example, you should use <mspace width="5pt"></mspace> rather than <mspace width="5pt" /> in an HTML document. If you use the self-closing form, some browsers will not build the math tree properly, and MathJax will receive a damaged math structure, which will not be rendered as the original notation would have been. Typically, this will cause parts of your expression to not be displayed. Unfortunately, there is nothing MathJax can do about that, since the browser has incorrectly interpreted the tags long before MathJax has a chance to work with them. See the MathML page for more on MathJax's MathML support.

AsciiMath input

MathJax v2.0 introduced a new input format, AsciiMath notation, by incorporating ASCIIMathML. This input processor has not been fully ported to MathJax version 3 yet, but there is a version of it that uses the legacy version 2 code to patch it into MathJax version 3. None of the combined components currently include it, so you would need to specify it explicitly in your MathJax configuration in order to use it. See the AsciiMath page for more details.

By default, you mark mathematical expressions written in AsciiMath by surrounding them in "back-ticks", i.e., `...`. Here is a complete sample page containing AsciiMath notation:

<!DOCTYPE html>
<html>
<head>
<title>MathJax AsciiMath Test Page</title>
<script>
MathJax = {
  loader: {load: ['input/asciimath', 'output/chtml']}
}
</script>
<script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
<script type="text/javascript" id="MathJax-script" async
  src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/startup.js"></script>
</head>
<body>
<p>When `a != 0`, there are two solutions to `ax^2 + bx + c = 0` and they are</p>
<p style="text-align:center">
  `x = (-b +- sqrt(b^2-4ac))/(2a) .`
</p>
</body>
</html>

See the AsciiMath support page for more on MathJax's AsciiMath support and how to configure it.
Putting Math in JavaScript Strings

If you are using JavaScript to process mathematics, and need to put a TeX or LaTeX expression in a string literal, you need to be aware that JavaScript uses the backslash (\) as a special character in strings. Since TeX uses the backslash to indicate a macro name, you often need backslashes in your JavaScript strings. In order to achieve this, you must double all the backslashes that you want to have as part of your JavaScript string. For example,

var math = '\\frac{1}{\\sqrt{x^2 + 1}}';

This can be particularly confusing when you are using the LaTeX macro \\, whose backslashes must both be doubled, as \\\\. So you would do

var array = '\\begin{array}{cc} a & b \\\\ c & d \\end{array}';

to produce an array with two rows.
Measurable function

Set

context $ \langle X,\Sigma_X\rangle\in \mathrm{MeasurableSpace}(X) $
context $ \langle Y,\Sigma_Y\rangle\in \mathrm{MeasurableSpace}(Y) $
postulate $ f\in \mathrm{Measurable}(X,Y) $
context $ f:X\to Y $, $B\in \Sigma_Y$
postulate $ f^{-1}(B)\in\Sigma_X $

Discussion

This is very similar to the definition of a continuous function. People write $f:\langle X,\Sigma_X\rangle\to\langle Y,\Sigma_Y\rangle$ to point out that the function is measurable, although I'd say that's an abuse of language.
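The postulate above can be made concrete on finite measurable spaces. The following is my own illustration, not from the text: the spaces, sigma-algebras, and functions are made-up toy examples, and `is_measurable` just checks the preimage condition directly.

```python
# Toy check of measurability on finite measurable spaces: f is
# measurable iff the preimage of every set in Sigma_Y lies in Sigma_X.

def preimage(f, domain, b):
    """f^{-1}(b) restricted to the given domain."""
    return frozenset(x for x in domain if f(x) in b)

def is_measurable(f, domain, sigma_x, sigma_y):
    return all(preimage(f, domain, b) in sigma_x for b in sigma_y)

X = {1, 2, 3}
sigma_x = {frozenset(), frozenset({1}), frozenset({2, 3}), frozenset(X)}
Y = {"a", "b"}
sigma_y = {frozenset(), frozenset({"a"}), frozenset({"b"}), frozenset(Y)}

f = {1: "a", 2: "b", 3: "b"}.get   # preimages: {}, {1}, {2,3}, X -- all in Sigma_X
g = {1: "a", 2: "a", 3: "b"}.get   # preimage of {"b"} is {3}, not in Sigma_X

assert is_measurable(f, X, sigma_x, sigma_y)
assert not is_measurable(g, X, sigma_x, sigma_y)
print("f is measurable; g is not")
```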
The Earth is moderately conductive, so your second schematic is identical to the first. The only difference is that the Earth has a much higher resistance than a wire, but if we are only wondering about the state of this circuit in equilibrium, that is insignificant. An insightful question to ask is this: what happens if we build your circuit, but with no capacitor at all? What if there are just two wires connected to a battery? There must be a voltage difference between them (if the battery is working), and this implies a redistribution of charge. And in fact, there is. The two wires are really just a capacitor, one with long, drawn-out plates that are really far apart. The capacitance of two parallel plates is given by: $$ C = \frac{k \epsilon_0 A}{d} $$ where: \$k\$ is the relative permittivity of the dielectric material between the plates (in our case it is air, with \$k\approx 1\$), \$\epsilon_0\$ is the permittivity of free space, \$A\$ is the area of the plates, and \$d\$ is the distance between the plates. Two wires are not exactly a parallel-plate geometry, but this equation is a good approximation. The wires don't have a lot of area, so \$A\$ is small. And they are very far apart, compared to the plates of a discrete capacitor, so \$d\$ is very large. Consequently, \$C\$, the capacitance, will be very small. If the charges on the halves of a capacitor are \$+q\$ and \$-q\$, then the voltage across it follows from the definition of capacitance: $$ V = {q \over C} $$ By this equation, if \$C\$ is very small, then it does not take very much charge to create a very large voltage. There's an example of this that everyone has experienced: static shocks on a dry day. Your body has such a low capacitance to its surroundings that even a metaphorical handful of electrons moved around by shuffling on the carpet can build a voltage high enough to make a miniature lightning bolt.
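To get a feel for the orders of magnitude, here is a rough numeric sketch of the two formulas above. The wire geometry numbers (plate area, separation, charge) are illustrative assumptions, not measurements.

```python
# Why two bare wires form only a tiny capacitor: plug illustrative
# numbers into C = k*eps0*A/d and V = q/C.

EPSILON_0 = 8.854e-12  # permittivity of free space, F/m

def parallel_plate_capacitance(k, area_m2, distance_m):
    """C = k * eps0 * A / d for the idealized parallel-plate model."""
    return k * EPSILON_0 * area_m2 / distance_m

# Two short wires: tiny "plate" area, large separation (assumed values).
c_wires = parallel_plate_capacitance(k=1.0, area_m2=1e-5, distance_m=0.05)

# A modest discrete capacitor for comparison.
c_cap = 100e-9  # 100 nF

# V = q / C: the same small charge produces wildly different voltages.
q = 1e-9  # 1 nC
print(c_wires)      # on the order of femtofarads
print(q / c_wires)  # hundreds of kilovolts
print(q / c_cap)    # about ten millivolts
```

The same nanocoulomb that barely registers on a real capacitor drives the wire pair to an enormous voltage, which is the static-shock effect described above.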
So you see, the distribution of charge on the wires isn't exactly even, but because the capacitance of the wires is orders of magnitude less than that of the capacitor, the charge imbalance on the wires is insignificant in practice. More properly, your drawing should look like this: Notice that most of the charges have piled up near the surfaces of the capacitor. This makes sense: the electrons want to recombine with the holes, and the closest an electron can get to a hole is in the capacitor plates. There is some charge on the wires too, but because of their very small capacitance there's relatively little of it. Adding Earth doesn't change much other than the geometry:
JOSHI P Articles written in Pramana – Journal of Physics Volume 87 Issue 1 July 2016 Article ID 0007 Regular. Lifetimes of excited states in the yrast bands of the gamma-soft nuclei $^{131}$Ce and $^{133}$Pr have been measured using the recoil distance Doppler shift and Doppler shift attenuation methods. The yrast bands in $^{131}$Ce and $^{133}$Pr are based on the odd decoupled neutron $\nu h_{11/2}$ high-$\Omega$ and proton $\pi h_{11/2}$ low-$\Omega$ orbitals, respectively. The triaxiality parameter extracted from the experimentally deduced values of transition quadrupole moments, within the framework of cranked Hartree–Fock–Bogoliubov (CHFB) and total Routhian surface (TRS) calculations, is $\gamma \approx -80^{\circ}$ for the band in $^{131}$Ce at high spins, while for the band in $^{133}$Pr, the value of $\gamma$ is close to $0^{\circ}$. This agrees well with the $\gamma$ shape polarization property of the high- and low-$\Omega$ $h_{11/2}$ orbitals in these gamma-soft nuclei.
Given a 4-vector, we can always define a $2\times 2$ Hermitian matrix: $$X=x^\mu \sigma_\mu=\left(\matrix{x^0+x^3&x^1-ix^2\\x^1+ix^2&x^0-x^3} \right)$$ where the $\sigma_i$ are the Pauli matrices (and $\sigma_0$ is the identity). In this basis, Lorentz transformations act as $X'=LXL^\dagger$, where $L$ belongs to the special linear group $\mathrm{SL}(2, \mathbb C)$. However, I'm curious about the exact expression of the $2\times 2$ matrices that represent these Lorentz transformations (I haven't been able to find them in the literature). I've read that they can be characterized by just 6 real parameters (matching the 6 parameters of the $\mathrm{SO}(3,1)$ Lorentz group).
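A quick numeric sketch (my own, with an arbitrary test 4-vector) of why this map works: the determinant of $X$ reproduces the Minkowski norm $(x^0)^2-|\mathbf{x}|^2$, which is exactly the quantity $X'=LXL^\dagger$ with $\det L = 1$ preserves.

```python
# Map a 4-vector to the 2x2 Hermitian matrix X = x^mu sigma_mu and
# check det(X) = (x0)^2 - x1^2 - x2^2 - x3^2. Pure Python, no deps.

def to_matrix(x0, x1, x2, x3):
    """X = x0*I + x1*sigma_1 + x2*sigma_2 + x3*sigma_3 as nested lists."""
    return [[x0 + x3, x1 - 1j * x2],
            [x1 + 1j * x2, x0 - x3]]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

x = (2.0, 0.5, -1.0, 0.3)          # arbitrary test 4-vector
X = to_matrix(*x)

minkowski = x[0]**2 - x[1]**2 - x[2]**2 - x[3]**2
assert abs(det2(X) - minkowski) < 1e-12

# Hermiticity: the off-diagonal entries are conjugates of each other.
assert X[0][1] == X[1][0].conjugate()
print(det2(X).real)  # 2.66 for this test vector
```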
When there is no linear term in the equation, another method of solving a quadratic equation is by using the square root property, in which we isolate the [latex]{x}^{2}[/latex] term and take the square root of the number on the other side of the equals sign. Keep in mind that sometimes we may have to manipulate the equation to isolate the [latex]{x}^{2}[/latex] term so that the square root property can be used. A General Note: The Square Root Property With the [latex]{x}^{2}[/latex] term isolated, the square root property states that [latex]x=\pm \sqrt{k}[/latex], where k is a nonzero real number. How To: Given a quadratic equation with an [latex]{x}^{2}[/latex] term but no [latex]x[/latex] term, use the square root property to solve it. Isolate the [latex]{x}^{2}[/latex] term on one side of the equal sign. Take the square root of both sides of the equation, putting a [latex]\pm [/latex] sign before the expression on the side opposite the squared term. Simplify the numbers on the side with the [latex]\pm [/latex] sign. Example 6: Solving a Simple Quadratic Equation Using the Square Root Property Solve the quadratic using the square root property: [latex]{x}^{2}=8[/latex]. Solution Take the square root of both sides, and then simplify the radical: [latex]x=\pm \sqrt{8}=\pm 2\sqrt{2}[/latex]. Remember to use a [latex]\pm [/latex] sign before the radical symbol. The solutions are [latex]x=2\sqrt{2}[/latex], [latex]x=-2\sqrt{2}[/latex]. Example 7: Solving a Quadratic Equation Using the Square Root Property Solve the quadratic equation: [latex]4{x}^{2}+1=7[/latex] Solution First, isolate the [latex]{x}^{2}[/latex] term: [latex]4{x}^{2}=6[/latex], so [latex]{x}^{2}=\frac{3}{2}[/latex]. Then take the square root of both sides: [latex]x=\pm \sqrt{\frac{3}{2}}=\pm \frac{\sqrt{6}}{2}[/latex]. The solutions are [latex]x=\frac{\sqrt{6}}{2}[/latex], [latex]x=-\frac{\sqrt{6}}{2}[/latex]. Try It 6 Solve the quadratic equation using the square root property: [latex]3{\left(x - 4\right)}^{2}=15[/latex].
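The "How To" steps above can be checked numerically. This is a sketch of my own following those steps for Examples 6 and 7 (it deliberately leaves the Try It exercise alone):

```python
import math

# Square root property: isolate x^2, then take +/- the square root
# of the number on the other side.

# Example 6: x^2 = 8  ->  x = +/- sqrt(8) = +/- 2*sqrt(2)
roots8 = (math.sqrt(8), -math.sqrt(8))
assert abs(roots8[0] - 2 * math.sqrt(2)) < 1e-12

# Example 7: 4x^2 + 1 = 7. Step 1, isolate x^2:  x^2 = (7 - 1)/4 = 3/2
k = (7 - 1) / 4
# Steps 2-3: x = +/- sqrt(k), which simplifies by hand to +/- sqrt(6)/2
roots = (math.sqrt(k), -math.sqrt(k))
assert all(abs(4 * r**2 + 1 - 7) < 1e-12 for r in roots)
assert abs(roots[0] - math.sqrt(6) / 2) < 1e-12

print(roots8, roots)
```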
The important things to take from the question: elementary particles could have an intrinsic/quantized rest mass; any rest mass that is not a whole-number multiple of this constant has to be attained from interacting with the Higgs field. Answer You have just described exactly how things are (to current physics knowledge) in our own universe, with the quantised mass-constant $= 0$, where any non-zero mass (of the fundamental/elementary particles) is obtained via interaction with the Higgs field/boson. While all fermions attain a mass via the Higgs, not all gauge bosons do (the photon). If, however, you want a non-zero mass-constant, then this requires breaking charge conservation (electric, weak and strong/colour). Stability has more to do with the total mass, with the decay constant of a particle related to the imaginary part of the mass of that particle. Explanation So, where does mass come from (apart from the Higgs)? (Here, intrinsic mass will be used to refer to mass that isn't obtained from the Higgs mechanism.) That would be the Standard Model of particle physics (brace yourselves...): all the known elementary particles and forces (excluding gravity) are described by terms in the following: $$\mathcal{L} = \mathcal{L}_{Higgs} + \mathcal{L}_{gauge} + \mathcal{L}_{lepton} + \mathcal{L}_{quark} + \mathcal{L}_{Yukawa}$$ Each term in the above contains mathematical terms such as $\bar f\cdot \gamma^a g_2 W_a f$ (in this case, the interaction of a lepton or quark with part of the electroweak field; other mathematical terms are not written, for brevity), which can be drawn as parts of Feynman diagrams, which in turn give all the maths/statistics required to do particle physics, including calculating the mass of the elementary particles: a term giving a particle $p$ a mass is written as $m\bar pp$. From condition 2, we can ignore $\mathcal{L}_{Higgs}$. What about the other terms?
$\mathcal{L}_{gauge}$ contains the electroweak and strong fields, which have no intrinsic mass. Actually, neither does $\mathcal{L}_{lepton}$ or $\mathcal{L}_{quark}$. So, we're left with $\mathcal{L}_{Yukawa}$. $\mathcal{L}_{Yukawa}$ is, of itself, still quite long, containing terms involving the allowed possible interactions of fermions with the Higgs boson, so it can also be ignored by point 2. In other words, the only way that elementary particles can have mass is from the Higgs boson. This includes, to my knowledge, neutrinos. The reason that this is how things are is that any possible mass terms not arising from some form of interaction with the Higgs in the standard model wouldn't be invariant under what's known as gauge symmetry. Due to what's known as Noether's theorem, this means that making the Standard Model non-invariant under gauge symmetry would break conservation of charge - electric, weak and strong (colour). Putting it another way, the only way that your top paragraph can hold (in a universe where the standard model is correct, anyway) is by not having charge conserved. If charge is conserved (and the standard model is correct), then your first paragraph is true only with the constant being $0$. Conveniently, the Yukawa coupling gives all fermions an 'attained mass'. Also convenient is that the photon, a gauge boson, remains massless. However, no bosons have 'only intrinsic/quantized mass with no attained mass' [I've removed the word 'rest' because I'm pedantic about not using the term 'rest mass'] where the intrinsic mass is non-zero (the photon is a gauge boson with zero mass). If you want this to be the case, then feel free to violate charge conservation and set the mass-constant to be whatever you like! How stable is an elementary particle? For a non-rigorous thought about how decay of a particle works, let's imagine that we've got a stationary particle * of mass $m$.
This gives the energy as $E=mc^2$ and its time evolution is described by the Schrödinger equation $i\hbar\partial_t \Psi = \hat{H}\Psi$. If $\Psi$ is an eigenstate of $\hat{H}$ then this reduces to $i\hbar\partial_t \Psi = E\Psi$ and so, $$\Psi\left( t\right) = e^{-\frac{i}{\hbar}Et}\Psi_0.$$ Now, if $E$ is a complex number, $E = E_R - iE_I$, $$\Psi\left( t\right) = e^{-\frac{i}{\hbar}E_Rt}e^{-\frac{1}{\hbar}E_It}\Psi_0,$$ giving the decay constant as $\frac{1}{\hbar}E_I = \frac{1}{\hbar}m_Ic^2$. Or in other words, the decay constant is related to the imaginary component of the total mass, which means that, so long as the total mass of the particle remains the same, the stability of that particle in another universe is unaffected. * Stationary relative to you, anyway. Edit: Gauge invariance in the Standard Model For the purposes of this section, there are 2 types of particle: bosons and fermions. Bosons are 'force carriers' and so are the result of what can only be described as 'quantising a gauge field'. In simpler terms, if you 'quantise' the electromagnetic (EM) field, you get a photon (or a number of photons). For a photon, the EM field can be written as $A_{\mu}$ or $A^{\mu}$, with the 'mass term' being $-m^2A_{\mu}A^{\mu}$. However, real-life physics dictates that you can perform what's known as a 'gauge transformation', where you can change the value of $A_{\mu}$ without actually changing the value of the electric field, $\mathbf{E}$ (which is the measurable thing, where $A_{\mu}$ isn't directly measurable *). This means that any term that is an intrinsic $A_{\mu}A^{\mu}$ can change value without having any effect on the measured results. However, if you can change such a term around as much as you like, then the mass of the particle (also measurable) could be whatever you like and change between measurements without anything being done to it, which is simply impossible. So, as this is the electric field, charge cannot be conserved.
This is due to Maxwell's equation $\nabla\cdot \mathbf{E} = \frac{\rho}{\epsilon_0}$, where $\epsilon_0$ is a constant and $\rho$ is the charge density. The other gauge bosons have similar terms, only with more complicated maths. Now take the fermions, say an electron ($e^-$) and positron ($e^+$). The 'mass term' is $-me^+e^- = -m(e^+_Le^-_R + e^+_Re^-_L)$ (these are actually 'spinors' - the representation of spin-half particles - and the result is just how the maths says that these spinors are multiplied ('dotted') together - no-one really understands spin). However, the weak isospin (similar to a weak-force equivalent of charge) of the electron is $-\frac{1}{2}$ and of the positron is $0$. Just like Kirchhoff's current law, if charge is conserved, the sum of the current going in equals the sum of the current going out. In the above electron/positron case, the current going in (the weak isospin of the positron, $e^+$) doesn't equal the current going out (the weak isospin of the electron), so charge isn't conserved. As with the boson case, the same is true for all fermions, so an intrinsic mass term is impossible without violating conservation of charge. * For some maths, $A^{\mu} = \left(\frac{1}{c}\phi, \mathbf{A} \right)$ and $\mathbf{E} = -\nabla\phi - \partial_t\mathbf{A}$, so $A_{\mu}A^{\mu} = -\frac{1}{c^2}\phi^2 + \vert\mathbf{A}\vert^2$.
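The decay-constant derivation above can be checked numerically. This is a sketch with arbitrary units ($\hbar = 1$) and made-up values of $E_R$ and $E_I$: the survival probability $|\Psi(t)|^2$ decays as $e^{-2E_I t/\hbar}$, independent of the real part of the energy.

```python
import cmath
import math

# Time evolution with a complex energy E = E_R - i*E_I; units are
# arbitrary (hbar = 1) and the numbers are illustrative, not physical.
HBAR = 1.0
E_R, E_I = 2.0, 0.25

def psi(t, psi0=1.0):
    E = E_R - 1j * E_I
    return cmath.exp(-1j * E * t / HBAR) * psi0

# |psi(t)|^2 decays as exp(-2*E_I*t/hbar): only the imaginary part
# of the energy (i.e., of the mass) sets the decay rate.
for t in (0.0, 1.0, 3.0):
    prob = abs(psi(t))**2
    assert abs(prob - math.exp(-2 * E_I * t / HBAR)) < 1e-12

print(abs(psi(1.0))**2)  # e^{-0.5}, about 0.6065
```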
Proof theory

Framework

We can use logic to reason about logical derivations. The object language contains formulae $foo$, $bar$, etc., and we use $foo\vdash bar$, which one might read as "if $foo$ is provable, then $bar$ is also provable." In proof theoretic logic, we use a variable which represents a collection of object language formulae, called a context, and generally denoted $\Gamma$. Moreover, we must then deal with several notions of "and": we possibly have a notion of conjunction in the object language (generally written $\land$), but we also need two conjunctions in the meta language. A gap in the top line denotes a conjunction of premises as introduced in Logic, and a comma between formulae denotes a conjunction which arises from such a gap in a rule of the object language. There are dozens of proof theories, but in the following we present some rules of a traditional one:

${\large\frac{}{\phi\vdash\phi}}(identity)$

${\large\frac{\Gamma,\Psi\vdash\vartheta}{\Gamma,\alpha,\Psi\vdash\vartheta}}(weaken)$

${\large\frac{\Gamma,\alpha,\alpha,\Psi\vdash\vartheta}{\Gamma,\alpha,\Psi\vdash\vartheta}}(contract)$

${\large\frac{\Gamma,\alpha,\beta,\Psi\vdash\vartheta}{\Gamma,\beta,\alpha,\Psi\vdash\vartheta}}(exchange)$

${\large\frac{\Gamma\vdash\alpha\hspace{.5cm}\alpha,\Psi\vdash\vartheta}{\Gamma,\Psi\vdash\vartheta}}(cut)$

It's worth pointing out that the $contract$ and $exchange$ rules give $\Gamma$ the properties of a set rather than a list. The rules $contract$ and $weaken$ are modified in Linear logic, which, roughly speaking, considers premises to be a bounded resource; it has applications in programming. From a computational point of view, the $cut$ rule has a little more going on and is the focal point of interesting theorems about proof theory; see e.g. the Cut-elimination theorem.
A Fermat number is an integer of the form $ F_n=2^{2^n} +1, \ n \ge 0$. For example, putting $ n := 0,1,2, \ldots$ in $ F_n=2^{2^n}+1$ we get $ F_0=3$, $ F_1=5$, $ F_2=17$, $ F_3=257$, etc. Fermat observed that the integers $ F_0, F_1, F_2, F_3, \ldots$ were all prime numbers and announced that $ F_n$ is a prime for each natural value of $ n$. In writing to Prof. Mersenne, Fermat confidently announced: I have found that numbers of the form $ 2^{2^n}+1$ are always prime numbers and have long since signified to analysts the truth of this theorem. However, he also accepted that he was unable to prove it theoretically. Euler in 1732 refuted Fermat's claim: $ F_0$ through $ F_4$ are primes, but $ F_5=2^{2^5}+1=4294967297$ is not a prime, since it is divisible by 641. Euler also stated that not all Fermat numbers are primes; a Fermat number which is a prime might be called a Fermat prime. Euler used division to prove the fact that $ F_5$ is not a prime. The following elementary proof of Euler's negation is due to G. Bennett. Theorem: The Fermat number $ F_5$ is divisible by $ 641$, i.e., $ 641\,|\,F_5$. Proof: As defined, $ F_5 :=2^{2^5}+1=2^{32}+1 \ \ldots (1)$. Put $a=5$ and $b=2^7$, and factorise $ 641$ in such a way that $ 641=640+1=5 \times 2^7 +1=ab+1$. Subtracting $ a^4=5^4=625$ from 641, we get $ ab+1-a^4=641-625=16=2^4 \ \ldots (2)$. Now equation (1) can be rewritten using (2): $$F_5 = 2^{32}+1 = 2^4\cdot 2^{28}+1 = (ab+1-a^4)\,2^{28}+1 = 641\cdot 2^{28} - (a\cdot 2^7)^4 + 1 = 641\cdot 2^{28} - (641-1)^4 + 1.$$ Expanding $(641-1)^4$ by the binomial theorem, every term except $(-1)^4=1$ is divisible by $641$, so $F_5 \equiv 641\cdot 2^{28} - 1 + 1 \equiv 0 \pmod{641}$. Mathematics is well developed now, but it is still not confirmed whether there are infinitely many Fermat primes or, for that matter, whether there is at least one Fermat prime beyond $ F_4$. The best guess is that all Fermat numbers $ F_n>F_4$ are composite (non-prime). A useful property of Fermat numbers is that they are relatively prime to each other; i.e., for Fermat numbers $ F_n, F_m,\ m > n \ge 0$, $ \mathrm{gcd}(F_m, F_n) =1$.
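Euler's counterexample and the two factorisations of 641 that drive Bennett's proof can be verified directly:

```python
# Direct check of Euler's counterexample and of the identities
# 641 = 5*2^7 + 1 and 641 = 2^4 + 5^4 used in the proof above.

F5 = 2**32 + 1
assert F5 == 4294967297
assert F5 % 641 == 0          # 641 | F_5, so F_5 is composite
assert 641 == 5 * 2**7 + 1
assert 641 == 2**4 + 5**4

# The first five Fermat numbers F_0..F_4 really are prime.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

assert all(is_prime(2**(2**n) + 1) for n in range(5))
print(F5 // 641)  # the cofactor, 6700417
```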
The following two theorems are very useful in determining the primality of Fermat numbers. Pepin's Test: For $ n \ge 1$, the Fermat number $ F_n$ is prime $ \iff 3^{(F_n-1)/2} \equiv -1 \pmod {F_n}$. Euler–Lucas Theorem: Any prime divisor $ p$ of $ F_n$, where $ n \ge 2$, is of the form $ p=k \cdot 2^{n+2}+1$. Fermat numbers ($ F_n$) with $ n=0, 1, 2, 3, 4$ are prime; with $ n=5,6,7,8,9,10,11$ they have been completely factored; with $ n=12, 13, 15, 16, 18, 19, 25, 27, 30$ two or more prime factors are known; with $ n=17, 21, 23, 26, 28, 29, 31, 32$ only one prime factor is known; with $ n=14,20,22,24$ no factors are known, but the numbers are proved composite. $ F_{33}$ has not yet been proved either prime or composite. Feel free to ask questions, send feedback and even point out mistakes.
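Pepin's test as stated above is easy to run, since Python's built-in three-argument `pow` does fast modular exponentiation:

```python
# Pepin's test: for n >= 1, F_n is prime iff
# 3^((F_n - 1)/2) == -1 (mod F_n).

def fermat(n):
    return 2**(2**n) + 1

def pepin(n):
    F = fermat(n)
    return pow(3, (F - 1) // 2, F) == F - 1   # F - 1 is -1 mod F

assert all(pepin(n) for n in (1, 2, 3, 4))    # F_1..F_4 are prime
assert not pepin(5)                           # F_5 is composite

# Euler-Lucas: any prime factor of F_n (n >= 2) has the form
# k * 2^(n+2) + 1; e.g. 641 = 5 * 2^7 + 1 divides F_5.
assert (641 - 1) % 2**(5 + 2) == 0

print([pepin(n) for n in range(1, 8)])
```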
SCM Repository View of /branches/vis12/test/unicode-cheatsheet.diderot Revision File size: 782 byte(s) 1927- ( download) ( annotate) Sat Jun 23 18:09:16 2012 UTC(7 years, 3 months ago) by jhr File size: 782 byte(s) converting to use "image" instead of "load" for image nrrd loading /* useful unicode characters for Diderot ⊛ convolution, as in field#2(3)[] F = bspln3 ⊛ image("img.nrrd"); LaTeX: \circledast is probably typical, but \varoast (with \usepackage{stmaryrd}) is slightly more legible × cross product, as in vec3 camU = normalize(camN × camUp); LaTeX: \times π Pi, as in real rad = degrees*π/360.0; LaTeX: \pi ∇ Del, as in vec3 grad = ∇F(pos); LaTeX: \nabla • dot product, as in real ld = norm • lightDir; LaTeX: \bullet, although \cdot more typical for dot products ⊗ tensor product, as in tensor[3,3] Proj = identity[3] - norm⊗norm LaTeX: \otimes ∞ Infinity, as in output real val = -∞; LaTeX: \infty */ strand blah (int i) { output real out = 0.0; update { stabilize; } } initially [ blah(i) | i in 0..0 ];
Just to make things clear - this property is not fundamental, but it is important: it is the key difference when it comes to using the DCT instead of the DFT for spectrum calculation. Why do we do Cepstral Mean Normalisation In speaker recognition we want to remove any channel effects (impulse response of the vocal tract, audio path, room, etc.). Provided that the input signal is $x[n]$ and the channel impulse response is given by $h[n]$, the recorded signal is the linear convolution of both: $$y[n] = x[n] \star h[n]$$ By taking the Fourier Transform we get: $$Y[f] = X[f]\cdot H[f] $$ due to the convolution-multiplication equivalence property of the FT - that is why this property is so important at this step. The next step in the calculation of the cepstrum is taking the logarithm of the spectrum: $$Y[q] = \log Y[f] = \log \left( X[f] \cdot H[f]\right) = X[q] + H[q]$$ because $\log(ab) = \log a +\log b$. Obviously, $q$ is the quefrency. As one might notice, by taking the cepstrum of a convolution in the time domain we end up with an addition in the cepstral (quefrency) domain. What is Cepstral Mean Normalisation? Now we know that in the cepstral domain any convolutional distortions are represented by addition. Let's assume that all of them are stationary (a strong assumption, as the vocal tract and channel response do change over time) and that the stationary part of speech is negligible. We can then observe that for every $i$-th frame: $$Y_i[q] = H[q] + X_i[q] $$ By taking the average over all frames we get $$\dfrac{1}{N}\sum_{i} Y_i[q] = H[q] + \dfrac{1}{N}\sum_{i} X_i[q]$$ Defining the difference: $$\begin{aligned}R_i[q] &= Y_i[q] - \dfrac{1}{N}\sum_{j} Y_j[q]\\ &= H[q] + X_i[q] - \left(H[q] + \dfrac{1}{N}\sum_{j} X_j[q]\right) \\ &= X_i[q] - \dfrac{1}{N}\sum_{j} X_j[q]\end{aligned}$$ We end up with our signal, with the channel distortion removed.
Putting all the above equations into simple English: Calculate the cepstrum. Subtract the average from each coefficient. Optionally divide by the variance, performing Cepstral Mean and Variance Normalisation as opposed to Subtraction alone. Is Cepstral Mean Normalisation necessary? It's not mandatory, especially when you are trying to recognise one speaker in a single environment. In fact, it can even deteriorate your results, as it's prone to errors due to additive noise: $$y[n] = x[n] \star h[n] + w[n] $$ $$Y[f] = X[f]\cdot H[f] + W[f] $$ $$\log Y[f] = \log \left[X[f]\left(H[f]+\dfrac{W[f]}{X[f]} \right) \right] = \log X[f] +\log \left(H[f]+\color{red}{\dfrac{W[f]}{X[f]}} \right)$$ In poor SNR conditions the marked term can overtake the estimation. That said, when CMS is performed you can usually gain a few extra percent. If you add to that the performance gain from derivatives of the coefficients, then you get a real boost in your recognition rate. The final decision is up to you, especially as there are plenty of other methods used for the improvement of speech recognition systems.
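The three steps above can be sketched in plain Python on a toy "cepstrogram" of my own invention: the channel is simulated as a constant offset added to every frame, matching the stationary $H[q]$ of the derivation.

```python
# Cepstral mean (and optional variance) normalisation on toy data:
# frames[i][q] is cepstral coefficient q of frame i.

def cmn(frames, normalise_variance=False):
    n = len(frames)
    dim = len(frames[0])
    mean = [sum(f[q] for f in frames) / n for q in range(dim)]
    out = [[f[q] - mean[q] for q in range(dim)] for f in frames]
    if normalise_variance:
        var = [sum(f[q]**2 for f in out) / n for q in range(dim)]
        out = [[f[q] / (var[q]**0.5 or 1.0) for q in range(dim)]
               for f in out]
    return out

clean = [[1.0, -2.0], [3.0, 0.0], [-1.0, 2.0]]   # made-up X_i[q]
channel = [10.0, -5.0]                           # stationary H[q]
noisy = [[x + h for x, h in zip(f, channel)] for f in clean]

# After CMN the channel offset is gone: normalising the distorted
# frames gives the same result as normalising the clean ones.
restored = cmn(noisy)
expected = cmn(clean)
assert all(abs(a - b) < 1e-12
           for ra, rb in zip(restored, expected) for a, b in zip(ra, rb))
print(restored)
```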
One of the main lessons of category theory is that whenever you think about some kind of mathematical gadget, you should also think about maps between gadgets of this kind. For example, when you think about sets you should also think about functions. When you think about vector spaces you should also think about linear maps. And so on. We've been talking about various kinds of monoidal preorders. So, let's think about maps between monoidal preorders. As I explained in Lecture 22, a monoidal preorder is a crossbreed or hybrid of a preorder and a monoid. So let's think about maps between preorders, and maps between monoids, and try to hybridize those. We've already seen maps between preorders: they're called monotone functions: Definition. A monotone function from a preorder \((X,\le_X)\) to \((Y,\le_Y)\) is a function \(f : X \to Y\) such that $$ x \le_X x' \textrm{ implies } f(x) \le_Y f(x') $$ for all elements \(x,x' \in X\). So, these functions preserve what a preorder has, namely the relation \(\le\). A monoid, on the other hand, has an associative operation \(\otimes\) and a unit element \(I\). So, a map between monoids should preserve those! That's how this game works. Just to scare people, mathematicians call these maps "homomorphisms": Definition. A homomorphism from a monoid \( (X,\otimes_X,I_X) \) to a monoid \( (Y,\otimes_Y,I_Y) \) is a function \(f : X \to Y\) such that: $$ f(x \otimes_X x') = f(x) \otimes_Y f(x') $$ for all elements \(x,x' \in X\), and $$ f(I_X) = I_Y .$$ You've probably seen a lot of homomorphisms between monoids. Some of them you barely noticed. For example, the set of integers \(\mathbb{Z}\) is a monoid with addition as \(\otimes\) and the number \(0\) as \(I\). So is the set \(\mathbb{R}\) of real numbers! There's a function that turns each integer into a real number: $$ i: \mathbb{Z} \to \mathbb{R} . 
$$ It's such a bland function you may never have thought about it: it sends each integer to itself, but regarded as a real number. And this function is a homomorphism! What does that mean? Look at the definition. It means you can either add two integers and then regard the result as a real number... or first regard each of them as a real number and then add them... and you get the same answer either way. It also says that the integer \(0\), regarded as a real number, is the real number we call \(0\). Boring facts! But utterly crucial facts. Computer scientists need to worry about these things, because for them integers and real numbers (or floating-point numbers) are different data types, and \(i\) is doing "type conversion". You've also seen a lot of other, more interesting homomorphisms between monoids. For example, the whole point of the logarithm function is that it's a homomorphism. It carries multiplication to addition: $$ \log(x \cdot x') = \log(x) + \log(x') $$ and it carries the identity for multiplication to the identity for addition: $$ \log(1) = 0. $$ People invented tables of logarithms, and later slide rules, precisely for this reason! They wanted to convert multiplication problems into easier addition problems. You may also have seen linear maps between vector spaces. A vector space gives a monoid with addition as \(\otimes\) and the zero vector as \(I\); any linear map between vector spaces then gives a homomorphism. Puzzle 80. Tell me a few more homomorphisms between monoids that you routinely use, or at least know. I hope I've convinced you: monotone functions between preorders are important, and so are homomorphisms between monoids. Thus, if we hybridize these concepts, we'll get a concept that's likely to be important. It turns out there are a few different ways! The most obvious way is simply to combine all the conditions. There are other ways, so this way is called "strict": Definition. 
A strict monoidal monotone from a monoidal preorder \( (X,\le_X,\otimes_X,I_X) \) to a monoidal preorder \( (Y,\le_Y,\otimes_Y,I_Y) \) is a function \(f : X \to Y\) such that: $$ x \le_X x' \textrm{ implies } f(x) \le_Y f(x') $$ and $$ f(x) \otimes_Y f(x') = f(x \otimes_X x') $$ for all elements \(x,x' \in X\), and also $$ I_Y = f(I_X) . $$ For example, the homomorphism $$ i : \mathbb{Z} \to \mathbb{R} ,$$ is a strict monoidal monotone: if one integer is \(\le\) another, then that's still true when we regard them as real numbers. So is the logarithm function. What other definition could we possibly use, and why would we care? It turns out sometimes we want to replace some of the equations in the above definition by inequalities! Definition. A lax monoidal monotone from a monoidal preorder \((X,\le_X,\otimes_X,I_X)\) to a monoidal preorder \((Y,\le_Y,\otimes_Y,I_Y)\) is a function \(f : X \to Y\) such that: $$ x \le_X x' \textrm{ implies } f(x) \le_Y f(x') $$ and $$ f(x) \otimes_Y f(x') \le_Y f(x \otimes_X x') $$ for all elements \(x,x' \in X\), and also $$ I_Y \le_Y f(I_X). $$Fong and Spivak call this simply a monoidal monotone, since it's their favorite kind. But I will warn you that others call it "lax". We could also turn around those last two inequalities: Definition. An oplax monoidal monotone from a monoidal preorder \((X,\le_X,\otimes_X,I_X)\) to a monoidal preorder \((Y,\le_Y,\otimes_Y,I_Y)\) is a function \(f : X \to Y\) such that: $$ x \le_X x' \textrm{ implies } f(x) \le_Y f(x') $$ and $$ f(x) \otimes_Y f(x') \ge_Y f(x \otimes_X x') $$ for all elements \(x,x' \in X\), and also $$ I_Y \ge_Y f(I_X). $$ You are probably drowning in definitions now, so let me give some examples to show that they're justified. 
The monotone function $$ i : \mathbb{Z} \to \mathbb{R} $$ has a right adjoint $$ \lfloor \cdot \rfloor : \mathbb{R} \to \mathbb{Z} $$which provides the approximation from below to the nonexistent inverse of \(i\): that is, \( \lfloor x \rfloor \) is the greatest integer that's \(\le x\). It also has a left adjoint $$ \lceil \cdot \rceil : \mathbb{R} \to \mathbb{Z} $$which is the best approximation from above to the nonexistent inverse of \(i\): that is, \( \lceil x \rceil \) is the least integer that's \(\ge x\). Puzzle 81. Show that one of the functions \( \lfloor \cdot \rfloor : \mathbb{R} \to \mathbb{Z} \), \( \lceil \cdot \rceil : \mathbb{R} \to \mathbb{Z} \) is a lax monoidal monotone and the other is an oplax monoidal monotone, where we make the integers and reals into monoids using addition. So, you should be sensing some relation between left and right adjoints, and lax and oplax monoidal monotones. We'll talk about this more! And we'll see why all this stuff is important for resource theories. Finally, for the bravest among you: Puzzle 82. Find a function between monoidal preorders that is both lax and oplax monoidal monotone but not strict monoidal monotone. In case you haven't had enough jargon for today: a function between monoidal preorders that's both lax and oplax monoidal monotone is called strong monoidal monotone.
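To make Puzzle 80 and Puzzle 81 concrete, here is a small numerical sanity check (a sketch in Python; it verifies instances, it is not a proof): the logarithm is a monoid homomorphism from \((\mathbb{R}_{>0}, \cdot, 1)\) to \((\mathbb{R}, +, 0)\), and the floor and ceiling functions satisfy the two inequalities from the lax and oplax definitions above, with addition as \(\otimes\) on both sides.

```python
import math
import random

random.seed(0)
for _ in range(1000):
    # log is a monoid homomorphism: it carries multiplication to addition
    x, y = random.uniform(0.1, 100.0), random.uniform(0.1, 100.0)
    assert math.isclose(math.log(x * y), math.log(x) + math.log(y), abs_tol=1e-9)

    # Puzzle 81 data: one of floor/ceil satisfies the lax inequality
    # f(a) + f(b) <= f(a + b), the other the oplax one f(a + b) <= f(a) + f(b)
    a, b = random.uniform(-50.0, 50.0), random.uniform(-50.0, 50.0)
    assert math.floor(a) + math.floor(b) <= math.floor(a + b)
    assert math.ceil(a + b) <= math.ceil(a) + math.ceil(b)
print("all checks passed")
```

Of course, the point of the puzzle is to prove these inequalities for all reals, not just sampled ones.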
[This is the 6th post in the current series about Wythoff’s game: see posts #1, #2, #3, #4, and #5. Caveat lector: this post is a bit more difficult than usual. Let me know what you think in the comments!] Our only remaining task from last week was to prove the mysterious Covering Theorem: we must show that there is exactly one dot in each row and column of the grid (we already covered the diagonal case). Since the rows and columns are symmetric, let’s focus on columns. The columns really only care about the x-coordinates of the points, so let’s draw just these x-coordinates on the number-line. We’ve drawn \(\phi,2\phi,3\phi,\ldots\) with small dots and \(\phi^2,2\phi^2,3\phi^2,\ldots\) with large dots. We need to show that there’s exactly one dot between 1 and 2, precisely one dot between 2 and 3, just one between 3 and 4, and so on down the line. For terminology’s sake, break the number line into length-1 intervals [1,2], [2,3], [3,4], etc., so we must show that each interval has one and only one dot: Why is this true? One explanation hinges on a nice geometric observation: Take any small dot s and large dot t on our number-line above, and cut segment st into two parts in the ratio \(1:\phi\) (with s on the shorter side). Then the point where we cut is always an integer! For example, the upper-left segment in the diagram below has endpoints at \(s=2\cdot\phi\) and \(t=1\cdot\phi^2\), and its cutting point is the integer 3: In general, if s is the jth small dot—i.e., \(s=j\cdot\phi\)—and \(t=k\cdot\phi^2\) is the kth large dot, then the cutting point between s and t is \(\frac{1}{\phi}\cdot s+\frac{1}{\phi^2}\cdot t = j+k\) (Why?! [1]). But more importantly, this observation shows that no interval has two or more dots: a small dot and a large dot can’t be in the same interval because they always have an integer between them! [2] So all we have to do now is prove that no interval is empty: for each integer n, some dot lies in the interval [ n, n+1]. 
We will prove this by contradiction. What happens if no dot hits this interval? Then the sequence \(\phi,2\phi,3\phi,\ldots\) jumps over the interval, i.e., for some j, the jth dot in the sequence is less than n but the (j+1)st is greater than n+1. Likewise, the sequence \(\phi^2,2\phi^2,3\phi^2,\ldots\) jumps over the interval: its kth dot is less than n while its (k+1)st dot is greater than n+1: By our observation above on the segment from \(s=j\phi\) to \(t=k\phi^2\), we find that the integer j+k is less than n, so \(j+k\le n-1\). Similarly, \(j+k+2 > n+1\), so \(j+k+2 \ge n+2\). But together these inequalities say that \(n\le j+k\le n-1\), which is clearly absurd! This is the contradiction we were hoping for, so the interval [n, n+1] is in fact not empty. This completes our proof of the Covering Theorem and the Wythoff formula! It was a long journey, but we’ve finally seen exactly why the Wythoff losing positions are arranged as they are. Thank you for following me through this! A Few Words on the Column Covering Theorem Using the floor function \(\lfloor x\rfloor\) that rounds x down to the nearest integer, we can restate the Column Covering Theorem in perhaps a more natural context. The sequence of integers $$\lfloor\phi\rfloor = 1, \lfloor 2\phi\rfloor = 3, \lfloor 3\phi\rfloor = 4, \lfloor 4\phi\rfloor = 6, \ldots$$ is called the Beatty sequence for the number \(\phi\), and similarly, $$\lfloor\phi^2\rfloor = 2, \lfloor 2\phi^2\rfloor = 5, \lfloor 3\phi^2\rfloor = 7, \lfloor 4\phi^2\rfloor = 8,\ldots$$ is the Beatty sequence for \(\phi^2\). Today we proved that these two sequences are complementary, i.e., together they contain each positive integer exactly once. 
We seemed to use very specific properties of the numbers \(\phi\) and \(\phi^2\), but in fact, a much more general theorem is true: Beatty’s Theorem: If \(\alpha\) and \(\beta\) are any positive irrational numbers with \(\frac{1}{\alpha}+\frac{1}{\beta}=1\), then their Beatty sequences \(\lfloor\alpha\rfloor, \lfloor 2\alpha\rfloor, \lfloor 3\alpha\rfloor,\ldots\) and \(\lfloor\beta\rfloor, \lfloor 2\beta\rfloor, \lfloor 3\beta\rfloor,\ldots\) are complementary sequences. Furthermore, our same argument—using \(\alpha\) and \(\beta\) instead of \(\phi\) and \(\phi^2\)—can be used to prove the more general Beatty’s Theorem!
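The complementarity is easy to check numerically. A quick sketch (Python) that builds both Beatty sequences, for \(\phi\) and \(\phi^2\), and verifies that together they tile the positive integers up to a bound:

```python
import math

phi = (1 + math.sqrt(5)) / 2            # note 1/phi + 1/phi**2 = 1, so Beatty's theorem applies
N = 10_000
small = {math.floor(j * phi) for j in range(1, N)}        # Beatty sequence of phi
large = {math.floor(k * phi * phi) for k in range(1, N)}  # Beatty sequence of phi^2
assert small.isdisjoint(large)              # no integer appears in both sequences
assert set(range(1, N)) <= small | large    # every positive integer below N appears
print("Beatty sequences of phi and phi^2 are complementary up to", N)
```

(Double-precision floats are safe here: for these ranges, \(j\phi\) never comes close enough to an integer for `math.floor` to round the wrong way.)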
In the formulation, presumably on the right side what is intended are 3-dimensional non-degenerate quadratic spaces (up to isomorphism), with discriminant 1 (same as $4^3$ mod squares as John Ma notes). But to make this work also in characteristic 2, it is better to proceed with a different point of view: that of conformal isometry of quadratic spaces (i.e., isomorphisms $T:V \simeq V'$ such that $q' \circ T = \lambda q$ for some $\lambda \in k^{\times}$). More specifically, we claim that away from characteristic 2, every 3-dimensional non-degenerate quadratic space is conformal to a unique one with discriminant 1. Thus, by working with conformal isometry classes we will be able to work in a fully characteristic-free manner. To see what is going on, recall that the set of isomorphism classes of central simple algebras of dimension 4 is ${\rm{H}}^1(k, {\rm{PGL}}_2)$, and the set of conformal isometry classes of dimension 3 is ${\rm{H}}^1(k, {\rm{PGO}}_3)$. But ${\rm{GO}}_{2m+1} = {\rm{GL}}_1 \times {\rm{SO}}_{2m+1}$, so ${\rm{PGO}}_{2m+1} = {\rm{SO}}_{2m+1}$. Hence, ${\rm{PGO}}_3 = {\rm{SO}}_3$. Since ${\rm{SO}}_3 \simeq {\rm{PGL}}_2$ through the representation of ${\rm{PGL}}_2$ via conjugation on the 3-dimensional space of traceless $2 \times 2$ matrices equipped with the determinant as the standard split non-degenerate quadratic form $xy - z^2$ (preserved by that conjugation action!), that answers the entire question at the level of isomorphism classes of objects. (The link to ${\rm{SO}}_3$ encodes the link to discriminant 1.) But we can do better than keep track of isomorphism classes: we can also keep track of isomorphisms, as explained below. This is a refinement of John Ma's answer, as well as that of Matthias Wendt (which appeared at almost exactly the same time as this answer first appeared, so I didn't see it until this one was done). The following notation will permit considering finite fields on equal footing with all other fields. 
For a finite-dimensional central simple algebra $A$ over an arbitrary field $k$, let ${\rm{Trd}}:A \rightarrow k$ be its "reduced trace" and ${\rm{Nrd}}:A \rightarrow k$ be its "reduced norm". These are really most appropriately viewed as "polynomial maps" in the evident sense. That is, if $\underline{A}$ is the "ring scheme" over $k$ representing the functor $R \rightsquigarrow A \otimes_k R$ (i.e., an affine space over $k$ equipped with polynomial maps expressing the $k$-algebra structure relative to a choice of $k$-basis) then we have $k$-morphisms ${\rm{Trd}}:\underline{A} \rightarrow \mathbf{A}^1_k$ and ${\rm{Nrd}}:\underline{A} \rightarrow \mathbf{A}^1_k$. For $A$ of dimension 4 we set $\underline{V}_A$ to be the kernel of ${\rm{Trd}}:\underline{A} \rightarrow \mathbf{A}^1_k$; speaking in terms of kernel of ${\rm{Trd}}$ is a bit nicer than speaking in terms of orthogonal complements so that one doesn't need to separately consider characteristic 2 (where the relationship between quadratic forms and symmetric bilinear forms breaks down). This $\underline{V}_A$ is an affine space of dimension 3 over $k$ on which ${\rm{Nrd}}$ is a non-degenerate quadratic form $q_A$ (i.e., zero locus is a smooth conic in the projective plane $\mathbf{P}(V_A^{\ast})$, where $V_A := \underline{V}_A(k)$): indeed, these assertions are "geometric" in nature, so it suffices to check them over $k_s$, where $A$ becomes a matrix algebra and we can verify everything by inspection. We will show that the natural map of affine varieties $$\underline{{\rm{Isom}}}(\underline{A}, \underline{A}') \simeq \underline{{\rm{CIsom}}}((\underline{V}_A, q_A), (\underline{V}_{A'}, q_{A'}))/\mathbf{G}_m$$from the "isomorphism variety" to the "variety of conformal isometries mod unit-scaling" is an isomorphism; once that is shown, by Hilbert 90 we could pass to $k$-points to conclude that isomorphisms among such $A$'s correspond exactly to conformal isometries among such $(V_A, q_A)$'s up to unit scaling. 
It suffices to check this isomorphism assertion for varieties over $k_s$, where it becomes the assertion that the natural map$$\underline{{\rm{Aut}}}_{{\rm{Mat}}_2/k} \rightarrow {\rm{CAut}}_{({\rm{Mat}}_2^{{\rm{Tr}}=0}, \det)/k}/\mathbf{G}_m$$from the Aut-scheme to the scheme of conformal isometries up to unit-scaling is an isomorphism. But this is precisely the natural map$${\rm{PGL}}_2 \rightarrow {\rm{PGO}}(-z^2-xy) = {\rm{PGO}}(xy+z^2) = {\rm{SO}}(xy+z^2) = {\rm{SO}}_3$$between smooth affine $k$-groups that is classically known to be an isomorphism over any field $k$ (one can check bijectivity on geometric points and the isomorphism property on tangent spaces at the identity points). Finally, we want to show that every 3-dimensional non-degenerate quadratic space $(V, q)$ is conformal to $(V_A, q_A)$ for some $A$. Note that if such an $A$ exists then it is unique up to unique isomorphism, in the sense that if $A$ and $A'$ are two such algebras equipped with conformal isometries $(V_A, q_A) \simeq (V, q) \simeq (V_{A'}, q_{A'})$ then this composite conformal isometry arises from a unique isomorphism $A \simeq A'$ of $k$-algebras. Hence, by Galois descent (!) it suffices to check existence over $k_s$! But over a separably closed field the smooth projective conic has a rational point, so $(V, q)_{k_s}$ contains a hyperbolic plane and thus is isometric to $xy + \lambda z^2$ on $k_s^3$ for some $\lambda \in k_s^{\times}$. This is conformal to $(-1/\lambda)q_{k_s}$, but $(1/\lambda)xy - z^2$ and $-x'y' - z^2$ are clearly isometric, and the latter is ${\rm{Mat}}_2^{{\rm{Tr}}=0}$ equipped with the restriction of $\det$.
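The key mechanism used throughout, that conjugation of $2 \times 2$ matrices preserves both tracelessness and the determinant quadratic form on ${\rm{Mat}}_2^{{\rm{Tr}}=0}$, can at least be sanity-checked numerically. A sketch (Python, random real matrices; an illustration, certainly not a proof of the scheme-theoretic statement):

```python
import random

random.seed(42)

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def mul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv2(m):
    d = det2(m)
    return [[ m[1][1] / d, -m[0][1] / d],
            [-m[1][0] / d,  m[0][0] / d]]

for _ in range(100):
    g = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    if abs(det2(g)) < 0.1:
        continue                          # skip nearly singular g
    m = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    t = (m[0][0] + m[1][1]) / 2
    m[0][0] -= t; m[1][1] -= t            # project m onto the traceless matrices
    mg = mul2(mul2(g, m), inv2(g))        # the conjugation action of PGL_2
    assert abs(mg[0][0] + mg[1][1]) < 1e-9        # still traceless
    assert abs(det2(mg) - det2(m)) < 1e-9         # det (the quadratic form) is preserved
print("conjugation preserves (Mat_2^{Tr=0}, det)")
```

Note that scaling $g$ leaves the conjugation unchanged, which is why the action factors through ${\rm{PGL}}_2$.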
Definition: Rotation (Geometry)/Plane

Let $\Gamma$ be the plane, let $O \in \Gamma$ be a point, and let $\alpha$ be an angle. The rotation $r_\alpha$ of $\Gamma$ about $O$ through $\alpha$ is defined as follows:

$\map {r_\alpha} O = O$

That is, $O$ maps to itself.

Let $P \in \Gamma$ such that $P \ne O$. Let $O$ and $P$ be joined by a straight line $OP$. Let a straight line $OP'$ be constructed such that:

$(1): \quad OP' = OP$

$(2): \quad \angle POP' = \alpha$, such that $OP \to OP'$ is in the anticlockwise direction.

Then:

$\map {r_\alpha} P = P'$
I want to know the expectation and the variance of the gamma PnL for different hedging frequencies. Let's say the return of the underlying follows a normal process: $dr = \sigma \, dW$, the market trades 24 hours, and there are no transaction costs. I assume I have a constant dollar $\Gamma$ position (for a move of $-r\%$ or $+r\%$, $r \in \mathbb{R}$, I have the same $\Gamma$), provided by a strip of options. I also assume there is no change in implied vol during the day, so the vega PnL is zero. The daily volatility is $s = \frac{\sigma}{\sqrt{365}}$, therefore $dr \sim \mathcal{N}(0,s^2)$. What is the gamma PnL if we hedge every second, every hour ... every time period $t$? If we hedge every time $t$ (to simplify, I normalize $t$ to a day; for instance, hedging every hour would be $t = 1/24$), then using the properties of Brownian motion I can consider that I have $1/t$ independent return processes $dr_{t} \sim \mathcal{N}(0, s^2 t)$. The gamma PnL process over a day is then: $PnL_{\Gamma}= \frac{1}{t}\,\Gamma\,\frac{dr_{t}^2}{2}$ and then follows a scaled $\chi^2_{1}$ distribution, with mean $E = \Gamma s^2 t/t = \Gamma s^2$ and variance $V = \Gamma^2 s^4 t^2/t^2 = \Gamma^2 s^4$. What I do not understand is that there is then no dependency of the gamma PnL on the hedging frequency. As we increase the hedging frequency we should have less variance and less return on the gamma, and the reciprocal should be true? Where is my math failing? I do not see it.
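For what it's worth, the setup can be simulated directly. A Monte Carlo sketch (Python; the dollar-gamma and daily-vol values below are made-up assumptions) that accumulates $\frac{\Gamma}{2}\sum_i dr_i^2$ over a day for two hedging frequencies:

```python
import random

random.seed(7)
gamma, s = 100.0, 0.01      # assumed dollar gamma and daily volatility

def daily_gamma_pnl(steps, n_days=20_000):
    """Gamma PnL (Gamma/2 * sum of squared per-period returns) over one day,
    hedging `steps` times per day."""
    sd = s / steps ** 0.5                     # per-period return stdev
    return [sum(0.5 * gamma * random.gauss(0.0, sd) ** 2 for _ in range(steps))
            for _ in range(n_days)]

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / len(xs)

m1, v1 = mean_var(daily_gamma_pnl(1))     # hedge once a day
m24, v24 = mean_var(daily_gamma_pnl(24))  # hedge hourly
# sample means agree with Gamma*s^2/2 for both frequencies,
# but the hourly variance comes out roughly 24x smaller
print(m1, m24, v1 / v24)
```

The sample output suggests where the dependence on frequency hides: the mean is frequency-free, the variance is not.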
Every geometry textbook has formulas for the circumference (\(C = 2 \pi r\)) and area (\(A = \pi r^2\)) of a circle. But where do these come from? How can we prove them? Well, the first is more a definition than a theorem: the number \(\pi\) is usually defined as the ratio of a circle’s circumference to its diameter: \(\pi = C/(2r)\). Armed with this, we can compute the area of a circle. Archimedes’ idea (in 260 BCE) was to approximate this area by looking at regular \(n\)-sided polygons drawn inside and outside the circle, as in the diagram below. Increasing \(n\) gives better and better approximations to the area. Look first at the inner polygon. Its perimeter is slightly less than the circle’s circumference, \(C = 2 \pi r\), and the height of each triangle is slightly less than \(r\). So when reassembled as shown, the triangles form a rectangle whose area is just under \(C/2\cdot r = \pi r^2\). Likewise, the outer polygon has area just larger than \(\pi r^2\). As \(n\) gets larger, these two bounds get closer and closer to \(\pi r^2\), which is therefore the circle’s area. Archimedes used this same idea to approximate the number \(\pi\). Not only was he working by hand, but the notion of “square root” was not yet understood well enough to compute with. Nevertheless, he was amazingly able to use 96-sided polygons to approximate the circle! His computation included impressive dexterity with fractions: for example, instead of being able to use \(\sqrt{3}\) directly, he had to use the (very close!) approximation \(\sqrt{3} > 265/153\). In the end, he obtained the bounds \( 3\frac{10}{71} < \pi < 3\frac{1}{7} \), which are accurate to within 0.0013, or about .04%. (In fact, he proved the slightly stronger but uglier bounds \(3\frac{1137}{8069} < \pi < 3\frac{1335}{9347}\). See this translation and exposition for more information on Archimedes’ methods.) These ideas can be pushed further. Focus on a circle with radius 1. 
The area of the regular \(n\)-sided polygon inscribed in this circle can be used as an approximation for the circle’s area, namely \(\pi\). This polygon has area \(A_n = n/2 \cdot \sin(360/n)\) (prove this!). What happens when we double the number of sides? The approximation changes by a factor of $$\frac{A_{2n}}{A_n} = \frac{2\sin(180/n)}{\sin(360/n)} = \frac{1}{\cos(180/n)}.$$ Starting from \(A_4 = 2\), we can use the above formula to compute \(A_8,A_{16},A_{32},\ldots\), and in the limit we find that $$\pi = \frac{2}{\cos(180/4)\cdot\cos(180/8)\cdot\cos(180/16)\cdots}.$$ Finally, recalling that \(\cos(180/4) = \cos(45) = \sqrt{\frac{1}{2}}\) and \(\cos(\theta/2) = \sqrt{\frac{1}{2}(1+\cos\theta)}\) (whenever \(\cos(\theta/2) \ge 0\)), we can rearrange this into the fun infinite product $$\frac{2}{\pi} = \sqrt{\frac{1}{2}} \cdot \sqrt{\frac{1}{2}+\frac{1}{2}\sqrt{\frac{1}{2}}} \cdot \sqrt{\frac{1}{2}+\frac{1}{2}\sqrt{\frac{1}{2}+\frac{1}{2}\sqrt{\frac{1}{2}}}} \cdots$$ (which I found at Mathworld). (It’s ironic that this formula for a circle uses so many square roots!)
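The nested-radical product converges very quickly; here is a short numerical check (Python) of both the polygon-doubling recurrence and the infinite product:

```python
import math

# polygon doubling: A_{2n} = A_n / cos(pi/n)   (180/n degrees = pi/n radians)
A, n = 2.0, 4                 # A_4 = 2, the square inscribed in the unit circle
for _ in range(30):
    A /= math.cos(math.pi / n)
    n *= 2

# the infinite product: 2/pi = sqrt(1/2) * sqrt(1/2 + 1/2*sqrt(1/2)) * ...
c, prod = math.sqrt(0.5), math.sqrt(0.5)
for _ in range(30):
    c = math.sqrt(0.5 * (1.0 + c))
    prod *= c

print(A, 2.0 / prod)          # both approach pi
```

Thirty doublings already pin down \(\pi\) far beyond Archimedes' 96-gon bounds.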
Let's say we have the following circuit: The generator in the circuit has a sinusoidal waveform $u_g(t)=\sin\omega t$. Other known values: $ L=0.25H\\ R=1\Omega \\ C=0.5F \\ \omega=1 \frac{rad}{s} \\ k=1$ Before the P2 switch is closed, we had steady state in the circuit; at the same time, switch P1 was in position 1. At one of the moments when the voltage on the capacitor is at its maximum, switch P2 closes and switch P1 goes to position 2. Determine the expression for the capacitor voltage after that moment. I know that I need to analyze this circuit in steady state before the switches change their positions in order to find initial values for the capacitors and inductors. Basically, what I need here is the expression for the current in the circuit and the voltage on $C$. When I consider this circuit in the Laplace domain, it is the same as it is here, since there is no energy in $C$ and $L$ before the first steady state. Since the generator is sinusoidal, we have $U_g(s)=\frac{\omega}{s^2+\omega^2}$; furthermore, if we look at the circuit, we have $U_g(s)=I(s)(Ls-kLs+\frac{1}{Cs} + R + Ls -kLs)$ Since $k=1$, this becomes: $U_g(s)=I(s)(R + \frac{1}{Cs} )$ From here, we have: $ I(s)=\frac{U_g(s)}{R + \frac{1}{Cs}} =\frac{\omega}{s^2+\omega^2}\frac{1}{R + \frac{1}{Cs}} $ and then, since $U_c(s)=\frac{I(s)}{Cs}$, we have $U_c(s)=\frac{\omega}{s^2+\omega^2}\frac{1}{Cs(R + \frac{1}{Cs})} =\frac{\omega}{s^2+\omega^2}\frac{1}{RCs + 1} $ Now, based on the values we have, these two expressions become: $I(s)=\frac{1}{s^2+1}\frac{1}{1 + \frac{2}{s}}=\frac{1}{s^2+1}\frac{s}{s + 2} $ $U_c(s)=\frac{1}{s^2+ 1}\frac{1}{\frac{s}{2} + 1} =\frac{1}{s^2+ 1}\frac{2}{s+2} $ Now, when I decompose these two expressions using partial fraction decomposition I end up with: $I(s)=\frac{2}{5} \frac{s}{s^2+1} + \frac{1}{5} \frac{1}{s^2+1} -\frac{2}{5} \frac{1}{s+2}$ $U_c(s)=-\frac{2}{5} \frac{s}{s^2+1} + \frac{4}{5} \frac{1}{s^2+1} +\frac{2}{5} \frac{1}{s+2}$ Which means that $u_c(t)=(-\frac{2}{5}\cos t + 
\frac{4}{5}\sin t +\frac{2}{5}e^{-2t})u(t)$ where $u(t)$ is the Heaviside step function. Now, how am I supposed to determine when this $u_c(t)$ attains its maximum value? I mean, the first-derivative method could be an option, but it seems too complicated in this particular example, due to the fact that all these functions can be expressed as exponentials with different arguments, so finding the maximum would be quite a messy job. Usually, with problems like this, I always ended up with a single sine (or cosine), and the maximum could be determined easily by inspection, so I am thinking that I maybe made a mistake somewhere. So I am wondering what I did wrong here. Any help appreciated!
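Regarding the maximum: the two sinusoidal terms can be combined into a single sinusoid of amplitude $\sqrt{(2/5)^2+(4/5)^2} = 2/\sqrt{5}$, after which only the decaying exponential perturbs the peak. A quick numerical sketch (Python; it takes the exponential term as $e^{-2t}$, since the pole at $s=-2$ gives a real exponential):

```python
import math

def u_c(t):
    # capacitor voltage from the partial-fraction decomposition, for t >= 0
    return -0.4 * math.cos(t) + 0.8 * math.sin(t) + 0.4 * math.exp(-2.0 * t)

# -0.4 cos t + 0.8 sin t = (2/sqrt(5)) sin(t - phi) with tan(phi) = 0.4/0.8
amplitude = math.sqrt(0.4 ** 2 + 0.8 ** 2)      # = 2/sqrt(5) ~ 0.894
ts = [i * 1e-4 for i in range(200_000)]          # grid over [0, 20)
t_max = max(ts, key=u_c)
print(t_max, u_c(t_max), amplitude)
```

The grid search lands near the first peak of the combined sinusoid, slightly shifted and lifted by the transient; after the transient dies out, the extrema are simply $\pm 2/\sqrt{5}$.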
I am installing SharePoint 2013. I ran the prerequisites executable and confirmed everything installed successfully. 
I then ran the setup.exe and installed to a different (not the system) drive, which also completed without errors, but I got this error at the step to create the DB: “cannot connect to master sql server at “ I made sure Windows Firewall was off.

I am trying to come up with a simple way of implementing the following design, but I always end up with something overly complicated. I am trying to design a database where companies can store products. Each product belongs to exactly one category, and depending on the category, products share some common attributes that I want to be mandatory. They can also have different attributes that are up to the company to define. As of now, and based on this question I asked last year, my idea is to have the following:

Company table: to store everything related to the company (one company can have several products)
Product table: to store everything related to the product (one product can be in one category, and can have several features)
Category table: just to save the name of the category
Attribute table: attributes that can be defined by the company for that product
Product Feature table: join table

With these tables, companies will be able to add their products to a category and add as many attributes as they want. Now, depending on the category, I want to make it mandatory to define a set of attributes. 
For example, if we are talking about groceries, and a category would be ‘Fruits’, I might want the company to always provide the following attributes: color, country of origin. If instead of fruits the category is ‘Cereals’, I might want to always be provided with the following: allergens, expiry date… I have thought of two different solutions, which are: Create a new table for each category that needs these mandatory attributes. This, of course, will be a problem, as if a new mandatory attribute is added to an existing category, all previous rows will have no value for it. Also, every time I add a new category I would have to define a new table for its attributes. The other option I have thought of is to simply manage this in the frontend of the application. Once a company adds a product belonging to one of those categories, prompt them to input the mandatory attributes and simply save them as a ‘normal’ attribute. I am more inclined towards using the second solution. Using the first one I could end up with a database with dozens of tables, but I would like some input to see if anyone can come up with a better solution.

Verbose Motivation for this Question Inspired by this paper about how the problem of counting unlabelled subtrees that are unique up to isomorphism is #P-complete, I was thinking about the problem for the specific case of caterpillar graphs. (A caterpillar graph $C$ becomes a path graph $P(C)$ by removing all of its leaves.) This led me to an interesting subproblem where, given an array $[l_0,l_1 \dots l_d]$, you need to compute: $$\sum_{i=0}^d \int \bigcup_{k=0}^{d-i} \prod_{j=k}^{k+i} [0,l_j]$$ Here, we are taking the Lebesgue measure of a union of $(i+1)$-dimensional intervals. (For each cartesian product, the 1st dimension is $[0,l_k]$, and the $m$th dimension is $[0,l_{j+m}]$.) This monstrous equation arises due to the partial order of possible sub-trees which include exactly $(i+1)$ vertices of $P(C)$. 
(In the actual problem, it's a little more complicated, because you have to account for over-counting due to the symmetry that occurs from reversing the order of your array, but for now I'll just concern myself with the cleaner subproblem.)

My actual questions

I'm curious about computing the Lebesgue measure of the union of $n$ $i$-dimensional intervals, each defined by a list of $i$ 2-tuples which represent a cartesian product along orthogonal axes. What's the complexity of measuring the union of $n$ $i$-dimensional intervals? Are there significant improvements in the case where the intervals have some obvious partial order? (Ex: each interval is a subset of $[0, +\infty]^i$ and has a corner at $[0]^i$.) Similarly, are there noteworthy improvements if we restrict ourselves to intervals whose corners all belong to $\mathbb{Z}^i$? Upon further consideration, I feel like the answer to the first question might be quite useful for the second question. However, I am still unclear exactly what the best way is to calculate the measure once we've identified the relevant intervals, and whether the $O(n\log n)$ sorting approach generalizes to higher dimensions. 
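As a baseline for the union-of-boxes measure question above, coordinate compression gives an exact algorithm with $O((2n)^d)$ cells (a sketch in Python; asymptotically far from optimal, but exact whenever the corners are exactly representable, e.g. integers):

```python
from itertools import product

def union_volume(boxes):
    """Measure of a union of axis-aligned boxes.
    Each box is a list of (lo, hi) pairs, one pair per axis.
    Coordinate compression: the boxes' endpoints split each axis into
    segments; a grid cell is either entirely inside some box or disjoint
    from all of them, so we just sum the volumes of the covered cells."""
    d = len(boxes[0])
    grids = [sorted({b[k][0] for b in boxes} | {b[k][1] for b in boxes})
             for k in range(d)]
    total = 0.0
    for idx in product(*[range(len(g) - 1) for g in grids]):
        lo = [grids[k][idx[k]] for k in range(d)]
        hi = [grids[k][idx[k] + 1] for k in range(d)]
        # the cell is covered iff it lies inside at least one box
        if any(all(b[k][0] <= lo[k] and hi[k] <= b[k][1] for k in range(d))
               for b in boxes):
            vol = 1.0
            for k in range(d):
                vol *= hi[k] - lo[k]
            total += vol
    return total

# two overlapping 2x2 squares: 4 + 4 - 1 = 7
print(union_volume([[(0, 2), (0, 2)], [(1, 3), (1, 3)]]))
```

Sweep-line methods bring the 2-D case down to $O(n \log n)$; in higher dimensions the known exact bounds are worse, which is exactly the gap the question asks about.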
I made a custom post type named Quiz with some meta fields. Can I get those meta fields in products?
Consider the following first order linear difference equation for $y$: $$y_{t+1} = \alpha y_{t} + \beta x_{t-n+1} ~~\forall t \ge n$$ For initial conditions, one could assume that $x_{i} \in \mathbb{R} ~~\forall i = 1, \ldots, n$ (except that they cannot all be zero). The other constraint is that $$y_{n} = \sum_{i=0}^{n-1} x_{n-i}$$ so that the $n$th value of $y$ in the sequence always equals the sum of the previous $n$ values of $x_{i}$. For example: $y_{n} = x_{n} + x_{n-1} + \ldots + x_{1}$, and the process starts at time $t = n$ because $y$ needs to have a value for it to start. So, for example, if $n=30$, then $y_{30}$ is the first observation for $y$, so the process starts at time $t = n = 30$. Note that the $x_{t}$ process starts at time $t = 1$, so we need the $x_{i} ~\forall i = 1,\ldots, 30$ before the $y$ process can start. Similarly, once $t$ is greater than or equal to $n$, we have $y_{n+1} = x_{n+1} + x_{n} + \ldots + x_{2}$ $y_{n+2} = x_{n+2} + x_{n+1} + \ldots + x_{3}$ $\vdots$ Given the assumptions and the constraint, is there a relation between $\alpha$ and $\beta$ which meets the requirements? I have no background in difference equations (I do have the book by Mann which talks about such a difference equation, but not with that constraint), so if someone knows where an explanation for this type of thing is given, it would be appreciated. Essentially, once $t = n$, then, starting from there, the sum constraint needs to hold and the difference equation has to hold also. I'm pretty sure that the answer is that $\beta = -\alpha$, so that the equation reduces to $$ y_{t+1} = \alpha (y_{t} - x_{t-n+1}) ~~\forall t \ge n$$ and then, in order to satisfy the sum constraint, $\alpha = (\frac{1}{2})^{1/n}$. But how to prove this is a different story.
So I have this sequence: $\dfrac{\pi}{3}-k,\dfrac{\pi}{3},\dfrac{\pi}{3}+k,\dfrac{2\pi}{3}-k,\dfrac{2\pi}{3},\dfrac{2\pi}{3}+k,\dfrac{3\pi}{3}-k,\dfrac{3\pi}{3},\dfrac{3\pi}{3}+k,\ldots$ I want to make a formula for this sequence. Here is what I made myself, for $n \ge 1$: $$\left\lceil\dfrac{n}{3}\right\rceil\cdot\dfrac{\pi}{3}+ \left[(n \bmod 3) - 1\right]\cdot k$$ But I think it's too complicated (too long) for this easy sequence. If it makes it easier, $n$ can start from $0$ or any other index as well. Things after $2\pi$ repeat, since it's used in polar coordinates, and $k$ is a constant: $k = \arctan\left(\dfrac{\sin(\pi/3)}{\cos(\pi/3)+2}\right)$
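For what it's worth, a 0-indexed variant that matches the listed terms is $a_n = \left(\left\lfloor n/3\right\rfloor + 1\right)\frac{\pi}{3} + \left((n \bmod 3) - 1\right)k$; a quick check (Python, an illustrative sketch rather than the only possible form):

```python
import math

k = math.atan(math.sin(math.pi / 3) / (math.cos(math.pi / 3) + 2))

def a(n):
    # 0-indexed: n = 0, 1, 2, ... gives pi/3 - k, pi/3, pi/3 + k, 2*pi/3 - k, ...
    return (n // 3 + 1) * math.pi / 3 + ((n % 3) - 1) * k

expected = [math.pi/3 - k, math.pi/3, math.pi/3 + k,
            2*math.pi/3 - k, 2*math.pi/3, 2*math.pi/3 + k,
            math.pi - k, math.pi, math.pi + k]
assert all(math.isclose(a(n), e) for n, e in enumerate(expected))
print("formula matches the first nine terms")
```

The structure (one floor-division term for the "base angle", one residue term for the $-k, 0, +k$ offset) seems hard to shorten much further, since the sequence genuinely interleaves two different step sizes.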
Suppose we have a random vector $\mathbf{X} = (X_1,\ldots,X_n)$ or sample with pdf or pmf $f(\mathbf{x};\theta)$, where $\theta \in \Theta$ is an unknown parameter (or vector of parameters) whose specification completely determines $f(\mathbf{x};\theta)$. As far as I understand, a statistic is just a random variable $T=T(\mathbf{X})$ that is a function (that does not depend on $\theta$) of $\mathbf{X}$. That's not to say the distribution of $T$ doesn't depend on $\theta$, of course; just that the function that relates $\mathbf{X}$ and $T$ does not. Now, usually from what I've seen, the statistic $T$ is defined to be a sufficient statistic if the distribution of $\mathbf{X}$ conditional on $T=t$ does not depend on $\theta$. Now, my issue with this definition (which, as far as I can tell, is not addressed anywhere I've looked) is that if the support of $T$ depends on $\theta$, then we can't really pick a $t$ in the support of $T$ and then claim that the conditional distribution does not depend on $\theta$, because that same conditional distribution may be ill-defined for some values of $\theta$ for which $t$ is not in the support of $T$, right? I kind of came up with my own definition, which I'm hoping someone could comment on to see if this is what's more formally meant by that usual definition: A statistic $T(\mathbf{X})$ is sufficient for $\theta$ if there exists a non-negative function $C:\mathbb{R}^n\times \mathbb{R} \to \mathbb{R}$ that does not depend on $\theta$ and such that $$p(\mathbf{x},t;\theta) = C(\mathbf{x},t) \cdot q(t;\theta)$$ for all $\mathbf{x} \in \mathbb{R}^n, t\in \mathbb{R}, \theta \in \Theta$, where $p$ and $q$ denote the joint pdf/pmf of $(\mathbf{X},T(\mathbf{X}))$ and the pdf/pmf of $T(\mathbf{X})$, respectively.
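To illustrate the standard definition in a case where supports cause no trouble: for i.i.d. Bernoulli(\(\theta\)) data with \(T=\sum_i X_i\), the conditional law of the sample given \(T=t\) is uniform over arrangements, whatever \(\theta\) is. A quick simulation sketch (Python):

```python
import random

random.seed(0)

def conditional_dist(theta, n=3, t=1, trials=200_000):
    """Empirical law of a Bernoulli(theta) sample of size n, conditioned on sum == t."""
    counts, kept = {}, 0
    for _ in range(trials):
        x = tuple(int(random.random() < theta) for _ in range(n))
        if sum(x) == t:
            counts[x] = counts.get(x, 0) + 1
            kept += 1
    return {x: c / kept for x, c in counts.items()}

d_low, d_high = conditional_dist(0.3), conditional_dist(0.7)
# both conditionals are ~uniform (1/3 each) over the three arrangements with one
# success, independent of theta -- the defining property of a sufficient statistic
print(d_low)
print(d_high)
```

The question's concern is about cases unlike this one, e.g. \(T=\max_i X_i\) for Uniform\((0,\theta)\), where the support of \(T\) itself moves with \(\theta\).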
We are given a directed weighted graph $\mathcal{G} = (V, A)$, with $A \subseteq V^2$. The weight function is $w_{\mathcal{G}} \colon \mathcal{G}.A \to \mathcal{P}(\mathfrak{R}_{>} \times \mathfrak{R}_{\geq} \times (\mathfrak{R}_> \cup \{ \infty \}) \times \mathfrak{R})$, where $\mathfrak{R}_{\ast} = \{ x \in \mathfrak{R} \colon x \ast 0 \}$. If $(K, r, n, t) \in \mathfrak{R}_{>} \times \mathfrak{R}_{\geq} \times (\mathfrak{R}_> \cup \{ \infty \}) \times \mathfrak{R}$, $K$ is the principal investment, $r$ is the annual interest rate, $n$ is the number of compounding periods per year (the value of $\infty$ is allowed, which denotes continuous compounding), and $t$ is the time point at which the loan was granted. Together, the four parameters comprise a contract. Note that the weight of an arc is not a single contract, but rather a set of contracts (generalizing a bit). For each arc $(u, v) \in \mathcal{G}.A$, $u$ is the creditor, $v$ is the debtor, and $w_{\mathcal{G}}(u, v)$ is the set of contracts between the two nodes. Equity function We need a function $e_{\mathcal{G}} \colon \mathcal{G}.V \times \mathfrak{R} \to \mathfrak{R}$: \begin{aligned} e_{\mathcal{G}}(u, \tau) &= \sum_{(u, v) \in \mathcal{G}.A} \Bigg( \sum_{(K, r, n, t) \in w_{\mathcal{G}}(u, v)} \mathfrak{C}_{\tau}(K, r, n, t)\Bigg) \\ &- \sum_{(v, u) \in \mathcal{G}.A} \Bigg( \sum_{(K, r, n, t) \in w_{\mathcal{G}}(v, u)} \mathfrak{C}_{\tau}(K, r, n, t) \Bigg), \end{aligned} where $$ \mathfrak{C}_{\tau}(K, r, n, t) = \begin{cases} K \bigg( 1 + \frac{r}{n} \bigg)^{\lfloor n (\tau - t) \rfloor} & \text{if } n \in \mathfrak{R}_> \\ K e^{r(\tau - t)} & \text{if } n = \infty. \end{cases} $$ (Above, $\mathfrak{C}_{\tau}(K, r, n, t)$ gives the value of a loan at time point $\tau$, taking the interest rate into account.) Since the nodes of the input graph model the parties involved in the financial system, we allow each of them to choose for each debt a time point at which the debt may be cut.
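The compound-value map $\mathfrak{C}_{\tau}$ above is straightforward to compute directly; here is a minimal sketch (the function name `contract_value` is my own, not from the post):

```python
import math

def contract_value(K, r, n, t, tau):
    """Value at time tau of a contract (K, r, n, t): principal K, annual
    rate r, n compounding periods per year (math.inf means continuous
    compounding), granted at time t."""
    if math.isinf(n):
        return K * math.exp(r * (tau - t))
    return K * (1 + r / n) ** math.floor(n * (tau - t))

# 1000 at 10% compounded annually, held 2 years: 1000 * 1.1**2 = 1210
print(contract_value(1000.0, 0.10, 1, 0.0, 2.0))
# the same contract under continuous compounding: 1000 * e**0.2
print(contract_value(1000.0, 0.10, math.inf, 0.0, 2.0))
```

Note the floor in the discrete case: interest is only credited at the end of each full compounding period, which matches the $\lfloor n(\tau - t)\rfloor$ exponent in the definition.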
We cannot guarantee that all debts may be cut only partially, though the algorithm will try to minimize the sum of debt cuts. Choosing debt cut moments Also, we need another function $\mathfrak{f}_{\mathcal{G}} \colon \mathcal{G}.V \times \mathfrak{R}_> \times \mathfrak{R}_{\geq} \times (\mathfrak{R}_> \cup \{ \infty \}) \times \mathfrak{R} \to \mathfrak{R}$ mapping each tuple of a debtor $v \in \mathcal{G}.V$ and its debt contract $c$ to a time point at which $v$ may cut the contract $c$. Equilibrium The loan graph $\mathcal{G}$ is said to be in equilibrium at time point $\tau$ if and only if $e_{\mathcal{G}}(u, \tau) = 0$ for all $u \in \mathcal{G}.V$. Applying the cut to a contract Whenever a party $u \in \mathcal{G}.V$ is ready to raise $C$ units of resources for the debt cut of the contract $\mathcal{k} = (K, r, n, t)$, $\mathcal{k}$ becomes $$ \mathfrak{C}_{\tau}(\mathfrak{C}_{\mathfrak{f}_{\mathcal{G}}(u, \mathcal{k})}(K, r, n, t) - C, r, n, \mathfrak{f}_{\mathcal{G}}(u, \mathcal{k})), $$ where $\tau \geq \mathfrak{f}_{\mathcal{G}}(u, \mathcal{k})$. The problem Given a loan graph $\mathcal{G}$, the time points $\mathfrak{f}_{\mathcal{G}}$, and the equilibrium time point $T_{\mathcal{G}}$, find a set of debt cuts for each contract such that the graph attains an equilibrium at time point $T_{\mathcal{G}}$ and the sum of debt cuts is minimized. Question I would not be surprised if this problem has already been studied, and I would love to read more about it. The problem is that I cannot find the paper/s. All in all, I did nothing more than define the problem and eventually come up with a solution, yet I would like to know more about it. Auxiliary reading Additional details $e_{\mathcal{G}}(u, \tau)$: takes all loans issued by the node $u$, evaluates their value at time point $\tau$, sums them up, after which the sum of debts is subtracted.
Basically, you can think of it as a "balance": everything other nodes owe you at time $\tau$ minus everything you owe to other nodes. Each arc is associated with at least one contract. You can think of it as allowing parallel arcs, each of which is associated with only one contract. Cutting a debt is simply making a payment that partially clears the debt. For example, I owe you 1000 USD and I make a cut of 200 USD in order to "slow down" the accumulation of compound interest.
Let $\mathbf{w}(t)$ be the trajectory of a moving charge. Let the observation event be $(\mathbf{r},t)$. The scalar potential is: $$\varphi = \frac{q}{4\pi\epsilon_0}\int \frac{\delta\left(\mathbf{r'} - \mathbf{w}\left(t - \frac{|\mathbf{r} - \mathbf{r'}|}{c}\right)\right)}{|\mathbf{r'}-\mathbf{r}|} \mathrm d^3\mathbf{r'}$$ It can be shown that at most only ONE event on the trajectory of the charge produces the potential at the observation event. This is the event $(\mathbf{w}(t_r),t_r)$, where $t_r$ is such that $|\mathbf{r}-\mathbf{w}(t_r)| = c(t-t_r)$. Because the delta function is 0 apart from at one point, it seems to make sense that $\mathbf{w}(t_r)$ must be the point that it picks out. Is it then legitimate to write the scalar potential as: $$\varphi = \frac{q}{4\pi\epsilon_0|\mathbf{r} - \mathbf{w}(t_r)|}\int \delta\left(\mathbf{r'} - \mathbf{w}\left(t - \frac{|\mathbf{r} - \mathbf{r'}|}{c}\right)\right) \mathrm d^3\mathbf{r'}\;?$$ If not, why not? And what is the best way to calculate the remaining delta function integral?
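For reference, the standard route through the remaining integral (the usual textbook retarded-potential computation, stated here as a sketch rather than a full answer) is the change-of-variables rule for a delta function of a composite argument:

```latex
% With g(\mathbf{r}') = \mathbf{r}' - \mathbf{w}(t - |\mathbf{r}-\mathbf{r}'|/c),
% the composite delta integrates to the inverse Jacobian at the retarded point:
\[
  \int \delta\bigl(g(\mathbf{r}')\bigr)\,\mathrm{d}^3\mathbf{r}'
  = \frac{1}{\bigl|\det\bigl(\partial g/\partial \mathbf{r}'\bigr)\bigr|_{\mathbf{r}'=\mathbf{w}(t_r)}}
  = \frac{1}{1 - \hat{\boldsymbol{n}}\cdot\mathbf{v}(t_r)/c},
  \qquad
  \hat{\boldsymbol{n}} = \frac{\mathbf{r}-\mathbf{w}(t_r)}{|\mathbf{r}-\mathbf{w}(t_r)|},
\]
% which, combined with the prefactor pulled out of the integral,
% gives the familiar Liénard–Wiechert form
\[
  \varphi = \frac{q}{4\pi\epsilon_0}\,
  \frac{1}{|\mathbf{r}-\mathbf{w}(t_r)|\,\bigl(1-\hat{\boldsymbol{n}}\cdot\mathbf{v}(t_r)/c\bigr)} .
\]
```

The point is that the delta's argument depends on $\mathbf{r}'$ both directly and through the retarded time, so the integral is the reciprocal of a Jacobian rather than simply $1$.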
In my lecture notes, the Boltzmann entropy form was motivated as a valid form for entropy partly because it was extensive. However, this hinges on the assumption that the statistical weight for a macrostate of two systems combined is their product. Why is this valid (for large systems at least), and when might it break down? I think that the explanation here is the same as in classical thermodynamics: the interaction between the two subsystems is a "surface effect". Suppose the two subsystems are cubic, of volume $L^3$, sharing an interface of area $\sim L^2$. If the interactions are of short range, $d$, the perturbation on either system (just from counting interactions) has a relative magnitude $\sim dL^2/L^3=d/L$. Of course, this has an effect on the number of states $\Omega(E)$ in both subsystems having energy $E_1$ or $E_2$ respectively, but bearing in mind that the logarithm of $\Omega$ is the important quantity, this will be small provided $d\ll L$. So, one can make the approximation that the systems are statistically independent. One can expect the argument to break down if the interactions are long-ranged (when extensivity of thermodynamic properties does actually come into question) and when the system is small (when one would not expect it to hold). Also, if the subsystems are chosen to have an unusually large interaction area: for example, the oppositely coloured lattice sites in a checkerboard colouring of Ising-like models. There are various other subtleties associated with the extensivity of the Boltzmann entropy, some of which can be found discussed on Physics SE, but I think this is the one you were worried about.
Terms sourced from: http://iupac.org/publications/pac/68/1/0149/ "A glossary of terms used in chemical kinetics, including reaction dynamics (IUPAC Recommendations 1996)", Laidler, K.J., Pure and Applied Chemistry 1996, 68(1), 149. activated complex, activation, activation energy, adiabatic, adiabatic transition-state theory, adiabatic treatments of reaction rates, angular distribution, Arrhenius equation, atom–molecule complex mechanism, attractive potential-energy surface, attractive–mixed–repulsive classification, Arrhenius A factor \(A\), cage effect, canonical rate constant, canonical variational transition-state theory, catalyst, catalytic coefficient, centrifugal barrier, chain branching, chain carrier, chain initiation, chain length \(\delta\), chain reaction, chain-propagating reaction, chain-termination reaction, channel, chaperon, chemical activation, chemical induction (coupling), chemiluminescence, col, collinear reaction, collision cross-section \(\sigma\), collision density \(Z_{\text{AA}}\), collision efficiency \(B_{\text{c}}\), collision frequency \(z_{\text{A}}(\text{A})\), collision number, collision theory, competition, complex mechanism, complex reaction, complex-mode reaction, composite mechanism, conventional transition-state theory, conventional true value, critical energy, coupling, crossed molecular beams, degenerate chain branching, detailed balancing (principle of), diabatic coupling, diffusion control, direct reaction, dividing surface, de-energization efficiency, degree of activation, degree of inhibition, density of states, elastic collision, elastic scattering, elementary reaction, encounter, endergonic reaction, endothermic reaction, energized species, energy of activation, enthalpy of activation \(\Delta ^{\ddagger}H^{\,\unicode{x26ac}}\), entrance channel, entropy of activation \(\Delta ^{\ddagger}S^{\,\unicode{x26ac}}\), equilibrium reaction, exergonic reaction, exit channel, exothermic reaction, extent of reaction \(\xi\), early-downhill surface, encounter control, geminate recombination
general acid–base catalysis, generalized transition-state theory, Gibbs energy diagram, Gibbs energy of activation \(\Delta ^{\ddagger}G^{\,\unicode{x26ac}}\), Gibbs energy profile, gradual (sudden) potential-energy surface, impact parameter \(b\), improved canonical variational transition-state theory, indirect reaction, induction period, inelastic scattering, information theory, inhibition, initial (final) state correlations, initiation, initiator, intermediate, intrinsic activation energy \(E_{\text{a,i}}\), isotope effect, Langmuir–Hinshelwood mechanism, Langmuir–Rideal (Rideal–Eley) mechanism, light-atom anomaly, line-of-centres model, London–Eyring–Polanyi (LEP) method, London–Eyring–Polanyi–Sato (LEPS) method, macroscopic diffusion control, macroscopic kinetics, Marcus–Coltrin path, Michaelis–Menten kinetics, Michaelis–Menten mechanism, microcanonical rate constant, microcanonical variational transition-state theory, microscopic diffusion control, microscopic kinetics, microscopic reversibility at equilibrium, minimum density of states criterion, mixed energy release, mixing control, modified Arrhenius equation, molecular beams, molecular dynamics, molecular kinetics, molecularity, partial microscopic diffusion control, phase-space theory, photochemical equivalence, positive feedback, potential-energy profile, potential-energy (reaction) surface, pre-equilibrium (prior equilibrium), pre-exponential factor \(A\), prior distribution \(P_{0}\), product, product state distribution, pseudo rate constant, rate of conversion \(\dot{\xi}\), rate of disappearance, rate of reaction \(\nu\), rate-controlling step, rate-determining step, reactant, reaction barrier, reaction coordinate, reaction cross-section \(\sigma_{\text{r}}\), reaction dynamics, reaction intermediate, reaction path degeneracy, reaction probability \(P_{\text{r}}\), reactive complex, reactive scattering, reagent, rebound reaction, relaxation, relaxation kinetics, repulsive potential-energy surface, retarder, Rice–Ramsperger–Kassel (RRK) theory, Rice–Ramsperger–Kassel–Marcus (RRKM) theory, rate of consumption, rate of formation, relaxation time, selectivity, separability assumption, specific acid–base catalysis, spectator-stripping reaction, state-to-state kinetics, stepwise reaction, steric factor, stoichiometric number \(\nu\), stoichiometry, stripping reaction, strong collision, sum of states \(P(\varepsilon)\), surprisal analysis, surprisal \(s\), symmetry number \(s\), steady state, temperature jump, third body, threshold energy \(E_{0}\), trajectory, transient phase (induction period), transition species, transition state theory, transmission coefficient, variational transition state theory, vibrationally adiabatic transition-state theory, volume of activation \(\Delta ^{\ddagger}V\)
Let us now take a closer look at the hex-fractal we sliced last week. Chopping a level 0, 1, 2, and 3 Menger sponge through our slanted plane gives the following: This suggests an iterative recipe to generate the hex-fractal. Any time we see a hexagon, chop it into six smaller hexagons and six triangles as illustrated below. Similarly, any time we see a triangle, chop it into a hexagon and three triangles like this: In the limit, each hexagon and triangle in the above image becomes a hex-fractal or a tri-fractal, respectively. The final hex-fractal looks something like this (click for larger image): Now we are in a position to answer last week’s question: how can we compute the Hausdorff dimension of the hex-fractal? Let d be its dimension. Like last week, our computation will proceed by trying to compute the “ d-dimensional volume” of our shape. So, start with a “large” hex-fractal and tri-fractal, each of side-length 1, and let their d-dimensional volumes be h and t respectively. [1] Break these into “small” hex-fractals and tri-fractals of side-length 1/3, so these have volumes \(h/3^d\) and \(t/3^d\) respectively (this is how “ d-dimensional stuff” scales). Since $$\begin{gather*}(\text{large hex}) = 6(\text{small hex})+6(\text{small tri}) \quad \text{and}\\ (\text{large tri}) = (\text{small hex})+3(\text{small tri}),\end{gather*}$$ we find that \(h=6h/3^d + 6t/3^d\) and \(t=h/3^d+3t/3^d\). Surprisingly, this is enough information to solve for the value of \(3^d\). [2] We find \(3^d = \frac{1}{2}(9+\sqrt{33})\), so $$d=\log_3\left(\frac{9+\sqrt{33}}{2}\right) = 1.8184\ldots,$$ as claimed last week. As a final thought, why did we choose to slice the Menger sponge on this plane? Why not any of the (infinitely many) others?
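The two linear equations above say that $3^d$ is an eigenvalue of the subdivision-count matrix, so the dimension can be double-checked numerically; a quick sketch (not from the original post):

```python
import math
import numpy as np

# Subdivision counts: a large hex -> 6 small hexes + 6 small tris;
# a large tri -> 1 small hex + 3 small tris.  Scaling by 1/3 divides
# d-volume by 3**d, so 3**d must be the largest eigenvalue of M.
M = np.array([[6.0, 6.0],
              [1.0, 3.0]])
lam = max(np.linalg.eigvals(M).real)  # = (9 + sqrt(33)) / 2
d = math.log(lam, 3)

assert math.isclose(lam, (9 + math.sqrt(33)) / 2)
print(round(d, 4))  # 1.8184
```

The positivity of h and t is what forces the larger of the two eigenvalues, matching footnote [2].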
Even if we only look at planes parallel to our chosen plane, a mesmerizing pattern emerges: More Information It takes a bit more work to turn the above computation of the hex-fractal’s dimension into a full proof, but there are a few ways to do it. Possible methods include mass distributions [3] or similarity graphs [4]. This diagonal slice through the Menger sponge has been proposed as an exhibit at the Museum of Math. Sebastien Perez Duarte seems to have been the first to slice a Menger sponge in this way (see his rendering), and his animated cross section inspired my animation above. Thanks for reading! Notes We’re assuming that the hex-fractal and tri-fractal have the same Hausdorff dimension. This is true, and it follows from the fact that a scaled version of each lives inside the other. [1] There are actually two solutions, but the fact that h and t are both positive rules one out. [2] Proposition 4.9 in: Kenneth Falconer. Fractal Geometry: Mathematical Foundations and Applications. John Wiley & Sons: New York, 1990. [3] Section 6.6 in: Gerald Edgar. Measure, Topology, and Fractal Geometry (Second Edition). Springer: 2008. [4]
(Sorry, I was asleep at that time but forgot to log out, hence the apparent lack of response.) Yes you can (since $k=\frac{2\pi}{\lambda}$). To convert from path difference to phase difference, multiply by $k$; see this PSE question for details: http://physics.stackexchange.com/questions/75882/what-is-the-difference-between-phase-difference-and-path-difference
The surreal numbers are sometimes introduced as a place where crazy expressions like $(\omega^2+5\omega-13)^{1/3-2/\omega}+\pi$ (to use the nLab's example) make sense. The problem is, there seem to be varying definitions of the exponential in the surreal numbers, and since I can't find any recent reference that covers them all, I have little idea whether they're actually the same or not. (When I say "exponential", I mean either $e^x$ or the general $a^x$ for $a>0$; obviously one can go back and forth between these so long as $e^x$ is indeed a bijection from surreals to positive surreals.) To wit: Harry Gonshor gives one definition in his "An Introduction to the Theory of Surreal Numbers". Gonshor mentions an earlier unpublished definition due to Martin Kruskal; so does Conway in the 2nd edition of "On Numbers and Games". Neither actually states this definition, but it is, strictly speaking, possible for someone who's never seen it to verify equivalence with it, because Conway mentions that it is inverse to a particular definition of the logarithm, which he does not explicitly state but gives enough information to deduce. Gonshor seems to suggest in his text that his definition is equivalent to Kruskal's unpublished one, but on the other hand never seems to explicitly say so. Norman Alling's "Foundations of Analysis over Surreal Number Fields" looks like it might contain another definition? I'm not too clear on what he's doing, honestly, though it looks like it's restricted to non-infinite surreals... Wikipedia's page [old version, this definition was removed from Wikipedia soon after asking this question] gives a totally uncited definition for $2^x$. I have no idea where this might be originally from. I suppose one could substitute in other surreals for 2 to generalize this? Or else one could take Wikipedia's definition and generalize it in the way one usually does when starting from $e^x$? (I should hope this agrees with definition 4!).
Note that the operation $x\mapsto \omega^x$ commonly used in the surreals is not related; though it's exponential in some sense, it's not surjective onto the positive surreals, and so definitions of a general exponential shouldn't attempt to agree with it. And of course definitions 1, 2, and 4/5 above are surjective onto the positive surreals. (Or Wikipedia claims #4/5 is, anyway.) Edit: To avoid confusion, in what follows, I'll write $\exp_\omega x$ instead of the usual $\omega^x$, and reserve the notation $\omega^x$ for whatever that happens to be in the notion of exponentiation under discussion. So, does anyone know to what extent these are actually equivalent? If they're not equivalent, is there agreement on which ones are the "right" definitions? (It seems like all of them have the right properties! And while it seems to be agreed that the idea behind Kruskal's definition is bad, that doesn't mean necessarily the definition itself is.) Or could anyone point me to any recent book which might clear all this up, or at least the source of Wikipedia's definition? (I had originally intended to ask other questions about surreal exponentiation before finding that I wasn't sure what it actually was. I am hoping that whatever references people can point me to will answer my other questions as well.) Slight update: Definition 4 doesn't seem to agree with definition 5 (nor definition 1, see below); it would seem that definition 4 would imply $3^\omega=\omega$, while definition 5 would imply $3^\omega>\omega$. This raises a problem: one could make more definitions by using definition 4 to define $a^x$ for some fixed $a$, and then generalizing it to $b^x$ for all $b$ via definition 5, and depending on your choice of starting $a$ -- whether $e$, 2, or something else -- you'd get different definitions of $b^x$ out. An entire proper class of distinct "exponentiation" operations!
Well, perhaps not, perhaps not all starting values of $a>1$ yield an onto function -- perhaps 2 is special and it's the only one that does, though that seems pretty unlikely, and barring that, this is pretty bad regardless. Also, definition 4 seems pretty suspect as the "right" definition for another reason: If we plug in two ordinals, it looks like it will agree with ordinary ordinal exponentiation. This both disagrees with Gonshor's definition (which would imply $\omega^\omega>\exp_\omega \omega$) and is suspect on its own, because we shouldn't expect to get one of the ordinary ordinal operations out of this (we use natural addition and multiplication in the surreals, not ordinary addition and multiplication). If indeed we get an ordinal operation out of this at all -- it would appear that by Gonshor's definition, $\omega^\omega$ would not even be an ordinal, instead being equal to $\exp_\omega \exp_\omega (1+1/\omega)$. Oops: Sorry, that shouldn't be ordinary exponentiation, but rather the analogue of it based on natural multiplication. Regardless, it still disagrees, and still smells bad.
Astrophysics > Solar and Stellar Astrophysics Title: Impact of Cosmic-Ray Feedback on Accretion and Chemistry in Circumstellar Disks (Submitted on 21 Aug 2019 (v1), last revised 10 Sep 2019 (this version, v2)) Abstract: We use the gas-grain chemistry code UCLCHEM to explore the impact of cosmic-ray feedback on the chemistry of circumstellar disks. We model the attenuation and energy losses of the cosmic rays as they propagate outwards from the star and also consider ionization due to stellar radiation and radionuclides. For accretion rates typical of young stars, $\dot M_* \sim 10^{-9}-10^{-6}\ M_\odot$ yr$^{-1}$, we show that cosmic rays accelerated by the stellar accretion shock produce a cosmic-ray ionization rate at the disk surface $\zeta \gtrsim 10^{-15}$ s$^{-1}$, at least an order of magnitude higher than the ionization rate associated with the Galactic cosmic-ray background. The incident cosmic-ray flux enhances the disk ionization at intermediate to high surface densities ($\Sigma > 10$ g cm$^{-2}$), particularly within 10 au of the star. We find the dominant ions are C$^+$, S$^+$ and Mg$^+$ in the disk surface layers, while the H$_3^+$ ion dominates at surface densities above 1.0 g cm$^{-2}$. We predict the radii and column densities at which the magneto-rotational instability (MRI) is active in T Tauri disks and show that ionization by cosmic-ray feedback extends the MRI-active region towards the disk mid-plane. However, the MRI is only active at the mid-plane of a minimum-mass solar nebula disk if cosmic rays propagate diffusively ($\zeta \propto r^{-1}$) away from the star. The relationship between accretion, which accelerates cosmic rays, the dense accretion columns, which attenuate cosmic rays, and the MRI, which facilitates accretion, creates a cosmic-ray feedback loop that mediates accretion and may produce luminosity variability.
Submission historyFrom: Stella Offner [view email] [v1]Wed, 21 Aug 2019 18:00:07 GMT (1618kb,D) [v2]Tue, 10 Sep 2019 15:31:28 GMT (1619kb,D)
Cryptology ePrint Archive: Report 2000/005 On Resilient Boolean Functions with Maximal Possible Nonlinearity Yuriy Tarannikov Abstract: It is proved that the maximal possible nonlinearity of an $n$-variable $m$-resilient Boolean function is $2^{n-1}-2^{m+1}$ for ${2n-7\over 3}\le m\le n-2$. This value can be achieved only for optimized functions (i.e. functions with algebraic degree $n-m-1$). For ${2n-7\over 3}\le m\le n-\log_2{n-2\over 3}-2$, a method is suggested to construct an $n$-variable $m$-resilient function with maximal possible nonlinearity $2^{n-1}-2^{m+1}$ such that each variable appears in the ANF of this function in some term of maximal possible length $n-m-1$. For $n\equiv 2\pmod 3$, $m={2n-7\over 3}$, a scheme of hardware implementation is given for such a function that requires approximately $2n$ EXOR gates and $(2/3)n$ AND gates. Category / Keywords: secret-key cryptography / boolean functions, stream ciphers, secret-key cryptography, implementation Date: received 10 Mar 2000 Contact author: yutaran at nw math msu su, taran@vertex inria msu ru Available format(s): Postscript (PS) | Compressed Postscript (PS.GZ) | BibTeX Citation Version: 20000312:204031 (All versions of this report) Short URL: ia.cr/2000/005 [ Cryptology ePrint archive ]
Math has a lot of symbols. Some might say too many symbols. So let's do some math with pictures. Let's have a paper, which we will draw on. To start, the paper is empty; we will say that is equivalent to \$\top\$ or \$\textit{true}\$. If we write other things on the paper, they will also be true. For example: Indicates that the claims \$P\$ and \$Q\$ are true. Now let us say that if we draw a circle around some statement, that statement is false. This represents logical not. For example: Indicates that \$P\$ is false and \$Q\$ is true. We can even place the circle around multiple sub-statements: Since the part inside the circle normally reads as \$P\text{ and }Q\$, by putting a circle around it, it means \$\text{not }(P\text{ and }Q)\$. We can even nest circles. This reads as \$\text{not }((\text{not }P)\text{ and }Q)\$. If we draw a circle with nothing in it, that represents \$\bot\$ or \$\textit{false}\$. Since empty space was true, the negation of true is false. Now, using this simple visual method, we can actually represent any statement in propositional logic. Proofs The next step after being able to represent statements is being able to prove them. For proofs we have 4 different rules that can be used to transform a graph. We always start with an empty sheet, which as we know is a vacuous truth, and then use these different rules to transform our empty sheet of paper into a theorem. Our first inference rule is Insertion. Insertion We will call the number of negations between a sub-graph and the top level its "depth". Insertion allows us to introduce any statement we wish at an odd depth. Here is an example of us performing insertion: Here we chose \$P\$, but we could just as well have chosen any statement we wanted. Erasure The next inference rule is Erasure. Erasure tells us that if we have a statement that is at an even depth, we can remove it entirely. Here is an example of erasure being applied: Here we erased the \$Q\$, because it was nested \$2\$ levels deep.
Even if we wanted to, we could not have erased \$P\$, because it is nested \$1\$ level deep. Double Cut Double Cut is an equivalence, which means, unlike the previous inferences, it can also be reversed. Double Cut tells us that we can draw two circles around any sub-graph, and if there are two circles around a sub-graph, we can remove them both. Here is an example of Double Cut being used: Here we use Double Cut on \$Q\$. Iteration Iteration is an equivalence as well.1 Its reverse is called Deiteration. If we have a statement and a cut on the same level, we can copy that statement inside of the cut. For example: Deiteration allows us to reverse an Iteration. A statement can be removed via Deiteration if there exists a copy of it at the next level up. This format of representation and proof is not of my own invention. It is a minor modification of a diagrammatic logic called Alpha Existential Graphs. If you want to read more on this, there is not a ton of literature, but the linked article is a good start. Task Your task will be to prove the following theorem: This, when translated into traditional logic symbolization, is \$((A\to(B\to A))\to(((\neg C\to(D\to\neg E))\to((C\to(D\to F))\to((E\to D)\to(E\to F))))\to G))\to(H\to G)\$. Also known as the Łukasiewicz-Tarski Axiom. It may seem involved, but existential graphs are very efficient when it comes to proof length. I selected this theorem because I do think it is an appropriate length for a fun and challenging puzzle. If you are having trouble with this one, I would recommend trying some more basic theorems first to get the hang of the system. A list of these can be found at the bottom of the post. This is proof-golf, so your score will be the total number of steps in your proof from start to finish. The goal is to minimize your score. Format The format for this challenge is flexible; you can submit answers in any format that is clearly readable, including hand-drawn or rendered formats.
However, for clarity I suggest the following simple format: We represent a cut with parentheses; whatever we are cutting is put inside of the parens. The empty cut would just be () for example. We represent atoms with just their letters. As an example, here is the goal statement in this format: (((A((B(A))))(((((C)((D((E)))))(((C((D(F))))(((E(D))((E(F))))))))(G))))((H(G)))) This format is nice because it is both human- and machine-readable, so including it in your post would be nice. If you want some nice(ish) diagrams, here is some code that converts this format to \$\LaTeX\$: As for your actual work, I recommend pencil and paper when working things out. I find that text just isn't as intuitive as paper when it comes to existential graphs. Example proof In this example proof we will prove the following theorem: Now this may seem alien to you at first, but if we translate it into traditional logic notation, we get \$(A\rightarrow B)\rightarrow(\neg B\rightarrow \neg A)\$, also known as the law of contrapositives. Proof: Practice Theorems Here are some simple theorems you can use to practice the system: Łukasiewicz's Second Axiom Meredith's Axiom 1: Most sources use a more sophisticated and powerful version of Iteration, but to keep this challenge simple I am using this version. They are functionally equivalent.
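As a sanity check on the parenthesis notation, here is a minimal sketch (not part of the challenge; the helper name `parse` is my own) that turns a string in this format into nested Python lists, one list per cut:

```python
def parse(s):
    """Parse the parenthesis notation: each cut '( ... )' becomes a
    nested list, each atom a one-character string."""
    stack = [[]]  # stack[0] collects the top-level (depth-0) items
    for ch in s:
        if ch == '(':
            cut = []
            stack[-1].append(cut)
            stack.append(cut)
        elif ch == ')':
            if len(stack) == 1:
                raise ValueError("unmatched ')'")
            stack.pop()
        elif not ch.isspace():
            stack[-1].append(ch)
    if len(stack) != 1:
        raise ValueError("unmatched '('")
    return stack[0]

# "not ((not P) and Q)" from the introduction:
print(parse("((P)Q)"))  # [[['P'], 'Q']]
```

The depth of an atom is then just its nesting level in the resulting structure, which makes checking the Insertion (odd depth) and Erasure (even depth) conditions mechanical.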
The outputs of the function [latex]f[/latex] are the inputs to [latex]{f}^{-1}[/latex], so the range of [latex]f[/latex] is also the domain of [latex]{f}^{-1}[/latex]. Likewise, because the inputs to [latex]f[/latex] are the outputs of [latex]{f}^{-1}[/latex], the domain of [latex]f[/latex] is the range of [latex]{f}^{-1}[/latex]. We can visualize the situation. When a function has no inverse function, it is possible to create a new function where that new function on a limited domain does have an inverse function. For example, the inverse of [latex]f\left(x\right)=\sqrt{x}[/latex] is [latex]{f}^{-1}\left(x\right)={x}^{2}[/latex], because a square “undoes” a square root; but the square is only the inverse of the square root on the domain [latex]\left[0,\infty \right)[/latex], since that is the range of [latex]f\left(x\right)=\sqrt{x}[/latex]. We can look at this problem from the other side, starting with the square (toolkit quadratic) function [latex]f\left(x\right)={x}^{2}[/latex]. If we want to construct an inverse to this function, we run into a problem, because for every given output of the quadratic function, there are two corresponding inputs (except when the input is 0). For example, the output 9 from the quadratic function corresponds to the inputs 3 and –3. But an output from a function is an input to its inverse; if this inverse input corresponds to more than one inverse output (input of the original function), then the “inverse” is not a function at all! To put it differently, the quadratic function is not a one-to-one function; it fails the horizontal line test, so it does not have an inverse function. In order for a function to have an inverse, it must be a one-to-one function. In many cases, if a function is not one-to-one, we can still restrict the function to a part of its domain on which it is one-to-one. 
For example, we can make a restricted version of the square function [latex]f\left(x\right)={x}^{2}[/latex] with its range limited to [latex]\left[0,\infty \right)[/latex], which is a one-to-one function (it passes the horizontal line test) and which has an inverse (the square-root function). If [latex]f\left(x\right)={\left(x - 1\right)}^{2}[/latex] on [latex]\left[1,\infty \right)[/latex], then the inverse function is [latex]{f}^{-1}\left(x\right)=\sqrt{x}+1[/latex]. The domain of [latex]f[/latex] = range of [latex]{f}^{-1}[/latex] = [latex]\left[1,\infty \right)[/latex]. The domain of [latex]{f}^{-1}[/latex] = range of [latex]f[/latex] = [latex]\left[0,\infty \right)[/latex]. Q & A Is it possible for a function to have more than one inverse? No. If two supposedly different functions, say, [latex]g[/latex] and [latex]h[/latex], both meet the definition of being inverses of another function [latex]f[/latex], then you can prove that [latex]g=h[/latex]. We have just seen that some functions only have inverses if we restrict the domain of the original function. In these cases, there may be more than one way to restrict the domain, leading to different inverses. However, on any one domain, the original function still has only one unique inverse. A General Note: Domain and Range of Inverse Functions The range of a function [latex]f\left(x\right)[/latex] is the domain of the inverse function [latex]{f}^{-1}\left(x\right)[/latex]. The domain of [latex]f\left(x\right)[/latex] is the range of [latex]{f}^{-1}\left(x\right)[/latex]. How To: Given a function, find the domain and range of its inverse. If the function is one-to-one, write the range of the original function as the domain of the inverse, and write the domain of the original function as the range of the inverse. If the domain of the original function needs to be restricted to make it one-to-one, then this restricted domain becomes the range of the inverse function. 
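The worked example above, $f(x)=(x-1)^2$ restricted to $[1,\infty)$ with inverse $f^{-1}(x)=\sqrt{x}+1$, can be checked numerically; a small sketch (function names are mine, not from the text):

```python
import math

def f(x):
    """Restricted square function, domain [1, inf)."""
    assert x >= 1
    return (x - 1) ** 2

def f_inv(y):
    """Its inverse, domain [0, inf) = range of f."""
    assert y >= 0
    return math.sqrt(y) + 1

# On the restricted domain, both compositions are the identity.
for x in [1.0, 1.5, 2.0, 10.0]:
    assert math.isclose(f_inv(f(x)), x)
for y in [0.0, 0.25, 4.0, 81.0]:
    assert math.isclose(f(f_inv(y)), y)
```

Without the domain restriction the first loop would fail for x < 1, because the square root always returns the non-negative branch; that is exactly the one-to-one issue the text describes.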
Example 4: Finding the Inverses of Toolkit Functions

Identify which of the toolkit functions besides the quadratic function are not one-to-one, and find a restricted domain on which each function is one-to-one, if any. The toolkit functions are reviewed below. We restrict the domain in such a fashion that the function assumes all y-values exactly once.

Constant: [latex]f\left(x\right)=c[/latex]
Identity: [latex]f\left(x\right)=x[/latex]
Quadratic: [latex]f\left(x\right)={x}^{2}[/latex]
Cubic: [latex]f\left(x\right)={x}^{3}[/latex]
Reciprocal: [latex]f\left(x\right)=\frac{1}{x}[/latex]
Reciprocal squared: [latex]f\left(x\right)=\frac{1}{{x}^{2}}[/latex]
Cube root: [latex]f\left(x\right)=\sqrt[3]{x}[/latex]
Square root: [latex]f\left(x\right)=\sqrt{x}[/latex]
Absolute value: [latex]f\left(x\right)=|x|[/latex]

Solution

The constant function is not one-to-one, and there is no domain (except a single point) on which it could be one-to-one, so the constant function has no meaningful inverse. The absolute value function can be restricted to the domain [latex]\left[0,\infty \right)[/latex], where it is equal to the identity function. The reciprocal-squared function can be restricted to the domain [latex]\left(0,\infty \right)[/latex].

Try It 4

The domain of function [latex]f[/latex] is [latex]\left(1,\infty \right)[/latex] and the range of function [latex]f[/latex] is [latex]\left(\mathrm{-\infty },-2\right)[/latex]. Find the domain and range of the inverse function.
Complete graph

Set

definiendum: $ \langle V,E,\psi\rangle \in \mathrm{it}(E,V) $
postulate: $ \langle V,E,\psi\rangle $ … simple graph
postulate: $ u\neq v\implies \exists !(e\in E).\ \psi(e)=\{u,v\} $

Discussion

In a complete (undirected) graph, every two distinct vertices are connected by exactly one edge. The simple-graph axiom $\{u\}\notin\mathrm{im}(\psi)$ says that there are no loops at a single vertex.

Theorems

When dealing with simple graphs, adding another vertex to a graph with $n$ vertices lets you add at most $n$ new edges. So a simple graph with $n$ vertices has at most $\sum_{k=1}^{n-1}k=\frac{1}{2}n(n-1)$ edges, and the complete graph attains this bound.

Reference

Parents

Subset of
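The edge count in the theorem can be checked directly: the sketch below (my own construction, not part of this page) builds the edge set of the complete graph on $n$ vertices as the set of all 2-element subsets and compares its size with $\frac{1}{2}n(n-1)$:

```python
from itertools import combinations

def complete_graph_edges(n):
    """Edges of K_n: every unordered pair of distinct vertices.
    Distinct endpoints rule out loops, matching the simple-graph axioms."""
    return [frozenset(pair) for pair in combinations(range(n), 2)]

# The number of edges equals sum_{k=1}^{n-1} k = n(n-1)/2 for every n.
for n in range(1, 8):
    assert len(complete_graph_edges(n)) == n * (n - 1) // 2
```

Using frozensets for the edges mirrors the postulate $\psi(e)=\{u,v\}$: an edge is the unordered pair of its endpoints.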
Volume 54, № 9, 2002

Ukr. Mat. Zh. - 2002. - 54, № 9. - pp. 1155-1162
For entire transcendental functions of finite generalized order, we obtain limit relations between the growth characteristic indicated above and sequences of their best polynomial approximations in certain Banach spaces (Hardy spaces, Bergman spaces, and spaces \(B\left( {p,q,{\lambda }} \right)\)).

Ukr. Mat. Zh. - 2002. - 54, № 9. - pp. 1163-1171
We study hereditary formations closed with respect to the operation of taking products of abnormal subgroups of finite solvable groups. We obtain a constructive description of solvable local hereditary formations of finite groups with the property indicated.

Ukr. Mat. Zh. - 2002. - 54, № 9. - pp. 1172-1189
We establish sufficient conditions for the λ-stability of the trivial solution of one quasilinear differential equation of the second order.

Ukr. Mat. Zh. - 2002. - 54, № 9. - pp. 1190-1199
We consider periodic components of structurally stable diffeomorphisms on two-dimensional manifolds. We study properties of these components and give a topological description of their boundaries.

Ukr. Mat. Zh. - 2002. - 54, № 9. - pp. 1200-1212
Assume that a function \(f \in C[-1, 1]\) changes its convexity at a finite collection \(Y := \{y_1, \ldots, y_s\}\) of \(s\) points \(y_i \in (-1, 1)\). For each \(n > N(Y)\), we construct an algebraic polynomial \(P_n\) of degree \(\le n\) that is coconvex with \(f\), i.e., it changes its convexity at the same points \(y_i\) as \(f\), and $$\left| {f\left( x \right) - P_n \left( x \right)} \right| \leqslant c{\omega }_{2} \left( {f,\frac{{\sqrt {1 - x^2 } }}{n}} \right), \quad x \in \left[ { - 1,1} \right],$$ where \(c\) is an absolute constant and \(\omega_2(f, t)\) is the second modulus of smoothness of \(f\); if \(s = 1\), then \(N(Y) = 1\). We also give some counterexamples showing that this estimate cannot be extended to the case of higher smoothness.

Ukr. Mat. Zh. - 2002. - 54, № 9. - pp. 1213-1219
We determine the exact values and asymptotic decompositions of upper bounds of approximations by biharmonic Poisson integrals on classes of periodic differentiable functions.

Ukr. Mat. Zh. - 2002. - 54, № 9. - pp. 1220-1236
We pose and solve an extremal problem of logarithmic potential theory that is dual to the main minimum problem in the theory of interior capacities of condensers but, in contrast to the latter, is solvable even in the case of a nonclosed condenser. Its solution is a natural generalization of the classical notion of the interior equilibrium measure of a set. A condenser is treated as a finite collection of signed sets such that the closures of sets with opposite signs are pairwise disjoint. We also prove several assertions on the continuity of extremals.

Ukr. Mat. Zh. - 2002. - 54, № 9. - pp. 1237-1249
By using the averaging method, we prove the solvability of boundary-value problems with parameters for nonlinear oscillating systems with pulse influence at fixed times. We also obtain estimates for the deviation of solutions of the averaged problem from solutions of the original problem.

Ukr. Mat. Zh. - 2002. - 54, № 9. - pp. 1250-1264
We propose a method for the construction of generalized solutions for some nondivergent partial differential systems using set-valued analogs of the generalized statement of the problem based on subdifferential calculus. We establish new sufficient conditions for the existence of solutions of a variational inequality with a set-valued operator under weakened coerciveness conditions. We consider examples of a weighted p-Laplacian in the Sobolev spaces \(W_p^1 \left( \Omega \right)\), p ≥ 2.

Ukr. Mat. Zh. - 2002. - 54, № 9. - pp. 1265-1275
We justify the averaging method for systems with delay described by both “slow” and “fast” variables. The results obtained are applied to the analysis of one problem in control theory.

Ukr. Mat. Zh. - 2002. - 54, № 9. - pp. 1276-1281
For infinite-order functions \(u\) subharmonic in \(\mathbb{C}\) with given restrictions on the Riesz masses of a disk of radius \(r \in (0, +\infty)\), we find majorants for the functions \(B(r, u) = \max\{|u(z)| : |z| \leqslant r\}\) and \(\breve{B}(r, u) = \sup\{|\breve{u}(z)| : |z| \leqslant r\}\), where \(\breve{u}\) is a function conjugate to \(u\).

Ukr. Mat. Zh. - 2002. - 54, № 9. - pp. 1282-1283
We show that Theorems 1 and 3 in A. G. Gritsai's paper “Monotonicity properties of solutions of systems of nonlinear differential equations,” published in the collection of works Approximate and Qualitative Methods in the Theory of Differential and Functional-Differential Equations (Institute of Mathematics, Ukrainian Academy of Sciences, Kiev, 1979), are not true in the formulation presented.

Ukr. Mat. Zh. - 2002. - 54, № 9. - pp. 1284-1288
The spectra of BPS states in F-theory on elliptically fibered fourfolds are investigated.

On the Possibility of Stabilization of Evolution Systems of Partial Differential Equations on $ℝ^n × [0, + ∞)$ Using One-Dimensional Feedback Controls
Ukr. Mat. Zh. - 2002. - 54, № 9. - pp. 1289-1296
We establish conditions for the stabilizability of evolution systems of partial differential equations on $ℝ^n × [0, + ∞)$ by one-dimensional feedback controls. To prove these conditions, we use the Fourier-transform method. We obtain estimates for semialgebraic functions on semialgebraic sets by using the Tarski–Seidenberg theorem and its corollaries. We also give examples of stabilizable and nonstabilizable systems.
First, an example which I know how to treat. Say we have the integral $$\int dx\, \frac{1}{x^2+a^2}.$$ Now we analytically continue the constant $a$ to the complex numbers: $a \rightarrow ia$. This gives $$\int dx\, \frac{1}{x^2-a^2},$$ which now has poles at $x = \pm a$. Say I want to avoid these poles during the integration (small half-circles into the complex plane around the singular points); then I have to take the half-circle at $x = -a$ into the lower half-plane (integration counter-clockwise), and the half-circle at $x = a$ into the upper half-plane (integration clockwise). As can be seen in this picture: >click<

Whether the integration along a half-circle is clockwise or counter-clockwise determines whether the extra sign in front of the residue part of the integral is minus or plus, respectively.

Now to my actual question. Imagine that, instead of the simple example above, we have the integral $$\int_{0}^{\frac{\pi}{2}} dx\, \frac{1}{1-f(a^2)\sin^2(x)},$$ where $f(a^2) = \frac{\text{polynomial in }a^2}{\text{other polynomial in }a^2}$ and $0 < f(a^2) < 1$ for all $a$. Since $0 < f(a^2) < 1$, there are no singularities in the integral. Now we perform the analytic continuation $a \rightarrow ia$. This gives $$\int_{0}^{\frac{\pi}{2}} dx\, \frac{1}{1-f(-a^2)\sin^2(x)},$$ and it is given that always $f(-a^2) > 1$. That causes a singularity to appear at $x = \arcsin\left(\frac{1}{\sqrt{f(-a^2)}}\right)$. (Before the analytic continuation there was no singularity, because $\arcsin(y)$ is defined only for $-1\leq y\leq 1$.) It is not hard to compute the residue at this singularity, but now it is less transparent whether we should avoid the pole by drawing the small half-circle into the upper or into the lower half-plane. This little detail determines the sign of the residue part of the integral (which I would like to separate into residue and principal-value parts).
How can I determine whether I should draw the half-circle in the lower or in the upper half-plane in the general case, or in this situation at least?
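One way to make the two parts of the split concrete numerically, whichever sign convention is chosen: the sketch below (my own illustration; I pick the specific value $f(-a^2) = k = 2$) uses SciPy's Cauchy-weight quadrature to compute the principal value of $\int_0^{\pi/2} dx/(1 - k\sin^2 x)$ for $k > 1$. The pole sits at $x_0 = \arcsin(1/\sqrt{k})$ with residue $-1/(2\sqrt{k-1})$, and the two half-circle choices differ from the principal value by a term $\pm i\pi$ times that residue:

```python
import math
from scipy.integrate import quad

k = 2.0                                  # stands in for f(-a^2) > 1
x0 = math.asin(1.0 / math.sqrt(k))       # pole of 1/(1 - k sin^2 x) in (0, pi/2)
res = -1.0 / (2.0 * math.sqrt(k - 1.0))  # residue = 1 / D'(x0), D = 1 - k sin^2 x

def g(x):
    """Smooth part of the integrand: (x - x0) / (1 - k sin^2 x).
    The singularity at x0 is removable; there g tends to the residue."""
    if abs(x - x0) < 1e-8:
        return res
    return (x - x0) / (1.0 - k * math.sin(x) ** 2)

# quad with weight='cauchy' computes the principal value of g(x)/(x - x0) dx.
pv, _ = quad(g, 0.0, math.pi / 2.0, weight='cauchy', wvar=x0)
print(pv)  # close to 0: the log divergences on the two sides of x0 cancel
# Each half-circle choice then adds +/- i*pi*res on top of this principal value.
```

For $k = 2$ the pole is at $x_0 = \pi/4$ and the principal value comes out (numerically) to zero, so in this toy case the entire answer is carried by the $\pm i\pi\,\mathrm{Res}$ term, which is exactly why the choice of half-plane matters.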
Now showing items 1-10 of 24

Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...

Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...

Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...

Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...

Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ... Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ... K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02) The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...