This is due to the mass-energy equivalence and a phenomenon called binding energy.
Forming a nucleus releases energy because the nucleons are falling into a potential energy well. Due to Einstein's mass energy equivalence this results in the mass of the new nucleus being less than that of the particles that formed it.
The binding energy of carbon-12 is quoted on Wikipedia as $\pu{92.162 MeV}$. Therefore we can estimate the mass defect of a carbon-12 atom, $\Delta m$, using $ E = (\Delta m)c^2$:
$$\Delta m = \frac{(\pu{92.162 \times 10^6 eV})(\pu{1.6022 \times 10^-19 J eV-1})}{(\pu{2.9979 \times 10^8 m s-1})^2} = \pu{1.6430 \times 10^-28 kg} = \pu{0.098943 u}$$
The difference between the mass of carbon-12 and the total mass of its constituent particles is $\pu{0.09894 u}$, so we can see that our calculation is a reasonable estimate of the mass defect. The slight difference is due to other more complicated factors that I have not taken into account, but it still illustrates the main reason for the mass defect.
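The estimate above can be reproduced numerically. A minimal sketch, using the rounded constants from the text plus the standard atomic-mass-unit conversion $\pu{1 u} \approx \pu{1.66054 \times 10^-27 kg}$ (an assumption not stated in the original):

```python
# Rough estimate of the carbon-12 mass defect from its binding energy,
# via E = (delta m) c^2, with the rounded constants used in the text.
E_B_MeV = 92.162        # binding energy of carbon-12 (MeV)
eV_to_J = 1.6022e-19    # joules per electron volt
c = 2.9979e8            # speed of light (m/s)
u_in_kg = 1.66054e-27   # atomic mass unit (kg), assumed conversion

delta_m_kg = E_B_MeV * 1e6 * eV_to_J / c**2
delta_m_u = delta_m_kg / u_in_kg
print(f"mass defect: {delta_m_kg:.4e} kg = {delta_m_u:.6f} u")
```

Running this reproduces the figures quoted above ($\pu{1.6430 \times 10^-28 kg}$, i.e. about $\pu{0.098943 u}$).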
|
So let's consider the following grammar:
$$ \begin{align*} S &\to 0 \mid 0A \\ A &\to 1 \end{align*} $$
Would the string "1" be accepted by the language, or must every derivation start from $S$?
No, $1$ will not be part of the language, as only those strings generated by starting from the start variable $S$ are part of the language.
What you have shown is technically not a grammar, only part of it. A grammar is formally defined as the tuple $(N, \Sigma, P, S)$, where $N$ is the set of nonterminal symbols, $\Sigma$ the set of terminal symbols, $P$ the set of productions, and $S \in N$ the start symbol.
You have only provided $P$, but to have a grammar, you also need $N$, $\Sigma$ and $S$.
$N$ and $\Sigma$ are often omitted when defining a grammar, because they are clear from $P$ (like in your example, where $N = \{ S, A \}$ and $\Sigma = \{ 0, 1 \}$).
Explicitly specifying the start symbol (referred to as $S$ in the formal definition above) can be omitted when there is a clear convention for the name of the start symbol; naming it $S$ as you do in your example is a common convention.
What this all means for your example is that if you assume that $S$ is the start symbol, then $1$ is not a member of the language. If you instead assumed that $A$ is the start symbol, then $1$ would be a member of the language. Such a grammar would be formally valid, but defining it like this wouldn't make sense from a human's point of view.
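The role of the start symbol can be illustrated with a small derivation sketch (the helper function and encoding are my own, purely for illustration): enumerating the terminal strings derivable from each choice of start symbol shows that "1" is in the language only if $A$ is taken as the start symbol.

```python
from collections import deque

def language(productions, start, max_len=5):
    """All terminal strings derivable from `start` (up to max_len symbols)."""
    results, seen = set(), set()
    queue = deque([start])
    while queue:
        form = queue.popleft()
        if form in seen or len(form) > max_len:
            continue
        seen.add(form)
        nonterminals = [c for c in form if c in productions]
        if not nonterminals:
            results.add(form)      # purely terminal string: in the language
            continue
        i = form.index(nonterminals[0])   # expand the leftmost nonterminal
        for rhs in productions[form[i]]:
            queue.append(form[:i] + rhs + form[i + 1:])
    return results

# S -> 0 | 0A,  A -> 1
P = {"S": ["0", "0A"], "A": ["1"]}
print(sorted(language(P, "S")))   # ['0', '01'] -- "1" is not derivable from S
print(sorted(language(P, "A")))   # ['1']       -- but it is derivable from A
```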
|
Note: The original answer has a flaw in "without loss of generality". The following one is based on Section 8.2.3 of the book "Computer Algorithms" (3rd edition) by Sara Baase and Allen Van Gelder.
The right part of the theorem is called the MST property.
MST Property: Let $T$ be any spanning tree. For any edge $e \notin T$, $e$ has the maximal weight on the cycle created by adding it to $T$.
In terms of the MST property, the theorem to prove can be restated as:
A spanning tree $T$ is an MST $\iff$ $T$ has the MST property.
We first prove the following lemma:
Lemma: If $T$ and $T'$ are spanning trees that have the MST property, then $w(T) = w(T')$.
Proof: Let $\Delta E = (E(T) \setminus E(T')) \cup (E(T') \setminus E(T))$. That is, $\Delta E$ is the set of edges that are in exactly one of $T$ and $T'$. Let $|\Delta E| = k$.
If $k = 0$, then $T = T'$ and we are done.
So let $k > 0$ and let $e$ be a minimum-weight edge in $\Delta E$. Assume that $e \in T \setminus T'$ (the case of $e \in T' \setminus T$ is symmetrical).
$T' + e$ creates a cycle $C_{T'}$. There must be an edge $e'$ on the cycle $C_{T'}$ with $e' \neq e$ and $e' \in T' \setminus T$ (otherwise $T$ would contain the whole cycle $C_{T'}$).
Since $T'$ has the MST property, $e$ has the maximum weight on the cycle $C_{T'}$. Thus, $w(e') \le w(e)$. On the other hand, $e$ by definition is a minimum edge in $\Delta E$. Thus, $w(e) \le w(e')$.
Therefore, $w(e) = w(e')$.
Add $e$ to $T'$, creating a cycle, then remove $e'$, leaving a spanning tree $T''$ with $w(T'') = w(T')$. Note that $T$ and $T''$ now differ by $k-2$ edges.
Repeating the procedure above, we eventually reach a spanning tree that differs from $T$ by $0$ edges, i.e., $T$ itself.
Therefore, $w(T) = \cdots = w(T'') = w(T')$. $\square$
Given the lemma above, we proceed to prove the "$\Leftarrow$" direction as follows:
Assume $T$ has the MST property. Let $T_m$ be any MST. By the "$\Rightarrow$" direction, $T_m$ has the MST property. By the lemma above, $w(T) = w(T_m)$. Thus, $T$ is an MST.
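The MST property is easy to check mechanically on small examples. A minimal sketch (the graph encoding and helper names are my own, for illustration): for every non-tree edge, find the cycle it closes via the unique tree path, and verify the non-tree edge has maximum weight on that cycle.

```python
def tree_path(tree, u, v):
    """Vertices on the unique u-v path in the tree (iterative DFS)."""
    adj = {}
    for a, b in tree:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    stack, parent = [u], {u: None}
    while stack:
        x = stack.pop()
        if x == v:
            break
        for y in adj[x]:
            if y not in parent:
                parent[y] = x
                stack.append(y)
    path, x = [], v
    while x is not None:          # walk back from v to u
        path.append(x)
        x = parent[x]
    return path

def has_mst_property(edges, tree):
    """True iff every non-tree edge is a max-weight edge on its cycle."""
    for (u, v), w in edges.items():
        if (u, v) in tree or (v, u) in tree:
            continue
        path = tree_path(tree, u, v)
        cycle = list(zip(path, path[1:]))
        if any(edges.get(e, edges.get((e[1], e[0]))) > w for e in cycle):
            return False
    return True

edges = {("a", "b"): 1, ("b", "c"): 2, ("a", "c"): 3, ("c", "d"): 1}
mst = {("a", "b"), ("b", "c"), ("c", "d")}          # an MST of this graph
print(has_mst_property(edges, mst))                  # True
print(has_mst_property(edges, {("a", "b"), ("a", "c"), ("c", "d")}))  # False
```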
|
I assume you are talking about the p-value of the estimated coefficient $\hat{\beta}_1$ (but the reasoning would be similar for $\hat{\beta}_0$).
The theory of linear regression tells us that, if the necessary conditions are fulfilled, we know the distribution of that estimator: it is normal, its mean equals the ''true'' (but unknown) $\beta_1$, and we can estimate its standard deviation $\sigma_{\hat{\beta}_1}$. I.e. $\hat{\beta}_1 \sim N(\beta_1, \sigma_{\hat{\beta}_1})$.
If you want to ''demonstrate'' (see What follows if we fail to reject the null hypothesis? for more detail) that the true $\beta_1$ is non-zero, then you assume the opposite is true, i.e. $H_0: \beta_1=0$.
Then, by the above, you know that if $H_0$ is true, then $\hat{\beta}_1 \sim N(\beta_1=0, \sigma_{\hat{\beta}_1})$.
In your regression result you observe a value for $\hat{\beta}_1$ and you can compute its p-value. If that p-value is smaller than the significance level that you decided on (e.g. 5%), then you reject $H_0$ and consider $H_1$ as ''proven''.
In your case the ''true'' value is $\beta_1=0.5$, so $H_0$ is in fact false, and you would expect the p-values to be below 0.05.
However, if you look at the theory on hypothesis testing, it defines ''type-II'' errors, i.e. accepting $H_0$ when it is false. So in some cases you may accept $H_0$ even though it is false, and thus you may see p-values above 0.05 even though $H_0$ is false.
Therefore, even if in your true model $\beta_1=0.5$, it can happen that you accept $H_0: \beta_1=0$, i.e. that you make a type-II error.
Of course you want to minimize the probability of making such type-II errors, where you accept that $H_0: \beta_1=0$ holds while in reality $\beta_1=0.5$.
The size of the type-II error is linked to the power of your test. Minimizing the type-II error means maximising the power of the test.
You can simulate the type-II error as in the R-code below:
Note that:
if you take $\beta_1$ further from the value under $H_0$ (zero), then the type-II error decreases (execute the R code with e.g. beta_1=2), which means that the power increases. If you set beta_1 equal to the value under $H_0$, then you find an acceptance probability of $1-\alpha$.
R-code:
x <- rnorm(100, 5, 1)    # fixed design
beta_0 <- 2.5            # true intercept
beta_1 <- 0.5            # true slope (non-zero, so H0 is false)
nIter <- 10000
alpha <- 0.05
accept.h0 <- 0
for (i in 1:nIter) {
  e <- rnorm(100, 0, 3)                        # noise
  y <- beta_0 + beta_1 * x + e
  m1 <- lm(y ~ x)
  p.value <- summary(m1)$coefficients["x", 4]  # p-value on the slope
  if (p.value > alpha) accept.h0 <- accept.h0 + 1
}
cat(paste("type II error probability: ", accept.h0 / nIter))
|
This article provides answers to the following questions, among others:
What does Pascal’s law state? What is the difference between pneumatics and hydraulics? How does a hydraulic jack work? What are the two principles used in a jack to increase the force? How is the hydraulic fluid prevented from flowing back from the working cylinder into the pump cylinder?
Pascal’s law
In the article Pressure it has already been explained that pressure in liquids (or gases) is evenly distributed in all directions. If, for example, a certain pressure is generated at one point within a liquid, then the same pressure will be present at every other point in the liquid (neglecting the hydrostatic pressure). This is often referred to as Pascal’s law or Pascal’s principle.
Pascal’s law describes the uniform distribution of pressure in a liquid (neglecting the hydrostatic pressure)!
In some literature, Pascal’s law is stated somewhat more generally and then takes hydrostatic pressure into account. In this more general sense, Pascal’s law states that the pressure at a certain depth \(h\) in a liquid results from the sum of the pressure at the liquid surface \(p_0\) and the hydrostatic pressure \(p_h\):
\begin{align}
&p(h) = p_0 + p_h~~~~~\text{and} ~~~p_h= \rho g h \\[5px] &\boxed{p(h) = p_0 + \rho g h} ~~~~~\text{Pascal’s law} \\[5px] \end{align}
For liquids in an open container, the pressure at the liquid surface corresponds to the ambient pressure (“atmospheric pressure”). If the liquid is not very deep, the hydrostatic pressure can usually be neglected compared to the larger ambient pressure at the surface. If, for example, the depth is in the order of a few centimetres, then the hydrostatic pressure is about a thousand times lower than the atmospheric pressure. In this case, it immediately becomes apparent that the same pressure exists at every depth, i.e. at every point in the liquid:
\begin{align}
\require{cancel} & \text{with} ~~~~~ p_0 \gg p_h ~~~~~\text{applies:} \\[5px] &p(h) = p_0 + \bcancel{\rho g h} \approx p_0 \\[5px] &\boxed{p(h) = p_0} ~~~~~\text{valid under negligence of the hydrostatic pressure} \\[5px] \end{align}
When the hydrostatic pressure is neglected, the pressure in a liquid corresponds to the pressure that the environment exerts on the liquid surface. The pressure in a liquid can therefore be changed by increasing the pressure on the liquid surface. However, since the surrounding air pressure cannot be changed, the liquid must first be enclosed in a container. Through an opening in the vessel wall, the pressure on the liquid surface can now be increased at will by means of a piston, thus imposing a certain pressure on the liquid.
This simple principle is used, for example, in syringes. The liquid to be applied is contained in a cylindrical housing (barrel), on which pressure is exerted with a piston. The resulting pressure causes the liquid to be pressed out of the orifice.
Hydraulic jack
Hydraulics
Another use of Pascal’s principle is the hydraulic jack, or hydraulics in general. Hydraulics uses a fluid to transmit energy. In addition to electrics (power transmission with electric current) and pneumatics (power transmission with air), hydraulics is of great importance in mechanical engineering.
While pneumatics refers to the power transmission with compressible gases, hydraulics refers to the power transmission with incompressible fluids!
Special oils, so-called hydraulic fluids, are used in hydraulics. Compared to water, which could only be used in a temperature range between 0 °C and 100 °C, hydraulic fluids can be used over larger temperature ranges. In addition, the hydraulic fluids not only protect metallic components against corrosion, but also provide excellent lubrication of the moving parts.
Hydraulic principle
The figure below shows a hydraulic jack. A lever is used to pressurise a hydraulic fluid by a piston, thereby moving another piston upwards with great force. With such a hydraulic jack, loads of up to several tons can be lifted. The increase in force is in part due to the Pascal principle.
The figure below shows the simplified design of a hydraulic bottle jack, which illustrates the principle of operation. The hydraulic fluid is in a closed system. The housing in which the fluid is contained is provided with two movable pistons. The oil is put under pressure by the smaller piston (called the pump piston or pump plunger).
Using the applied force \(F_1\) and the surface area of the piston \(A_1\), the exerted pressure \(p\) can be determined relatively easily from the quotient of force and area:
\begin{align}
\label{p} &p = \frac{F_1}{A_1} \\[5px] \end{align}
According to Pascal’s law, this pressure can be found at any point of the liquid (note that due to the large pressures applied and the relatively small dimensions of the housing, the hydrostatic pressure can be neglected anyway). The pressure generated by the small piston therefore also acts on the second piston, called the ram (working piston). However, since this piston has a larger surface area \(A_2\), the pressure there leads to a larger force \(F_2\):
\begin{align}
\label{f} &F_2 = p \cdot A_2 \\[5px] \end{align}
If equation (\ref{p}) is used in equation (\ref{f}), the amplification of the force is directly dependent on the ratio of the piston areas:
\begin{align}
&F_2 = p \cdot A_2 = \frac{F_1}{A_1} \cdot A_2 = F_1 \cdot \frac{A_2}{A_1} \\[5px] &\boxed{F_2 = F_1 \cdot \frac{A_2}{A_1}}\\[5px] \end{align}
If, for example, the area of the working piston is four times as large as that of the pump piston (i.e. the diameter of the working piston is twice as large), the force applied is quadrupled. This does not contradict the law of energy conservation, because the fourfold piston area means that the ram will only lift by a quarter of the pump stroke.
This can also be seen directly: the pump piston displaces a certain volume of hydraulic fluid during its downward movement. Since liquids are incompressible and therefore cannot be compressed, the displaced fluid lifts the working piston by exactly this volume. However, the ram has an area four times as large, so this volume is already reached with a quarter of the original stroke. As the force is increased, the lifting height is reduced accordingly.
Mechanical principle
In fact, this hydraulic amplification of the force is only one of two principles applied in a jack. The much greater force amplification is due to mechanical leverage, usually via a second-class lever. According to the law of the lever, the mechanical amplification of the force results from the ratio of the lever arms, measured from the pivot point to the pump piston and from the pivot point to the handle. If the lever arm \(a\) from the pivot point to the handle, for example, is 10 times as large as the distance \(b\) from the pivot point to the pump piston, then the force will be increased by a factor of 10.
If the above-mentioned figures are used as a typical example for a car jack, a mechanical amplification by a factor of 10 is obtained according to the law of the lever and a hydraulic amplification by a factor of 4 according to Pascal’s law. In this case, a total amplification by a factor of 40 is obtained. Thus, an object weighing 400 kg can be lifted with an effort corresponding to 10 kg.
Construction of a hydraulic jack
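The two amplification stages described above can be sketched in a few lines (the function name is my own; the numbers are the illustrative values from the text):

```python
def jack_output_force(F_hand, lever_ratio, area_ratio):
    """Force at the ram for a given hand force (same units as F_hand)."""
    F_pump = F_hand * lever_ratio   # law of the lever: a/b amplification
    return F_pump * area_ratio      # Pascal's law: F2 = F1 * A2/A1

# Values from the text: lever ratio a/b = 10, piston-area ratio A2/A1 = 4,
# giving an overall amplification of factor 40.
F = jack_output_force(F_hand=100.0, lever_ratio=10, area_ratio=4)
print(F)   # 4000.0
```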
The figure below shows the structure and operating principle of a real hydraulic bottle jack. The hydraulic fluid is located in a reservoir between two cylinders: an outer cylinder (oil tank), which forms the housing wall, and an inner cylinder in which the working piston (ram) slides. The hydraulic fluid inside this reservoir is not pressurized all the time! During the upward movement of the pump piston (pump plunger), the hydraulic oil is sucked into the pump cylinder through an inlet passage.
The oil is then pressurized during the downward movement of the pump piston and flows through another passage into the working cylinder, where it lifts the ram. Check valves in the form of steel balls ensure that the jack can be moved continuously upwards and that the hydraulic oil is neither pumped back from the working cylinder into the pump cylinder nor pressed back into the reservoir. When the pump piston is lowered, the ball seals the way back into the reservoir. At the same time, the valve ball in the working cylinder is lifted by the pressure and the hydraulic fluid can flow in.
After the inflow, the ball in the working cylinder falls down again due to gravity. The high pressure in the working cylinder presses the ball firmly into the valve seat, thus preventing the hydraulic oil from flowing back into the pump cylinder. The pumping process can now start again from the beginning, as the ball in the pump cylinder is lifted by the suction and hydraulic fluid can be sucked into the pump cylinder. Note that due to the check valves, the hydraulic oil in the working cylinder is kept permanently under pressure, while the oil in the reservoir always remains unpressurized.
To lower the ram again, another passage is opened, which connects the working cylinder directly to the reservoir. During lifting, this passage is sealed by a steel ball that is pressed firmly into the valve seat with a screw. If this release valve is unscrewed, the ball releases the passage and the hydraulic oil is pushed back into the reservoir by the weight of the ram.
In order to protect the jack from damage in the event of overload, the release valve is designed as a safety valve and is usually provided with a spring. If the pressure is too high, the spring is pushed back and the hydraulic oil can flow directly back into the reservoir without an unacceptably high pressure building up in the working cylinder.
|
For every real number $t$,
$$\left|\sin t-t\right|\leqslant\tfrac16|t|^3\tag{$\ast$}$$
hence $$f(x,y)=\frac{\sin(xy)}{xy}$$ is such that, for every $(x,y)$ such that $xy\ne0$ (that is, where $f(x,y)$ exists), $$|f(x,y)-1|\leqslant\tfrac16|xy|^2$$ which should be enough to conclude that, indeed,
$$\lim_{(x,y)\to(0,0)}f(x,y)=1$$
To prove $(\ast)$, start from $$|\cos t|\leqslant1\tag{$\circ$}$$ and integrate thrice, that is, note that this upper bound yields $$|\sin t|=\left|\int_0^t\cos s\,ds\right|\leqslant|t|$$ which implies $$|\cos t-1|=\left|\int_0^t\sin s\,ds\right|\leqslant\tfrac12t^2$$ which implies $$|\sin t-t|=\left|\int_0^t(\cos s-1)\,ds\right|\leqslant\tfrac16|t|^3$$ as desired.
To conclude, $(\circ)$ is the only prerequisite we used, and $(\circ)$ itself follows readily, for example, from the fact that $$\cos^2t+\sin^2t=1$$
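For readers who want a numerical sanity check (not a proof, of course), the bound $(\ast)$ and the resulting estimate for $f$ can be sampled as follows:

```python
# Numerical sanity check of |sin t - t| <= |t|^3 / 6 and of
# |f(x, y) - 1| <= (xy)^2 / 6 for f(x, y) = sin(xy)/(xy).
import math

for k in range(1, 2001):
    t = k / 1000.0                       # sample t in (0, 2]
    assert abs(math.sin(t) - t) <= abs(t) ** 3 / 6

f = lambda x, y: math.sin(x * y) / (x * y)
for s in (1e-1, 1e-2, 1e-3):
    x, y = s, -s                         # approach (0, 0) along a line
    assert abs(f(x, y) - 1) <= (x * y) ** 2 / 6 + 1e-15
print("bounds hold on all sampled points")
```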
|
Question: As we know, (1) the macroscopic spatial dimension of our universe is 3, and (2) gravity attracts massive objects together, and the gravitational force is isotropic, without directional preference. Why do we have spiral, 2D plane-like galaxies, instead of spherical or elliptical galaxies? Input: Gravity is (at least, seems to be) isotropic from its force law (Newtonian gravity). It should show no directional preference, as seen from the form of the force vector $\vec{F}=\frac{G\,M(\vec{r}_1)\,m(\vec{r}_2)}{|\vec{r}_1-\vec{r}_2|^2} \hat{r}_{12}$. Einstein gravity also shows no directional dependence, at least microscopically.
If gravity attracts massive objects together isotropically, and the macroscopic space is 3-dimensional, it seems natural for massive objects to gather into a spherical shape. For example, globular clusters (GCs) are roughly spherical groupings of stars, as shown in the Wiki picture:
However, my impression is that, even though we have observed some more spherical, ball-like elliptical galaxies, it is more common to find more-planar spiral galaxies such as our Milky Way. (Is this statement correct? Let me know if I am wrong.)
Also, have a look at this more-planar spiral galaxy, NGC 4414:
Is there some physics or math theory that explains why galaxies turn out to be planar-like (or spiral-like) instead of spherical-like?
P.S. Other than the classical stability of a 2D plane perpendicular to a classical angular momentum, is there an interpretation in terms of a quantum theory of vortices in a macroscopic manner (just my personal speculation)?
Thank you for your comments/answers!
|
Scattering for a mass critical NLS system below the ground state with and without mass-resonance condition
1. Department of Mathematics, Graduate School of Science, Osaka University, Toyonaka, Osaka 560-0043, Japan
2. Research Institute for Mathematical Sciences, Kyoto University, Kyoto 606-8502, Japan
3. Department of Mathematics, Graduate School of Science, Tokyo University of Science, 1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan
We consider the quadratic nonlinear Schrödinger system
$ \left\{ \begin{split} i\partial_t u + \;\; \Delta u & = \bar{u}v,\\ i\partial_t v +\kappa \Delta v & = u^2, \end{split} \right. \qquad (t,x)\in \mathbb{R}\times \mathbb{R}^4, $
where the unknown $ (u,v) $ is $ \mathbb{C}^2 $-valued and $ \kappa >0 $ is a constant; the case $ \kappa = 1/2 $ is called the mass-resonance condition. Scattering of solutions $ (u,v) $ is obtained below the ground state, i.e., under the condition $ M(u,v)<M(\phi ,\psi) $, where $ M(u,v) $ denotes the mass and $ (\phi ,\psi) $ is the ground state.
Keywords: Quadratic NLS system, mass critical, scattering, mass-resonance, concentration compactness.
Mathematics Subject Classification: Primary: 35Q55; Secondary: 35P25.
Citation: Takahisa Inui, Nobu Kishimoto, Kuranosuke Nishimura. Scattering for a mass critical NLS system below the ground state with and without mass-resonance condition. Discrete & Continuous Dynamical Systems - A, 2019, 39 (11) : 6299-6353. doi: 10.3934/dcds.2019275
|
Hint. If $a$ and $b$ are large enough, then $ab-a-b+1>\frac{ab}2$. Indeed, the inequality is equivalent to $(a-2)(b-2)>2$, so it suffices that $a,b\geq3$ and $\{a,b\}\neq\{3,4\}$.
Prime powers are certainly solutions, as they don't have any pair of coprime divisors $>1$. Now suppose $n$ has two nontrivial coprime divisors whose product is $n$. By the above, there are two cases to consider:
Case 1: at least one of them is $2$, which implies $n=2p^k$ for some odd prime $p$. We need $p^k-1\mid n$; since $\gcd(p^k-1,p^k)=1$, this gives $p^k-1\mid2$ and hence $p^k=3$, so $n=6$.
Case 2: $n=3\cdot4=12$. It's easy to check that this satisfies the property.
Summarizing: $n=p^k$ for some prime $p$ and an integer $k\geq0$, or $n=6$ or $n=12$.
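Assuming the property in question is that $ab-a-b+1=(a-1)(b-1)$ divides $n$ for every pair of coprime divisors $a,b>1$ of $n$ (the quantity used in the hint; this reconstruction of the problem statement is my own), the conclusion can be checked by brute force:

```python
# Brute-force check: n satisfies the (assumed) property iff
# (a-1)(b-1) | n for all coprime divisors a, b > 1 of n.
from math import gcd

def satisfies(n):
    divs = [d for d in range(2, n + 1) if n % d == 0]
    return all(n % ((a - 1) * (b - 1)) == 0
               for a in divs for b in divs if a < b and gcd(a, b) == 1)

def is_prime_power(n):
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return n == 1
    return True   # n == 1, i.e. p^0

solutions = [n for n in range(1, 200) if satisfies(n)]
expected = [n for n in range(1, 200) if is_prime_power(n) or n in (6, 12)]
print(solutions == expected)   # True
```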
|
A spinor which belongs to a representation of a group $G=SO(p,q)$ is a section of a product bundle $S(M)\otimes E$, where $S(M)$ is a spin bundle over a four-dimensional orientable and compact manifold $M$ and $E$ is an associated vector bundle of $P(M,G)$ in an appropriate representation. A Dirac operator $D: \Delta^+ \otimes E\rightarrow \Delta^- \otimes E$ in this case is
$D=\gamma^\mu(\partial_\mu+A_\mu)$
where $A_\mu$ is a $SO(p,q)$ connection on $E$.
With this non-compact special orthogonal Lie group, can I always define a topological index for the above Dirac operator?
$ind (D)= -\frac{1}{8 \pi^{2}}\int F\wedge F$
with $F=dA+A\wedge A$ ?
Is the Atiyah–Singer index theorem well defined in this case?
|
While the language of ZFC set theory does not admit classes, this is not the case for NBG set theory. But the language of NBG does not allow quantification over class symbols, and it is known that NBG is conservative over ZFC for sets, meaning that every theorem concerning only sets provable in NBG is also provable in ZFC. But Kelley–Morse (KM) set theory allows quantification over class symbols, and is known not to be conservative over ZFC for sets. Indeed, the language of KM admits more formulas for classes, so probably more classes, and among them more sets. My question is: "What are examples of theorems about sets provable in KM but not in ZFC set theory?" Gérard Lang
KM proves that for any class $X$, there are class club many cardinals $\delta$ that are fully $X$-correct, meaning that $\langle V_\delta,{\in},X\cap V_\delta\rangle\prec\langle V,{\in},X\rangle$ (and furthermore, this elementarity is expressible in KM, unlike GBC or ZFC). The reason is that KM proves that there is a satisfaction class $S$ for $\langle V,{\in},X\rangle$, a truth predicate indicating which formulas are true at which points in this (limited) structure, and we may then apply the reflection theorem to $\langle V,{\in},S\rangle$, to find a club of $\delta$ which are $X$-correct.
This way of thinking produces many theorems purely about sets that are provable in KM, such as:
Theorems. (KM)
Con(ZFC). Indeed, there is a transitive model of ZFC.
Moreover, every set is in some $V_\delta$ that is a model of ZFC. In other words, there is a proper class of worldly cardinals. This is a weak formulation of the Grothendieck axiom of universes, using worldly cardinals in place of inaccessible cardinals.
Even stronger, the worldly cardinals form a stationary proper class. And this is true even when one adds any given class predicate $X$, since any fully $X$-correct cardinal is $X$-worldly. So there are $\alpha$-worldly cardinals of any degree $\alpha$.
If there are unboundedly many inaccessible cardinals (or measurable cardinals or supercompact cardinals, as you like), then there is a worldly limit of such cardinals.
Nevertheless, one should not think of the large cardinal strength of KM as being very great, since if $\kappa$ is an inaccessible cardinal, then $\langle V_\kappa,{\in},V_{\kappa+1}\rangle$ satisfies KM, and so we have a comparatively low upper bound on consistency strength.
Meanwhile, I don't quite agree with your remarks about the syntactic difference between GBC and KM. One can view them both as formalized in the two-sorted language, with variable symbols for sets and variable symbols for classes, and there are exactly the same formulas in the two theories. It is just that GBC does not allow the replacement and separation axioms for formulas using class quantifiers, while KM does allow them. But meanwhile, it is perfectly sensible to speak of GBC proving or refuting formulas that happen to have class quantifiers, or to speak of such formulas being independent of GBC.
|
I am interested in algorithms that help decide whether a countably infinite locally finite graph is connected.
I think there is no algorithm that works for all graphs, e.g. no algorithm should work for an infinite chain with one edge removed.
I care about a specific graph $\Gamma$ whose automorphism group acts with finite quotient, i.e. there are only finitely many orbits of vertices. Also $\Gamma$ can be realized as an explicit collection of points in $\mathbb R^n$, and there is an easily computable function $d:\mathbb R^n\times \mathbb R^n\to \mathbb R$ such that two vertices $v,w$ are adjacent if and only if $d(v,w)=0$. I hope to find an algorithm that can be implemented so that after some computer experiments I would have evidence that the graph is actually connected.
|
A cryptographic hash function $f : \{0,1\}^{*} \to \{0,1\}^n$ has three properties: (1) preimage resistance, (2) second-preimage resistance, and (3) collision resistance. Even further, these properties form a hierarchy where each property implies the one before it, i.e., a collision-resistant function is also second-preimage resistant, and a second-preimage resistant function is also preimage-resistant (with a condition on $f$).
In the case of (3) ⇒ (2), it's not too hard to see why: if an adversary cannot find
any colliding message pairs, then they certainly cannot find a colliding message when one of the messages is fixed.
However, (2) ⇒ (1) is substantially trickier. For some intuition, consider a second-preimage resistant hash function $f$ that was
not preimage resistant (modeled by being given access to a preimage-finding oracle). Suppose you were given an $m_1$; then you could compute $f(m_1)$ and consult the oracle for a preimage of $f(m_1)$. The oracle would then return an $m_2$ such that $f(m_1) = f(m_2)$.
This is very nearly a second preimage. The only question is if $m_1 \ne m_2$. Intuitively, given that $f$ maps infinitely-many inputs to a finite number of outputs, there "should be" a high probability that $m_1 \ne m_2$. For all real-life hash functions, this is pretty much the case, so a second-preimage resistant hash function should not lack preimage resistance.
However, it is possible to define "pathological" hash functions that have perfect, provable second-preimage resistance but not preimage resistance. The example given in chapter 9 of the
Handbook of Applied Cryptography is this:
$$f(x) = \begin{cases} 0 || x & \text{if } x \text{ is } n \text{ bits long}\\ 1 || g(x) & \text{otherwise}\end{cases}$$
where $g(x)$ is a collision-resistant hash function. In this case, for digests beginning with $0$, it's trivial to find a preimage (indeed, it's just the identity function), but such cases are provably second-preimage resistant, as there are no possible second preimages. In other words, this $f$ is injective on the space of $n$-bit inputs (a bijection onto the $0$-prefixed digests).
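A minimal sketch of this construction in Python, with SHA-256 standing in for the collision-resistant $g$ and a hypothetical parameter $n = 256$ (both are illustrative assumptions, not part of the HAC text):

```python
import hashlib

N = 256  # bit-length of the "identity" branch (an arbitrary choice here)

def g(bits: str) -> str:
    # stand-in for the collision-resistant hash in the construction
    return hashlib.sha256(bits.encode()).hexdigest()

def f(bits: str) -> str:
    # HAC chapter 9 construction over bit-strings
    return '0' + bits if len(bits) == N else '1' + g(bits)

x = '01' * 128                  # an N-bit input
digest = f(x)
assert digest == '0' + x        # a preimage is read straight off the digest
# yet x has no second preimage: f is injective on N-bit inputs, and all
# other inputs yield digests that begin with '1'
```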
To be more precise about when (2) ⇒ (1), Rogaway and Shrimpton have presented a theoretical analysis of the various relations between the three properties listed above in their
Cryptographic Hash-Function Basics. Essentially, their analysis treats a hash function as having a finite, fixed-length domain, i.e. $f : \{0,1\}^m \to \{0,1\}^n$, wherein they show
"conventional implications", like the implication (3) ⇒ (2); these are essentially "true" implications in the sense that they are unconditional, and
"provisional implications", like the implication that (2) ⇒ (1); these are conditional in nature, relying on how much $f$ compresses the message space (as the message space gets larger relative to the digest space, the "stronger" the implication in a probabilistic sense).
So, provisional implications are
essentially true if a hash function compresses the message space to a sufficient degree. (The "sufficient" example they provide is a hash compressing 256-bit messages to 128 bits.) Hence, second-preimage resistance implies preimage resistance only if the function in question compresses its input sufficiently. For length-preserving, length-extending, or low-compression functions, second-preimage resistance does not necessarily imply preimage resistance (as stated by the authors on page 8 about halfway down the page).
This should be intuitive given the above algorithm for finding second preimages given a preimage oracle. If you are expanding 6-bit inputs to 256 bits, it's actually quite unlikely that a preimage oracle would be able to find a second preimage. This isn't a formal argument, by any means, but it's a nice heuristic one.
Now, back to real life. Given the above algorithm for using a preimage oracle to find second preimages, I would not expect any real-life hash functions to have preimage attacks and not second-preimage attacks, especially since real hash functions typically compress data well.
On the other hand, I'm not personally aware of any historically-used, non-toy cryptographic hash function which has a second-preimage attack but not a preimage attack. Typically, collision resistance is the first thing attacked by cryptanalysts since it is (in a sense) the "hardest" property to satisfy. But if a hash function is found to be broken with regard to collisions, cryptanalysts typically go straight for the heart: preimage attacks. So, I don't know how much luck you'll have trying to find such a hash function.
You can look at the hash function lounge for some historic hash functions; it hasn't been updated since 2008, apparently, but still contains some useful info. I glanced through a few attacks and found mostly collision and preimage attacks, but you may have more luck.
|
I'm trying to understand why a certain action of a Lie Group is hamiltonian.
Let $(M,g)$ be a geodesically complete Riemannian manifold. There is a canonical one-form on the cotangent bundle $T^*M$ given by $\lambda_0 : T( T^*M) \to \mathbb{R}$, $(v, \alpha) \mapsto \alpha(\pi_*v)$, with $\pi_*$ the derivative of the natural bundle projection $T^*M \to M$. It is now possible to pull this one-form back to the tangent bundle using our Riemannian metric, so $\lambda = (g^\flat)^*\lambda_0$. Using this one-form we define a symplectic structure on $TM$ by $\omega = -d\lambda$.
Now let $\phi:\mathbb{R}\times TM \to TM$, $(t,V) \mapsto \dot \gamma_V(t)$, be the action generated by the geodesic flow, where $\gamma_V$ is the geodesic with initial velocity $V$.
My question is whether $\phi$ acts by symplectomorphisms, that is, whether $\phi_t^*\omega = \omega$ for all $t\in \mathbb{R}$, or whether it is even exact symplectic, that is, $\phi_t^*\lambda = \lambda$ for all $t\in \mathbb{R}$. I'm trying to show that this is a Hamiltonian action, so that there exists a moment map for it, and that would be clear if the action were exact symplectic.
Edit: To have a well-defined action of $R$, we assume that the manifold is geodesically complete.
|
We recall that the Lagrange interpolation polynomial $p_n(x)$ of a function $f\in C^n(\Omega )$ for some $\Omega \subseteq \mathbb{R}$ and $n\in \mathbb{N}$ has a pointwise error term of the form $$|p_n(x) -f(x)|= \left|\frac{f^{(n)}(\xi (x))}{n! }\prod\limits_{j=1}^n(x-x_j)\right| \, , $$ where $\{x_j\}\subset\Omega $ are the $n$ interpolation points.
This is just one form of the polynomial interpolant through these points. There are many other formulas that, at least in exact arithmetic, produce the same polynomial.
My question: Given a finite measure $\mu$ on $\Omega$, denote by $q_n$ its orthogonal polynomial and by $x_j ^n$, $j=1,\ldots,n$, the roots of $q_n$. What can be said in general about the $L^2 _\mu$ error decay/convergence rate $\|p_n -f\|_2 $ of the Lagrange interpolant at these points, for sufficiently smooth $f\in H^p _{2,\mu}$?
I'm not sure whether it is necessary, but we can limit the discussion to continuous measures $d\mu \ll dx$.
What I know (classical orthogonal polynomials): If $\mu$ is a classical measure from the Askey scheme, we have spectral $L^2 _{\mu}$ convergence. This is sometimes referred to as the polynomial chaos collocation expansion. However, the proof of these results is derived from the spectral properties of the respective orthogonal polynomials, and does not stem directly from the Lagrange form of the interpolant. The idea is to use Fourier-like techniques to show that there exists a spectrally convergent polynomial expansion $\Pi _n (f)$, and then to show that it actually interpolates $f$ at these points, i.e. that $\Pi _n f(x_j ^n) =f(x_j ^n)$. Note that the relevant operator $\Pi _n f$ is not the $L^2$ projection of $f$, but its approximation using quadrature formulas.
For an example of the spectral results, see for example Dongbin Xiu, "Numerical methods for stochastic computation", Theorem 3.6.
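As a concrete illustration of the classical case, here is a small sketch (assuming $\mu$ is the Lebesgue measure on $[-1,1]$, so the $x_j^n$ are Gauss-Legendre nodes, and taking $f=\cos$): it builds the interpolant at the roots of $q_n$ and estimates the $L^2$ error on a dense grid, exhibiting the expected rapid decay.

```python
import numpy as np

def interp_error(f, n):
    """Approximate L2(-1,1) error of the interpolant of f at the n
    Gauss-Legendre nodes (the roots of the orthogonal polynomial q_n)."""
    nodes, _ = np.polynomial.legendre.leggauss(n)
    coeffs = np.polyfit(nodes, f(nodes), n - 1)    # square system: interpolation
    xs = np.linspace(-1.0, 1.0, 2001)
    resid = np.polyval(coeffs, xs) - f(xs)
    return float(np.sqrt(np.mean(resid ** 2) * 2.0))  # 2 = length of [-1, 1]

errs = {n: interp_error(np.cos, n) for n in (4, 8, 12)}
# for analytic f the error decays spectrally (faster than any fixed rate)
```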
What else (non-classical): one recent paper shows that if $\nu$ is classical, and $\mu <\nu$ in some normed sense, then spectral convergence of the $L^2$ projections also holds for $\mu$, and therefore so does convergence of the polynomial interpolant in $L^2$. However, not all measures have this spectral property. If we could prove something as strong for a general measure directly from the Lagrange interpolation polynomial, that would be helpful.
|
A basic mathematical relation between trigonometric functions is called a basic trigonometric identity. A basic trigonometric identity is usually used as a formula in mathematics; hence, it is also called a basic trigonometric formula.
There are four types of basic trigonometric identities in trigonometry, and they are used as formulas in mathematics. So, everyone who is new to trigonometry should first learn all of these basic trigonometric identities.
The following trigonometric formulas are derived by taking theta ($\theta$) as an angle of a right-angled triangle.
The trigonometric ratios form six identities in reciprocal form.
$(1)\,\,\,\,$ $\sin \theta \,=\, \dfrac{1}{\csc \theta}$
$(2)\,\,\,\,$ $\cos \theta \,=\, \dfrac{1}{\sec \theta}$
$(3)\,\,\,\,$ $\tan \theta \,=\, \dfrac{1}{\cot \theta}$
$(4)\,\,\,\,$ $\cot \theta \,=\, \dfrac{1}{\tan \theta}$
$(5)\,\,\,\,$ $\sec \theta \,=\, \dfrac{1}{\cos \theta}$
$(6)\,\,\,\,$ $\csc \theta \,=\, \dfrac{1}{\sin \theta}$
The trigonometric functions form three formulas in product form.
$(1)\,\,\,\,$ $\sin \theta \times \csc \theta = 1 $
$(2)\,\,\,\,$ $\cos \theta \times \sec \theta = 1 $
$(3)\,\,\,\,$ $\tan \theta \times \cot \theta = 1 $
The trigonometric functions appear in two relations in quotient form.
$(1)\,\,\,\,$ $\dfrac{\sin \theta}{\cos \theta} = \tan \theta$
$(2)\,\,\,\,$ $\dfrac{\cos \theta}{\sin \theta} = \cot \theta$
The trigonometric functions form three Pythagorean identities, on the basis of the Pythagorean theorem.
$(1)\,\,\,\,$ $\sin^2{\theta} \,+\, \cos^2{\theta} \,=\, 1$
$(2)\,\,\,\,$ $\sec^2{\theta} \,-\, \tan^2{\theta} \,=\, 1$
$(3)\,\,\,\,$ $\csc^2{\theta} \,-\, \cot^2{\theta} \,=\, 1$
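All of the identities above can be sanity-checked numerically; here is a quick Python check at an arbitrary angle ($\theta = 0.7$ radians, chosen to avoid division by zero):

```python
import math

theta = 0.7                      # any angle avoiding division by zero
sin, cos = math.sin(theta), math.cos(theta)
tan, cot = math.tan(theta), 1 / math.tan(theta)
sec, csc = 1 / cos, 1 / sin

# reciprocal and product identities
assert math.isclose(sin * csc, 1) and math.isclose(cos * sec, 1)
assert math.isclose(tan * cot, 1)
# quotient identities
assert math.isclose(sin / cos, tan) and math.isclose(cos / sin, cot)
# Pythagorean identities
assert math.isclose(sin ** 2 + cos ** 2, 1)
assert math.isclose(sec ** 2 - tan ** 2, 1)
assert math.isclose(csc ** 2 - cot ** 2, 1)
```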
|
Advances in Differential Equations, Volume 1, Number 6 (1996), 1075-1098. Singular boundary value problems for nonlinear elliptic equations in nonsmooth domains. Abstract:
Let $\Omega$ be a piecewise-regular domain in $\Bbb R^N$ and O an irregular point on its boundary $\partial \Omega$. We study under what conditions on $q$ any solution $u$ of (E) $-\Delta u + g(x,u)=0$, where $g$ has a $q$-power-like growth at infinity ($q>1$), which coincides on $\partial \Omega \setminus \{O\}$ with a continuous function defined on the whole of $\partial \Omega$, can be extended as a continuous function in $\bar \Omega$.
Article information Source Adv. Differential Equations, Volume 1, Number 6 (1996), 1075-1098. Dates First available in Project Euclid: 25 April 2013 Permanent link to this document https://projecteuclid.org/euclid.ade/1366895245 Mathematical Reviews number (MathSciNet) MR1409900 Zentralblatt MATH identifier 0863.35021 Subjects Primary: 35J65: Nonlinear boundary value problems for linear elliptic equations Secondary: 35B65: Smoothness and regularity of solutions Citation
Fabbri, Jean; Veron, Laurent. Singular boundary value problems for nonlinear elliptic equations in nonsmooth domains. Adv. Differential Equations 1 (1996), no. 6, 1075--1098. https://projecteuclid.org/euclid.ade/1366895245
|
The chameleon is a hypothetical scalar particle which couples to matter, postulated as a dark energy candidate.[1] Due to a non-linear self-interaction, it has a variable effective mass which is an increasing function of the ambient energy density – as a result, the range of the force mediated by the particle is predicted to be very small in regions of high density (for example on Earth, where it is less than 1mm) but much larger in low-density intergalactic regions: out in the cosmos chameleon models permit a range of up to several thousand parsecs. As a result of this variable mass, the hypothetical fifth force mediated by the chameleon is able to evade current constraints on equivalence principle violation derived from terrestrial experiments even if it couples to matter with a strength equal or greater than that of gravity. While this property would allow the chameleon to drive the currently observed acceleration of the universe's expansion, it also makes it very difficult to test for experimentally.
Chameleon particles were proposed in 2003 by Khoury and Weltman.
In most theories, chameleons have a mass that scales as some power of the local energy density: $m_{\text{eff}} \sim \rho^\alpha$, where $\alpha \simeq 1$.
Chameleons also couple to photons, allowing photons and chameleons to oscillate between each other in the presence of an external magnetic field.[2]
Chameleons can be confined in hollow containers because their mass increases rapidly as they penetrate the container wall, causing them to reflect. One strategy to search experimentally for chameleons is to direct photons into a cavity, confining the chameleons produced, and then to switch off the light source. Chameleons would be indicated by the presence of an afterglow as they decay back into photons.[3]
A number of experiments have attempted to detect chameleons along with axions.[4]
The GammeV experiment[5] is a search for axions, but has been used to look for chameleons too. It consists of a cylindrical chamber inserted in a 5 T magnetic field. The ends of the chamber are glass windows, allowing light from a laser to enter and the afterglow to exit. GammeV set limits on the chameleon coupling to photons in 2009.[6]
CHASE (CHameleon Afterglow SEarch) results, published in November 2010,[7] improved the limits on mass by two orders of magnitude and on photon coupling by five orders.
A 2014 neutron mirror measurement excluded the chameleon field for values of the coupling constant $\beta > 5.8\times 10^{8}$,[8] where the effective potential of the chameleon quanta is written as $V_{\text{eff}}=V(\Phi)+e^{\beta \Phi/M'_P} \rho$, with $\rho$ the mass density of the environment, $V(\Phi)$ the chameleon potential and $M'_P$ the reduced Planck mass.
|
suyash23n wrote:
The juice stall at the circus stocked just 2 brands of orange juice tetra packs. Brand A costs $1 per pack and brand B costs $1.5 per pack. Last week, brand A contributed m% of the stall's revenue and accounted for n% of sales of juice tetra packs. Which of the following expresses m in terms of n?
(A) 100n/(150 – n) (B) 200n/(250-n) (C) 200n/(300-n) (D) 250n/(400-n) (E) 300n/(500-n)
Let's explore an AGGRESSIVE PARTICULAR CASE: n = 100. In this case only brand A was sold, hence all revenue came from brand A, and our FOCUS (the TARGET) will be m = 100 (of course)!
\(\left. \begin{gathered}
\left( A \right)\,\,\,\frac{{{{10}^4}}}{{50}} \ne 100 \hfill \\
\left( B \right)\,\,\frac{{2 \cdot {{10}^4}}}{{150}} \ne 100 \hfill \\
\left( C \right)\,\,\frac{{2 \cdot {{10}^4}}}{{200}} = 100\,\,\,\,\, \hfill \\
\left( D \right)\,\,\frac{{25 \cdot {{10}^3}}}{{300}} \ne 100 \hfill \\
\left( E \right)\,\,\frac{{3 \cdot {{10}^4}}}{{400}} \ne 100 \hfill \\
\end{gathered} \right\}\,\,\,\,\,\,\,\,\,\mathop \Rightarrow \limits^{{\text{only}}\,\,{\text{survivor}}\,{\text{!}}} \,\,\,\,\,\,\,\left( C \right)\)
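The particular case can also be checked against the general algebra: with 100 packs sold, of which $n$ are brand A, revenue is $n + 1.5(100-n) = 150 - 0.5n$ dollars, so $m = 100n/(150-0.5n) = 200n/(300-n)$, which is choice (C). A quick numerical check (the values of $n$ below are illustrative):

```python
# Check of answer (C): take 100 packs sold in total, n of them brand A.
for n in (10, 25, 50, 90, 100):
    revenue = n * 1.0 + (100 - n) * 1.5   # dollars; equals 150 - 0.5n
    m = 100.0 * n / revenue               # brand A's share of revenue, in %
    assert abs(m - 200.0 * n / (300 - n)) < 1e-9
```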
This solution follows the notations and rationale taught in the GMATH method.
Regards,
Fabio.
|
None of the three inequalities you’ve shown hold in general for Hermitian matrices $A$ and $B$.
To demonstrate this, we can look at the case where $A$ and $B$ are $1\times 1$ real matrices (trivially Hermitian), in which case the induced matrix norm reduces to an absolute value. Writing $A=a$ and $B=b$ for $a,b\in \mathbb{R}$, we have $|||iA - B|||=\sqrt {a^2 + b^2 }$. Then, for these $A,B\in \mathbb{R^{1\times 1}}$, we have that
$$ \mathop{\min } (|||A|||, |||B|||)\leq \mathop {\max } (|||A|||, |||B|||)\leq |||iA - B|||$$
and
$$ \big|\, |||A||| - |||B||| \,\big| \leq |||iA - B|||,$$
where the rightmost inequalities are strict for $ab \ne 0.$
We can also use this special case to generate counterexamples to the proposed inequalities for $A,B\in\mathbb{R^{n\times n}}$ with $n>1$ by (for example) setting $a_{11}=a$ and $b_{11}=b$ for $A$ and $B$ with all the other matrix entries set to $0.$
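As a quick numerical check of the embedding argument, here is a sketch using NumPy's induced 2-norm (the values $a=3$, $b=4$ are arbitrary):

```python
import numpy as np

def opnorm(M):
    return np.linalg.norm(M, 2)      # induced 2-norm = largest singular value

a, b = 3.0, 4.0                       # arbitrary nonzero reals
A = np.diag([a, 0.0])                 # Hermitian embeddings of the 1x1 case
B = np.diag([b, 0.0])
lhs = opnorm(1j * A - B)              # = sqrt(a^2 + b^2) = 5 here

assert np.isclose(lhs, np.hypot(a, b))
assert min(opnorm(A), opnorm(B)) <= max(opnorm(A), opnorm(B)) <= lhs
assert abs(opnorm(A) - opnorm(B)) <= lhs
```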
Without further restrictions on $A$ and $B$, you’re unlikely to find a better general upper bound than $|||iA - B|||\leq |||A||| + |||B|||$.
|
I am implementing the key exchange scheme proposed by zhang et al. on Sage. In the implementation of the scheme, they have used the two distributions $\chi_{\alpha}, \chi_{\beta}$.
How does one choose $\alpha$ and $\beta$, and what would be typical values of $\alpha$ and $\beta$, to have 100% correctness of the scheme (that is, so that Alice and Bob end up with the same shared session key)?
The second confusion is that on page 12 (where the protocol is defined) they perform rejection sampling and provide steps to do this. There they state that one repeats steps 1~3 with probability $1-\min\left(\frac{\mathcal{D}_{\mathbb{Z}^{2n},\beta}(z)}{M\,\mathcal{D}_{\mathbb{Z}^{2n},\beta,z_1}(z)} ,1\right)$. See the highlighted image below:
How does one define $M$ with respect to $\alpha$ and $\beta$ and use the above probability to apply the repeat condition through steps 1~3?
|
1. Concept Behind Inferential Statistics

To begin with, there are certain fundamental concepts in this section that span the section and are used throughout statistics. Often a sample is taken, and the sample is a subgroup of a larger group, the population. From the sample it is desired to learn about the population. For example, a survey is taken of 200 people living in Bangkok about their opinion of the underground train. Results are published and comments are made about it. Is anyone truly concerned about the specific 200 people in the survey? If 200 people do not like the underground train, does it really matter? No. In fact, well over 200 people in Bangkok have never taken the underground train.

What people really want to learn from the survey is the general opinion within Bangkok about the underground, and to do this a sample of 200 people is surveyed and asked questions. If all 200 people surveyed did not like the underground train, this is of concern only because it leads us to believe that the general populace within Bangkok does not like the underground train and perhaps only a small minority likes the train. The sample is almost immediately, within our minds, extrapolated to the population at large; in this case, people living in Bangkok. Inferential statistics are used to learn about the population from the sample.
Two common techniques to use a sample to learn about a population that go beyond descriptive statistics are hypothesis testing and confidence intervals. Hypothesis testing is used to test a theory. Confidence intervals are used to obtain a range of values for which you might consider the population mean, $\mu$, to be within. Technically, from a frequentist viewpoint, the population mean is either within the interval or not.
In general within hypothesis testing we wish to test a theory, belief or simply something of interest. It is desired to test if a quantity concerning the
population, called a parameter, is either not equal to, greater than or less than some value. Typically, the population mean, $\mu$, or proportion, $\pi$, is the parameter, but not always. In hypothesis testing the theory is turned into what is called a null hypothesis, denoted $H_0$, and an alternative hypothesis, denoted $H_1$ or $H_A$. In general hypothesis testing one may want to compare one group/sample to a specific value, say $\mu_0$. Often within hypothesis testing one may want to compare two groups/samples to each other, such as comparing the average salary of men, say $\mu_1$, to the average salary of women, say $\mu_2$.
The alternative hypothesis is what is desired to prove or show to be true and the null hypothesis the opposite. Examples: If it is desired to prove the ...
- average income in Bangkok is greater than 30,000 Baht/month: $H_0:$ $\mu \leq 30,000$ and $H_A:$ $\mu > 30,000$;
- average income in Bangkok of men is greater than that of women: $H_0:$ $\mu_{men} \leq \mu_{women}$ and $H_A:$ $\mu_{men} > \mu_{women}$;
- percent of women in Hong Kong is less than 50\%: $H_0:$ $\pi \geq 50\%$ and $H_A:$ $\pi < 50\%$;
- etc.
$$\begin{array}{l|c|c|c}
 & H_0 & H_A & \text{p-value} \\\hline
\mu \text{ from one} & \mu = \mu_0 & \mu \neq \mu_0 & 2\times P(Z>|z|) \\
\text{group/sample} & \mu \geq \mu_0 & \mu < \mu_0 & P(Z < z) \\
 & \mu \leq \mu_0 & \mu > \mu_0 & P(Z > z) \\\hline
\pi \text{ from one} & \pi=\pi_0 & \pi \neq \pi_0 & 2\times P(Z>|z|) \\
\text{group/sample} & \pi \geq \pi_0 & \pi < \pi_0 & P(Z < z) \\
 & \pi \leq \pi_0 & \pi > \pi_0 & P(Z > z) \\\hline
\mu \text{ from two} & \mu_1=\mu_2 & \mu_1 \neq \mu_2 & 2\times P(Z>|z|) \\
\text{groups/samples} & \mu_1 \geq \mu_2 & \mu_1 < \mu_2 & P(Z < z) \\
 & \mu_1 \leq \mu_2 & \mu_1 > \mu_2 & P(Z > z) \\\hline
\pi \text{ from two} & \pi_1=\pi_2 & \pi_1 \neq \pi_2 & 2\times P(Z > |z|) \\
\text{groups/samples} & \pi_1 \geq \pi_2 & \pi_1 < \pi_2 & P(Z < z) \\
 & \pi_1 \leq \pi_2 & \pi_1 > \pi_2 & P(Z > z) \\\hline
\end{array}$$
In hypothesis testing a decision is made by using what is known as a {\it p-value}. The p-value is the probability of observing what was observed or more extreme assuming the null hypothesis is true. If the probability of observing what was observed or more extreme assuming the null hypothesis is true is "very small" the researcher rejects the null hypothesis. The researcher rejects the null hypothesis when the p-value is small because we trust the data over the null hypothesis. Typically p-values less than that of 0.1, 0.05, or 0.01 are considered too small to be random chance and the null hypothesis is rejected. The value which the null hypothesis will be rejected at is called the {\it level of significance} and denoted by $\alpha$. Commonly for large data sets often a significance level of 0.01 is used. Typically in the class room setting an $\alpha=0.05$ is used.
\defm{ Important: \\ If p-value $< \alpha$ then reject $H_0$ \\ If p-value $\ge \alpha$ then fail to reject $H_0$} For hypothesis testing regardless of the test chosen and the test-statistic used the steps are generally the same. This book will only cover the p-value approach to hypothesis testing. Other books cover a rejection region as well. The rejection region approach is useful for when a p-value can't be calculated. For example, when the researcher does not have access to a computer, like on exams. When working, in this day in age the researcher will most likely have access to a computer and almost all, if not all statistical software calculates a p-value for hypothesis testing. For this reason only the p-value approach will be covered.\\
\begin{enumerate}
\item Determine the null hypothesis, $H_{0}$, and the alternative hypothesis, $H_{A}$.
\item Decide on the appropriate level of significance, $\alpha$.
\item Determine the sample size and sampling design to use. The tests in this chapter are appropriate when the data come from a simple random sample; they are \textbf{not} appropriate when the data come from a convenience or other type of non-probability sample.
\item Determine the appropriate test statistic given the data and sampling design.
\item Collect the data and calculate the test statistic.
\item Calculate the p-value for the $H_{0}$ and $H_{A}$ combination.
\item Decide whether to reject or fail to reject $H_{0}$ by comparing the p-value to $\alpha$.
\end{enumerate}
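The steps above can be sketched for the simplest case, a one-sample z-test with known $\sigma$ (the income numbers below are purely illustrative):

```python
import math

def z_test_pvalue(xbar, mu0, sigma, n, alternative="greater"):
    """p-value of a one-sample z-test (known sigma, simple random sample)."""
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    def phi(t):                      # standard normal CDF
        return 0.5 * math.erfc(-t / math.sqrt(2))
    if alternative == "greater":     # H_A: mu > mu0
        return 1 - phi(z)
    if alternative == "less":        # H_A: mu < mu0
        return phi(z)
    return 2 * (1 - phi(abs(z)))     # H_A: mu != mu0

# illustrative numbers only: sample mean 31,000 vs mu0 = 30,000,
# sigma = 5,000, n = 100 -> z = 2, p ~ 0.0228, so reject H0 at alpha = 0.05
p = z_test_pvalue(31000, 30000, 5000, 100, "greater")
```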
In general, when creating what is called a confidence interval, we wish to obtain a range of plausible values for a quantity concerning the population, a parameter. Typically the population mean, $\mu$, or proportion, $\pi$, is the parameter, but not always. It is also often desired to determine a plausible range for the difference between two groups/samples, such as comparing the average salary of men, say $\mu_1$, to the average salary of women, say $\mu_2$. A $(1-\alpha) \times 100\%$ confidence interval contains the parameter of interest with probability $1-\alpha$ under what is known as a Bayesian approach, and this is often the way a confidence interval is explained; Bayesians consider the parameter of interest a random variable. The author is a frequentist, and considers the parameter to be an unknown constant. Under the frequentist approach, $(1-\alpha) \times 100\%$ is the percent of confidence intervals that are expected to contain the true value of the parameter of interest, assuming an infinite number of samples of the same size taken under a simple random sample. Of course, in practice only a single sample is taken. The confidence interval is thus often considered the range of plausible values for the parameter; what the parameter actually is remains unknown, and it may or may not lie within the interval.
|
Let $V$ be a vector space and $T:V\rightarrow V$ a linear transformation with the property that $T(W)\subseteq W$ for every subspace $W$ of $V$. Prove that $T$ is scalar multiplication, i.e. there is an element $\lambda$ in the field of scalars such that $T(v)=\lambda v$ $\forall v\in V$.
My attempt: I gather that for any element $w$ in a subspace $W$ with basis $\{w_1,\dots,w_n\}$, we have
$w = a_1w_1+\dots+a_nw_n$
for scalars $a_1,\dots,a_n$.
We also know that $T(w) = T(a_1w_1)+\dots+T(a_nw_n)$, and that since for each $i$, $\operatorname{span}\{w_i\}$ is a subspace, $T(w_i)=\alpha_i w_i$ for some scalar $\alpha_i$.
I feel like this should be enough for the solution, but I can't get there.
Any help appreciated!
|
Suppose I want to obtain a gate sequence representing a particular 1 qubit unitary matrix. The gate set is represented by a discrete universal set, e.g. Clifford+T gates or $\{T,H\}$ gates. A well known approach to solve the problem is to use the Solovay-Kitaev (SK) algorithm. I tried this implementation for the SK algorithm. The resulting circuits are incredibly long ($l\sim 100-1000$ gates for the Fowler distance $\epsilon \sim 0.01$, tolerance error for the basic approximation $\epsilon\sim 0.03$). Moreover, the basic approximation (building a KD-tree) can take quite a long time (although this might be due to the somewhat slow Python code).
On the other hand I wrote a very simple script that generates random sequences of gates and selects the best performing circuits. It works very fast and results in much shorter circuits with Fowler distances $\epsilon< 10^{-4}-10^{-5}$. This should be more than sufficient for any realistic applications.
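For reference, here is a minimal version of the kind of random-search script described above (the target $R_z(0.3)$ and the search parameters are arbitrary choices, and the empty circuit is kept as a baseline so the reported distance can only improve on it):

```python
import numpy as np

rng = np.random.default_rng(0)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])
GATES = {"H": H, "T": T}

def fowler_dist(U, V):
    # distance up to global phase: sqrt((2 - |tr(U^dagger V)|) / 2)
    return np.sqrt(max(0.0, (2 - abs(np.trace(U.conj().T @ V))) / 2))

target = np.diag([np.exp(-0.15j), np.exp(0.15j)])   # Rz(0.3), arbitrary target

best_word, best_d = "", fowler_dist(np.eye(2), target)   # empty-circuit baseline
for _ in range(2000):
    word = "".join(rng.choice(["H", "T"], size=rng.integers(1, 21)))
    V = np.eye(2, dtype=complex)
    for gate in word:
        V = GATES[gate] @ V
    dst = fowler_dist(target, V)
    if dst < best_d:
        best_word, best_d = word, dst
```

Raising the number of trials and the maximum word length trades time for accuracy; there is no guarantee on the resulting length, in contrast to the SK bound.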
So, at this point I don't quite understand, what is practical significance of Solovay-Kitaev algorithm in this case (for 1 qubit operations)?
Of course, the theoretical scaling of the SK algorithm looks pretty good: the number of gates it generates to approximate any 1 qubit unitary grows as $l\sim\log^c(1/\delta)$, where $\delta$ is the L2 approximation error. For random sampling there are no such guarantees. In practice, however, I'm not convinced that SK is very useful in the 1 qubit case. No doubt for a large number of qubits random sampling will fail because of the curse of dimensionality, but it seems that the SK algorithm also quickly becomes computationally unfeasible ($\#$ of qubits $\geq 4$?).
|
The partial differential equation is a combination of the diffusion plus convective transport equations and an adsorption sink. The equation for one-dimensional solute transport model is:
$$\frac{\partial C}{\partial t} = D\frac{\partial^2 C}{\partial x^2}-v\frac{\partial C}{\partial x} - \frac{\rho}{\theta} \frac{\partial S}{\partial t}$$
where, C = solute concentration, D = dispersion coefficient, v = average pore-water velocity, x = distance from the inflow position, and t = time. Assuming the adsorption process is a first order reversible reaction, the rate of mass transfer to the adsorbed phase, $\frac{\partial S}{\partial t} = \frac{k_{A}\theta C}{\rho}-k_{D}S$; where, $k_{A}$ and $k_{D}$ are the adsorption (forward) and desorption (backward) rate coefficients (unit: 1/time), $\theta$ is the soil-water content by volume, and $\rho$ is the bulk density of the soil system.
The fully explicit finite-difference approximation for all except for the first order reversible reaction term can be written simply as (also, tested to work fine against exact solution):
$$C|_{x,t} = C|_{x,t-\Delta t} + \frac{D \Delta t}{\Delta x^2}\left(C|_{x+\Delta x, t-\Delta t}- 2C|_{x,t-\Delta t} + C|_{x-\Delta x,t-\Delta t}\right) - \frac{v\Delta t}{2 \Delta x}\left(C|_{x+\Delta x,t-\Delta t} - C|_{x-\Delta x, t-\Delta t}\right) $$
I cannot seem to figure out how the above finite-difference approximation could be modified to incorporate the reaction-term defined above.
Hint: pages 96-99 of this book provide a solution, but I just cannot get my head around it. I'm supplying the best-known articles for the analytical solution and numerical solution that I could find. Any help with reproducible example code would be highly appreciated.
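One plausible way to incorporate the reaction term in a fully explicit scheme (a sketch, not necessarily the book's scheme): evaluate $\partial S/\partial t$ explicitly from the current $C$ and $S$, subtract $\frac{\rho}{\theta}\,\Delta t\,\frac{\partial S}{\partial t}$ in the $C$ update, and advance $S$ with the same rate. All parameter values below are illustrative assumptions:

```python
import numpy as np

# illustrative parameter values (assumptions, not from the cited book)
D, v = 1e-2, 1e-2          # dispersion coefficient, pore-water velocity
kA, kD = 0.5, 0.1          # adsorption / desorption rate coefficients
theta, rho = 0.4, 1.5      # volumetric water content, bulk density
dx, dt, nx, nt = 0.1, 0.01, 101, 500

C = np.zeros(nx); C[0] = 1.0     # constant-concentration inlet boundary
S = np.zeros(nx)                  # adsorbed phase

for _ in range(nt):
    dSdt = kA * theta * C / rho - kD * S              # explicit reaction rate
    Cn = C.copy()
    Cn[1:-1] = (C[1:-1]
                + D * dt / dx**2 * (C[2:] - 2*C[1:-1] + C[:-2])
                - v * dt / (2*dx) * (C[2:] - C[:-2])
                - rho / theta * dt * dSdt[1:-1])      # the extra sink term
    S = S + dt * dSdt                                  # advance adsorbed phase
    C = Cn
```

Both phases are updated from the old time level, consistent with the fully explicit scheme above; the usual stability restrictions on $D\Delta t/\Delta x^2$, $v\Delta t/\Delta x$ and on the reaction rates times $\Delta t$ still apply.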
|
It is well-known that the S and K combinators are Turing Complete. Are there combinators that suffice to yield (only) the primitive recursive functions?
Yes, but you have to consider typed combinators. That is, you need to give $S$ and $K$ the following type schemas: $$ \begin{array}{lcl} K & : & A \to B \to A \\ S & : & (A \to B \to C) \to (A \to B) \to (A \to C) \end{array} $$ where $A, B$, and $C$ are meta-variables which can be instantiated to any concrete type at each use.
Then, you want to add the type $\mathbb{N}$ of natural numbers to the language of types, and add the following combinators: $$ \begin{array}{lcl} z & : & \mathbb{N} \\ succ & : & \mathbb{N} \to \mathbb{N} \\ iter & : & \mathbb{N} \to (\mathbb{N} \to \mathbb{N}) \to \mathbb{N} \to \mathbb{N} \end{array} $$
The equality rules for the additions are: $$ \begin{array}{lcl} iter\;i\;f\;z & = & i \\ iter\;i\;f\;(succ\;e) & = & f(iter\;i\;f\;e) \end{array} $$
It's much easier to read the programs you write, if you just write programs in the simply-typed lambda calculus, augmented with the numerals and iteration. The system I've described is a restriction of
Goedel's T, the language of higher-type arithmetic. In Goedel's T, the typing for iteration is less limited: $$\begin{array}{lcl}iter & : & A \to (A \to A) \to \mathbb{N} \to A\end{array}$$ In T, you can instantiate $iter$ at any type, not just the type of natural numbers. This takes you past primitive recursion, and lets you define things like Ackermann's function.
EDIT: Xoff asked how to encode the predecessor function. It follows via a standard trick. To explain, I'll use lambda-notation for this (which can be eliminated with bracket-abstraction), since that's far more readable. First, assume that we have pairs and the more general type for $\mathit{iter}$. Then, we can define:
$$ \begin{array}{lcl} pred' & = & \lambda k.\;iter \;(z, z) \; (\lambda (n, n').\; (succ\;n, n))\;k\\ pred & = & \lambda k.\;snd(pred'\;k) \end{array} $$
If you just have the nat-type iterator, then you need to exploit the isomorphism that $\mathbb{N} \simeq \mathbb{N} \times \mathbb{N}$, which is annoying but poses no fundamental obstacle.
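The combinators and the pairing trick can be mimicked in ordinary Python (a sketch: Python is untyped, so this only models the equations for $z$, $succ$, $iter$ and the predecessor, not the type discipline):

```python
def z():
    return 0

def succ(n):
    return n + 1

def iter_(i, f, n):
    """iter i f z = i ;  iter i f (succ e) = f (iter i f e)"""
    acc = i
    for _ in range(n):
        acc = f(acc)
    return acc

def pred(k):
    # pred' k = iter (z, z) (lambda (n, n'). (succ n, n)) k ;  pred = snd . pred'
    step = lambda p: (succ(p[0]), p[0])
    return iter_((z(), z()), step, k)[1]

assert [pred(k) for k in range(5)] == [0, 0, 1, 2, 3]
```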
|
Gabriel Cramer was a Swiss mathematician who lived in the first half of the 18th century. In 1750 he published his method for solving sets of linear equations which is in common use today.
If we have two simultaneous equations $a_1x+b_1y=c_1$ and $a_2x+b_2y=c_2$ we could write them in matrix form like this
$\begin{bmatrix}a_1 & b_1 \\[0.3em]a_2 & b_2\end{bmatrix}$ $\begin{bmatrix}x \\[0.3em] y \end{bmatrix}$ $=$ $\begin{bmatrix} c_1 \\[0.3em] c_2 \end{bmatrix}$ or more succinctly $A$ $X$ $=$ $C$
The determinant of the matrix of coefficients, $D$, is given by $D=a_1b_2-b_1a_2$
To find the value of $x$ we first calculate $D$, the determinant of the matrix of coefficients. If $D$ is non-zero, we replace the column containing the $x$ coefficients in $A$ with the column vector $C$. We call this determinant $D_x$, and it is given by $D_x=c_1b_2-b_1c_2$.
To find $y$ we replace column containing the $y$ coefficients in $A$ with the column vector $C$. We call this determinant $D_y$ and it is given by $D_y=a_1c_2-c_1a_2$.
Finally, to find $x$ and $y$ we divide the respective determinants, $x=D_x/D$ and $y=D_y/D$
$x=\dfrac{D_x}{D}=\dfrac{\begin{vmatrix} c_1 & b_1 \\[0.3em] c_2 & b_2 \end{vmatrix}}{\begin{vmatrix} a_1 & b_1 \\[0.3em] a_2 & b_2 \end{vmatrix}}$ and $y=\dfrac{D_y}{D}=\dfrac{\begin{vmatrix} a_1 & c_1 \\[0.3em] a_2 & c_2 \end{vmatrix}}{\begin{vmatrix} a_1 & b_1 \\[0.3em] a_2 & b_2 \end{vmatrix}}$
Example 10.1: Solve the following simultaneous equations using Cramer's method.
The matrix form of the equation looks like this:
so we can write
For $y$ we can write
Giving us $x=-2$ and $y=3$
Sanity Check Put $x=-2$ and $y=3$ into one of the original equations.
If we use the first equation we get $-5 \times(-2)+3 = 10 + 3 = 13$
so our answers are probably right.
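As a quick numerical check of the two-variable recipe, here is a short Python sketch; the example system $2x+y=5$, $x-y=1$ is made up for illustration (the worked equations of Example 10.1 are not reproduced here):

```python
def cramer_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1, a2*x + b2*y = c2 by Cramer's method."""
    D = a1 * b2 - b1 * a2          # determinant of the coefficient matrix
    if D == 0:
        raise ValueError("No unique solution: D = 0")
    Dx = c1 * b2 - b1 * c2         # x column replaced by (c1, c2)
    Dy = a1 * c2 - c1 * a2         # y column replaced by (c1, c2)
    return Dx / D, Dy / D

# Illustrative system: 2x + y = 5 and x - y = 1
print(cramer_2x2(2, 1, 5, 1, -1, 1))  # -> (2.0, 1.0)
```

Substituting $x=2$, $y=1$ back into both equations confirms the solution.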
We can extend Cramer's method to three dimensions by adding a third variable. If we have the following equations $a_1x+b_1y+c_1z=d_1$, $a_2x+b_2y+c_2z=d_2$ and $a_3x+b_3y+c_3z=d_3$ we could write them in matrix form like this:
$\begin{bmatrix} a_1 & b_1 & c_1 \\[0.3em] a_2 & b_2 & c_2 \\[0.3em] a_3 & b_3 & c_3 \end{bmatrix}$ $\begin{bmatrix} x \\[0.3em] y \\[0.3em] z \end{bmatrix}$ $=$ $\begin{bmatrix} d_1 \\[0.3em] d_2 \\[0.3em] d_3 \end{bmatrix}$ $A$ $X$ $=$ $D$
The determinant of the matrix of coefficients is given by $|A|=a_1(b_2 c_3-c_2b_3)-b_1(a_2c_3-c_2a_3)+c_1(a_2b_3-b_2a_3)$
Using Cramer's method we can write
$x=\dfrac{\begin{vmatrix} d_1 & b_1 & c_1 \\[0.3em] d_2 & b_2 & c_2 \\[0.3em] d_3 & b_3 & c_3 \end{vmatrix}}{\begin{vmatrix} a_1 & b_1 & c_1 \\[0.3em] a_2 & b_2 & c_2 \\[0.3em] a_3 & b_3 & c_3 \end{vmatrix}}$, $y=\dfrac{\begin{vmatrix} a_1 & d_1 & c_1 \\[0.3em] a_2 & d_2 & c_2 \\[0.3em] a_3 & d_3 & c_3 \end{vmatrix}}{\begin{vmatrix} a_1 & b_1 & c_1 \\[0.3em] a_2 & b_2 & c_2 \\[0.3em] a_3 & b_3 & c_3 \end{vmatrix}}$ and $z=\dfrac{\begin{vmatrix} a_1 & b_1 & d_1 \\[0.3em] a_2 & b_2 & d_2 \\[0.3em] a_3 & b_3 & d_3 \end{vmatrix}}{\begin{vmatrix} a_1 & b_1 & c_1 \\[0.3em] a_2 & b_2 & c_2 \\[0.3em] a_3 & b_3 & c_3 \end{vmatrix}}$ Example 10.2: Solve the following simultaneous equations using Cramer's method.
The matrix form of the equation looks like this:
so $x=\dfrac{-174}{-58}=3$, $y=\dfrac{116}{-58}=-2$ and $z=\dfrac{-232}{-58}=4$
Sanity Check Put $x=3$, $y=-2$ and $z=4$ into one of the original equations.
If we use the first equation we get $5 \times 3-4\times(-2)-4 = 15+8-4 = 19$
so our answers are probably right.
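The same pattern extends mechanically to three variables. Here is a sketch using the cofactor expansion of $|A|$ given above, again with an illustrative system rather than the one from Example 10.2:

```python
def det3(m):
    # Cofactor expansion along the first row, as in the text.
    (a1, b1, c1), (a2, b2, c2), (a3, b3, c3) = m
    return (a1 * (b2 * c3 - c2 * b3)
            - b1 * (a2 * c3 - c2 * a3)
            + c1 * (a2 * b3 - b2 * a3))

def cramer_3x3(A, d):
    """Solve A [x, y, z]^T = d by Cramer's method."""
    D = det3(A)
    if D == 0:
        raise ValueError("No unique solution: D = 0")
    sols = []
    for col in range(3):
        Ak = [row[:] for row in A]
        for i in range(3):
            Ak[i][col] = d[i]      # replace one column with the constants
        sols.append(det3(Ak) / D)
    return sols

# Illustrative system: x + y + z = 6, 2x - y + z = 3, x + 2y - z = 2
print(cramer_3x3([[1, 1, 1], [2, -1, 1], [1, 2, -1]], [6, 3, 2]))  # -> [1.0, 2.0, 3.0]
```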
|
Given some normally distributed observations $x_1,x_2,...,x_n$
$\forall i\ x_i\sim\mathcal{N}(\mu, \sigma^2)$
the ML estimator decides that the variance that maximizes the likelihood function is (see here):
$\hat{\sigma^2}=\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2$
Now, I am trying to find the variance of this estimation:
$\sigma^2_{\hat{\sigma^2}}=Var[\hat{\sigma^2}]=Var[\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2]$
If we note that: $\hat{\sigma^2}=\frac{1}{n}\sum_{i=1}^{n}(x_i^2-2x_i\bar{x}+\bar{x}^2) \\ =\frac{1}{n}\sum_{i=1}^{n}x_i^2-2\bar{x}\frac{1}{n}\sum_{i=1}^{n}x_i+\frac{1}{n}\sum_{i=1}^{n}\bar{x}^2 \\ =\frac{1}{n}\sum_{i=1}^{n}x_i^2-2\bar{x}^2+\bar{x}^2 \\ =\frac{1}{n}\sum_{i=1}^{n}x_i^2-\bar{x}^2$
we have:
$\sigma^2_{\hat{\sigma^2}}=Var[\frac{1}{n}\sum_{i=1}^{n}x_i^2-\bar{x}^2]$
but I am stuck here since I think that $x_i$ and $\bar{x}$ are not independent in order to use the property that says that the variance of the sum is the sum of the variances.
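Before attacking the algebra, a quick Monte Carlo check can tell us what value $Var[\hat{\sigma^2}]$ should come out to. The sketch below draws many samples, computes $\hat{\sigma^2}$ for each, and estimates its mean and variance empirically (the sample size, $\sigma$, and trial count are arbitrary):

```python
import random

def mle_var(xs):
    """MLE of the variance: mean squared deviation (divide by n, not n - 1)."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / n

random.seed(0)
n, sigma, trials = 10, 1.0, 20000

# Draw many samples of size n and compute the MLE variance for each.
estimates = [mle_var([random.gauss(0.0, sigma) for _ in range(n)])
             for _ in range(trials)]

mean_est = sum(estimates) / trials
var_est = sum((e - mean_est) ** 2 for e in estimates) / trials
print(mean_est, var_est)
```

With these parameters the empirical mean comes out near $(n-1)\sigma^2/n = 0.9$ (reflecting the estimator's bias) and the empirical variance near $2(n-1)\sigma^4/n^2 = 0.18$, the known closed form for normal data — a useful target for the algebra.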
|
In order with the explicit questions:
Yes. Yes. No.
To answer the question I think you're attempting to ask, we can prove many things using type checking, but not everything. What does this have to do with programs? That's what the Curry-Howard correspondence tells us. The Curry-Howard correspondence is a relationship between logic and computational models. The informal version is "proofs are programs and programs are proofs". While the specifics are far too much to detail here, the rough idea is that a function (read program) that takes an input of type $A$ and produces an output of type $B$ is a proof that $A \rightarrow B$ (and yes, the function type arrow maps to the logical implication arrow).
Why can't we do everything? The good old halting problem. Some things we just can't check. Even worse, in practice, we're often limited to things we already
know we can check, i.e. programs that are of a subclass that we know always halt.
Just to give a more complicated example, I've written a proof/program for $1+\ldots n = \frac{n\cdot(n+1)}{2}$ in Coq. Coq is a bit different to Haskell, but the idea should be fairly apparent. In this case, Coq has the advantage that it's explicitly built for theorem proving, so we can get a "proof-like" version and a "program-like" version. The proof can only complete if all the types match correctly - i.e. if we can specify a way to get from the input type to the output type, in the normal type checking sense. To avoid using division, I've rephrased the lemma.
The proof-like version:
Fixpoint sum (n : nat) : nat :=
match n with
| 0 => 0
| S n' => n + sum n'
end.
Lemma sum_closed_form (n : nat) : 2 * sum n = n * (n + 1).
Proof.
induction n.
reflexivity.
assert (Step: sum (S n) = S n + sum n).
reflexivity.
rewrite Step.
rewrite mult_plus_distr_l.
rewrite IHn.
assert (Id : S n = n + 1).
omega.
rewrite Id.
repeat rewrite mult_plus_distr_l.
repeat rewrite mult_plus_distr_r.
omega.
Qed.
So apart from the individual steps being black boxes, it's fairly clear that this looks a lot like a normal proof - it is in fact an induction, and all the mystery black boxes are in fact just
other proofs that allow us to take some input type to another. One of the key steps in understanding how this works is to look closely at statement of the lemma:
Lemma sum_closed_form (n : nat) : 2 * sum n = n * (n + 1).
Notice that it looks a bit like a function declaration: it takes a parameter of type nat, and produces an output of type 2 * sum n = n * (n + 1). That is in fact what it is, so in this form we can see the proof bit, and get an idea of the function bit - i.e. the proof is in fact a program. More explicitly, it is this program:
sum_closed_form =
fun n : nat =>
nat_ind (fun n0 : nat => 2 * sum n0 = n0 * (n0 + 1)) eq_refl
(fun (n0 : nat) (IHn : 2 * sum n0 = n0 * (n0 + 1)) =>
(fun Step : sum (S n0) = S n0 + sum n0 =>
eq_ind_r (fun n1 : nat => 2 * n1 = S n0 * (S n0 + 1))
(eq_ind_r (fun n1 : nat => n1 = S n0 * (S n0 + 1))
(eq_ind_r (fun n1 : nat => 2 * S n0 + n1 = S n0 * (S n0 + 1))
((fun Id : S n0 = n0 + 1 => eq_ind_r
(fun n1 : nat => 2 * n1 + n0 * (n0 + 1) = n1 * (n1 + 1))
(eq_ind_r (fun n1 : nat =>
n1 + n0 * (n0 + 1) = (n0 + 1) * (n0 + 1 + 1))
(eq_ind_r (fun n1 : nat =>
2 * n0 + 2 * 1 + n1 = (n0 + 1) * (n0 + 1 + 1))
(eq_ind_r (fun n1 : nat =>
2 * n0 + 2 * 1 + (n0 * n0 + n0 * 1) = n1)
(eq_ind_r (fun n1 : nat =>
2 * n0 + 2 * 1 + (n0 * n0 + n0 * 1) = n1 + (n0 + 1) * 1)
(eq_ind_r (fun n1 : nat =>
2 * n0 + 2 * 1 + (n0 * n0 + n0 * 1) = n1 + (n0 + 1) * 1 + (n0 + 1) * 1)
(eq_ind_r (fun n1 : nat =>
2 * n0 + 2 * 1 + (n0 * n0 + n0 * 1) = n0 * n0 + 1 * n0 + n1 + n1)
(Decidable.dec_not_not
(2 * n0 + 2 * 1 + (n0 * n0 + n0 * 1) = n0 * n0 + 1 * n0 + (n0 * 1 + 1 * 1) +
(n0 * 1 + 1 * 1))
(dec_eq_nat (2 * n0 + 2 * 1 + (n0 * n0 + n0 * 1))
(n0 * n0 + 1 * n0 + (n0 * 1 + 1 * 1) + (n0 * 1 + 1 * 1)))
(fun H : 2 * n0 + 2 * 1 + (n0 * n0 + n0 * 1) <> n0 * n0 + 1 * n0 +
(n0 * 1 + 1 * 1) + (n0 * 1 + 1 * 1) =>
(fun (P : Z -> Prop) (H0 : P (Z.of_nat 2 * Z.of_nat (sum n0))%Z) =>
eq_ind_r P H0 (Nat2Z.inj_mul 2 (sum n0)))
(fun x : Z => x = Z.of_nat (n0 * (n0 + 1)) -> False)
((fun (P : Z -> Prop)
(H0 : P (Z.of_nat n0 * Z.of_nat (n0 + 1))%Z) =>
eq_ind_r P H0 (Nat2Z.inj_mul n0 (n0 + 1)))
(fun x : Z => (2 * Z.of_nat (sum n0))%Z = x -> False)
((fun (P : Z -> Prop) (H0 : P (Z.of_nat n0 + Z.of_nat 1)%Z) =>
eq_ind_r P H0 (Nat2Z.inj_add n0 1))
(fun x : Z => (2 * Z.of_nat (sum n0))%Z =
(Z.of_nat n0 * x)%Z -> False)
(fun _ : (2 * Z.of_nat (..))%Z =
(Z.of_nat n0 * (.. + 1))%Z =>
(fun (P : Z -> Prop) (H0 : P (..)%Z) =>
eq_ind_r P H0 (Nat2Z.inj_add (..) (..)))
(fun x : Z => Z.of_nat (..) = x -> False)
((fun (P : ..) (H0 : ..) => eq_ind_r P H0 (..))
(fun x : Z => .. = ..%Z -> False)
(fun _ : .. => (..) (..) (..) (..)))
(inj_eq (sum (S n0)) (S n0 + sum n0) Step))))
(inj_eq (2 * sum n0)
(n0 * (n0 + 1)) IHn)))
(mult_plus_distr_r n0 1 1))
(mult_plus_distr_r n0 1 n0))
(mult_plus_distr_l (n0 + 1) n0 1))
(mult_plus_distr_l (n0 + 1) (n0 + 1) 1))
(mult_plus_distr_l n0 n0 1))
(mult_plus_distr_l 2 n0 1)) Id)
(Decidable.dec_not_not (S n0 = n0 + 1)
(dec_eq_nat (S n0) (n0 + 1))
(fun H : S n0 <> n0 + 1 =>
(fun (P : Z -> Prop)
(H0 : P (Z.of_nat 2 * Z.of_nat (sum n0))%Z) =>
eq_ind_r P H0 (Nat2Z.inj_mul 2 (sum n0)))
(fun x : Z => x = Z.of_nat (n0 * (n0 + 1)) -> False)
((fun (P : Z -> Prop)
(H0 : P (Z.of_nat n0 * Z.of_nat (n0 + 1))%Z) =>
eq_ind_r P H0 (Nat2Z.inj_mul n0 (n0 + 1)))
(fun x : Z => (2 * Z.of_nat (sum n0))%Z = x -> False)
((fun (P : Z -> Prop)
(H0 : P (Z.of_nat n0 + Z.of_nat 1)%Z) =>
eq_ind_r P H0 (Nat2Z.inj_add n0 1))
(fun x : Z => (2 * Z.of_nat (sum n0))%Z = (Z.of_nat n0 * x)%Z -> False)
(fun _ : (2 * Z.of_nat (sum n0))%Z =
(Z.of_nat n0 * (Z.of_nat n0 + 1))%Z =>
(fun (P : Z -> Prop) (H0 : P (Z.of_nat (S n0) + Z.of_nat (sum n0))%Z) =>
eq_ind_r P H0 (Nat2Z.inj_add (S n0) (sum n0)))
(fun x : Z => Z.of_nat (sum (S n0)) = x -> False)
((fun (P : Z -> Prop) (H0 : P (Z.succ (Z.of_nat n0))) =>
eq_ind_r P H0 (Nat2Z.inj_succ n0))
(fun x : Z => Z.of_nat (sum (S n0)) = (x + Z.of_nat (sum n0))%Z -> False)
(fun _ : Z.of_nat (sum (S n0)) =
(Z.succ (Z.of_nat n0) + Z.of_nat (sum n0))%Z =>
(fun (P : Z -> Prop) (H0 : P (Z.succ (Z.of_nat n0))) =>
eq_ind_r P H0 (Nat2Z.inj_succ n0))
(fun x : Z => Zne x (Z.of_nat (n0 + 1)) -> False)
((fun (P : Z -> Prop) (H0 : P (Z.of_nat n0 + Z.of_nat 1)%Z) =>
eq_ind_r P H0 (Nat2Z.inj_add n0 1))
(fun x : Z => Zne (Z.succ (Z.of_nat n0)) x -> False)
(fun H0 : Zne (Z.succ (Z.of_nat n0))
(Z.of_nat n0 + 1) => ex_ind (fun (Zvar29 : Z)
(Omega75 : Z.of_nat n0 = Zvar29 /\ (0 <= Zvar29 * 1 + 0)%Z) =>
and_ind (fun (Omega66 : Z.of_nat n0 = Zvar29)
(_ : (0 <= Zvar29 * 1 + 0)%Z) =>
ex_ind (fun (Zvar30 : Z) (Omega74 : .. /\ ..%Z) =>
and_ind (.. => ..) Omega74) (intro_Z (sum (..)))) Omega75)
(intro_Z n0))) (inj_neq (S n0) (n0 + 1) H)))
(inj_eq (sum (S n0)) (S n0 + sum n0) Step))))
(inj_eq (2 * sum n0) (n0 * (n0 + 1)) IHn)))) IHn)
(mult_plus_distr_l 2 (S n0) (sum n0))) Step) eq_refl) n
: forall n : nat, 2 * sum n = n * (n + 1)
To give a shorter example, the associativity of addition can be proved like so:
Lemma add_assoc (n m p : nat) : n + (m + p) = (n + m) + p.
Proof.
induction n.
reflexivity.
simpl. rewrite IHn.
reflexivity.
Qed.
The associated function is:
add_assoc =
fun n m p : nat =>
nat_ind (fun n0 : nat => n0 + (m + p) = n0 + m + p) eq_refl
(fun (n0 : nat) (IHn : n0 + (m + p) = n0 + m + p) =>
eq_ind_r (fun n1 : nat => S n1 = S (n0 + m + p)) eq_refl IHn) n
: forall n m p : nat, n + (m + p) = n + m + p
|
It is well known that, given a sphere, the maximum number of identical spheres that we can pack around it is exactly 12, corresponding to a face centered cubic or hexagonal close packed lattice.
My question is: given a sphere of radius $R$, how many spheres of radius $r<R$ can we closely pack around it?
With disks, the problem is rather easy to solve. Indeed, with reference to the picture at the bottom, we can see that we must have
$$\theta = \frac{2 \pi} n = 2 \arctan \left( \frac r {\sqrt{R^2+2 R r}} \right)$$
from which
$$n = \left \lfloor \frac \pi {\arctan \left( \frac r {\sqrt{R^2+2 R r}} \right)}\right \rfloor$$
The last expression gives the correct result for $R=r$, namely $n=6$ (hexagonal lattice). Moreover, when $R \gg r$, we get
$$n \simeq \left \lfloor \frac {\pi R} {r}\right \rfloor$$
which is completely reasonable.
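The disk formula is easy to evaluate numerically. A small Python sketch reproducing the checks above (the small epsilon is a guard added for floating-point roundoff when $\pi/\theta$ lands exactly on an integer, as it does for $R=r$):

```python
from math import atan, floor, pi, sqrt

def n_disks(R, r):
    """Number of disks of radius r that fit around a central disk of radius R."""
    theta_half = atan(r / sqrt(R * R + 2 * R * r))
    # Epsilon guards against roundoff when pi / theta_half is an exact integer.
    return floor(pi / theta_half + 1e-9)

print(n_disks(1, 1))    # -> 6, the hexagonal case R = r
print(n_disks(100, 1))  # -> 317, close to floor(pi * R / r) = 314 for R >> r
```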
How can I tackle the same problem in the 3D case (spheres)?
It is clear that for $R \gg r$ we must get
$$n \simeq \left \lfloor \frac {4 \pi R^2} {\pi r^2}\right \rfloor$$
and also that we must have $n(R=r)=12$.
Any hint/suggestion is appreciated.
|
A) \[17\sqrt{3}\] units
B) \[36\]units
C) \[72\]units
D) \[48\sqrt{3}\] units
A) 11 cm
B) 13 cm
C) 15 cm
D) 14 cm
A) \[20\text{ }c{{m}^{2}}\]
B) \[21\text{ }c{{m}^{2}}\]
C) \[19\text{ }c{{m}^{2}}\]
D) \[18\text{ }c{{m}^{2}}\]
A) 11 m
B) 22 m
C) 15.75 m
D) 10.75 m
A) 264 cm
B) 352 cm
C) 500 cm
D) 528 cm
A) \[346.5\text{ }c{{m}^{2}}\]
B) \[340\text{ }c{{m}^{2}}\]
C) \[355\text{ }c{{m}^{2}}\]
D) \[342\text{ }c{{m}^{2}}\]
A) \[378\,{{m}^{2}}\]
B) \[438\,{{m}^{2}}\]
C) \[786\,{{m}^{2}}\]
D) None of these
A) \[\pi {{k}^{2}}\]
B) \[3\pi {{k}^{2}}\]
C) \[5\pi {{k}^{2}}\]
D) \[2\pi {{k}^{2}}\]
A) 75%
B) 100%
C) 125%
D) 150%
A) \[\left( 100-25\pi \right)\text{ }c{{m}^{2}}\]
B) \[\left( 100-50\pi \right)c{{m}^{2}}\]
C) \[\left( 50\pi -100 \right)c{{m}^{2}}\]
D) \[\left( 25\pi -100 \right)\text{ }c{{m}^{2}}\]
Question 11: An equilateral triangle has a circle inscribed in it and is circumscribed by a circle. There is another equilateral triangle inscribed in the inner circle. What is the ratio of the areas of the outer circle and the inner equilateral triangle?
A) \[\frac{16\pi }{3\sqrt{3}}\]
B) \[\frac{8\pi }{2\sqrt{3}}\]
C) \[\frac{24\pi }{3\sqrt{3}}\]
D) None of these
A) \[\frac{\pi +4{{k}^{2}}}{4}\]
B) \[\frac{(\pi +4)}{4}{{k}^{2}}\]
C) \[\left( \frac{\pi +8}{4} \right){{k}^{2}}\]
D) \[2(\pi +4){{k}^{2}}\]
A) \[\frac{49}{4}(\pi -2)\]
B) \[\frac{49}{4}(\pi -1)\]
C) \[\frac{49}{4}(\pi -3)\]
D) \[\frac{49}{2}(\pi -2)\]
Question 14: A circular garden of radius 15 m is surrounded by a circular path of width 7 m. If the path is to be covered with tiles at a rate of Rs. 10 per \[{{m}^{2}}\], then what is the total cost of the work?
A) Rs. 8410
B) Rs. 7140
C) Rs. 8140
D) Rs. 7410
A) \[(400-100\pi )\]
B) \[(360-100\pi )\]
C) \[(231-100\pi )\]
D) \[(400-50\pi )\]
A) \[\frac{\pi }{\sqrt{3}}{{a}^{2}}\]
B) \[\frac{\pi {{a}^{2}}}{16}\]
C) \[\frac{\left( \pi -\sqrt{2} \right)}{4}{{a}^{2}}\]
D) \[\frac{2\sqrt{3}}{\pi }{{a}^{2}}\]
A) \[1100\sqrt{3}\,c{{m}^{2}}\]
B) \[1323\sqrt{3}\,c{{m}^{2}}\]
C) \[1369\sqrt{3}\,c{{m}^{2}}\]
D) \[1442\sqrt{3}\,c{{m}^{2}}\]
Question 18: In the given figure, ABC is an equilateral triangle with C as the centre of the circle. A and B lie on the circle. What is the area of the shaded region, if the diameter of the circle is 28 cm?
A) \[\left( 102\frac{2}{3}-49\sqrt{3} \right)c{{m}^{2}}\]
B) \[\left( 103\frac{2}{3}-4998\sqrt{3} \right)c{{m}^{2}}\]
C) \[\left( 109-38\sqrt{3} \right)cm\]
D) None of these
Question 19: A square shaped bus shelter is supported on four circular poles. The circumference of each pole is 'x' m and the length of each side of the shelter is 'y' m. Find the area of the unsupported part of the shelter.
A) \[\left( {{x}^{2}}-\frac{{{y}^{2}}}{\pi } \right){{m}^{2}}\]
B) \[\left( {{y}^{2}}+\frac{{{x}^{2}}}{\pi } \right){{m}^{2}}\]
C) \[\left( {{x}^{2}}+\frac{{{y}^{2}}}{\pi } \right){{m}^{2}}\]
D) \[\left( {{y}^{2}}-\frac{{{x}^{2}}}{\pi } \right){{m}^{2}}\]
Four identical semicircles are drawn inside a big square as shown in the figure. Each side of the big square is 14 cm long.
A) \[125\text{ }c{{m}^{2}}\]
B) \[112\text{ }c{{m}^{2}}\]
C) \[173\text{ }c{{m}^{2}}\]
D) \[159\text{ }c{{m}^{2}}\]
The figure given is made up of a rectangle, identical semicircle(s) and quadrant (s).
A) \[1350\text{ }c{{m}^{2}}\]
B) \[1154\text{ }c{{m}^{2}}\]
C) \[1400\text{ }c{{m}^{2}}\]
D) \[{{21}^{2}}\times \left( 6-\pi \right)\text{ }c{{m}^{2}}\]
Question 22: An ink pen, with a cylindrical barrel of diameter 2 cm and height 10.5 cm, completely filled with ink, can be used to write 4950 words. How many words can be written using 400 ml of ink?
A) 40000
B) 60000
C) 450000
D) 80000
A) \[13.2\text{ }{{m}^{2}}\]
B) \[14.2\text{ }{{m}^{2}}\]
C) \[13.4\text{ }{{m}^{2}}\]
D) \[14.4\text{ }{{m}^{2}}\]
A) \[\frac{100-36\pi }{41}\]
B) \[\frac{100-25\pi }{8}\]
C) \[\frac{100+25\pi }{8}\]
D) None of these
The given figure shows an isosceles triangle and a semicircle with centre O.
A) 15.6 cm
B) 18.8 cm
C) 16.8 cm
D) 20.4 cm
Question 26: The given figure is made up of a circle and three identical semicircles. If O is the centre and XY is the diameter of the bigger circle, and XY is equal to 28 cm, what is the perimeter of the shaded part?
A) 67 cm
B) 50 cm
C) 80 cm
D) 15 cm
A) \[15\text{ }c{{m}^{2}}\]
B) \[21\,c{{m}^{2}}\]
C) \[16\,c{{m}^{2}}\]
D) \[23\,c{{m}^{2}}\]
A) 5 m
B) 6 m
C) 7 m
D) 8 m
Question 29: A circle of radius 'b' is divided into 6 equal sectors. An equilateral triangle is drawn on the chord of each sector to lie outside the circle. What is the area of the resulting figure?
A) \[3{{b}^{2}}\left( \pi +\sqrt{3} \right)\]
B) \[3\sqrt{3}{{b}^{2}}\]
C) \[3\left( {{b}^{2}}\sqrt{3}+\pi \right)\]
D) \[\frac{3\sqrt{3}\pi {{b}^{2}}}{2}\]
A) \[115.5\text{ }c{{m}^{2}}\]
B) \[228.5\text{ }c{{m}^{2}}\]
C) \[154\text{ }c{{m}^{2}}\]
D) None of these
A) \[66\text{ }cm\]
B) \[66\sqrt{3}\,cm\]
C) \[66\sqrt{2}\,cm\]
D) None of these
A) \[\frac{1}{4}\left( \pi -2{{a}^{2}} \right)\]
B) \[\left( \frac{1}{4} \right)\left( \pi {{a}^{2}}-{{a}^{2}} \right)\]
C) \[\frac{{{a}^{2}}}{2}\]
D) \[{{a}^{2}}\left( \frac{\pi -2}{2} \right)\]
Question 33: There are two circles intersecting each other. Another smaller circle with centre O lies in the common region of the two larger circles. The centres of the circles (i.e., A, O and B) lie on a straight line. If AB = 16 cm and the radii of the larger circles are 10 cm each, what is the area of the smaller circle?
A) \[4\pi \text{ }c{{m}^{2}}\]
B) \[2\pi \text{ }c{{m}^{2}}\]
C) \[\frac{4}{\pi }\text{ }c{{m}^{2}}\]
D) \[\frac{\pi }{4}\text{ }c{{m}^{2}}\]
A) \[(2\pi -3)c{{m}^{2}}\]
B) \[(4-\pi )c{{m}^{2}}\]
C) \[(16-4\pi )c{{m}^{2}}\]
D) None of these
A) \[\left( \frac{2\sqrt{3}-\pi }{2} \right)c{{m}^{2}}\]
B) \[\left( \frac{3\sqrt{2}-\pi }{3} \right)c{{m}^{2}}\]
C) \[\frac{2\sqrt{3}}{\pi }c{{m}^{2}}\]
D) \[\frac{\sqrt{6}}{2\pi }c{{m}^{2}}\]
Question 36: A circular paper is folded along its diameter; then again it is folded to form a quadrant. Then it is cut as shown in the figure and after that the paper is reopened in the original circular shape. What is the ratio of the original paper to that of the remaining paper? (The shaded portion is cut off from the quadrant. The radius of quadrant OAB is 5 cm and the radius of each semicircle is 1 cm.)
A) \[25:16\]
B) \[25:9\]
C) \[20:9\]
D) \[31:25\]
Question 37: ABCD is a square. A circle is inscribed in the square. Also, taking A, B, C, D (the vertices of the square) as the centres, four quadrants are drawn, which touch each other at the mid-points of the sides of the square. If the area of the square is 4 cm², what is the area of the shaded region?
A) \[\left( 4-\frac{3\pi }{2} \right)c{{m}^{2}}\]
B) \[(2\pi -4)c{{m}^{2}}\]
C) \[(4-2\pi )c{{m}^{2}}\]
D) \[\left( \frac{7-3\pi }{2} \right)c{{m}^{2}}\]
Question 38: In the adjoining diagram, ABCD is a square with side 'a' cm. In the diagram the area of the larger circle with centre 'O' is equal to the sum of the areas of the other four circles with equal radii, whose centres are P, Q, R and S. What is the ratio between the diagonal of the square and the radius of a smaller circle?
A) \[\left( 2\sqrt{2}+3 \right)\]
B) \[\left( 2+3\sqrt{2} \right)\]
C) \[\left( 4+3\sqrt{2} \right)\]
D) can't be determined
Question 39: In the figure, ABC is a right-angled triangle, \[\angle B=90{}^\circ ,AB=28\text{ }cm\] and \[BC=21\text{ }cm\]. With AC as diameter, a semicircle is drawn and with BC as radius, a quarter-circle is drawn. Find the area of the shaded region correct to two decimal places.
A) \[428.75\,c{{m}^{2}}\]
B) \[857.50\,c{{m}^{2}}\]
C) \[214.37\,c{{m}^{2}}\]
D) \[371.56\,c{{m}^{2}}\]
A) \[(\pi /3){{\left( 2+\sqrt{3} \right)}^{2}}\]
B) \[6\pi {{\left( 2+\sqrt{3} \right)}^{2}}\]
C) \[3\pi {{\left( 2+\sqrt{3} \right)}^{2}}\]
D) \[\left( \frac{\pi }{6} \right){{\left( 2+\sqrt{3} \right)}^{2}}\]
|
Suppose that Alice receives a subset $S \subseteq \{1,\dots,n\}$ and Bob receives $T \subseteq \{1,\dots,n\}$. It is promised that $\lvert S \cap T \rvert = 1$. What is the randomized communication complexity of determining the common element $S \cap T$?
My interest in this is as follows. The zero-communication cost of this problem is $\log n$ since Alice and Bob can just guess $S \cap T$ using public coins and abort if they guess wrong. However, I can't think of an $O(\log n)$ cost communication protocol. Since it is not known whether zero-communication cost is much less than randomized communication cost, I am thinking that I am missing something obvious here.
Zero-communication cost is defined as follows. After Alice and Bob receive their inputs, they must not communicate at all. However, they share public coins, and they are allowed to answer with "abort". If neither party aborts, they must provide the correct answer with $2/3$ probability. The zero-communication cost is the negative log of the probability of not aborting. In arxiv:1204.1505 it is shown (among other things) that nearly all known lower bounds on communication complexity are in fact lower bounds for zero-communication.
Update: @Shitikanth showed that the communication complexity is $\Omega(n)$. So, apparently this gives a gap between communication cost and zero-communication cost. But arxiv:1204.1505 seems to give the impression that no such gap is known. What am I missing?
|
Images are essential elements in most scientific documents. LaTeX provides several options to handle images and make them look exactly the way you need. This article explains how to include images in the most common formats, how to shrink, enlarge and rotate them, and how to reference them within your document.
Contents
Below is an example of how to import a picture.
\documentclass{article} \usepackage{graphicx} \graphicspath{ {./images/} } \begin{document} The universe is immense and it seems to be homogeneous, in a large scale, everywhere we look at. \includegraphics{universe} There's a picture of a galaxy above \end{document}
LaTeX cannot manage images by itself, so we need to use the
graphicx package. To use it, we include the following line in the preamble:
\usepackage{graphicx}
The command
\graphicspath{ {./images/} } tells LaTeX that the images are kept in a folder named
images under the directory of the main document.
The
\includegraphics{universe} command is the one that actually includes the image in the document. Here
universe is the name of the file containing the image without the extension, so universe.PNG becomes universe. The file name of the image should not contain white spaces or multiple dots. Note: the file extension is allowed to be included, but it's a good idea to omit it. If the file extension is omitted, LaTeX will search for all the supported formats. For more details see the section about generating high resolution and low resolution images.
When working on a document which includes several images it's possible to keep those images in one or more separate folders so that your project is more organised.
The command
\graphicspath{ {images/} } tells LaTeX to look in the
images folder. The path is relative to the current working directory, so the compiler will look for the file in the same folder as the code where the image is included. The path to the folder is relative by default if no initial directory is specified, for instance:
%Path relative to the .tex file containing the \includegraphics command \graphicspath{ {images/} }
This is a typically straightforward way to reach the graphics folder within a file tree, but it can lead to complications when .tex files within folders are included in the main .tex file. Then, the compiler may end up looking for the images folder in the wrong place. Thus,
it is best practice to specify the graphics path to be relative to the main .tex file, denoting the main .tex file directory as
./ , for instance
%Path relative to the main .tex file \graphicspath{ {./images/} }
as in the introduction.
The path can also be absolute, if the exact location of the file on your system is specified. For example:
%Path in Windows format: \graphicspath{ {c:/user/images/} } %Path in Unix-like (Linux, Mac OS) format \graphicspath{ {/home/user/images/} }
Notice that this command requires a trailing slash
/ and that the path is in between double braces.
You can also set multiple paths if the images are saved in more than one folder. For instance, if there are two folders named
images1 and images2, use the command.
\graphicspath{ {./images1/}{./images2/} }
If no path is set, LaTeX will look for pictures in the folder where the .tex file including the image is saved.
If we want to further specify how LaTeX should include our image in the document (length, height, etc), we can pass those settings in the following format:
\begin{document} Overleaf is a great professional tool to edit online, share and backup your \LaTeX{} projects. Also offers a rather large help documentation. \includegraphics[scale=1.5]{lion-logo}
The command
\includegraphics[scale=1.5]{lion-logo} will include the image
lion-logo in the document, the extra parameter
scale=1.5 will do exactly that: scale the image to 1.5 times its real size.
You can also scale the image to some specific width and height.
\begin{document} Overleaf is a great professional tool to edit online, share and backup your \LaTeX{} projects. Also offers a rather large help documentation. \includegraphics[width=3cm, height=4cm]{lion-logo}
As you probably have guessed, the parameters inside the brackets
[width=3cm, height=4cm] define the width and the height of the picture. You can use different units for these parameters. If only the
width parameter is passed, the height will be scaled to keep the aspect ratio.
The length units can also be relative to some elements in the document. If you want, for instance, to make a picture the same width as the text:
\begin{document} The universe is immense and it seems to be homogeneous, in a large scale, everywhere we look at. \includegraphics[width=\textwidth]{universe}
Instead of
\textwidth you can use any other default LaTeX length:
\columnsep, \linewidth, \textheight, \paperheight, etc. See the reference guide for a further description of these units.
There is another common option when including a picture within your document, to
rotate it. This can easily be accomplished in LaTeX:
\begin{document} Overleaf is a great professional tool to edit online, share and backup your \LaTeX{} projects. Also offers a rather large help documentation. \includegraphics[scale=1.2, angle=45]{lion-logo}
The parameter
angle=45 rotates the picture 45 degrees counter-clockwise. To rotate the picture clockwise use a negative number.
The previous section explained how to include images in your document, but the combination of text and images may not look the way we expected. To change this we need to introduce a new
environment.
In the next example the figure will be positioned right below this sentence. \begin{figure}[h] \includegraphics[width=8cm]{Plot} \end{figure}
The
figure environment is used to display pictures as floating elements within the document. This means you include the picture inside the
figure environment and you don't have to worry about its placement; LaTeX will position it in such a way that it fits the flow of the document.
Anyway, sometimes we need to have more control on the way the figures are displayed. An additional parameter can be passed to determine the figure positioning. In the example,
\begin{figure}[h], the parameter inside the brackets sets the position of the figure to here. Below is a table listing the possible positioning values.
Parameter — Position
h — Place the float here, i.e., approximately at the same point it occurs in the source text (however, not exactly at that spot).
t — Position at the top of the page.
b — Position at the bottom of the page.
p — Put on a special page for floats only.
! — Override internal parameters LaTeX uses for determining "good" float positions.
H — Place the float at precisely the location in the LaTeX code. Requires the float package, though it may cause problems occasionally. This is somewhat equivalent to h!.
In the next example you can see a picture at the
top of the document, despite being declared below the text.
In this picture you can see a bar graph that shows the results of a survey which involved some important data studied as time passed. \begin{figure}[t] \includegraphics[width=8cm]{Plot} \centering \end{figure}
The additional command
\centering will centre the picture. The default alignment is
left.
It's also possible to
wrap the text around a figure. When the document contains small pictures this makes it look better.
\begin{wrapfigure}{r}{0.25\textwidth} %this figure will be at the right \centering \includegraphics[width=0.25\textwidth]{mesh} \end{wrapfigure} There are several ways to plot a function of two variables, depending on the information you are interested in. For instance, if you want to see the mesh of a function so it easier to see the derivative you can use a plot like the one on the left. \begin{wrapfigure}{l}{0.25\textwidth} \centering \includegraphics[width=0.25\textwidth]{contour} \end{wrapfigure} On the other side, if you are only interested on certain values you can use the contour plot, you can use the contour plot, you can use the contour plot, you can use the contour plot, you can use the contour plot, you can use the contour plot, you can use the contour plot, like the one on the left. On the other side, if you are only interested on certain values you can use the contour plot, you can use the contour plot, you can use the contour plot, you can use the contour plot, you can use the contour plot, you can use the contour plot, you can use the contour plot, like the one on the left.
For the commands in the example to work, you have to import the package
wrapfig. Add to the preamble the line
\usepackage{wrapfig}.
Now you can define the
wrapfigure environment by means of the commands
\begin{wrapfigure}{l}{0.25\textwidth} \end{wrapfigure}. Notice that the environment has two additional parameters enclosed in braces. Below the code is explained with more detail:
{l} — aligns the figure on the left side of the text; use r instead for the right side.
{0.25\textwidth} — the width of the figure box, here a quarter of the text width.
\centering — centres the picture inside the wrapfigure box.
For a more complete article about image positioning see Positioning images and tables
Captioning images to add a brief description and labelling them for further reference are two important tools when working on a lengthy text.
Let's start with a caption example:
\begin{figure}[h] \caption{Example of a parametric plot ($\sin (x), \cos(x), x$)} \centering \includegraphics[width=0.5\textwidth]{spiral} \end{figure}
It's really easy, just add the
\caption{Some caption} command and inside the braces write the text to be shown. The placement of the caption depends on where you place the command: if it's above the
\includegraphics then the caption will be on top of it, and if it's below then the caption will also be set below the figure.
Captions can also be placed to the side of the figures. The
sidecap package uses similar code to the one in the previous example to accomplish this.
\documentclass{article} \usepackage[rightcaption]{sidecap} \usepackage{graphicx} %package to manage images \graphicspath{ {images/} } \begin{SCfigure}[0.5][h] \caption{Using again the picture of the universe. This caption will be on the right} \includegraphics[width=0.6\textwidth]{universe} \end{SCfigure}
There are two new commands
\usepackage[rightcaption]{sidecap}
rightcaption. This parameter establishes the placement of the caption at the right of the picture, you can also use
\begin{SCfigure}[0.5][h] \end{SCfigure}
h works exactly as in the
You can do a more advanced management of the caption formatting. Check the further reading section for references.
Figures, just as many other elements in a LaTeX document (equations, tables, plots, etc) can be referenced within the text. This is very easy, just add a
label to the figure or SCfigure environment, then later use that label to refer the picture.
\begin{figure}[h]
  \centering
  \includegraphics[width=0.25\textwidth]{mesh}
  \caption{a nice plot}
  \label{fig:mesh1}
\end{figure}
As you can see in figure \ref{fig:mesh1}, the function grows near 0. Also, on page \pageref{fig:mesh1} is the same example.
There are three commands that generate cross-references in this example.
\label{fig:mesh1}
\ref{fig:mesh1}
\pageref{fig:mesh1}
The
\caption is mandatory to reference a figure.
Another great characteristic in a LaTeX document is the ability to automatically generate a
list of figures. This is straightforward with the \listoffigures command.
This command only works on captioned figures, since it uses the caption in the table. The example above lists the images in this article.
Important Note:
When using cross-references your LaTeX project must be compiled twice, otherwise the references, the page references and the table of figures won't work.
So far while specifying the image file name in the
\includegraphics command we have omitted file extensions. However, that is not necessary, though it is often useful. If the file extension is omitted, LaTeX will search for any supported image format in that directory, and will search for various extensions in the default order (which can be modified).
This is useful in switching between development and production environments. In a development environment (when the article/report/book is still in progress), it is desirable to use low-resolution versions of images (typically in .png format) for fast compilation of the preview. In the production environment (when the final version of the article/report/book is produced), it is desirable to include the high-resolution version of the images.
This is accomplished by
Thus, if we have two versions of an image, venndiagram.pdf (high-resolution) and venndiagram.png (low-resolution), then we can include the following line in the preamble to use the .png version while developing the report -
\DeclareGraphicsExtensions{.png,.pdf}
The command above will ensure that if two files are encountered with the same base name but different extensions (for example venndiagram.pdf and venndiagram.png), then the .png version will be used first, and in its absence the .pdf version will be used. This fallback is also useful if some low-resolution versions are not available.
Once the report has been developed, to use the high-resolution .pdf version, we can change the line in the preamble specifying the extension search order to
\DeclareGraphicsExtensions{.pdf,.png}
Improving on the technique described in the previous paragraphs, we can also instruct LaTeX to generate low-resolution .png versions of images on the fly while compiling the document if there is a PDF that has not been converted to PNG yet. To achieve that, we can include the following in the preamble after
\usepackage{graphicx}
\usepackage{epstopdf}
\epstopdfDeclareGraphicsRule{.pdf}{png}{.png}{convert #1 \OutputFile}
\DeclareGraphicsExtensions{.png,.pdf}
If venndiagram2.pdf exists but not venndiagram2.png, the file venndiagram2-pdf-converted-to.png will be created and loaded in its place. The command
convert #1 is responsible for the conversion and additional parameters may be passed between convert and #1. For example - convert -density 100 #1.
There are some important things to have in mind though:
The on-the-fly conversion runs an external program, so the document must be compiled with the --shell-escape option.
In the production version, remove the \epstopdfDeclareGraphicsRule, so that only high-resolution PDF files are loaded. We'll also need to change the order of precedence.
LaTeX units and lengths
Abbreviation   Definition
pt             a point, the default length unit; about 0.3515 mm
mm             a millimetre
cm             a centimetre
in             an inch
ex             the height of an "x" in the current font
em             the width of an "m" in the current font
\columnsep     distance between columns
\columnwidth   width of the column
\linewidth     width of the line in the current environment
\paperwidth    width of the page
\paperheight   height of the page
\textwidth     width of the text
\textheight    height of the text
\unitlength    unit of length in the picture environment

About image types in LaTeX:
JPG: best choice if we want to insert photos
PNG: best choice if we want to insert diagrams (if a vector version could not be generated) and screenshots
PDF: even though we are used to seeing PDF documents, a PDF can also store images
EPS: EPS images can be included using the epstopdf package (we just need to install the package; we don't need to use \usepackage{} to include it in our document)
For more information see
This vignette describes methods to analyse all possible centrality rankings of a network at once. To do so, a partial ranking as computed from neighborhood-inclusion or, more generally, positional dominance is needed. In this vignette we focus on neighborhood-inclusion but note that all considered methods are readily applicable to positional dominance. For more examples consult the tutorial.
Neighborhood-inclusion induces a partial ranking on the vertices of a graph \(G=(V,E)\). We write \(u\leq v\) if \(N(u)\subseteq N[v]\) holds for two vertices \(u,v \in V\). From the fact that \[u\leq v \implies c(u) \leq c(v)\] holds for any centrality index \(c:V\to \mathbb{R}\), we can characterize the set of all
possible centrality based node rankings. Namely, as the set of rankings that extend the partial ranking “\(\leq\)” to a (complete) ranking. A node ranking can be defined as a mapping \[rk: V \to \{1,\ldots,n\},\] where we use the convention that \(u\) is the top ranked node if \(rk(u)=n\) and the bottom ranked one if \(rk(u)=1\). The set of all possible rankings can then be characterized as \[ \mathcal{R}(\leq)=\{rk:V \to \{1,\ldots,n\}\; : \; u\leq v \implies rk(u)\leq rk(v)\}. \] This set contains all rankings that could be obtained with a centrality index. Once \(\mathcal{R}(\leq)\) is calculated, it can be used for a probabilistic assessment of centrality, analyzing all possible rankings at once. Examples include relative rank probabilities (How likely is it that a node \(u\) is more central than another node \(v\)?) or expected ranks (How central do we expect a node \(u\) to be?). It must be noted, though, that deriving the set \(\mathcal{R}(\leq)\) quickly becomes infeasible for larger networks, and one has to resort to approximation methods. These and more theoretical details can be found in
Schoch, David. (2018). Centrality without Indices: Partial rankings and rank Probabilities in networks.
Social Networks, 54, 50-60.(link)
netrankr Package
library(netrankr)
library(igraph)
library(magrittr)
Before calculating any probabilities consider the following example graph and the rankings induced by various centrality indices, shown as rank intervals (consult this vignette for details).
g <- graph.empty(n=11, directed = FALSE)
g <- add_edges(g, c(1,11, 2,4, 3,5, 3,11, 4,8, 5,9, 5,11, 6,7, 6,8,
                    6,10, 6,11, 7,9, 7,10, 7,11, 8,9, 8,10, 9,10))
V(g)$name <- LETTERS[1:11]

# neighborhood inclusion
P <- g %>% neighborhood_inclusion()
# without %>% operator:
# P <- neighborhood_inclusion(g)

cent_scores <- data.frame(
  degree = degree(g),
  betweenness = round(betweenness(g), 4),
  closeness = round(closeness(g), 4),
  eigenvector = round(eigen_centrality(g)$vector, 4),
  subgraph = round(subgraph_centrality(g), 4))

plot_rank_intervals(P, cent.df = cent_scores)
Notice how all five centrality indices rank a different vertex as the most central one.
In the following subsections the output of the function
exact_rank_prob() is described, which may help to circumvent the potential arbitrariness of index-induced rankings. But first, let us briefly look at all the return values.
res <- exact_rank_prob(P)
str(res)
## List of 6## $ lin.ext : num 739200## $ mse : int [1:11] 1 2 3 4 5 6 7 8 9 10 ...## $ rank.prob : num [1:11, 1:11] 0.545 0.273 0 0 0 ...## ..- attr(*, "dimnames")=List of 2## .. ..$ : chr [1:11] "A" "B" "C" "D" ...## .. ..$ : chr [1:11] "1" "2" "3" "4" ...## $ relative.rank: num [1:11, 1:11] 0 0.3333 0 0.0476 0 ...## ..- attr(*, "dimnames")=List of 2## .. ..$ : chr [1:11] "A" "B" "C" "D" ...## .. ..$ : chr [1:11] "A" "B" "C" "D" ...## $ expected.rank: Named num [1:11] 1.71 3 4.29 7.5 8.14 ...## ..- attr(*, "names")= chr [1:11] "A" "B" "C" "D" ...## $ rank.spread : Named num [1:11] 0.958 1.897 1.725 2.54 2.16 ...## ..- attr(*, "names")= chr [1:11] "A" "B" "C" "D" ...
The return value
lin.ext gives the number of possible rankings that are in accordance with the partial ranking
P. The
names vector returns the names of nodes if they were supplied with the
names parameter. Otherwise, node ids are returned as a character vector. The vector
mse returns the equivalence classes of
P. Nodes \(u\) and \(v\) are equivalent if \(N(u)\subseteq N[v]\) and \(N(v)\subseteq N[u]\) holds. The remaining return values are discussed in the following.
Instead of insisting on fixed ranks of nodes as given by indices, we can use
rank probabilities to assess the likelihood of certain ranks. Formally, rank probabilities are simply defined as \[P(rk(u)=k)=\frac{\lvert \{rk \in \mathcal{R}(\leq) \; : \; rk(u)=k\} \rvert}{\lvert \mathcal{R}(\leq) \rvert}.\] Rank probabilities are given by the return value
rank.prob of the
exact_rank_prob() function.
rp <- round(res$rank.prob, 2)
rp
## 1 2 3 4 5 6 7 8 9 10 11## A 0.55 0.27 0.12 0.05 0.01 0.00 0.00 0.00 0.00 0.00 0.00## B 0.27 0.22 0.17 0.13 0.09 0.06 0.04 0.02 0.01 0.00 0.00## C 0.00 0.16 0.22 0.21 0.17 0.12 0.07 0.04 0.01 0.00 0.00## D 0.00 0.03 0.05 0.07 0.09 0.11 0.12 0.13 0.13 0.14 0.14## E 0.00 0.00 0.02 0.05 0.08 0.10 0.13 0.15 0.16 0.16 0.16## F 0.00 0.05 0.08 0.10 0.11 0.11 0.11 0.11 0.11 0.11 0.11## G 0.00 0.05 0.08 0.10 0.11 0.11 0.11 0.11 0.11 0.11 0.11## H 0.00 0.03 0.05 0.07 0.09 0.11 0.12 0.13 0.13 0.14 0.14## I 0.09 0.09 0.09 0.09 0.09 0.09 0.09 0.09 0.09 0.09 0.09## J 0.09 0.09 0.09 0.09 0.09 0.09 0.09 0.09 0.09 0.09 0.09## K 0.00 0.00 0.02 0.05 0.08 0.10 0.13 0.15 0.16 0.16 0.16
Entries
rp[u,k] correspond to \(P(rk(u)=k)\).
The most interesting probabilities are certainly \(P(rk(u)=n)\), that is how likely is it for a node to be the most central.
rp[,11]
## A B C D E F G H I J K ## 0.00 0.00 0.00 0.14 0.16 0.11 0.11 0.14 0.09 0.09 0.16
Recall from the previous section that we found five indices that ranked \(6,7,8,10\) and \(11\) on top. The probabilities now tell us how likely it is to find an index that ranks these nodes on top. In this case, node \(11\) has the highest probability of being the most central node.
In some cases, we might not necessarily be interested in a complete ranking of nodes, but only in the relative position of a subset of nodes. This idea leads to
relative rank probabilities, which are formally defined as \[P(rk(u)\leq rk(v))=\frac{\lvert \{rk \in \mathcal{R}(\leq) \; : \; rk(u)\leq rk(v)\} \rvert}{\lvert \mathcal{R}(\leq) \rvert}.\] Relative rank probabilities are given by the return value
relative.rank of the
exact_rank_prob() function.
rrp <- round(res$relative.rank, 2)
rrp
## A B C D E F G H I J K## A 0.00 0.67 1.00 0.95 1.00 1.00 1.00 0.95 0.86 0.86 1.00## B 0.33 0.00 0.67 1.00 0.92 0.83 0.83 1.00 0.75 0.75 0.92## C 0.00 0.33 0.00 0.80 1.00 0.75 0.75 0.80 0.64 0.64 1.00## D 0.05 0.00 0.20 0.00 0.56 0.44 0.44 0.50 0.38 0.38 0.56## E 0.00 0.08 0.00 0.44 0.00 0.38 0.38 0.44 0.32 0.32 0.50## F 0.00 0.17 0.25 0.56 0.62 0.00 0.50 0.56 0.43 0.43 0.62## G 0.00 0.17 0.25 0.56 0.62 0.50 0.00 0.56 0.43 0.43 0.62## H 0.05 0.00 0.20 0.50 0.56 0.44 0.44 0.00 0.38 0.38 0.56## I 0.14 0.25 0.36 0.62 0.68 0.57 0.57 0.62 0.00 0.50 0.68## J 0.14 0.25 0.36 0.62 0.68 0.57 0.57 0.62 0.50 0.00 0.68## K 0.00 0.08 0.00 0.44 0.50 0.37 0.37 0.44 0.32 0.32 0.00
Entries
rrp[u,v] correspond to \(P(rk(u)\leq rk(v))\).
The more a value
rrp[u,v] deviates from \(0.5\) towards \(1\), the more confidence we gain that a node \(v\) is more central than a node \(u\).
The
expected rank of a node in centrality rankings is defined as the expected value of the rank probability distribution. That is, \[\rho(u)=\sum_{k=1}^n k\cdot P(rk(u)=k).\] Expected ranks are given by the return value
expected.rank of the
exact_rank_prob() function.
ex_rk <- round(res$expected.rank, 2)
ex_rk
## A B C D E F G H I J K ## 1.71 3.00 4.29 7.50 8.14 6.86 6.86 7.50 6.00 6.00 8.14
As a reminder, the higher the numeric rank, the more central a node is. In this case, node \(11\) has the highest expected rank in any centrality ranking.
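The brute-force definition of \(\mathcal{R}(\leq)\) can be checked directly on a tiny example. The following Python sketch (a hypothetical three-node dominance relation, independent of netrankr and of the graph above) enumerates all rankings extending a partial ranking and computes expected ranks:

```python
# Enumerate all rankings extending a toy partial ranking and compute
# expected ranks -- a brute-force sketch of what exact_rank_prob() does
# internally (hypothetical 3-node example, not the graph from the vignette).
from itertools import permutations

nodes = ["u", "v", "w"]
# dominance pairs (a, b) meaning a <= b, i.e. rk(a) <= rk(b)
dominance = [("u", "v")]  # u is dominated by v; w is incomparable

valid = []
for perm in permutations(nodes):
    # position i in the permutation gets rank i+1; rank 1 = bottom
    rk = {node: i + 1 for i, node in enumerate(perm)}
    if all(rk[a] <= rk[b] for a, b in dominance):
        valid.append(rk)

n_ext = len(valid)  # number of linear extensions
expected = {v: sum(rk[v] for rk in valid) / n_ext for v in nodes}
print(n_ext, expected)
```

With the single constraint \(u\leq v\), three of the six permutations survive, and the incomparable node \(w\) gets the middle expected rank of \(2\).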
SOLVED: Using Simply Beautiful Art's method, I managed to find the following upper bounds (and I numerically checked them). For all $a > 0$ and $m \in \mathbb{N}$, we have: $$\sum_{k=2m+1}^{\infty} \left(\frac{a}{\sqrt{k}}\right)^k \leq \left(1+\frac{a}{\sqrt{2}}\right)\frac{a}{\sqrt{2}}e\left[e^{\frac{a^2}{2e}} - \sum_{k=0}^{m-1} \frac{\left(\frac{a^2}{2e}\right)^k}{k!}\right] \leq \left(1+\frac{a}{\sqrt{2}}\right)\frac{a}{\sqrt{2}}e^{\frac{a^2}{2e}+1}$$ The first one is very close, the second one only gets close when $a$ is large.
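As a numerical sanity check of the first bound, the following Python sketch truncates the infinite sum (harmless here, since the terms underflow to zero long before the cut-off) and compares it to the closed form for \(a=2\), \(m=1\):

```python
# Numerical check: tail sum_{k=2m+1}^inf (a/sqrt(k))^k vs the first
# closed-form upper bound from the post (sketch; truncated sum).
import math

def tail(a, m, K=2000):
    # terms underflow to 0.0 well before k = K, so truncation is safe
    return sum((a / math.sqrt(k)) ** k for k in range(2 * m + 1, K))

def bound(a, m):
    x = a * a / (2 * math.e)
    partial = sum(x ** k / math.factorial(k) for k in range(m))
    return ((1 + a / math.sqrt(2)) * (a / math.sqrt(2))
            * math.e * (math.exp(x) - partial))

a, m = 2.0, 1
print(tail(a, m), bound(a, m))  # the bound should dominate the tail
```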
EDIT: Rephrased the post considerably. The original statement can be found below.
Let $M \in \mathbb{N}$, $a > 0$. I have the following (convergent) series: $$\sum_{k=M}^{\infty} \left(\frac{a}{\sqrt{k}}\right)^k$$ I would like to find a closed form upper bound on this series that shows the qualitative dependence on $a$. For example, one thing we could do is the following, as pointed out by DonAntonio (assuming $a^2 > M$):
\begin{align*} \sum_{k=M}^{\infty} \left(\frac{a}{\sqrt{k}}\right)^k &= \sum_{k=M}^{\lfloor a^2+1 \rfloor} \left(\frac{a}{\sqrt{k}}\right)^k + \sum_{k=\lfloor a^2+2 \rfloor}^{\infty} \left(\frac{a}{\sqrt{k}}\right)^k \\ &\leq \left(\frac{a}{\sqrt{M}}\right)^{\lfloor a^2+1 \rfloor} \cdot (a^2+2-M) + \left(\frac{a}{\sqrt{\lfloor a^2 + 2 \rfloor}}\right)^{\lfloor a^2 + 2\rfloor} \cdot \frac{1}{1 - \frac{a}{\sqrt{\lfloor a^2 + 2 \rfloor}}} \end{align*} But this bound is not very strict, and hence does not tell us much about how the value of the series depends on $a$. So, I am looking for tighter upper bounds that reveal more of the qualitative dependence of the value of the series on $a$.
Let $M \in \mathbb{N}$, $a > 0$. I'm looking for an upper bound on the following series (it is not hard to see that it converges): $$\sum_{k=M}^{\infty} \left(\frac{a}{\sqrt{k}}\right)^k$$ However, I'm horribly stuck. Any method to upper bound this without losing too much precision would be greatly appreciated. In particular, I am interested how the resulting value depends on $a$, so an upper bound in big-O-notation in $a$ would be perfect.
I already tried to cut this series into several consecutive geometric series: \begin{align*} \sum_{k=M}^\infty \left(\frac{a}{\sqrt{k}}\right)^k &= \sum_{\ell=0}^{\infty} \sum_{k=2^\ell M}^{2^{\ell+1}M - 1} \left(\frac{a}{\sqrt{2^{\ell+1}M}}\right)^k = \sum_{\ell=0}^{\infty} \left(\frac{a}{\sqrt{2^{\ell+1}M}}\right)^{2^{\ell}M} \cdot \frac{1 - \left(\frac{a}{\sqrt{2^{\ell+1}M}}\right)^{2^\ell M}}{1 - \left(\frac{a}{\sqrt{2^{\ell+1}M}}\right)} \end{align*} But I don't see any way to continue from here.
The following probability question appeared in an earlier thread:
I have two children. One is a boy born on a Tuesday. What is the probability I have two boys?
The claim was that it is not actually a mathematical problem and it is only a language problem.
If one wanted to restate this problem formally the obvious way would be like so:
Definition: Sex is defined as an element of the set $\{\text{boy},\text{girl}\}$. Definition: Birthday is defined as an element of the set $\{\text{Monday},\text{Tuesday},\text{Wednesday},\text{Thursday},\text{Friday},\text{Saturday},\text{Sunday}\}$. Definition: A Child is defined to be an ordered pair: (sex $\times$ birthday).
Let $(x,y)$ be a pair of children,
Define an auxiliary predicate $H(s,b) :\iff s = \text{boy} \text{ and } b = \text{Tuesday}$.
Calculate $P(x \text{ is a boy and } y \text{ is a boy}|H(x) \text{ or } H(y))$
I don't see any other sensible way to formalize this question.
To actually solve this problem now requires no thought (in fact it is thinking which leads us to guess incorrect answers); we just compute
$$ \begin{align*} & P(x \text{ is a boy and } y \text{ is a boy}|H(x) \text{ or } H(y)) \\\\ =& \frac{P(x\text{ is a boy and }y\text{ is a boy and }(H(x)\text{ or }H(y)))} {P(H(x)\text{ or }H(y))} \\\\ =& \frac{P((x\text{ is a boy and }y\text{ is a boy and }H(x))\text{ or }(x\text{ is a boy and }y\text{ is a boy and }H(y)))} {P(H(x)) + P(H(y)) - P(H(x))P(H(y))} \\\\ =& \frac{\begin{align*} &P(x\text{ is a boy and }y\text{ is a boy and }x\text{ born on Tuesday}) \\\\ + &P(x\text{ is a boy and }y\text{ is a boy and }y\text{ born on Tuesday}) \\\\ - &P(x\text{ is a boy and }y\text{ is a boy and }x\text{ born on Tuesday and }y\text{ born on Tuesday}) \\\\ \end{align*}} {P(H(x)) + P(H(y)) - P(H(x))P(H(y))} \\\\ =& \frac{1/2 \cdot 1/2 \cdot 1/7 + 1/2 \cdot 1/2 \cdot 1/7 - 1/2 \cdot 1/2 \cdot 1/7 \cdot 1/7} {1/2 \cdot 1/7 + 1/2 \cdot 1/7 - 1/2 \cdot 1/7 \cdot 1/2 \cdot 1/7} \\\\ =& 13/27 \end{align*} $$
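The value \(13/27 \approx 0.4815\) can also be checked by simulation. A short Monte Carlo sketch (day \(1\) standing in for Tuesday; the seed is arbitrary):

```python
# Monte Carlo check of the 13/27 answer (sketch; day 1 = "Tuesday").
import random

random.seed(0)
hits = trials = 0
for _ in range(200_000):
    children = [(random.choice(["boy", "girl"]), random.randrange(7))
                for _ in range(2)]
    # condition on: at least one child is a boy born on day 1
    if any(s == "boy" and d == 1 for s, d in children):
        trials += 1
        hits += all(s == "boy" for s, _ in children)

print(hits / trials)  # should be close to 13/27
```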
Now what I am wondering is, does this refute the claim that this puzzle is just a language problem or add to it? Was there a lot of room for misinterpreting the questions which I just missed?
@egreg It does this "I just need to make use of the standard hyphenation function of LaTeX, except "behind the scenes", without actually typesetting anything." (if not typesetting includes typesetting in a hidden box) it doesn't address the use case that he said he wanted that for
@JosephWright ah yes, unlike the hyphenation near box question, I guess that makes sense, basically can't just rely on lccode anymore. I suppose you don't want the hyphenation code in my last answer by default?
@JosephWright anyway if we rip out all the auto-testing (since mac/windows/linux come out the same anyway) but leave in the .cfg possibility, there is no actual loss of functionality if someone is still using a vms tex or whatever
I want to change the tracking (space between the characters) for a sans serif font. I found that I can use the microtype package to change the tracking of the smallcaps font (\textsc{foo}), but I can't figure out how to make \textsc{} a sans serif font.
@DavidCarlisle -- if you write it as "4 May 2016" you don't need a comma (or, in the u.s., want a comma).
@egreg (even if you're not here at the moment) -- tomorrow is international archaeology day: twitter.com/ArchaeologyDay , so there must be someplace near you that you could visit to demonstrate your firsthand knowledge.
@barbarabeeton I prefer May 4, 2016, for some reason (don't know why actually)
@barbarabeeton but I have another question maybe better suited for you please: If a member of a conference scientific committee writes a preface for the special issue, can the signature say John Doe \\ for the scientific committee or is there a better wording?
@barbarabeeton overrightarrow answer will have to wait, need time to debug \ialign :-) (it's not the \smash wat did it) on the other hand if we mention \ialign enough it may interest @egreg enough to debug it for us.
@DavidCarlisle -- okay. are you sure the \smash isn't involved? i thought it might also be the reason that the arrow is too close to the "M". (\smash[t] might have been more appropriate.) i haven't yet had a chance to try it out at "normal" size; after all, \Huge is magnified from a larger base for the alphabet, but always from 10pt for symbols, and that's bound to have an effect, not necessarily positive. (and yes, that is the sort of thing that seems to fascinate @egreg.)
@barbarabeeton yes I edited the arrow macros not to have relbar (ie just omit the extender entirely and just have a single arrowhead but it still overprinted when in the \ialign construct but I'd already spent too long on it at work so stopped, may try to look this weekend (but it's uktug tomorrow)
if the expression is put into an \fbox, it is clear all around. even with the \smash. so something else is going on. put it into a text block, with \newline after the preceding text, and directly following before another text line. i think the intention is to treat the "M" as a large operator (like \sum or \prod, but the submitter wasn't very specific about the intent.)
@egreg -- okay. i'll double check that with plain tex. but that doesn't explain why there's also an overlap of the arrow with the "M", at least in the output i got. personally, i think that that arrow is horrendously too large in that context, which is why i'd like to know what is intended.
@barbarabeeton the overlap below is much smaller, see the righthand box with the arrow in egreg's image, it just extends below and catches the serifs on the M, but th eoverlap above is pretty bad really
@DavidCarlisle -- i think other possible/probable contexts for the \over*arrows have to be looked at also. this example is way outside the contexts i would expect. and any change should work without adverse effect in the "normal" contexts.
@DavidCarlisle -- maybe better take a look at the latin modern math arrowheads ...
@DavidCarlisle I see no real way out. The CM arrows extend above the x-height, but the advertised height is 1ex (actually a bit less). If you add the strut, you end up with too big a space when using other fonts.
MagSafe is a series of proprietary magnetically attached power connectors, originally introduced by Apple Inc. on January 10, 2006, in conjunction with the MacBook Pro at the Macworld Expo in San Francisco, California. The connector is held in place magnetically so that if it is tugged — for example, by someone tripping over the cord — it will pull out of the socket without damaging the connector or the computer power socket, and without pulling the computer off the surface on which it is located.The concept of MagSafe is copied from the magnetic power connectors that are part of many deep fryers...
has anyone converted from LaTeX -> Word before? I have seen questions on the site but I'm wondering what the result is like... and whether the document is still completely editable etc after the conversion? I mean, if the doc is written in LaTeX, then converted to Word, is the word editable?
I'm not familiar with word, so I'm not sure if there are things there that would just get goofed up or something.
@baxx never use word (have a copy just because but I don't use it ;-) but have helped enough people with things over the years, these days I'd probably convert to html with latexml or tex4ht then import the html into word and see what comes out
You should be able to cut and paste mathematics from your web browser to Word (or any of the Microsoft Office suite). Unfortunately at present you have to make a small edit but any text editor will do for that. Given x=\frac{-b\pm\sqrt{b^2-4ac}}{2a} make a small html file that looks like <!...
@baxx all the convertors that I mention can deal with document \newcommand to a certain extent. if it is just \newcommand\z{\mathbb{Z}} that is no problem in any of them, if it's half a million lines of tex commands implementing tikz then it gets trickier.
@baxx yes but they are extremes but the thing is you just never know, you may see a simple article class document that uses no hard looking packages then get half way through and find \makeatletter several hundred lines of trick tex macros copied from this site that are over-writing latex format internals.
I don't understand the different behaviour of the advection-diffusion equation when I apply different boundary conditions. My motivation is the simulation of a real physical quantity (particle density) under diffusion and advection. Particle density should be conserved in the interior unless it flows out from the edges. By this logic, if I enforce Neumann boundary conditions at the ends of the system, such as $\frac{\partial \phi}{\partial x}=0$ (on the left and the right sides), then the system should be "closed", i.e. if the flux at the boundary is zero then no particles can escape.
For all the simulations below, I have applied the Crank-Nicolson discretization to the advection-diffusion equation and all simulation have $\frac{\partial \phi}{\partial x}=0$ boundary conditions. However, for the first and last rows of the matrix (the boundary condition rows) I allow $\beta$ to be changed independently of the interior value. This allows the end points to be fully implicit.
Below I discuss 4 different configurations, only one of them is what I expected. At the end I discuss my implementation.
Diffusion only limit
Here the advection terms are turned off by setting the velocity to zero.
Diffusion only, with $\boldsymbol{\beta}$=0.5 (Crank-Nicolson) at all points
The quantity is not conserved as can be seen by the pulse area reducing.
Diffusion only, with $\boldsymbol{\beta}$=0.5 (Crank-Nicolson) at interior points, and $\boldsymbol{\beta}$=1 (fully implicit) at the boundaries
By using the fully implicit equation on the boundaries I achieve what I expect: no particles escape. You can see this by the area being conserved as the particles diffuse. Why should the choice of $\beta$ at the boundary points influence the physics of the situation? Is this a bug or expected?

Diffusion and advection
When the advection term is included, the value of $\beta$ at the boundaries does not seem to influence the solution. However, in all cases the boundaries seem to be "open", i.e. particles can escape through them. Why is this the case?

Advection and Diffusion with $\boldsymbol{\beta}$=0.5 (Crank-Nicolson) at all points
Advection and Diffusion with $\boldsymbol{\beta}$=0.5 (Crank-Nicolson) at interior points, and $\boldsymbol{\beta}$=1 (fully implicit) at the boundaries
Implementation of the advection-diffusion equation
Starting with the advection-diffusion equation,
$ \frac{\partial \phi}{\partial t} = D\frac{\partial^2 \phi}{\partial x^2} + \boldsymbol{v}\frac{\partial \phi}{\partial x} $
Writing using Crank-Nicolson gives,
$ \frac{\phi_{j}^{n+1} - \phi_{j}^{n}}{\Delta t} = D \left[ \frac{1 - \beta}{(\Delta x)^2} \left( \phi_{j-1}^{n} - 2\phi_{j}^{n} + \phi_{j+1}^{n} \right) + \frac{\beta}{(\Delta x)^2} \left( \phi_{j-1}^{n+1} - 2\phi_{j}^{n+1} + \phi_{j+1}^{n+1} \right) \right] + \boldsymbol{v} \left[ \frac{1-\beta}{2\Delta x} \left( \phi_{j+1}^{n} - \phi_{j-1}^{n} \right) + \frac{\beta}{2\Delta x} \left( \phi_{j+1}^{n+1} - \phi_{j-1}^{n+1} \right) \right] $
Note that $\beta$=0.5 for Crank-Nicolson, $\beta$=1 for fully implicit, and, $\beta$=0 for fully explicit.
To simplify the notation let's make the substitution,
$ s = D\frac{\Delta t}{(\Delta x)^2} \\ r = \boldsymbol{v}\frac{\Delta t}{2 \Delta x} $
and move the known value $\phi_{j}^{n}$ of the time derivative to the right-hand side,
$ \phi_{j}^{n+1} = \phi_{j}^{n} + s \left( 1-\beta \right) \left( \phi_{j-1}^{n} - 2\phi_{j}^{n} + \phi_{j+1}^{n} \right) + s \beta \left( \phi_{j-1}^{n+1} - 2\phi_{j}^{n+1} + \phi_{j+1}^{n+1} \right) + r \left( 1 - \beta \right) \left( \phi_{j+1}^{n} - \phi_{j-1}^{n} \right) + r \beta \left( \phi_{j+1}^{n+1} - \phi_{j-1}^{n+1} \right) $
Factoring the $\phi$ terms gives,
$ \underbrace{\beta(r - s)\phi_{j-1}^{n+1} + (1 + 2s\beta)\phi_{j}^{n+1} -\beta(s + r)\phi_{j+1}^{n+1}}_{\boldsymbol{A}\cdot\boldsymbol{\phi^{n+1}}} = \underbrace{ (1-\beta)(s - r)\phi_{j-1}^{n} + (1-2s[1-\beta])\phi_{j}^{n} + (1-\beta)(s+r)\phi_{j+1}^{n}}_{\boldsymbol{M\cdot}\boldsymbol{\phi^n}} $
which we can write in matrix form as $\boldsymbol{A}\cdot\boldsymbol{\phi^{n+1}} = \boldsymbol{M}\cdot\boldsymbol{\phi^{n}}$ where,
$ \boldsymbol{A} = \left( \begin{matrix} 1+2s\beta & -\beta(s + r) & & 0 \\ \beta(r-s) & 1+2s\beta & -\beta (s + r) & \\ & \ddots & \ddots & \ddots \\ & \beta(r-s) & 1+2s\beta & -\beta (s + r) \\ 0 & & \beta(r-s) & 1+2s\beta \\ \end{matrix} \right) $
$ \boldsymbol{M} = \left( \begin{matrix} 1-2s(1-\beta) & (1 - \beta)(s + r) & & 0 \\ (1 - \beta)(s - r) & 1-2s(1-\beta) & (1 - \beta)(s + r) & \\ & \ddots & \ddots & \ddots \\ & (1 - \beta)(s - r) & 1-2s(1-\beta) & (1 - \beta)(s + r) \\ 0 & & (1 - \beta)(s - r) & 1-2s(1-\beta) \\ \end{matrix} \right) $
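Before turning to the boundary rows, the interior matrices above can be assembled and tested directly. The following Python sketch (assuming NumPy is available; pure diffusion with $v=0$, and for simplicity the plain interior stencil is used in every row, so the ends behave Dirichlet-like rather than zero-flux) advances one Crank-Nicolson step:

```python
# Sketch: assemble the interior Crank-Nicolson matrices A and M from the
# derivation above and advance one step. Simplifications: v = 0 (so r = 0)
# and the interior stencil is used in every row (no special boundary rows).
import numpy as np

def assemble(n, s, r, beta):
    A = np.zeros((n, n))
    M = np.zeros((n, n))
    for j in range(n):
        A[j, j] = 1 + 2 * s * beta
        M[j, j] = 1 - 2 * s * (1 - beta)
        if j > 0:
            A[j, j - 1] = beta * (r - s)
            M[j, j - 1] = (1 - beta) * (s - r)
        if j < n - 1:
            A[j, j + 1] = -beta * (s + r)
            M[j, j + 1] = (1 - beta) * (s + r)
    return A, M

n, s, r, beta = 50, 0.4, 0.0, 0.5
A, M = assemble(n, s, r, beta)
phi = np.exp(-0.05 * (np.arange(n) - n / 2) ** 2)  # Gaussian pulse
phi_new = np.linalg.solve(A, M @ phi)              # A phi_new = M phi
print(phi.max(), phi_new.max())  # diffusion lowers the peak
```

For a pulse that vanishes near the ends, the interior column sums of $A$ and $M$ are both exactly $1$, so the total mass is conserved to rounding error while diffusion lowers the peak.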
Applying Neumann boundary conditions

NB: working through the derivation again I think I have spotted the error. I assumed a fully implicit scheme ($\beta$=1) when writing the finite difference of the boundary condition. If you assume a Crank-Nicolson scheme here, the complexity becomes too great and I could not solve the resulting equations to eliminate the nodes which are outside the domain. However, it would appear possible; there are two equations with two unknowns, but I couldn't manage it. This probably explains the difference between the first and second plots above. I think we can conclude that only the plots with $\beta$=0.5 at the boundary points are valid.
Assuming the flux at the left-hand side is known (assuming a fully implicit form),
$ \frac{\partial\phi_1^{n+1}}{\partial x} = \sigma_L $
Writing this as a centred-difference gives,
$ \frac{\partial\phi_1^{n+1}}{\partial x} \approx \frac{\phi_2^{n+1} - \phi_0^{n+1}}{2\Delta x} = \sigma_L $
therefore, $ \phi_0^{n+1} = \phi_{2}^{n+1} - 2 \Delta x\sigma_L $
Note that this introduces a node $\phi_0^{n+1}$ which is outside the domain of the problem. This node can be eliminated by using a second equation. We can write the $j=1$ node as,
$ \beta(r - s)\phi_0^{n+1} + (1+2s\beta)\phi_1^{n+1} - \beta(s+r)\phi_2^{n+1} = (1-\beta)(s - r)\phi_{j-1}^{n} + (1-2s[1-\beta])\phi_{j}^{n} + (1-\beta)(s+r)\phi_{j+1}^{n} $
Substituting in the value of $\phi_0^{n+1}$ found from the boundary condition gives the following result for the $j$=1 row,
$ (1+2s\beta)\phi_1^{n+1} - 2s\beta\phi_2^{n+1} = (1-\beta)(s - r)\phi_{j-1}^{n} + (1-2s[1-\beta])\phi_{j}^{n} + (1-\beta)(s+r)\phi_{j+1}^{n} + 2\beta(r-s)\Delta x\sigma_L $
Performing the same procedure for the final row (at $j$=$J$) yields,
$ -2s\beta\phi_{J-1}^{n+1} + (1+2s\beta)\phi_J^{n+1} = (1-\beta)(s - r)\phi_{J-1}^{n} + (1 - 2s(1-\beta))\phi_{J}^{n} + 2\beta(s+r)\Delta x\sigma_R $
Finally making the boundary rows implicit (setting $\beta$=1) gives,
$ (1+2s)\phi_1^{n+1} - 2s\phi_2^{n+1} = \phi_{1}^{n} + 2(r-s)\Delta x\sigma_L $
$ -2s\phi_{J-1}^{n+1} + (1+2s)\phi_J^{n+1} = \phi_{J}^{n} + 2(s+r)\Delta x\sigma_R $
Therefore with Neumann boundary conditions we can write the matrix equation, $\boldsymbol{A}\cdot\phi^{n+1} = \boldsymbol{M}\cdot\phi^{n} + \boldsymbol{b_N}$,
where,
$ \boldsymbol{A} = \left( \begin{matrix} 1+2s & -2s & & 0 \\ \beta(r-s) & 1+2s\beta & -\beta (s + r) & \\ & \ddots & \ddots & \ddots \\ & \beta(r-s) & 1+2s\beta & -\beta (s + r) \\ 0 & & -2s & 1+2s \\ \end{matrix} \right) $
$ \boldsymbol{M} = \left( \begin{matrix} 1 & 0 & & 0 \\ (1 - \beta)(s - r) & 1-2s(1-\beta) & (1 - \beta)(s + r) & \\ & \ddots & \ddots & \ddots \\ & (1 - \beta)(s - r) & 1-2s(1-\beta) & (1 - \beta)(s + r) \\ 0 & & 0 & 1 \\ \end{matrix} \right) $
$ \boldsymbol{b_N} = \left( \begin{matrix} 2 (r - s) \Delta x \sigma_L & 0 & \ldots & 0 & 2 (s + r) \Delta x \sigma_R \end{matrix} \right)^{T} $
My current understanding
I think the difference between the first and second plots is explained by noting the error outlined above.
Regarding the conservation of the physical quantity. I believe the cause is that, as pointed out here, the advection equation in the form I have written it doesn't allow propagation in the reverse direction so the wave just passes through even with zero-flux boundary conditions. My initial intuition regarding conservation only applied when advection term is zero (this is solution in plot number 2 where the area is conserved).
Even with Neumann zero-flux boundary conditions $\frac{\partial \phi}{\partial x} = 0$, mass can still leave the system. This is because the correct boundary conditions in this case are Robin boundary conditions, in which the total flux is specified: $j = D\frac{\partial \phi}{\partial x} + \boldsymbol{v}\phi = 0$. Moreover, the Neumann condition only specifies that mass cannot leave the domain via diffusion; it says nothing about advection. In essence, what we have here are boundary conditions that are closed to diffusion but open to advection. For more information see the answer here, Implementation of gradient zero boundary condition in advection-diffusion equation. Would you agree?
|
On the cohomology ring of divided powers and free ∞-loop spaces
Speaker:
Lorenzo Guerra
Date:
Friday, June 14, 2019, 14:00-15:00
Abstract:
Given a pointed topological space $(X, \ast)$, its loop space is the space $\Omega(X, \ast)$ of continuous maps $\gamma \colon [0,1] \to X$ such that $\gamma (0) = \gamma (1) = \ast$, suitably topologized. A $k$-fold loop space (or simply $k$-loop space) is the result of $k$ consecutive applications of the functor $\Omega$ to a pointed space, while an $\infty$-loop space is a topological space homotopically equivalent to a $k$-loop space for every $k$. $\infty$-loop spaces are extremely interesting for algebraic topologists.
There is a free functor $Q$ from the category of topological spaces to that of $\infty$-loop spaces. Given a space $X$, the ``building blocks'' of the object $Q(X)$ are the divided powers $D_kX = E(\mathcal{S}_k) \times_{\mathcal{S}_k} X^k$, which are also spaces of interest in their own right.
In this talk, I will present a description of the cohomology ring of $Q(X)$ and $D_kX$ with coefficients in prime fields $\mathbb{F}_p$. This is joint work with prof. P. Salvatore and prof. D. Sinha.
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
On the optimal map in the $ 2 $-dimensional random matching problem
1. Scuola Normale Superiore, Piazza dei Cavalieri 7, 56126 Pisa, Italy
2. ETH, Rämistrasse 101, 8092 Zürich, Switzerland
3. Dipartimento di Matematica, Università di Pisa, Largo Bruno Pontecorvo 5, 56127 Pisa, Italy
We show that, on a $ 2 $-dimensional compact manifold, the optimal transport map in the semi-discrete random matching problem is well-approximated in the $ L^2 $-norm by identity plus the gradient of the solution to the Poisson problem $ - {\Delta} f^{n, t} = \mu^{n, t}-1 $, where $ \mu^{n, t} $ is an appropriate regularization of the empirical measure associated to the random points. This shows that the ansatz of [
As part of our strategy, we prove a new stability result for the optimal transport map on a compact manifold.
Mathematics Subject Classification: Primary: 60D05; Secondary: 49J55, 58J35, 35F21.
Citation: Luigi Ambrosio, Federico Glaudo, Dario Trevisan. On the optimal map in the $ 2 $-dimensional random matching problem. Discrete & Continuous Dynamical Systems - A, 2019, 39 (12): 7291-7308. doi: 10.3934/dcds.2019304
|
The entropy for the output of SHA-256 truncated to its first $128$ bits when fed a random $128$-bit input is about $127.173$ bit, down from very close to $128$ bit before truncation (see final note). The truncation does not halve the entropy, because the halves are not independent. The right line of thought is that SHA-256 truncated to its first $128$ bits is a fine $128$-bit hash, and behaves like a random oracle.
For even moderate $h$ (e.g. at least $32$), the expected entropy in the output of a $h$-bit random oracle fed with random $h$-bit input is close to $h-0.8272$ bit. As $h$ grows, that expected entropy is $\approx h-\eta$ bit with $$\eta={1\over e}\sum_{j=1}^\infty{j\;\log_2j\over j!}\;\;=0.82724538915300508343173\dots\text{bit}$$
Proof, where I'll be using $a\approx b$ as a convenient shorthand for $$\lim_{h\to\infty}{a\over b}=1$$
For a particular distribution implemented by the oracle, let $n_j$ be the number of output values appearing exactly $j$ times among the outputs for all inputs. The exact entropy $H$ for that particular distribution can be computed from the $n_j$ by applying the definition of entropy, giving$$\begin{align}H&=\sum_{j=1}^{2^h}n_j\;{j\over2^h}\;\log_2{2^h\over j}\\&={h\over2^h}\sum_{j=1}^{2^h}j\;n_j\;-{1\over2^h}\sum_{j=1}^{2^h}n_j\;j\;\log_2{j}\end{align}$$where we have (by merely counting what all inputs lead to)$$\sum_{j=1}^{2^h}j\;n_j\;=2^h$$thus$$h-H={1\over2^h}\sum_{j=1}^{2^h}n_j\;j\;\log_2{j}$$ For fixed $j$ and as $h$ grows, by careful counting of the possibilities, we can establish that for random distribution, odds that any particular value is reached $j$ times is $\approx{1\over e\;j!}$. Thus for fixed $j$ and as $h$ grows, the expected $n_j$ is $\approx{2^h\over e\;j!}$. In the exact expression of $h-H$, all the terms in the sum are non-negative. To obtain an asymptotic of the expected $h-H$ when $h$ grows, we can thus replace $n_j$ by its expected value, and obtain, as desired, that when $h$ grows$$\text{the expected value of } (h-H)\;\text{ is }\approx{1\over e}\sum_{j=1}^\infty{j\;\log_2j\over j!}$$
I've been unable to locate a source; the closest I found is an empirical derivation of $\eta$ to 4 decimals by Andrea Röck:
Collision Attacks based on the Entropy Loss caused by Random Functions, WEWoRC 2007, slides; with more in her thesis.
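The constant $\eta$ can also be checked directly with a few lines of Python. The $j=1$ term is zero and the factorial makes the tail negligible, so truncating the series at $j=60$ (an arbitrary cutoff, far more than double precision needs) is safe:

```python
from math import e, factorial, log2

# Numerical check of eta = (1/e) * sum_{j>=1} j*log2(j)/j!.
# The j = 1 term vanishes (log2(1) = 0); start the sum at j = 2.
eta = sum(j * log2(j) / factorial(j) for j in range(2, 60)) / e
print(eta)  # ≈ 0.8272453891530051
```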
My first empirical derivation was using a program which draws $2^h$ pseudo-random $h$-bit values and counts how many values are reached how many times; for $h=35$ (the largest I could do with 20GB RAM), three runs gave:
run 1 run 2 run 3
0 12640123427 36.79% 12640183855 36.79% 12640308584 36.79%
1 12640408212 36.79% 12640365800 36.79% 12640104651 36.79%
2 6320124091 18.39% 6320013534 18.39% 6320174710 18.39%
3 2106681541 6.13% 2106762262 6.13% 2106726749 6.13%
4 526645276 1.53% 526674914 1.53% 526679947 1.53%
5 105334156 0.31% 105325000 0.31% 105330269 0.31%
6 17561277 0.05% 17551924 0.05% 17556150 0.05%
7 2507918 0.01% 2508727 0.01% 2505282 0.01%
8 313971 0.00% 313943 0.00% 313406 0.00%
9 34748 0.00% 34553 0.00% 34755 0.00%
10 3424 0.00% 3542 0.00% 3546 0.00%
11 291 0.00% 287 0.00% 292 0.00%
12 31 0.00% 24 0.00% 24 0.00%
13 4 0.00% 3 0.00% 2 0.00%
14 1 0.00% 0 0.00% 1 0.00%
15+ 0 0.00% 0 0.00% 0 0.00%
entropy 34.172763 bit 34.172758 bit 34.172751 bit
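A scaled-down version of that experiment is easy to reproduce. The sketch below uses $h=20$ (the value of $h$ and the seed are arbitrary choices, much smaller than the $h=35$ runs above), and the entropy loss already lands close to the asymptotic value of about $0.8272$ bit:

```python
import numpy as np

# Draw 2^h pseudo-random h-bit outputs (one per input), tally the
# multiplicity of each reached output, and compute the entropy of the
# induced distribution. h = 20 and the seed are arbitrary.
rng = np.random.default_rng(0)
h = 20
N = 1 << h
counts = np.bincount(rng.integers(0, N, size=N), minlength=N)
j = counts[counts > 0]          # multiplicities of the reached outputs
p = j / N                       # probability mass of each reached output
H = -(p * np.log2(p)).sum()
print(h - H)                    # ≈ 0.827, the predicted entropy loss
```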
Note: if we consider a random function from $\{0,1\}^{128}$ to $\{0,1\}^{256}$, most likely there is a small $k$ (most often $0$ or $1$, sometime $2$, rarely $3$ or more) such that $k$ outputs have exactly two corresponding inputs, $2^{128}-2k$ outputs have exactly one, and $2^{256}-2^{128}+2k$ outputs have none (odds that any output is reached three or more times are negligible).
Therefore, in this most likely case, the entropy on output of that function when fed a random $128$-bit input is $-k\;2^{-127}\log_2(2^{-127})-(2^{128}-2k)\;2^{-128}\log_2(2^{-128})$, that is $128-k\;2^{-127}$.
The best model we have for SHA-256 (not truncated) for $128$-bit input is a particular function chosen at random among functions from $\{0,1\}^{128}$ to $\{0,1\}^{256}$, thus we can conclude the entropy for the output of SHA-256 when fed a random $128$-bit input is likely exactly $128-k\;2^{-127}$ for $k\in\{0,1,2\}$, which is very nearly $128$ bit, down to about the 37th decimal place.
|
Anatomy of a Short Circuit
A short circuit is an electrical fault where a conductive path (usually of low impedance) is formed between two or more conductive parts of an electrical system (e.g. phase-phase, phase-earth, phase-neutral, etc). This article looks at the nature of short circuits and tries to break down and explain the constituent parts of fault currents. Note that the terms "short circuit" and "fault" are often used interchangeably.
In most networks, a short circuit is similar to the closing transient of an RL circuit, where the R and L components are the impedances of the source(s). The transient characteristics of short circuit currents vary depending on whether they are near or far from synchronous generators. The sections below describe the two general types of short circuits:
Near-to-Generator Short Circuit
A fault close to a synchronous generator has the following maximum short circuit current [math]i_{sc}(t)[/math]:
[math]i_{sc}(t) = E \sqrt{2} \left[ \left( \frac{1}{X_{d}''} - \frac{1}{X_{d}'} \right) e^{-t/T_{d}''} + \left( \frac{1}{X_{d}'} - \frac{1}{X_{d}} \right) e^{-t/T_{d}'} + \frac{1}{X_{d}} \right] \sin (\omega t) + \frac{E \sqrt{2}}{X_{d}''} e^{-t/T_{a}} \, [/math]
Where [math]E \, [/math] is the phase-to-neutral rms voltage at the generator terminals (V)
[math]X_{d}'' \, [/math] is the generator direct-axis subtransient reactance ([math]\Omega[/math])
[math]X_{d}' \, [/math] is the generator direct-axis transient reactance ([math]\Omega[/math])
[math]X_{d} \, [/math] is the generator synchronous reactance ([math]\Omega[/math])
[math]T_{d}'' \, [/math] is the generator subtransient time constant (s)
[math]T_{d}' \, [/math] is the generator transient time constant (s)
[math]T_{a} \, [/math] is the aperiodic time constant (s)
From the above equation, it can be seen that the short circuit current can be broken up into an aperiodic current (dc component of the short circuit):
[math] \frac{E \sqrt{2}}{X_{d}''} e^{-t/T_{a}} \, [/math]
And a series of three damped sinusoidal waveforms corresponding to the following distinct stages:
(1) Subtransient component: [math] E \sqrt{2} \left( \frac{1}{X_{d}''} - \frac{1}{X_{d}'} \right) e^{-t/T_{d}''} \sin (\omega t) \, [/math]
This period typically lasts 10 to 20 ms from the start of the fault. The subtransient reactance is due to the flux caused by the stator currents crossing the air gap and reaching the rotor surface or amortisseur / damper windings.
(2) Transient component: [math] E \sqrt{2} \left( \frac{1}{X_{d}'} - \frac{1}{X_{d}} \right) e^{-t/T_{d}'} \sin (\omega t) \, [/math]
This period typically lasts 100 to 400ms after the subtransient period. The transient reactance occurs when all the damping currents in the rotor surface or amortisseur / damper windings have decayed, but while the damping currents in the field winding are still in action.
(3) Steady-state component: [math] E \sqrt{2} \frac{1}{X_{d}} \sin (\omega t) \, [/math]
The steady-state occurs after the transient period when all the damping currents in the field windings have decayed, and essentially remains until the fault is cleared.
Putting these all together, we get the familiar near-to-generator short circuit waveform:
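To make the three stages concrete, the expression can be evaluated numerically. The per-unit machine constants below are plausible but assumed values for a generic synchronous generator, not figures from this article. At $t=0$ the sinusoid vanishes, so only the aperiodic term contributes; after several time constants only the steady-state amplitude [math]E\sqrt{2}/X_d[/math] survives.

```python
import numpy as np

# Illustrative evaluation of the near-to-generator short circuit current.
# All machine constants are assumed generic per-unit values.
E = 1.0                            # phase-to-neutral rms voltage (pu)
Xd2, Xd1, Xd = 0.15, 0.30, 2.0     # X_d'', X_d', X_d
Td2, Td1, Ta = 0.02, 0.30, 0.10    # T_d'', T_d', T_a (s)
w = 2 * np.pi * 50                 # 50 Hz system

def i_sc(t):
    ac = E * np.sqrt(2) * ((1/Xd2 - 1/Xd1) * np.exp(-t/Td2)
                           + (1/Xd1 - 1/Xd) * np.exp(-t/Td1)
                           + 1/Xd) * np.sin(w * t)
    dc = E * np.sqrt(2) / Xd2 * np.exp(-t/Ta)   # aperiodic (dc) component
    return ac + dc

# At t = 0 only the aperiodic term remains: i_sc(0) = E*sqrt(2)/Xd'' ≈ 9.43
print(i_sc(0.0))
# After the transients decay, the peak tends to E*sqrt(2)/Xd ≈ 0.71
print(max(abs(i_sc(t)) for t in np.arange(2.0, 2.02, 1e-4)))
```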
Far-from-Generator Short Circuit
In short circuits occurring far from synchronous generators, we can ignore the effects of the generator subtransient behaviour. It can be shown through transient circuit analysis that the maximum far-from-generator short circuit current is as follows:
[math] i_{sc}(t) = \frac {E \sqrt{2}}{Z_{sc}} \left[ \sin \left( \omega t + \frac{\pi}{2} \right) - e^{-\frac{R}{X} \omega t} \right] \, [/math]
Where [math]E \, [/math] is the rms voltage of the circuit (V)
[math] Z_{sc} \, [/math] is the fault impedance ([math]\Omega[/math])
[math] \frac{R}{X} \, [/math] is the R/X ratio at the point of fault (pu)
We can see that there are two components:
(1) A decaying aperiodic component: [math] - \frac {E \sqrt{2}}{Z_{sc}} e^{-\frac{R}{X} \omega t} \, [/math]
(2) A steady state component: [math] \frac {E \sqrt{2}}{Z_{sc}} \sin \left( \omega t + \frac{\pi}{2} \right) \, [/math]
Putting these together, we get the total far-from-generator fault current:
During the transient period, the peak transient current is typically 1.5 to 2.5 times higher than the peak steady state current.
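A quick numerical check of the far-from-generator expression (using an assumed R/X ratio of 0.1 and a 50 Hz system, neither taken from this article) confirms both that the current starts from zero, since the aperiodic offset cancels the AC term at fault inception, and that the first peak falls inside the quoted 1.5 to 2.5 range relative to the steady-state peak:

```python
import numpy as np

# Far-from-generator fault current with assumed illustrative parameters.
E, Zsc, RX = 1.0, 0.2, 0.1
w = 2 * np.pi * 50   # 50 Hz

def i_sc(t):
    return E * np.sqrt(2) / Zsc * (np.sin(w*t + np.pi/2) - np.exp(-RX * w * t))

print(i_sc(0.0))   # 0.0 -- the dc offset cancels the ac term at t = 0
tt = np.arange(0.0, 0.2, 1e-5)
peak_ratio = np.abs(i_sc(tt)).max() / (E * np.sqrt(2) / Zsc)
print(peak_ratio)  # ≈ 1.73 with R/X = 0.1, inside the 1.5 to 2.5 range
```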
|
First let us consider a Riemannian fiber bundle, i.e a fiber bundle $\pi: M\to B$ of oriented Riemannian manifolds. We denote by $T(M/B)$ the bundle of vertical tangent vectors and assume that the bundle $\pi: M\to B$ possesses the following additional structures:
a connection, that is, a choice of a splitting $TM=T_HM\oplus T(M/B)$ so that the subbundle $T_HM$ is isomorphic to the vector bundle $\pi^*TB$; a connection $\nabla^{M/B}$ on $T(M/B)$.
Let $P: TM\to T(M/B)$ be the projection operator with kernel the chosen horizontal tangent space $T_HM$.
Let $S$ be the second fundamental form as a section of the bundle $$ End(T(M/B))\otimes T_H^*M\cong T^*(M/B)\otimes T(M/B)\otimes T^*_HM $$ defined by $$ (S(X,\theta),Z)=\langle\nabla^{M/B}_ZX-P[Z,X],\theta\rangle $$ for $Z\in \Gamma(M,T_HM)$, $X\in \Gamma(M,T(M/B))$ and $\theta\in \Gamma(M,T^*(M/B))$.
Let the tensor $\Omega$ be the section of the bundle $Hom(\wedge^2T_HM,T(M/B))$ over $M$ defined by the formula $$ \Omega(X,Y)=-P[X,Y] $$ for $X$ and $Y$ in $\Gamma(M,T_HM)$.
We could check that $S$ and $\Omega$ are tensors.
The de Rham differential $d_M$ may be expressed in terms of $\nabla^{M/B}$, $S$, $\Omega$, and the vertical exterior differential $d_{M/B}$ of the fiber bundle. First we extend $d_{M/B}$ to an operator on $\Gamma(M,\wedge T^*_HM\otimes \wedge T^*(M/B))$ by the formula $$ d_{M/B}(\pi^*\nu\otimes \beta)=(-1)^{|\nu|}\pi^*\nu\otimes d_{M/B}\beta $$ for $\nu\in \mathscr{A}(B)$ and $\beta\in \Gamma(M,\wedge T^*(M/B))$.
Let $e_i$ be a local frame of $T(M/B)$ and $f_{\alpha}$ be a local frame of $TB$, with dual frames $e^i$ and $f^{\alpha}$, respectively. We define an operator $\delta_B$ by $$ \delta_B(\pi^*\nu\otimes \beta)=\pi^*(d_B\nu)\otimes \beta+(-1)^{|\nu|}\pi^*\nu\otimes \sum f^{\alpha}\wedge \nabla^{M/B}_{f_{\alpha}}\beta $$
Then $d_M$ could be decomposed as $$ \tag{*}d_M=d_{M/B}+\delta_B-\sum\langle S,e^i\rangle \iota(e_i)+\sum\langle \Omega,e^i \rangle\iota(e_i). $$
See
Heat Kernels and Dirac Operators Section 10.1, in particular Proposition 10.1.
Now we consider a holomorphic fiber bundle of complex Riemannian manifolds $\pi: M\to B$. Let $J$ and $J^{\prime}$ be the complex structures on $TM$ and $TB$ respectively. $J$ maps $T(M/B)$ into itself. We also assume that $J$ maps $T^HM$ into itself. However, we do not require $T^{H,(1,0)}M$ to be a holomorphic subbundle of $T^{(1,0)}M$. Moreover, we assume the fiber bundle is Kähler in the sense that there exists a smooth $2$-form $\omega$ on $M$ of complex type $(1,1)$ with the following properties:
$\omega$ is closed; $T^HM$ and $T(M/B)$ are orthogonal with respect to $\omega$; if $X$ and $Y\in T(M/B)$, then $\omega(X,Y)=\langle X,JY\rangle$, where the right-hand side is given by the Riemannian metric. See Analytic Torsion and Holomorphic Determinant Bundles II, Section 1(c).
My question is: in the complex case, do we have a $4$-term decomposition of $\bar{\partial}_M$ which is the analogue of the decomposition of $d_M$ in (*)?
|
I wouldn't have asked this question if I hadn't seen this image:
From this image it seems like there are reals that are neither rational nor irrational (dark blue), but is it so or is that illustration incorrect?
A real number is irrational if and only if it is not rational. By definition any real number is either rational or irrational.
I suppose the creator of this image chose this representation to show that rational and irrational numbers are both part of the bigger set of real numbers. The dark blue area is actually the empty set.
This is my take on a better representation:
Feel free to edit and improve this representation to your liking. I've uploaded the SVG source code to pastebin.
No. The definition of an irrational number is a number which is not a rational number, namely it is not the ratio between two integers.
If a real number is not rational, then by definition it is irrational.
However, if you think about algebraic numbers, which are the rational and irrational numbers that can be expressed as roots of polynomials with integer coefficients (like $\sqrt2$ or $\sqrt[4]{12}-\frac1{\sqrt3}$), then there are irrational numbers which are not algebraic. These are called transcendental numbers.
Irrational means not rational. Can something be not rational, and not not rational? Hint: no.
Of course, the "traditional" answer is no, there are no real numbers that are not rational nor irrational. However, being the contrarian that I am, allow me to provide an alternative interpretation which gives a different answer.
In intuitionistic logic, where the law of excluded middle (LEM) $P\vee\lnot P$ is rejected, things become slightly more complicated. Let $x\in \Bbb Q$ mean that there are two integers $p,q$ with $x=p/q$. Then the traditional interpretation of "$x$ is irrational" is $\lnot(x\in\Bbb Q)$, but we're going to call this "$x$ is not rational" instead. The statement "$x$ is not not rational", which is $\lnot\lnot(x\in\Bbb Q)$, is implied by $x\in\Bbb Q$ but not equivalent to it.
Consider the equation $0<|x-p/q|<q^{-\mu}$ where $x$ is the real number being approximated and $p/q$ is the rational approximation, and $\mu$ is a positive real constant. We measure the accuracy of the approximation by $|x-p/q|$, but don't let the denominator (and hence also the numerator, since $p/q$ is near $x$) be too large by demanding that the approximation be within a power of $q$. The larger $\mu$ is, the fewer pairs $(p,q)$ satisfy the equation, so we can find the least upper bound of $\mu$ such that there are infinitely many coprime solutions $(p,q)$ to the equation, and this defines the irrationality measure $\mu(x)$. There is a nice theorem from number theory that says that the irrationality measure of any irrational algebraic number is $2$, and the irrationality measure of a transcendental number is $\ge2$, while the irrationality measure of any rational number is $1$.
Thus there is a measurable gap between the irrationality measures of rational and irrational numbers, and this yields an alternative "constructive" definition of irrational: let $x\in\Bbb I$, read "$x$ is irrational", if $|x-p/q|<q^{-2}$ has infinitely many coprime solutions. Then $x\in\Bbb I\to x\notin\Bbb Q$, i.e. an irrational number is not rational, and in classical logic $x\in\Bbb I\leftrightarrow x\notin\Bbb Q$, so this is equivalent to the usual definition of irrational. This is viewed as a more constructive definition because rather than asserting a negative (that $x=p/q$ yields a contradiction), it instead gives an infinite sequence of good approximations which verifies the irrationality of the number.
This approach is also similar to the continued fraction method: irrational numbers have infinite simple continued fraction representations, while rational numbers have finite ones, so given an infinite continued fraction representation you automatically know that the limit cannot be rational.
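For a concrete instance of such a verifying sequence, the continued fraction convergents of $\sqrt2$ can be generated by a simple recurrence, and the Pell identity $|p^2-2q^2|=1$ guarantees both coprimality and the bound $|\sqrt2-p/q|<1/q^2$ for every convergent:

```python
# Convergents p/q of sqrt(2) = [1; 2, 2, 2, ...], generated by the
# recurrence p_{k+1} = 2 p_k + p_{k-1}, q_{k+1} = 2 q_k + q_{k-1}.
# The Pell identity |p^2 - 2 q^2| = 1 gives, for every convergent,
# |sqrt(2) - p/q| = 1 / (q (sqrt(2) q + p)) < 1/q^2,
# an explicit infinite family of coprime witnesses of the kind required
# by the constructive definition above.
def sqrt2_convergents(n):
    conv = [(1, 1), (3, 2)][:n]
    p0, q0, p1, q1 = 1, 1, 3, 2
    while len(conv) < n:
        p0, q0, p1, q1 = p1, q1, 2*p1 + p0, 2*q1 + q0
        conv.append((p1, q1))
    return conv

for p, q in sqrt2_convergents(8):
    assert abs(p*p - 2*q*q) == 1          # Pell identity => gcd(p, q) = 1
    assert abs(2**0.5 - p/q) < 1/q**2     # the q^{-2} approximation bound

print(sqrt2_convergents(5))  # [(1, 1), (3, 2), (7, 5), (17, 12), (41, 29)]
```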
The bad news is that because intuitionistic or constructive logic is strictly weaker than classical logic, it does not prove anything that classical logic cannot prove. Since classical logic proves that every number is rational or irrational, it does not prove that there is a non-rational non-irrational number (assuming consistency), so intuitionistic logic also cannot prove the existence of a non-rational non-irrational number. It just can't prove that this is impossible (it might be true, for some sense of "might"). On the other hand, there should be a model of the reals with constructive logic + $\lnot$LEM such that there is a non-rational non-irrational number, and I invite any constructive analysts to supply such examples in the comments.
Every real number is either rational or irrational. The picture is not a good illustration, I think. Notice, though, that a number cannot be both rational and irrational (in the picture, the intersection is empty).
We can represent real numbers on a line, i.e. the real line, which contains the rationals and irrationals. By the completeness property of the real numbers, the real line has no gaps. So there is no real number that is neither rational nor irrational.
The set of irrational numbers is the complement of the set of rational numbers, in the set of real numbers. By definition, all real numbers must be either rational or irrational.
|
This paper concerns linear first-order hyperbolic systems in one space dimension of the type
$$
\partial_tu_j + a_j(x,t)\partial_xu_j + \sum\limits_{k=1}^nb_{jk}(x,t)u_k = f_j(x,t),\; x \in (0,1),\; j=1,\ldots,n,
$$
with periodicity conditions in time and reflection boundary conditions in space. We state a kind of dissipativity condition (depending on the coefficients $a_j$ and $b_{jj}$ and the boundary reflection coefficients), which implies Fredholm solvability of the problem, i.e., either there is a nontrivial solution to the homogeneous problem (in this case the space of such solutions has finite dimension) or the nonhomogeneous problem is uniquely solvable for any right-hand side (in this case the solution depends continuously on the right-hand side). In particular, under those conditions no small denominator effects occur.
Our results work for many non-strictly hyperbolic systems, but they are new even in the case of strict hyperbolicity.
Finally, in the case that all coefficients $a_j$ are $t$-independent, we show that the solutions are $C^\infty$-smooth if the data are $C^\infty$-smooth.
|
Actually, this is more of a general question relating to a homework problem I already did. I was given the initial wavefunction of a particle in an infinite square well: $\Psi(x,0) = Ax$ if $0 \leq x \leq \frac{a}{2}$, and $= A(a-x)$ if $\frac{a}{2} \leq x \leq a$. And of course $\Psi(0,0) = $...
Homework StatementProve or give a counterexample: If U is a subspace of V that is invariant under every operator on V, then U = {0} or U = V. Assume that V is finite dimensional.The attempt at a solutionI really think that I should be able to produce a counterexample, however...
Homework StatementGive a specific example of an operator T on R^4 such that,1. dim(nullT) = dim(rangeT) and2. dim(the intersection of nullT and rangeT) = 1The attempt at a solutionI know that dim(R^4) = dim(nullT) + dim(rangeT) = 4, so dim(nullT) = dim(rangeT) = 2.I also...
Homework StatementProve that if there exists a linear map on V whose null space and range are both finite dimensional, then V is finite dimensional.The attempt at a solutionI *think* the following is true: For all v in V, T(v) is in range(T), otherwise T(v) = 0 which implies v is in...
1. Homework StatementSuppose that T is a linear map from V to F, where F is either R or C. Prove that if u is an element of V and u is not an element of null(T), thenV = null(T) (direct sum) {au : a is in F}.2. Relevant informationnull(T) is a subspace of VFor all u in V, u is...
I was wondering if anyone knew anything about epidemic models which take into account the ability of a disease to mutate. Basically I’m curious if there are any existing models which could predict how a rapidly changing disease might affect the progression of an epidemic, or how slower...
hi,I'm trying to put a graph generated in maple into a latex document, but I have no experience using either program. So far I've been able to save my maple plot in postscript format, and based on various online tutorials I've included the \usepackage{graphics} comand after...
Prove that the intersection of any collection of subspaces of V is a subspace of V.Ok, I know I need to show that:1. For all u and v in the intersection, it must imply that u+v is in the intersection, and2. For all u in the intersection and c in some field, cu must be in the...
1. Homework StatementProve: If a, b are nonzero elements in a PID, then there are elements s, t in the domain such that sa + tb = g.c.d.(a,b).2. Homework Equationsg.c.d.(a,b) = sa + tb if sa + tb is an element of the domain such that,(i) (sa + tb)|a and (sa + tb)|b and(ii) If...
1. Homework StatementLet G_1 and G_2 be groups with normal subgroups H_1 and H_2, respectively. Further, we let \iota_1 : H_1 \rightarrow G_1 and \iota_2 : H_2 \rightarrow G_2 be the injection homomorphisms, and \nu_1 : G_1 \rightarrow G_1/H_1 and \nu_2 : G_2/H_2 be the quotient...
Prove that $\mathbf{F}^{\infty}$ is infinite dimensional. $\mathbf{F}^{\infty}$ is the vector space consisting of all sequences of elements of $\mathbf{F}$, and $\mathbf{F}$ denotes the real or complex numbers. I was thinking of showing that no list spans $\mathbf{F}^{\infty}$, which would...
hi,In the course of doing my quantum homework I ran into a bit of a snag.In one of my calculations I need to replace the sum from n = 1 to infinity of 1/n^2 (for odd n only) with its number value.My book instructs me to get the information from a table and actualy gives the value (for...
This is something that I think I should already know, but I am confused. It really seems to me that the set of all real numbers, \Re, should be compact. However, this would require that \Re be closed and bounded, or equivalently, that every sequence of points in \Re have a limit...
hi, My question reads: Let f be defined and continuous on the interval D_1 = (0, 1), and g be defined and continuous on the interval D_2 = (1, 2). Define F(x) on the set D = D_1 \cup D_2 = (0, 2) \backslash \{1\} by the formula: F(x) = f(x), x \in (0, 1); F(x) = g(x), x \in (1, 2)...
hi, In a discussion of the historical motivations for a move from calculus to operators, my QM book says... "Many mathematicians were uncomfortable with the 'metaphysical implications' of a mathematics formulated in terms of infinitesimal quantities (like dx). This disquiet was the stimulus...
The question says: Let A be a set and x a number. Show that x is a limit point of A if and only if there exists a sequence x_1, x_2, ... of distinct points in A that converge to x. Now I know from the if-and-only-if statement that I need to prove this thing both ways. So, the...
This is not a specific homework question so much as it is a general conceptual question. My analysis book includes a theorem that states: 1. The union of any number of open sets is an open set. 2. The intersection of a finite number of open sets is an open set. I follow the proof of...
How to prove stuff about linear algebra??? Question: Suppose (v_1, v_2, ..., v_n) is linearly independent in V and w \in V. Prove that if (v_1 + w, v_2 + w, ..., v_n + w) is linearly dependent, then w \in span(v_1, ..., v_n). To prove this I tried... If (v_1, v_2, ..., v_n) is linearly...
Use the definition of an open set to show that if a finite number of points are removed, the remaining set is still open. Definition: A set is open if every point of the set lies in an open interval entirely contained in the set. I'm a bit lost, but I think that I somehow need to show...
Question: Suppose a set E \subset \Re is bounded from below. Let x = inf(E). Prove there exists a sequence x_1, x_2, ... \in E such that x = lim(x_n). I am not sure, but it seems like my x = lim(x_n) = liminf(x_n). In class we constructed a Cauchy sequence by bisection to find sup...
Question: (I've got a few like this, so I'd like to know if I am doing them correctly.) Compute the sup, inf, limsup, liminf, and all the limit points of the following sequence x_1, x_2, ..., where x_n = 1/n + (-1)^n. What I did was write down the first few terms to get an idea of the...
Theorem: For every non-empty set E of real numbers that is bounded above, there exists a unique real number sup(E) such that 1. sup(E) is an upper bound for E; 2. if y is an upper bound for E then y \geq sup(E). Prove: sup(A \cap B) \leq sup(A). I can show a special case of...
Question: Prove that if a Cauchy sequence x_1, x_2, ... of rationals is modified by changing a finite number of terms, the result is an equivalent Cauchy sequence. All the math classes I have taken previously were computational, and my textbook contains almost no definitions. So, I...
Here's the problem statement: Prove that x_1, x_2, x_3, ... is a Cauchy sequence if it has the property that |x_k - x_{k-1}| < 10^{-k} for all k = 2, 3, 4, .... If x_1 = 2, what are the bounds on the limit of the sequence? Someone suggested that I use the triangle inequality as follows: let n = m + l...
|
I am dealing with a highly nonlinear system of two PDEs. I already have a code to solve the system in case of Dirichlet boundary conditions. The explicit system is:
$$ \begin{eqnarray*} \partial_{t}u & = & D_{11}\partial_{x}^{2}u+\partial_{x}D_{11}\cdot\partial_{x}u+D_{12}\partial_{x}^{2}v+\partial_{x}D_{12}\cdot\partial_{x}v\\ \partial_{t}v & = & D_{21}\partial_{x}^{2}u+\partial_{x}D_{21}\cdot\partial_{x}u+D_{22}\partial_{x}^{2}v+\partial_{x}D_{22}\cdot\partial_{x}v \end{eqnarray*}$$
with boundary conditions:
$$u(0,t)=0, \qquad (\partial_x v)(0,t) = 0$$
The discretized equations of the scheme are:
$$\begin{eqnarray*} -D_{11}(x_{j})u_{n+1,j-1}-D_{12}(x_{j})v_{n+1,j-1}\\ {}[\mu+2D_{11}(x_{j})]u_{n+1,j}+2D_{12}(x_{j})v_{n+1,j}\\ -D_{11}(x_{j})u_{n+1,j+1}-D_{12}(x_{j})v_{n+1,j+1} & = & \frac{1}{4}[4\mu-D_{11}(x_{j+1})+D_{11}(x_{j-1})]u_{n,j}\\ & & -[D_{12}(x_{j+1})-D_{12}(x_{j})]v_{n,j}\\ & & +\frac{1}{4}[D_{11}(x_{j+1})-D_{11}(x_{j-1})]u_{n,j+1}\\ & & +\frac{1}{4}[D_{12}(x_{j+1})-D_{12}(x_{j-1})]v_{n,j+1} \end{eqnarray*}$$
$$ \begin{eqnarray*} -D_{21}(x_{j})u_{n+1,j-1}-D_{22}(x_{j})v_{n+1,j-1}\\ 2D_{21}(x_{j})u_{n+1,j}+[\mu+2D_{22}(x_{j})]v_{n+1,j}\\ -D_{21}(x_{j})u_{n+1,j+1}-D_{22}(x_{j})v_{n+1,j+1} & = & \frac{1}{4}[4\mu-D_{21}(x_{j+1})+D_{21}(x_{j-1})]u_{n,j}\\ & & -[D_{22}(x_{j+1})-D_{22}(x_{j})]v_{n,j}\\ & & +\frac{1}{4}[D_{21}(x_{j+1})-D_{21}(x_{j-1})]u_{n,j+1}\\ & & +\frac{1}{4}[D_{22}(x_{j+1})-D_{22}(x_{j-1})]v_{n,j+1} \end{eqnarray*} $$
That is, I am solving a linear system of the form:
$$A\boldsymbol{x}=\boldsymbol{b}$$
where $A$ is a $2J\times 2J$ matrix with the structure:
$$\left(\begin{array}{cccccccccc} \mbox{1st equation}\rightarrow & | & c_{u}^{j} & c_{v}^{j} & c_{u}^{j+1} & c_{v}^{j+1} & 0 & & ... & 0\\ \mbox{2nd equation}\rightarrow & | & d_{u}^{j} & d_{v}^{j} & d_{u}^{j+1} & d_{v}^{j+1} & 0 & & ... & 0\\ \vdots & | & . & . & . & . & . & . & . & .\\ \mbox{1st}\rightarrow & | & 0 & 0 & c_{u}^{j-1} & c_{v}^{j-1} & c_{u}^{j} & c_{v}^{j} & c_{u}^{j+1} & c_{v}^{j+1}\\ \mbox{2nd }\rightarrow & | & 0 & 0 & d_{u}^{j-1} & d_{v}^{j-1} & d_{u}^{j} & d_{v}^{j} & d_{u}^{j+1} & d_{v}^{j+1}\vdots \end{array}\right)$$
where the $c$'s and $d$'s are the coefficients corresponding to the scheme above.
Now, I am trying to implement the Neumann boundary condition (the one on $v$), using the ghost point method as discussed, for instance, here for a simpler equation.
Thus, I would like to change my code to add the extra terms to the right-hand-side array, and to the corresponding entries of the matrix $A$, but I can't work out what to put in them. The difficulty is that when I write the condition:
$$\frac{u_1-u_{-1}}{2\Delta x}=0$$ (and the same for $v$)
I need to eliminate $u_{-1}$ and $v_{-1}$, also using the third and fourth equations I wrote in this post. Even if I solved for $u_{-1}$ and $v_{-1}$ (assuming I know how to), I do not see how I could write code that works for any scheme of this kind without hard-coding the explicit inversion in the matrix.
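For what it's worth, the ghost-point elimination for a zero-gradient condition can be done generically at the block level, without any explicit inversion. The sketch below uses hypothetical names: `B_minus`, `B_0`, `B_plus` stand for the $2\times 2$ blocks multiplying $(u,v)$ at $j-1$, $j$, $j+1$ in one block row of $A$. It handles only the Neumann part; a Dirichlet row (like the one on $u$ here) would simply be overwritten separately.

```python
import numpy as np

def boundary_row(B_minus, B_0, B_plus):
    """Return the j = 0 blocks after eliminating the ghost point.

    The central-difference Neumann condition (w_1 - w_{-1})/(2*dx) = 0
    gives w_{-1} = w_1, so whatever multiplied the ghost unknowns at
    j = -1 simply adds onto the blocks at j = +1.
    """
    return B_0, B_plus + B_minus

# Example with arbitrary placeholder numbers:
B_minus = np.array([[1.0, 2.0], [0.5, 1.0]])
B_0     = np.array([[4.0, 0.0], [0.0, 4.0]])
B_plus  = np.array([[1.0, 2.0], [0.5, 1.0]])

diag, sup = boundary_row(B_minus, B_0, B_plus)
print(diag)  # unchanged diagonal block
print(sup)   # superdiagonal block with the ghost contribution folded in
```

The same folding applies to the right-hand-side blocks at time level $n$, so the code stays scheme-agnostic: whatever coefficients your stencil produces, the $j-1$ blocks are added to the $j+1$ blocks in the boundary row.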
|
An assortment of curves for fitting chemistry examples is presented in these Colby College class notes. Of particular application is the sigmoid response curve with variable "slope" for the central part of the curve:
$$ f(x) = \frac{a}{1 + e^{bx - c} } + d $$
[This is similar to the logistic function proposed in the first Answer, but has four rather than three parameters, allowing the inflection point (change of sign of the second derivative) to appear somewhere other than at the origin.]
Such functions are inherently monotone (decreasing when $b\gt 0$ and increasing when $b\lt 0$), so they will "automatically" possess that property if you fit your data to one of these models, avoiding the tendency of polynomial curve fits to oscillate, as seen in your graph.
A disadvantage of this nonlinear parameterization is that fitting it requires an iterative process, rather than the direct solvers available for a linear least-squares fit such as you seem to have used for the fifth-degree polynomial.
But nonlinear least-squares fitting is often not difficult when reasonable initial estimates for the parameters are available. There are online solvers (at a glance, the Nonlinear Least Squares Regression page suggested in the class note above seems to be intact, although a second site mentioned there returns 404: Not Found). Most spreadsheets have either built-in or add-in solvers capable of doing the fit. But you may find it edifying to do the fit "manually" so that you understand the roles played by the parameters.
I would start with the two extremes where the curve levels off. That is, for arguments well below the "drop-off" portion of the curve, the data appears to approach $f(0) = 360$ more or less. From this one might estimate:
$$ a + d \approx 360 $$
while the data well above the "drop-off" suggests $d \approx 130$. So we can get initial estimates for parameters $a,d$ easily.
Visually the inflection point (change from concave down to concave up) appears to occur at about $x=3.5$. Solving $f''(x) = 0$ gives us $bx = c$, so $c \approx 3.5\,b$, eliminating one more free parameter. Finally, the first derivative of $f(x)$ at the inflection point, which works out to $-ab/4$, can be estimated visually from the slope exhibited by the data (although this seems rather steep, circa $600$, so it is less firmly estimated by sight).
The iterative procedure (nonlinear least squares regression) will seek to adjust the parameters $a,b,c,d$ to minimize the least squares measure of error:
$$ \sum_{i=1}^n (y_i - f(x_i))^2 $$
where summation is taken over the set of data $(x_i,y_i)$ from your titration experiments.
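To make the procedure concrete, here is a minimal sketch using SciPy's `curve_fit`. The data here is synthetic (the arrays `x_data`, `y_data` and the true parameter values are made up for illustration); in practice you would substitute your titration measurements and the initial guesses read off the plot as described above.

```python
import numpy as np
from scipy.optimize import curve_fit

# Four-parameter sigmoid from the text: f(x) = a / (1 + exp(b*x - c)) + d
def sigmoid(x, a, b, c, d):
    return a / (1.0 + np.exp(b * x - c)) + d

# Synthetic "titration" data with a little noise (placeholder values).
rng = np.random.default_rng(1)
x_data = np.linspace(0, 7, 30)
y_data = sigmoid(x_data, 230, 4.0, 14.0, 130) + rng.normal(0, 3, 30)

# Initial guesses following the text: a + d ~ 360, d ~ 130,
# inflection near x = 3.5, so c = 3.5 * b.
b0 = 4.0
p0 = [230.0, b0, 3.5 * b0, 130.0]

popt, pcov = curve_fit(sigmoid, x_data, y_data, p0=p0)
print(popt)  # fitted (a, b, c, d)
```

With reasonable starting values the iteration converges quickly; a poor guess for $b$ (the steepness) is the most common cause of failure, which is why estimating the slope at the inflection point first is worthwhile.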
|
Consider the load balancing problem on two machines. Thus we want to distribute a set of $n$ jobs with processing times $t_1,...,t_n$ over two machines such that the makespan (maximum of the processing times of the two machines) is minimized. Professor Smart has designed an approximation algorithm
Alg for this problem, and he claims that his algorithm is a $1.05$ approximation algorithm. We run Alg on a problem instance where the total size of all jobs is $200$, and Alg returns a solution whose makespan is $120$.
(i) Suppose that we know that all job sizes are at most $100$. Can we then conclude that professor Smart's claim is false?
(ii) Same question when all job sizes are at most $10$.
Let's talk about the case (i):
We know that $\sum{t_i} = 200$ and that $t_i \leq 100$. The makespan of the algorithm is $Alg = 120$, so $Alg \leq 1.05 \cdot OPT$. We have no other information about the algorithm used. A lower bound would be $LB = \max\left( \frac{\sum{t_i}}{2}, \max(t_i)\right) = \max(100,100) = 100$, so I would say for that particular instance we'd have $120 \leq 1.05 \cdot 100 = 105$, which means the claim would be false.
Likewise for the case (ii).
My answer is marked as incorrect, and I am struggling to do the right analysis.
Can anyone help please ?
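For reference, the lower bound used in the analysis above can be sketched in a couple of lines (a minimal illustration; the instance is made up to match the stated totals):

```python
def lower_bound(jobs):
    """Lower bound on OPT for two machines: one machine receives at
    least half the total work, and the largest job must run in one
    piece on some machine."""
    return max(sum(jobs) / 2, max(jobs))

# An instance matching part (i): total size 200, each job at most 100.
print(lower_bound([100, 100]))   # 100.0
```

Note that this only bounds OPT from below; deciding whether the professor's claim is refuted also requires reasoning about how large OPT could possibly be on instances consistent with the given job-size cap.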
|
The other answers here, describing oxygen toxicity, tell what can go wrong if you have too much oxygen, but they omit two important concepts that should accompany their descriptions. Also, there is a basic safety issue with handling pressure tanks of high oxygen fraction. An important property of breathed oxygen is its partial pressure....
It does. You would find the average percentage of the atmosphere that is argon is very slightly higher at the floor of valleys. However, bear in mind first of all it wouldn't be anywhere near a complete stratification -- a layer of pure argon, then another of pure N2, and so on. A mixture of nearly ideal gases doesn't do that, at least at equilibrium, ...
The common saying is a holdover from when STP was defined to be $\pu{273.15 K}$ and $\pu{1 atm}$. However, IUPAC changed the definition in 1982 so that $\pu{1 atm}$ became $\pu{1 bar}$. I think the main issue is that a lot of educators didn't get the memo and went right along either teaching STP as $\pu{1 atm}$ or continuing with the line they were taught ("$\pu{...
The ideal gas law is a very good approximation of how gases behave most of the time. There is no logical flaw in the laws. Most gases most of the time behave in a way that is close to the ideal gas equation. And, as long as you recognise the times they don't, the equation is a good description of the way they behave. The ideal gas equations assume that the ...
Our body is used to the environment around us. Once you change part of the environment, you have to be ready for the consequences. Inhaling pure oxygen is the cause of what is known as oxygen toxicity. Oxygen toxicity is a condition resulting from the harmful effects of breathing molecular oxygen $\ce{(O2)}$ at increased partial pressures. High ...
Essentially, because the carbon dioxide sublimates from solid (dry ice) to gas at a very low temperature (roughly −78 °C at 1 atm), it causes water vapour in the air to condense, causing a visible fog. Thus what you are seeing is not carbon dioxide, but rather water. When we exhale and it is reasonably warm, the carbon dioxide expelled is roughly body ...
It's been known since 1941 that the answer to your question is in the negative, i.e. that there will never be a closed-form equation of state for a nonideal gas. In 1941 Mayer and Montroll developed what is now known as the cluster expansion for the partition function of a nonideal gas whose particles have pairwise interactions. This cluster expansion ...
At the end of the tunnel, you're still trying to approximate the statistical average of interactions between individual molecules using macroscopic quantities. The refinements add more parameters because you're trying to parametrise the overall effect of those individual interactions for every property that is involved for each molecule. You're never going ...
The differences in acceleration due to gravity are not the main factor in comparing how accurate the approximation is for each planet. The main factor is the mass of gas each planet's atmosphere contains. Mercury has almost no atmosphere. The total mass of all gas in Mercury's atmosphere is only 10000 kg! The pressure is less than $10^{-14}$ bar. The ...
A big point of confusion is that it is still taught (at least in the mid-2000's) that STP is defined with respect to $\pu{273 K}$ and $\pu{1 atm}$ of pressure, or $\pu{1.01325 bar}$ of pressure, even though IUPAC changed their definition to be with respect to $\pu{1 bar}$ of pressure. By using the ideal gas law on the old STP definition, you get that the ...
You must consider this: the question whether a physical system follows a particular law is not a "yes or no" question. There is always an error when you compare what you measure with what the law predicts. The error can be at the 17th digit, but it's still there. Let me quote a very insightful passage by H. Jeffreys about this: It is not true that ...
I didn't know that balloons expand during flight because of thermodynamics, and I didn't know how high they can fly, but a quick search suggests that a partially unfilled regular balloon can fly up to an altitude of around $\pu{25 km}$. Now, $\pu{25 km}$ means that it reaches the first part of the stratosphere, with temperatures of $\pu{-60 ^\circ C}$, that ...
While most everything the previous answer states is correct, I would point out that taking four times the volume of a single particle has nothing to do with experiment and arises mathematically. In deriving the VDW equation, the particles are still assumed to be hard spheres, but this assumption is corrected for with the parameter $a$. The hard-sphere ...
Preliminaries. Consider $U = U(V,T,p)$. However, assuming that it is possible to write an equation of state of the form $p = f(V,T)$, I don't have to explicitly address the $p$ dependence of $U$, and I can write the following differential: $$\mathrm{d}U = \underbrace{\left( \frac{\partial U}{\partial V} \right)_T}_{\pi_T} \mathrm{d}V + \underbrace{\...
If the balloon is closed, then yes, both volume and pressure will increase when the gas inside is heated. Let's look at two simpler cases first. If the gas were completely free to expand against ambient pressure (say, inside of a container sealed with a freely moving piston, with no friction), then the heated gas would expand until it created as much force ...
The heat capacities are defined as $$C_p = \left(\frac{\partial H}{\partial T}\right)_{\!p} \qquad \qquad C_V = \left(\frac{\partial U}{\partial T}\right)_{\!V} \tag{1}$$ and since $H = U + pV$, we have $$\begin{align}C_p - C_V &= \left(\frac{\partial H}{\partial T}\right)_{\!p} - \left(\frac{\partial U}{\partial T}\right)_{\!V} \tag{2} \\&= \...
As a certified SCUBA diver, I learned that breathing pressurized pure oxygen leads to oxygen toxicity, which can be fatal. However, I'm not anywhere near an expert on the mechanism of oxygen toxicity, but I believe it has to do with resulting in a lot more reactive oxygen species which can cause oxidative stress and lipid peroxidation. I'm not really ...
Does this mean that both (1) 1 mole of $\ce O$ would occupy $22.4~\mathrm L$ (or, if this doesn't usually occur in nature, say 1 mole of $\ce{He}$ or another monoatomic gas) and (2) 1 mole of $\ce{O2}$ would occupy $22.4~\mathrm L$? Yes, it means exactly that. And you're right, a stable gas of $\ce O$ atoms is a pretty exotic thing, so $\ce{He}$ is a much ...
That's because of two reasons. One is entropy, the ultimate force of chaos and disorder. Sure, gases would like to be arranged according to their density, but even above that, they would like to be mixed, because mixing creates a great deal of entropy. If you prevent the mixing, then they would behave just as you expected. Indeed, a balloon filled with $\ce{...
You may recall the ideal gas law: $$PV = nRT.$$ Here, $P$ is pressure, $V$ is volume, $n$ is the amount of gas present (in moles), $R$ is the ideal gas constant, and $T$ is temperature. In an enclosed system, with no gas flowing in or out, $n$ is constant (as is also, obviously, $R$). We can rearrange the equation above to pull all the constant terms to ...
This is merely a shard of a fact which does not make much sense in and by itself. After all, in systems with gas/liquid equilibrium there is nothing really special about $\left(\dfrac{\partial\mathfrak p}{\partial V}\right)_T=0$. On the contrary, this is pretty typical. See all those points where the blue lines (isotherms) are horizontal? They make up a ...
The van der Waals equation can't be derived from first principles. It is an ad hoc formula. There is a "derivation" in statistical mechanics from a partition function that is engineered to give the right answer. It also cannot be derived from first principles. A gas is a collection of molecules that do not cohere strongly enough to form a liquid or a ...
I edited the first van der Waals equation in your question, because it was incorrect. First, the volume available to the gas is pretty much what you think: it's the space left for it to occupy, i.e. the volume delimited by the container. If you think of a gas tank, it's the interior volume of the tank. For systems of macroscopic dimensions, there is no real ...
Carbon dioxide ($\ce{CO2}$) readily dissolves in water to form carbonic acid ($\ce{H2CO3 (aq)}$). This is the formation of bonds. Carbonic acid then dissociates in water as follows. So water gains $\ce{H+}$ ions, which makes the water acidic. The following shows the dissociation of carbonic acid more clearly. Carbon monoxide ($\ce{CO}$) does not ...
Yes. Any fluid whose temperature is above the critical temperature and whose pressure is above the critical pressure is by definition a supercritical fluid. Don't be misled by all the claims that supercritical fluids are special and wonky with all sorts of amazing, bizarre properties. This is true of some supercritical fluids near the critical point, but the ...
You're actually on the right track. Looking at the percent composition, you've correctly identified that the ratio of $\ce{C}$ to $\ce{F}$ atoms is 1:1; however, you cannot assume that the formula is just $\ce{CF}$ (which isn't a known compound), it could be any compound with that ratio: $\ce{C2F2}$, $\ce{C3F3}$, $\ce{C4F4}$, etc. The way to narrow it down ...
There is a liquid state for carbon dioxide. Borrowing the $\ce{CO2}$ phase diagram from Wikipedia, we can see that $\ce{CO2}$ will condense at a few atmospheres, dependent on temperature. At still higher pressures, the liquid will solidify. Below the triple point temperature, the gas will transition directly to solid.
General estimates place a can of Coca-Cola at 2.2 grams of $\ce{CO2}$ per can. As a can is around 12 fluid ounces, or 355 ml, the amount of $\ce{CO2}$ in a can is: $$\text{2.2 g} \ \ce{CO2}* \frac{\text{1 mol} \ \ce{CO2}}{\text{44 g} \ \ce{CO2} } = 0.05 \ \text{mol}$$ $$ \text{355 mL} * \frac{\text{1 L}}{\text{1000 mL}} = 0.355 \ \text{...
If one rearranges the ideal gas law equation, you can obtain the following (assuming $n$ and $T$ are non-zero): $$\frac{PV}{nT} = R$$ $R$ is a constant, and there are in fact infinitely many possible sets of values $(P, V, n, T)$ that satisfy the equation. Let $(P_1, V_1, n_1, T_1)$ denote one such set, and let $(P_2, V_2, n_2, T_2)$ denote a second one. ...
TL;DR: Spray cans don't actually get colder when shaken. However, shaking a can does increase heat conduction from your hand to the can, making it feel colder. Humans don't actually sense external temperature directly; our thermoreceptors are located under the skin, and thus effectively measure the rate at which body heat is lost through the skin. This is ...
|
I'm not sure if this is exactly what you are looking for or perhaps you already know what I am about to say.
There is a geometric notion of a twistor spinor (or conformal Killing spinor): one which is in the kernel of the Penrose operator (see below). Then one defines the twistor space as the projectivisation of the space of twistor spinors. Doing this for Minkowski spacetime recovers the usual twistor space.
Let $(M,g)$ be a riemannian spin manifold. (When I say riemannian I include also the case of a metric with indefinite signature.) Let $S$ denote the complex spinor bundle. The spin connection defines a map$$
\nabla: \Gamma(S) \to \Omega^1(S)
$$from spinor fields to one-forms with values in $S$. Now $\Omega^1(S) = \Gamma(T^*M \otimes S)$ and Clifford action of one-forms on spinors gives a map$$
\Omega^1(S) \to \Gamma(S)
$$The composition of the previous two maps is the Dirac operator. The Penrose operator is in some sense the complement of the Dirac operator $D$. The kernel of the Clifford map $T^*M \otimes S \to S$ defines a subbundle $W$, say, of $T^*M \otimes S$. Composing the covariant derivative with the projection $\Omega^1(S) \to \Gamma(W)$ defines the Penrose operator $P: \Gamma(S) \to \Gamma(W)$: explicitly,$$
P_X \psi = \nabla_X \psi + \frac1n X \cdot D\psi
$$for all vector fields $X$ and spinor fields $\psi$, and where $n = \dim M$. (My Clifford algebra conventions are $X^2 = - |X|^2$.) Notice that the "gamma trace" of the Penrose operator vanishes.
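As a quick sanity check (a sketch in an orthonormal local frame $e_1,\dots,e_n$, with $e_a \cdot e_a = -1$ in the Clifford convention above), the gamma trace of $P$ vanishes because

$$\sum_{a=1}^n e_a \cdot P_{e_a}\psi = \sum_{a=1}^n e_a \cdot \nabla_{e_a}\psi + \frac{1}{n}\sum_{a=1}^n e_a \cdot e_a \cdot D\psi = D\psi - D\psi = 0,$$

using $D\psi = \sum_a e_a \cdot \nabla_{e_a}\psi$ and $\sum_a e_a \cdot e_a = -n$. (In indefinite signature the same computation goes through with the appropriate signs $\varepsilon_a$ inserted.)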
There is a sizeable literature on twistor spinors mostly in riemannian and lorentzian signatures. This is the work of Helga Baum and collaborators in Berlin. A search for "twistor spinors" in MathSciNet should give you many links.
One important property of the twistor spinor equation is that it is conformally invariant, whence the twistor spinors of conformally related riemannian spin manifolds correspond in a simple way. Since you mention maximally symmetric lorentzian manifolds, this observation might be of use because such spaces are conformally flat, hence you can write down the twistor spinors simply by rescaling the twistor spinors in Minkowski spacetime. In riemannian signature (hence for round spheres and hyperbolic spaces) this is described in the 1990 Humboldt University Seminarberichte
Twistor and Killing spinors on riemannian manifolds by Baum, Friedrich, Grunewald and Kath, later published by Teubner.
|
Hey guys! I built the voltage multiplier with alternating square wave from a 555 timer as a source (which is measured 4.5V by my multimeter) but the voltage multiplier doesn't seem to work. I tried first making a voltage doubler and it showed 9V (which is correct I suppose) but when I try a quadrupler for example and the voltage starts from like 6V and starts to go down around 0.1V per second.
Oh! I found a mistake in my wiring and fixed it. Now it seems to show 12V and instantly starts to go down by 0.1V per sec.
But you really should ask the people in Electrical Engineering. I just had a quick peek, and there was a recent conversation about voltage multipliers. I assume there are people there who've made high voltage stuff, like rail guns, which need a lot of current, so a low current circuit like yours should be simple for them.
So what did the guys in the EE chat say...
The voltage multiplier should be ok on a capacitive load. It will drop the voltage on a resistive load, as mentioned in various Electrical Engineering links on the topic. I assume you have thoroughly explored the links I have been posting for you...
A multimeter is basically an ammeter. To measure voltage, it puts a stable resistor into the circuit and measures the current running through it.
Hi all! There is theorem that links the imaginary and the real part in a time dependent analytic function. I forgot its name. Its named after some dutch(?) scientist and is used in solid state physics, who can help?
The Kramers–Kronig relations are bidirectional mathematical relations, connecting the real and imaginary parts of any complex function that is analytic in the upper half-plane. These relations are often used to calculate the real part from the imaginary part (or vice versa) of response functions in physical systems, because for stable systems, causality implies the analyticity condition, and conversely, analyticity implies causality of the corresponding stable physical system. The relation is named in honor of Ralph Kronig and Hans Kramers. In mathematics these relations are known under the names...
I have a weird question: The output on an astable multivibrator will be shown on a multimeter as half the input voltage (for example we have 9V-0V-9V-0V...and the multimeter averages it out and displays 4.5V). But then if I put that output to a voltage doubler, the voltage should be 18V, not 9V right? Since the voltage doubler will output in DC.
I've tried hooking up a transformer (9V to 230V, 0.5A) to an astable multivibrator (which operates at 671Hz) but something starts to smell burnt and the components of the astable multivibrator get hot. How do I fix this? I check it after that and the astable multivibrator works.
I searched the whole god damn internet, asked every god damn forum and I can't find a single schematic that converts 9V DC to 1500V DC without using giant transformers and power stage devices that weigh a billion tons....
something so "simple" turns out to be hard as duck
In Peskin's QFT book the sum over zero point energy modes is an infinite c-number; fortunately, it doesn't show up experimentally, since experimentalists measure the difference in energy from the ground state. According to my understanding the zero point energy is the same as the ground state, isn't it?
If so, it is always possible to subtract a finite number (a higher excited state, e.g.) from this zero point energy (which is infinite); it follows that experimentally we should always obtain an infinite spectrum.
@AaronStevens Yeah, I had a good laugh to myself when he responded back with "Yeah, maybe they considered it and it was just too complicated". I can't even be mad at people like that. They are clearly fairly new to physics and don't quite grasp yet that most "novel" ideas have been thought of to death by someone; likely 100+ years ago if it's classical physics
I have recently come up with a design of a conceptual electromagntic field propulsion system which should not violate any conservation laws, particularly the Law of Conservation of Momentum and the Law of Conservation of Energy. In fact, this system should work in conjunction with these two laws ...
I rememeber that Gordon Freeman's thesis was "Observation of Einstein-Podolsky-Rosen Entanglement on Supraquantum Structures by Induction Through Nonlinear Transuranic Crystal of Extremely Long Wavelength (ELW) Pulse from Mode-Locked Source Array "
In Peskin's QFT book the sum over zero point energy modes is an infinite c-number; fortunately, it doesn't show up experimentally, since experimentalists measure the difference in energy from the ground state. According to my understanding the zero point energy is the same as the ground state, isn't it? If so, it is always possible to subtract a finite number (a higher excited state, e.g.) from this zero point energy (which is infinite); it follows that experimentally we should always obtain an infinite spectrum.
@ACuriousMind What confuses me is the interpretation of Peskin to this infinite c-number and the experimental fact
He said the second term is the sum over zero point energy modes, which is infinite as you mentioned. He added, "fortunately, this energy cannot be detected experimentally, since the experiments measure only the difference from the ground state of H".
@ACuriousMind Thank you, I understood your explanations clearly. However, regarding what Peskin mentioned in his book, there is a contradiction between what he said about the infinity of the zero point energy/ground state energy, and the fact that this energy is not detectable experimentally because the measurable quantity is the difference in energy between the ground state (which is infinite, and this is the confusion) and a higher level.
It's just the first encounter with something that needs to be renormalized. Renormalizable theories are not "incomplete", even though you can take the Wilsonian standpoint that renormalized QFTs are effective theories cut off at a scale.
according to the author, the energy difference is always infinite due to two facts: first, the ground state energy is infinite; secondly, the energy difference is defined by subtracting a higher-level energy from the ground state one.
@enumaris That is an unfairly pithy way of putting it. There are finite, rigorous frameworks for renormalized perturbation theories following the work of Epstein and Glaser (buzzword: Causal perturbation theory). Just like in many other areas, the physicist's math sweeps a lot of subtlety under the rug, but that is far from unique to QFT or renormalization
The classical electrostatics formula $H = \int \frac{\mathbf{E}^2}{8 \pi} dV = \frac{1}{2} \sum_a e_a \phi(\mathbf{r}_a)$ with $\phi_a = \sum_b \frac{e_b}{R_{ab}}$ allows for $R_{aa} = 0$ terms i.e. dividing by zero to get infinities also, the problem stems from the fact that $R_{aa}$ can be zero due to using point particles, overall it's an infinite constant added to the particle that we throw away just as in QFT
@bolbteppa I understand the idea that we need to drop such terms to be consistent with experiments. But I cannot understand why experiment doesn't show such infinities that arose in the theory.
These $e_a/R_{aa}$ terms in the big sum are called self-energy terms, and are infinite, which means a relativistic electron would also have to have infinite mass if taken seriously, and relativity forbids the notion of a rigid body so we have to model them as point particles and can't avoid these $R_{aa} = 0$ values.
|
$X_1,...,X_n$ are independent Poisson random variables, with $X_j$ having parameter $j\lambda$. What is the Fisher information contained in $(X_1,...,X_n)$ about $\lambda$? BTW, what is the likelihood function in this question? $S_n$ is the log-likelihood function; is the first derivative right? I don't know if it is correct, as this is the first step to get the answer. What I calculate is $\frac{\partial S_n}{\partial \lambda}=\sum_{i=1}^{n}\frac{X_i}{\lambda i}-1$
Try the following:
1) Calculate the likelihood function based on observations $x_1,\ldots,x_n$ from $X_1,\ldots,X_n$. This is just $$ L(\lambda)=L(\lambda;(x_1,\ldots,x_n))=\prod_{i=1}^n p_i(x_i), $$ where $p_i$ denotes the probability function corresponding to $X_i$. Then calculate the log-likelihood function $l(\lambda)=l(\lambda;(x_1,\ldots,x_n))=\log(L(\lambda;(x_1,\ldots,x_n)))$.
2) Differentiate twice with respect to $\lambda$ and get an expression for $$ \frac{\partial^2 l(\lambda)}{\partial \lambda^2}. $$
3) Then the Fisher information is the following $$ i(\lambda)=E\left[-\frac{\partial^2 l(\lambda;(X_1,\ldots,X_n))}{\partial \lambda^2}\right]. $$
I think the correct answer must be $\frac{n(n+1)}{2}\frac{1}{\lambda}$, but please correct me if I'm wrong.
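That closed form checks out numerically. Since $l''(\lambda) = -\sum_i X_i/\lambda^2$ here, the average observed information should converge to $n(n+1)/(2\lambda)$; the sketch below verifies this by simulation (the values of `lam`, `n`, `reps` are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n, reps = 2.0, 5, 200_000

# X_j ~ Poisson(j * lambda), j = 1..n. The observed information is
# -l''(lambda) = sum_j x_j / lambda^2; its expectation should equal
# the Fisher information n(n+1)/(2*lambda).
j = np.arange(1, n + 1)
x = rng.poisson(j * lam, size=(reps, n))
observed_info = x.sum(axis=1) / lam**2

print(observed_info.mean())      # Monte Carlo estimate
print(n * (n + 1) / (2 * lam))   # exact value: 7.5
```

With these parameters both printed numbers should agree to two decimal places or so.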
|
An algebraic approach to entropy plateaus in non-integer base expansions
Mathematics Department, University of North Texas, 1155 Union Cir #311430, Denton, TX 76203-5017, USA
For a positive integer $ M $ and a real base $ q\in(1, M+1] $, let $ {\mathcal{U}}_q $ denote the set of numbers having a unique expansion in base $ q $ over the alphabet $ \{0, 1, \dots, M\} $, and let $ \mathbf{U}_q $ denote the corresponding set of sequences in $ \{0, 1, \dots, M\}^ {\mathbb{N}} $. Komornik et al. [
Adv. Math. 305 (2017), 165–196] showed recently that the Hausdorff dimension of $ {\mathcal{U}}_q $ is given by $ h(\mathbf{U}_q)/\log q $, where $ h(\mathbf{U}_q) $ denotes the topological entropy of $ \mathbf{U}_q $. They furthermore showed that the function $ H: q\mapsto h(\mathbf{U}_q) $ is continuous, nondecreasing and locally constant almost everywhere. The plateaus of $ H $ were characterized by Alcaraz Barrera et al. [Trans. Amer. Math. Soc., 371 (2019), 3209–3258]. In this article we reinterpret the results of Alcaraz Barrera et al. by introducing a notion of composition of fundamental words, and use this to obtain new information about the structure of the function $ H $. This method furthermore leads to a more streamlined proof of their main theorem.

Keywords: Beta-expansion, univoque set, topological entropy, entropy plateau, transitive subshift, composition of fundamental words.

Mathematics Subject Classification: Primary: 11A63; Secondary: 37B10, 37B40, 68R15.

Citation: Pieter C. Allaart. An algebraic approach to entropy plateaus in non-integer base expansions. Discrete & Continuous Dynamical Systems - A, 2019, 39 (11): 6507-6522. doi: 10.3934/dcds.2019282
References:

R. Alcaraz Barrera, S. Baker and D. Kong, Entropy, topological transitivity, and dimensional properties of unique $q$-expansions, Trans. Amer. Math. Soc., 371 (2019), 3209–3258.

P. Allaart and D. Kong, On the continuity of the Hausdorff dimension of the univoque set.

S. Baker, Generalized golden ratios over integer alphabets.

P. Erdős, I. Joó and V. Komornik, Characterization of the unique expansions $1=\sum_{i=1}^\infty q^{-n_i}$ and related problems.

D. Lind and B. Marcus, An Introduction to Symbolic Dynamics and Coding, Cambridge University Press, Cambridge, 1995. doi: 10.1017/CBO9780511626302.
|
Positive ground state solutions for fractional Laplacian system with one critical exponent and one subcritical exponent
School of Mathematics and Statistics and Hubei Key Laboratory of Engineering Modeling and Scientific Computing, Huazhong University of Science and Technology, Wuhan 430074, China
In this paper, we study the following fractional Laplacian system with one critical exponent and one subcritical exponent:

$ \begin{cases} (-\Delta)^{s}u+\mu u = |u|^{p-1}u+\lambda v,& x\in\mathbb{R}^{N},\\ (-\Delta)^{s}v+\nu v = |v|^{2^{\ast}-2}v+\lambda u,& x\in\mathbb{R}^{N},\\ \end{cases} $

where $(-\Delta)^{s}$ denotes the fractional Laplacian, $0<s<1,\ N>2s,\ \lambda<\sqrt{\mu\nu},\ 1<p<2^{\ast}-1$ and $2^{\ast} = \frac{2N}{N-2s}$ is the critical Sobolev exponent. It is shown that there exists $\mu_{0}\in(0,1)$ such that the system admits a positive ground state solution when $0<\mu\leq\mu_{0}$, while for $\mu>\mu_{0}$ there exists $\lambda_{\mu,\nu}\in[\sqrt{(\mu-\mu_{0})\nu},\sqrt{\mu\nu})$ such that the system admits a positive ground state solution when $\lambda>\lambda_{\mu,\nu}$ and admits no ground state solution when $\lambda<\lambda_{\mu,\nu}$.

Mathematics Subject Classification: Primary: 35J50, 35B33, 35R11.

Citation: Maoding Zhen, Jinchun He, Haoyuan Xu, Meihua Yang. Positive ground state solutions for fractional Laplacian system with one critical exponent and one subcritical exponent. Discrete & Continuous Dynamical Systems - A, 2019, 39 (11): 6523-6539. doi: 10.3934/dcds.2019283
References:

B. Barrios, E. Colorado, A. de Pablo and U. Sánchez, On some critical problems for the fractional Laplacian operator.
X. Cabré and Y. Sire, Nonlinear equations for fractional Laplacians, I: Regularity, maximum principles, and Hamiltonian estimates.
L. A. Caffarelli, J.-M. Roquejoffre and Y. Sire, Variational problems with free boundaries for the fractional Laplacian.
X. Chang and Z.-Q. Wang, Ground state of scalar field equations involving fractional Laplacian with general nonlinearity.
Z. J. Chen and W. M. Zou, An optimal constant for the existence of least energy solutions of a coupled Schrödinger system.
Z. J. Chen and W. M. Zou, Positive least energy solutions and phase separation for coupled Schrödinger equations with critical exponent.
Z. J. Chen and W. M. Zou, Positive least energy solutions and phase separation for coupled Schrödinger equations with critical exponent: Higher dimensional case.
X. Y. Cheng and S. Ma, Existence of three nontrivial solutions for elliptic systems with critical exponents and weights.
A. Cotsiolis and N. K. Tavoularis, Best constants for Sobolev inequalities for higher order fractional derivatives.
D. F. Lü and S. J. Peng, On the positive vector solutions for nonlinear fractional Laplacian system with linear coupling.
J. Marcos do Ó and D. Ferraz, Concentration-compactness principle for nonlocal scalar field equations with critical growth.
S. J. Peng, Y.-F. Peng and Z.-Q. Wang, On elliptic systems with Sobolev critical growth.
S. J. Peng, W. Shuai and Q. F. Wang, Multiple positive solutions for linearly coupled nonlinear elliptic systems with critical exponent.
W. Rudin,
X. D. Shang, J. H. Zhang and Y. Yang, Positive solutions of nonhomogeneous fractional Laplacian problem with critical exponent.
|
Martin's comment is right on the money; in particular, the best way to get a feeling for coends is through the many examples where they appear, such as (generalized) tensor products. But from an abstract point of view, coends can be considered as universal "extranatural transformations", and the ubiquity of coends (and ends) is explained by the ubiquity of such extranatural transformations.
Let me take a specific example before getting into the abstract aspects. Let's consider the example of geometric realization of simplicial sets, as a left adjoint to the singularization functor $S: Top \to Set^{\Delta^{op}}$. Recall that if $Y$ is a space, then $S(Y)$ is the simplicial set $\Delta^{op} \to Set$ whose value at the ordinal $[n]$ (with $n+1$ points) is defined by
$$S(Y)([n]) = \hom_{Top}(\sigma_n, Y)$$
where $\sigma_n$ is the $n$-dimensional affine simplex seen as a topological space. We are interested in constructing a left adjoint $R$ to $S$, so that for any simplicial set $X$, the set of natural transformations
$$X \to S(Y)$$
is in natural bijective correspondence with continuous maps $R(X) \to Y$.
The way to do this is to "bend" a natural transformation
$$X([n]) \to \hom_{Top}(\sigma_n, Y)$$
(a family of maps natural in $[n] \in \Delta$) into another family
$$\phi_n: X([n]) \times \sigma_n \to Y$$
of continuous maps indexed by $n$. This family has a property intimately related to the naturality of the first family; it is called "extranaturality". It means that given any morphism $f: [m] \to [n]$, the composite
$$X([n]) \times \sigma_m \stackrel{X[f] \times id}{\to} X([m]) \times \sigma_m \stackrel{\phi_m}{\to} Y$$
equals the morphism
$$X([n]) \times \sigma_m \stackrel{id \times \sigma_f}{\to} X([n]) \times \sigma_n \stackrel{\phi_n}{\to} Y;$$
this precisely mirrors the naturality of the first family in $n$. Thus, what we are after is an extranatural transformation
$$X([n]) \times \sigma_n \to R(X)$$
with the universal property that given any extranatural transformation $\phi_n$ as above, there exists a unique map $R(X) \to Y$ making the evident triangle commute (for each $n$). This is of course the coend
$$R(X) = \int^n X([n]) \times \sigma_n$$
and the appropriate construction in terms of coproducts and coequalizers that you indicated in your question is exactly what is required to construct the universal extranatural transformation.
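Concretely (this is the standard presentation, in the notation above), the coend can be written as the coequalizer

$$\int^{n} X([n]) \times \sigma_n \;\cong\; \operatorname{coeq}\Bigl(\coprod_{f: [m]\to[n]} X([n]) \times \sigma_m \rightrightarrows \coprod_{[n]} X([n]) \times \sigma_n\Bigr),$$

where, on the summand indexed by $f: [m] \to [n]$, one of the two maps is $X[f] \times \mathrm{id}$ (landing in the $[m]$-summand) and the other is $\mathrm{id} \times \sigma_f$ (landing in the $[n]$-summand); coequalizing them imposes exactly the extranaturality relation displayed above.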
This is easily abstracted. Given any functor $F: C^{op} \times C \to D$, one can define what it means for a family of maps $F(c, c) \to d$ (for fixed $d$) to be extranatural in $c$, and the coend is again described as a universal extranatural transformation. In nature, such transformations almost invariably arise by "bending" a natural transformation into an extranatural (also called "dinatural") one. The tensor product mentioned by Martin fits into this pattern: thinking of a left $R$-module map of the form
$$M \to \hom_{Ab}(N, A)$$
($M$ a left $R$-module, $N$ a right $R$-module, $A$ an abelian group; the hom acquires a left $R$-module structure) as an $Ab$-enriched natural transformation between functors of the form $R \to Ab$ (where a ring $R$ is viewed as a one-object $Ab$-enriched category), we can "bend" this map into a map
$$M \otimes N \to A$$
which is extranatural with respect to scalar actions, and the quotient $M \otimes N \to M \otimes_R N$ is the universal such extranatural map. But this only scratches the surface of possibilities for this type of situation.
Finally, I second Martin's remark on the traditional integral notation -- not too much should be made of this, except that weighted colimits are primary examples of coends, and there are interchange isomorphisms which are reminiscent of Fubini theorems; this is touched upon in Categories for the Working Mathematician.
|
Adjiashvili, David, Oertel, Timm and Weismantel, Robert 2015. A polyhedral Frobenius theorem with applications to integer optimization. SIAM Journal on Discrete Mathematics 29 (3) , pp. 1287-1302. 10.1137/14M0973694
Abstract
We prove a representation theorem of projections of sets of integer points by an integer matrix $W$. Our result can be seen as a polyhedral analogue of several classical and recent results related to the Frobenius problem. Our result is motivated by a large class of nonlinear integer optimization problems in variable dimension. Concretely, we aim to optimize $f(Wx)$ over a set $\mathcal{F} = P\cap \mathbb{Z}^n$, where $f$ is a nonlinear function, $P\subset \mathbb{R}^n$ is a polyhedron, and $W\in \mathbb{Z}^{d\times n}$. As a consequence of our representation theorem, we obtain a general efficient transformation from the latter class of problems to integer linear programming. Our bounds depend polynomially on various important parameters of the input data, leading, among others, to the first polynomial-time algorithms for several classes of nonlinear optimization problems.
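The key idea can be illustrated with a toy brute force (our own illustration, not the paper's algorithm): when optimizing $f(Wx)$, only the image $\{Wx\}$ matters, and for fixed $d$ that image is typically far smaller than the feasible set itself.

```python
import itertools

def optimize_via_image(f, W, bounds):
    """Minimize f(W x) over the integer box given by bounds, by first
    collecting the (typically much smaller) image {W x}."""
    image = set()
    for x in itertools.product(*(range(lo, hi + 1) for lo, hi in bounds)):
        image.add(tuple(sum(row[j] * x[j] for j in range(len(x)))
                        for row in W))
    return min(f(y) for y in image)

# Minimize (y - 5)^2 with y = x1 + 2*x2 + 3*x3, x in {0,...,3}^3:
best = optimize_via_image(lambda y: (y[0] - 5) ** 2,
                          [[1, 2, 3]], [(0, 3)] * 3)
print(best)  # -> 0, attained e.g. at x = (0, 1, 1)
```

The paper's contribution is, roughly, a polyhedral description of such images that avoids this exponential enumeration.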
Item Type: Article
Date Type: Published Online
Status: Published
Schools: Mathematics
Subjects: Q Science > QA Mathematics
Publisher: Society for Industrial and Applied Mathematics
ISSN: 0895-4801
Date of First Compliant Deposit: 30 March 2016
Date of Acceptance: 11 May 2015
Last Modified: 29 Jun 2019 22:17
URI: http://orca.cf.ac.uk/id/eprint/86767
|
Hi PF!
I am trying to solve an ODE by casting it as an operator problem, say ##K[y(x)] = \lambda M[y(x)]##, where ##y## is a trial function, ##x## is the independent variable, ##\lambda## is the eigenvalue, and ##K,M## are linear differential operators. For this particular problem, it's easier for me to work with the inverse operator problem ##M^{-1}[y(x)] = \lambda K^{-1}[y(x)]##. Constructing inverse operators implies building a Green's function.

The technique I've used to build a Green's function is variation of parameters, which takes the form ##G = y_1(x)y_2(\xi) / w_\alpha##, where ##y_1,y_2## are fundamental solutions associated with a particular operator, say ##K##, and ##w_\alpha## is their associated Wronskian, where the subscript ##\alpha## is a parameter. I observe ##\alpha \to 0 \implies w\to 0##. Does this imply any ##\alpha \neq 0## yields the correct Green's function? What if ##\alpha## is very VERY small but not zero? Is this something that can cause numerical issues?

I ask this because an analytic solution for the operator ODE exists for the ##\alpha = 0## case, but this causes issues with the Wronskian. When benchmarking, how "small" of an ##\alpha## should I use? I can provide more information if someone is willing to help and needs more understanding, as I have not really mentioned specifics regarding the numerics.
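Whether a tiny ##\alpha## causes numerical trouble depends on how the vanishing Wronskian is balanced by the fundamental solutions. A hypothetical example (not the poster's actual operators) that exhibits exactly this structure: ##K = d^2/dx^2 - \alpha^2## on ##[0,1]## with homogeneous Dirichlet conditions.

```python
import math

# For K = d^2/dx^2 - alpha^2 on [0, 1] with y(0) = y(1) = 0, variation
# of parameters gives G(x, xi) = y1(x<) y2(x>) / W with
#   y1 = sinh(alpha x),  y2 = sinh(alpha (1 - x)),  W = -alpha sinh(alpha),
# and W -> 0 as alpha -> 0, just like the Wronskian described above.
def greens(x, xi, alpha):
    lo, hi = min(x, xi), max(x, xi)
    y1 = math.sinh(alpha * lo)
    y2 = math.sinh(alpha * (1.0 - hi))
    w = -alpha * math.sinh(alpha)
    return y1 * y2 / w

# The alpha = 0 operator d^2/dx^2 has Green's function G0 = -x<(1 - x>);
# the alpha -> 0 limit of the ratio above is a 0/0 limit that converges
# to G0 without catastrophic cancellation, because numerator and
# denominator vanish at the same rate.
for alpha in (1e-1, 1e-4, 1e-8):
    print(alpha, greens(0.3, 0.7, alpha))  # -> approaches -0.09
```

In this example the quotient stays well-conditioned down to very small ##\alpha##, because the cancellation is a benign same-rate 0/0 limit; if instead the small Wronskian arises from subtracting nearly equal large terms, accuracy degrades long before ##\alpha## reaches machine epsilon. Benchmarking over a sweep of ##\alpha## (say ##10^{-1}## down to ##10^{-8}##) against the analytic ##\alpha = 0## solution is a reasonable way to locate the usable window for a specific operator.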
|
Periodic Hill
This tutorial will describe how to run a case from scratch. We illustrate this procedure through a relatively simple example involving incompressible laminar flow in a two-dimensional periodic hill domain. Our implementation is loosely based on the case presented by Mellen et al. [Mellen2000]. A thorough review for this case can be found in the ERCOFTAC knowledge base wiki.
Pre-processing

We assume that you have installed Nek5000 in your home directory. This tutorial requires that you have the tools genbox and genmap compiled. Make sure $HOME/Nek5000/bin is in your search PATH.

Cases are set up in Nek5000 by editing case files. Users should select an editor of choice with which to do this (e.g. vi). A case being simulated involves data for mesh, parameters, etc. As a first step, the user should create a case directory in their run directory.

cd $HOME/Nek5000/run
mkdir hillp
cd hillp
Mesh generation

In this tutorial we use a simple box mesh generated by genbox with the following input file:

-2                spatial dimension (will create box.re2)
1                 number of fields
#
#    comments: two dimensional periodic hill
#
#========================================================
#
Box hillp
-22 8             Nelx  Nely
0.0 9.0 1.        x0  x1  ratio
0.0 0.1 0.25 0.5 1.5 2.5 2.75 2.9 3.0    y0 y1 ratio
P ,P ,W ,W        BC's: (cbx0, cbx1, cby0, cby1)

For this mesh we are specifying 22 uniform elements in the stream-wise (x) direction. 8 non-uniform elements are specified in the span-wise (y) direction in order to resolve the boundary layers. The boundary conditions are periodic in the x-direction and no-slip in the y-direction. Additional details on generating meshes using genbox can be found here. Now we can run genbox with

genbox

On input provide the input file name (e.g. hillp.box). The tool will produce a binary mesh and boundary data file box.re2, which should be renamed to hillp.re2.
usr file
The user file implements various subroutines to allow the user to interact with the solver.
To get started we copy the template to our case directory
cp $HOME/Nek5000/core/zero.usr hillp.usr
Modify mesh and apply mass flux

To drive the flow, a mass flux is applied such that the bulk velocity \(u_b=1\).
For a periodic hill, we will need to modify the geometry. Let \({\bf x} := (x,y)\) denote the old geometry, and \({\bf x}' := (x',y')\) denote the new geometry. For a domain with \(y\in [0,3]\) and \(x\in [0,9]\) the following function will map the straight geometry to a periodic hill:

\[x' = x, \qquad y' = y + (3-y)\left[C + C\tanh\bigl(B\,(|x-A|-B)\bigr)\right],\]

where \(A=4.5, B=3.5, C=1/6\) (the displayed map is reconstructed from the usrdat2 code below). We have chosen these constants so that the height of the hill (our reference length) is \(h=1\). Note that, as \(y \longrightarrow 3\), the perturbation goes to zero, so that near \(y = 3\) the mesh recovers its original form.
In Nek5000, we can specify this through
usrdat2 in the usr file as follows
      subroutine usrdat2
!     implicit none
      include 'SIZE'
      include 'TOTAL'

      ntot = nx1*ny1*nz1*nelt

      sa = 4.5
      sb = 3.5
      sc = 1./6
      do i=1,ntot
         xx   = xm1(i,1,1,1)
         argx = sb*(abs(xx-sa)-sb)
         A1   = sc + sc*tanh(argx)
         ym1(i,1,1,1) = ym1(i,1,1,1) + (3-ym1(i,1,1,1))*A1
      enddo

!     apply mass flux to drive the flow such that Ubar = 1
      param(54) = -1   ! x-direction
      param(55) = 1    ! Ubar

      return
      end
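The mapping can be sanity-checked outside the solver. The following standalone Python sketch (the function name hill_map is ours, not part of Nek5000) reproduces the usrdat2 arithmetic:

```python
import math

# Same constants as in usrdat2: sa = A = 4.5, sb = B = 3.5, sc = C = 1/6.
A, B, C = 4.5, 3.5, 1.0 / 6.0

def hill_map(x, y):
    """Map a point of the straight domain [0,9] x [0,3] onto the
    periodic-hill domain, mirroring the loop body in usrdat2."""
    a1 = C + C * math.tanh(B * (abs(x - A) - B))
    return x, y + (3.0 - y) * a1

# The hill has (approximately) unit height at the periodic boundary ...
print(hill_map(0.0, 0.0))   # y' close to 1
# ... the perturbation dies out mid-domain ...
print(hill_map(4.5, 0.0))   # y' close to 0
# ... and the top wall y = 3 is left exactly in place.
print(hill_map(2.0, 3.0))   # y' == 3
```

Checks like these are a cheap way to confirm the geometry before launching the solver.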
Initial & boundary conditions

The next step is to specify the initial conditions. This can be done in the subroutine useric as follows:
      subroutine useric(ix,iy,iz,ieg)
!     implicit none
      integer ix,iy,iz,ieg
      include 'SIZE'
      include 'TOTAL'
      include 'NEKUSE'

      ux   = 1.0
      uy   = 0.0
      uz   = 0.0
      temp = 0.0

      return
      end
Control parameters

The control parameters for any case are given in the .par file. For this case, using any text editor, create a new file called hillp.par and type in the following
#
# nek parameter file
#
[GENERAL]
stopAt = endTime
endTime = 200
variableDT = yes
targetCFL = 0.4
timeStepper = bdf2
writeControl = runTime
writeInterval = 20

[PROBLEMTYPE]
equation = incompNS

[PRESSURE]
residualTol = 1e-5
residualProj = yes

[VELOCITY]
residualTol = 1e-8
density = 1
viscosity = -100
In choosing viscosity = -100 we are actually setting the Reynolds number: Nek5000 interprets a negative viscosity value as its inverse, so here the viscosity is 1/100. This assumes that \(\rho \times u_b \times h = 1\), where \(u_b\) denotes the bulk velocity and \(h\) the hill height.

We have set the calculation to stop at the physical time of \(T=200\) (endTime = 200), which is roughly 22 flow-through time units (based on the bulk velocity \(u_b\) and the length of the periodic pitch, \(L=9\)). Additional details on the names of keys in the .par file can be found here.
SIZE file

The static memory layout of Nek5000 requires the user to set some solver parameters through a so-called SIZE file. Typically it's a good idea to start from our template. Copy the SIZE.template file from the core directory and rename it SIZE in the working directory:
cp $HOME/Nek5000/core/SIZE.template SIZE
Then, adjust the following parameters in the BASIC section
      ...
!     BASIC
      parameter (ldim=2)
      parameter (lx1=8)
      parameter (lxd=12)
      parameter (lx2=lx1)
      parameter (lelg=22*8)
      parameter (lpmin=1)
      parameter (lpmax=4)
      parameter (ldimt=1)
      ...
For this tutorial we have set our polynomial order to be \(N=7\); this is defined in the SIZE file above as lx1=8, which indicates that there are 8 points in each spatial dimension of every element. Additional details on the parameters in the SIZE file are given here.
Compilation

With the hillp.usr and SIZE files created, we are now ready to compile:
makenek hillp
If all works properly, upon compilation the executable
nek5000 will be generated.
Running the case

First we need to run our domain partitioning tool

genmap

On input specify hillp as your casename and press enter to use the default tolerance. This step will produce hillp.ma2, which needs to be generated only once.
Now you are all set, just run
nekbmpi hillp 4
to launch an MPI job on your local machine using 4 ranks. The output will be redirected to
logfile.
Post-processing the results

Once execution is completed your directory should now contain multiple checkpoint files that look like this:

hillp.f00001
hillp.f00002
...
The preferred mode for data visualization and analysis with Nek5000 is to use VisIt/ParaView. One can use the script visnek, to be found in /scripts. It is sufficient to run:

visnek hillp

(or the name of your session) to obtain a file named hillp.nek5000, which can be recognized by VisIt/ParaView.
In the viewing window one can visualize the flow-field as depicted in Fig. 2.
|
The professor has really confused me on this one: what does the $\log 2$ mean in this formula?

$$\text{Entropy} = \log(\text{Phrases})/\log 2$$
If I understand the problem correctly, you are asking what the $\log 2$ is doing there in the denominator. This is essentially to ensure that the base of the logarithm (whether it's $10$ or $e$ or $2$) doesn't matter and you always get the result of the computation as a base-2 logarithm.

As a quick reminder: the base of a logarithm is the number $b$ such that $x = \log_b a$ is the solution of $b^x = a$.
How many bits [are needed to] represent $x$ [phrases]?
So you have $x$ values and you want to know how many bits you need in order to identify all of them. Well, first note that 1 bit can address 2 values, 2 bits can address 4 values, 3 bits can address 8 values, etc.: $n$ bits can address $2^n$ values. Thus we are looking for the smallest value $n$ such that $x\leq 2^n$. Taking a logarithm, to an arbitrary base, on both sides of the inequality yields $\log x\leq \log(2^n)=n\cdot \log 2\Leftrightarrow \log x/\log 2\leq n$; thus $\lceil \log x/\log 2\rceil$ bits are sufficient.
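As a quick sanity check, here is the computation in Python (math.log is the natural logarithm, so dividing by math.log(2) performs exactly the base change described above):

```python
import math

def bits_needed(values):
    """Smallest n with values <= 2**n, via log(x)/log(2) = log2(x)."""
    return math.ceil(math.log(values) / math.log(2))

print(bits_needed(9))      # 4 bits address 16 >= 9 values
print(bits_needed(1000))   # 10 bits address 1024 >= 1000 values
```

One caveat: for exact powers of two, floating-point rounding in log(x)/log(2) can occasionally tip the ceiling the wrong way; for integer counts, (values - 1).bit_length() is a rounding-free alternative.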
|
I want to prove the following statement $$\forall x,y \in \mathbb R ,\ a >1 :\\ \ \ \ \ x < y \iff a^x < a^y$$
I can use all the exponentiation laws for rational numbers and I would like to prove the statement by using rational sequences $q_n \rightarrow x$ and $r_n \rightarrow y$ which converge to $x$ and $y$ for a rising $n$.
I tried proving the contrapositive of the left-to-right direction, but I am not sure that it is correct. My attempt was the following:
$$a^x \ge a^y \implies x\geq y$$
If $a^x \ge a^y$ then, choosing $n$ large enough, $q_n \geq r_n$. This also implies that $\lim_{n\to\infty}q_n \geq \lim_{n\to\infty} r_n$, and thus concludes the statement.
Any hints?
|
Given a finite sequence $\bm{a}=\langle a_i\rangle_{i=1}^n$ in $\mathbb{N}$ and a sequence $\langle x_t\rangle_{t=1}^\infty$ in $\mathbb{N}$, the Milliken–Taylor system generated by $\bm{a}$ and $\langle x_t\rangle_{t=1}^\infty$ is
\begin{multline*} \qquad \mathrm{MT}(\bm{a},\langle x_t\rangle_{t=1}^\infty)=\biggl\{\sum_{i=1}^na_i\cdot\sum_{t\in F_i}x_t:F_1,F_2,\dots,F_n\text{ are finite non-empty} \\[-8pt] \text{subsets of $\mathbb{N}$ with }\max F_i<\min F_{i+1}\text{ for }i<n\biggr\}.\qquad \end{multline*}
It is known that Milliken–Taylor systems are partition regular but not consistent. More precisely, if $\bm{a}$ and $\bm{b}$ are finite sequences in $\mathbb{N}$, then, except in trivial cases, there is a partition of $\mathbb{N}$ into two cells, neither of which contains $\mathrm{MT}(\bm{a},\langle x_t\rangle_{t=1}^\infty)\cup \mathrm{MT}(\bm{b},\langle y_t\rangle_{t=1}^\infty)$ for any sequences $\langle x_t\rangle_{t=1}^\infty$ and $\langle y_t\rangle_{t=1}^\infty$.
Our aim in this paper is to extend the above result to allow negative entries in $\bm{a}$ and $\bm{b}$. We do so with a proof which is significantly shorter and simpler than the original proof which applied only to positive coefficients. We also derive some results concerning the existence of solutions of certain linear equations in $\beta\mathbb{Z}$. In particular, we show that the ability to guarantee the existence of $\mathrm{MT}(\bm{a},\langle x_t\rangle_{t=1}^\infty)\cup \mathrm{MT}(\bm{b},\langle y_t\rangle_{t=1}^\infty)$ in one cell of a partition is equivalent to the ability to find idempotents $p$ and $q$ in $\beta\mathbb{N}$ such that $a_1\cdot p+a_2\cdot p+\cdots+a_n\cdot p=b_1\cdot q+b_2\cdot q+\cdots+b_m\cdot q$, and thus determine exactly when the latter has a solution.
AMS 2000 Mathematics subject classification: Primary 05D10. Secondary 22A15; 54H13
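To make the definition concrete, here is a small brute-force enumeration in Python (our own illustration, restricted to a finite initial segment of $\langle x_t\rangle$):

```python
from itertools import combinations

def mt_elements(a, x):
    """All Milliken-Taylor sums sum_i a[i] * sum_{t in F_i} x[t] over
    blocks F_1, ..., F_n of indices drawn from the finite sequence x,
    with max F_i < min F_{i+1}."""
    n = len(a)
    out = set()

    def extend(start, i, acc):
        if i == n:
            out.add(acc)
            return
        # choose a non-empty F_i whose indices all lie at or after `start`
        for size in range(1, len(x) - start + 1):
            for block in combinations(range(start, len(x)), size):
                extend(block[-1] + 1, i + 1,
                       acc + a[i] * sum(x[t] for t in block))

    extend(0, 0, 0)
    return out

# With a = (1,) this reduces to the finite-sums set FS(x):
print(sorted(mt_elements((1,), (1, 2, 4))))    # [1, 2, 3, 4, 5, 6, 7]
# With a = (1, 2) the block-ordering constraint kicks in:
print(sorted(mt_elements((1, 2), (1, 2, 4))))  # [5, 9, 10, 11, 13]
```

The paper's setting also allows negative entries in $\bm{a}$, which the same enumeration handles unchanged.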
|
Let $F_n$ be the free group on $n$ letters.
The question is as in the title: letting $i:\text{Aut}(F_n) \hookrightarrow \text{Aut}(F_{n+1})$ be the natural injection, does there exist a homomorphism $\phi: \text{Aut}(F_{n+1}) \rightarrow \text{Aut}(F_n)$ such that $\phi \circ i = \text{id}$? My guess is "no", but I have no idea how to prove it.
As "evidence", here are two similar situations in other analogous contexts:
Let $S_n$ be the symmetric group on $n$ letters. Then the injection $S_n \hookrightarrow S_{n+1}$ does not split (at least for $n$ sufficiently large). This follows easily from the simplicity of the alternating group.
The injection $\text{SL}(n,\mathbb{Z}) \hookrightarrow \text{SL}(n+1,\mathbb{Z})$ does not split. One way to see this is to use the fact that for $n \geq 3$, all normal subgroups of $\text{SL}(n,\mathbb{Z})$ are either finite or finite-index (this is a consequence of the Margulis normal subgroup theorem, but it can be proved in more elementary ways as well; I do not know to whom to attribute it).
The above two proofs work because we understand normal subgroups of $S_n$ and $\text{SL}(n,\mathbb{Z})$. Such an understanding seems entirely out of reach for $\text{Aut}(F_n)$.
|
On the existence of full dimensional KAM torus for nonlinear Schrödinger equation
1. School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China
2. College of Science, The Institute of Aeronautical Engineering and Technology, Binzhou University, Binzhou 256600, China
3. School of Mathematical Sciences, Peking University, Beijing 100871, China
4. School of Mathematical Sciences, Fudan University, Shanghai 200433, China
In this paper, we consider the nonlinear Schrödinger equation

$ \begin{eqnarray} \sqrt{-1}u_{t}-u_{xx}+V*u+\epsilon f(x)|u|^4u = 0, \ x\in\mathbb{T} = \mathbb{R}/2\pi\mathbb{Z}, ~~~~~~~~~~~~~~~~~~~~~~~~~~~(1)\end{eqnarray} $

where $V*$ denotes the convolution operator defined by $\widehat{(V* u)}_n = V_{n}\widehat{u}_n$ with $V_n\in[-1, 1]$, $f(x)$ is a function of $x$, and $0\leq|\epsilon|\ll1$. We study the existence of a full dimensional KAM torus for equation (1) for suitable parameters $(V_n)_{n\in\mathbb{Z}}$.

Mathematics Subject Classification: Primary: 37K55; Secondary: 35Q56, 35K55.

Citation: Hongzi Cong, Lufang Mi, Yunfeng Shi, Yuan Wu. On the existence of full dimensional KAM torus for nonlinear Schrödinger equation. Discrete & Continuous Dynamical Systems - A, 2019, 39 (11): 6599-6630. doi: 10.3934/dcds.2019287
References:

P. Baldi, M. Berti and R. Montalto, KAM for autonomous quasi-linear perturbations of KdV.
J. Bourgain, Construction of approximative and almost periodic solutions of perturbed linear Schrödinger and wave equations.
J. Bourgain, Recent progress in quasi-periodic lattice Schrödinger operators and Hamiltonian partial differential equations.
J. Bourgain, Green's Function Estimates for Lattice Schrödinger Operators and Applications, Annals of Mathematics Studies, 158, Princeton University Press, Princeton, NJ, 2005. doi: 10.1515/9781400837144.
H. Z. Cong, J. J. Liu, Y. F. Shi and X. P. Yuan, The stability of full dimensional KAM tori for nonlinear Schrödinger equation.
R. Feola and M. Procesi, Quasi-periodic solutions for fully nonlinear forced reversible Schrödinger equations.
J. S. Geng and W. Hong, Invariant tori of full dimension for second KdV equations with the external parameters.
J. S. Geng and X. D. Xu, Almost periodic solutions of one dimensional Schrödinger equation with the external parameters.
T. Kappeler and J. Pöschel,
S. B. Kuksin, Hamiltonian perturbations of infinite-dimensional linear systems with imaginary spectrum.
S. B. Kuksin, Fifteen years of KAM for PDE.
J. J. Liu and X. P. Yuan, A KAM theorem for Hamiltonian partial differential equations with unbounded perturbations.
|
Event detail Harmonic Analysis Seminar: On multilinear oscillatory integral operator inequalities
Seminar | April 10 | 1:10-2 p.m. | 736 Evans Hall
Michael Christ, UCB
The title refers to inequalities of the form $\int _{[0,1]^d} \prod _{j=1}^d f_j(x_j) \,e^{i\lambda \psi (x)}\,dx = O(|\lambda |^{-\gamma } \prod _j \|f_j\|_{L^{p_j}})$ for large $\lambda \in {\mathbb R}$. Here $\psi :{\mathbb R}^d\to {\mathbb R}$ is a smooth phase function, and the exponent $\gamma$ depends on $\psi$ and on the exponents $p_j$. These inequalities are well understood in the bilinear case $d=2$, and sharp bounds have been obtained by Phong-Stein-Sturm and Gilula-Gressman-Xiao for certain parameter ranges for $d >2$. Nonetheless, the case $d\ge 3$ remains largely mysterious. I will argue that the most basic question in this context remains unaddressed for $d\ge 3$, and will present recent partial results and examples for $d=3$ with an outline of the proofs.
|
You can compute the integral
$$f(t) = \int_{t-1/2}^{t}\chi_{[0,1]}(u)\,du$$
in cases, for different ranges of $t$. I think it's easier (personally) if we manipulate the rectangular pulse function instead of doing the substitution for $u$. I'm assuming that
$$\chi_{[a,b]}(x) = \begin{cases} 1 & \text{ if } x\in [a,b] \\ 0 & \text{ if } x\notin [a,b] \end{cases}.$$
With this, we can see that
\begin{align}\chi_{[0,1]}(t-s) &= \begin{cases} 1 & \text{ if } t-s\in [0,1] \\ 0 & \text{ if } t-s\notin [0,1] \end{cases}\\ &= \begin{cases} 1 & \text{ if } s\in [t-1,t] \\ 0 & \text{ if } s\notin [t-1,t] \end{cases}\\ &= \chi_{[t-1,t]}(s).\end{align}
So we have
$$f(t) = \int_{0}^{1/2}\chi_{[0,1]}(t-s)\,ds = \int_{0}^{1/2}\chi_{[t-1,t]}(s)\,ds.$$
For $t\leq 0$, $\chi_{[t-1,t]} = 0$ in the range of integration, so $f(t) = 0$.
For $t\in (0,\frac{1}{2}]$, $f(t) = \int_{0}^{t}1 ds = t$.
For $t \in (\frac{1}{2},1)$, $f(t) = \int_{0}^{1/2}1 ds = \frac{1}{2}$.
For $t \in [1,\frac{3}{2})$, $f(t) = \int_{t-1}^{1/2}1 ds = \frac{1}{2} - (t-1) = \frac{3}{2} - t$.
For $t \geq \frac{3}{2}$, $f(t) = 0$.
Visually, the function $f(t)$ looks like a trapezoid. It's the amount of overlap between the two rectangular pulses as you slide them over one another (with the variable $t$). The overlap takes on a constant value of $\frac{1}{2}$ when the small rectangle is completely inside the larger rectangle.
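As a quick numerical cross-check (my own addition, not part of the original answer), a Riemann sum of the convolution reproduces the piecewise trapezoid formula:

```python
# Approximate f(t) = ∫_0^{1/2} χ_[0,1](t - s) ds by a midpoint Riemann
# sum and compare against the piecewise formula derived above.

def chi01(x):
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def f_numeric(t, n=20000):
    ds = 0.5 / n
    return sum(chi01(t - (k + 0.5) * ds) for k in range(n)) * ds

def f_exact(t):
    if t <= 0 or t >= 1.5:
        return 0.0
    if t <= 0.5:
        return t
    if t < 1.0:
        return 0.5
    return 1.5 - t                      # 1 <= t < 3/2

for t in (-0.3, 0.25, 0.75, 1.2, 1.7):
    assert abs(f_numeric(t) - f_exact(t)) < 1e-3
```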
|
I came up with this question when considering the field extension $\mathbb{Q}(\sqrt[\leftroot{-2}\uproot{2}4]{2})|\mathbb{Q}$. I know that the minimal polynomial of $\sqrt[\leftroot{-2}\uproot{2}4]{2}$ over $\mathbb{Q}$ is $x^4-2$ due to Eisenstein, and thus the above extension has degree 4. Now I'm trying to look at this from another angle, considering the tower $\mathbb{Q}(\sqrt[\leftroot{-2}\uproot{2}4]{2})\supseteq\mathbb{Q}(\sqrt{2})\supseteq\mathbb{Q}$. It is obvious that the right extension has degree 2, but for the left extension I also want to show this. I think that the minimal polynomial of $\sqrt[\leftroot{-2}\uproot{2}4]{2}$ over $\mathbb{Q}(\sqrt{2})$ is $x^2-\sqrt{2}$. I know that for this to be true I need to show that $\sqrt[\leftroot{-2}\uproot{2}4]{2}\notin\mathbb{Q}(\sqrt{2})$, but how do I do this?
Instead of first showing that $\sqrt[4]{2} \not\in {\mathbb Q}(\sqrt2)$ with the goal of establishing that $[{\mathbb Q}(\sqrt[4]{2}) : {\mathbb Q}(\sqrt2)] = 2$, it is probably easier to go the other way around.
In the tower of extensions ${\mathbb Q}(\sqrt[4]{2}) : {\mathbb Q}(\sqrt{2}) : {\mathbb Q}$, you have already established that $[{\mathbb Q}(\sqrt[4]{2}) : {\mathbb Q}] = 4$ and also that $[{\mathbb Q}(\sqrt{2}) : {\mathbb Q}] = 2$. Therefore $[{\mathbb Q}(\sqrt[4]{2}) : {\mathbb Q}(\sqrt2)] = 4/2 = 2$.
Now, if you're still interested in the minimum polynomial of $\sqrt[4]{2}$ over ${\mathbb Q}(\sqrt{2})$: you now know it must have degree $2$, so it must be $X^2 - \sqrt{2}$.
If $\sqrt[4]{2}\in\mathbb Q[\sqrt{2}]$, then there would be an element $k\in\mathbb Q[\sqrt{2}]$ with $k^2 = \sqrt{2}$.
Any such $k$ can be written as $k := a+b\sqrt{2}$ with $a,b\in\mathbb Q$. Consider the square of this: \begin{align*} k^2 & = (a+b\sqrt{2})^2 = a^2+2ab\sqrt{2}+2b^2 = (a^2+2b^2)+2ab\sqrt{2} \end{align*} So, for $k^2 = \sqrt{2}$, we need to have that $ab = \frac{1}{2}$ and $a^2+2b^2 = 0$. As the second equation is only satisfied over $\mathbb Q$ when $a = b = 0$, it follows that no such $a,b$ exist.
The other answers already cover the problem very well, but I'd like to present another approach.
You want to prove that $x^2-\sqrt{2}$ is irreducible in $\mathbb{Q}(\sqrt{2})[x]$.
Note that $\mathbb{Q}(\sqrt{2})$ is the field of fractions of $\mathbb{Z}[\sqrt{2}]$, which is a Euclidean domain. Moreover, $\sqrt{2}$ is a prime element in $\mathbb{Z}[\sqrt{2}]$. Thus, you can apply Eisenstein's generalized criterion to $x^2-\sqrt{2}$ (with respect to the prime element $\sqrt{2}$) to show that it is irreducible in $\mathbb{Q}(\sqrt{2})[x]$.
This method does not generalize as well as the ones from the other answers, but I thought it worthwhile to point it out.
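As a small addendum (mine, not from the thread): the Eisenstein check for $x^4-2$ at $p=2$, mentioned in the question, is mechanical enough to script. The helper below is a generic test for integer polynomials.

```python
def eisenstein(coeffs, p):
    """Eisenstein's criterion for an integer polynomial.

    coeffs is lowest-degree first: coeffs[i] is the coefficient of x**i.
    Returns True if the criterion applies at the prime p (which then
    guarantees irreducibility over Q)."""
    lead, const, middle = coeffs[-1], coeffs[0], coeffs[1:-1]
    return (lead % p != 0
            and const % p == 0
            and const % (p * p) != 0
            and all(c % p == 0 for c in middle))

# x^4 - 2 has coefficients (-2, 0, 0, 0, 1): Eisenstein applies at p = 2,
# so x^4 - 2 is irreducible over Q and the extension has degree 4.
assert eisenstein([-2, 0, 0, 0, 1], 2)

# The criterion is sufficient, not necessary: it does not apply to
# x^2 + 1 at p = 2, even though that polynomial is irreducible too.
assert not eisenstein([1, 0, 1], 2)
```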
|
This conjecture is tested for all odd natural numbers less than $10^8$:
If $n>1$ is an odd natural number, then there are natural numbers $a,b$ such that $n=a+b$ and $a^2+b^2\in\mathbb P$.
$\mathbb P$ is the set of prime numbers.
I wish help with counterexamples, heuristics or a proof.
Addendum: For odd $n$, $159<n<50,000$, there are $a,b\in\mathbb Z^+$ such that $n=a+b$ and both $a^2+b^2$ and $a^2+(b+2)^2$ are primes.
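A brute-force search (my own script, re-checking only a tiny range compared to the $10^8$ bound above) that finds a witness pair for each small odd $n$:

```python
# For each odd n > 1, search for a, b with n = a + b and a^2 + b^2 prime.

def is_prime(m):
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    d = 3
    while d * d <= m:
        if m % d == 0:
            return False
        d += 2
    return True

def witness(n):
    """Return the first (a, b) with a + b = n and a^2 + b^2 prime, else None."""
    for a in range(1, n):
        b = n - a
        if is_prime(a * a + b * b):
            return (a, b)
    return None

# no counterexample among small odd numbers
assert all(witness(n) is not None for n in range(3, 2000, 2))
```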
As hinted by pisco125 in a comment, there is a weaker version of the conjecture:
Every odd number can be written $x+y$ where $x+iy$ is a Gaussian prime.
which gives rise to a function:
$g:\mathbb P_G\to\mathbb O'$, given by $g(x+iy)=x+y$, where $\mathbb O'$ is the odd integers with $0,\pm 2$ included.
The weaker conjecture is then equivalent to the statement that $g$ is onto.
The reason why the conjecture is weaker is that any prime of the form $p=4n-1$ is a Gaussian prime. The reason why $0,\pm 2$ must be added is that $\pm 1 \pm i$ is a Gaussian prime.
|
i don't have a background in Probability or Mathematics so i may be looking at a simple problem without knowing it. I have the following independent Events and their probabilities:
Event $A \rightarrow 83\%$ Event $B \rightarrow 25\%$ Event $C \rightarrow 41\%$ Event $D \rightarrow 68\%$ Event $E \rightarrow 11\%$ Event $F \rightarrow 47\%$
I know that if the events where all equal probability (for example $50\%$), the probability of getting them all right would be:
$0.5^6 = 1.56\%$
I also understand that if we wanted to know the probability of getting $5$ out of $6$ right the probability would be:
$0.5^5 \cdot 0.5^1 \cdot \dfrac{6!}{5! \cdot 1!} = 9.38\%.$
And the same reasoning goes for getting $4$ out of $6$ right:
$0.5^4\cdot0.5 ^ 2\cdot \dfrac{6!}{4! \cdot 2!} = 23.44\%.$
But how do we compute it when the events have a different set of known probabilities? For example, how do we compute the probability of getting $4$ out of $6$ events right with the above set of probabilities. Thanks
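What you are describing with unequal probabilities is known as the Poisson binomial distribution. A sketch of the standard dynamic-programming computation (my own code, using the six probabilities listed above):

```python
# Probability of exactly k successes among independent events with
# different success probabilities (Poisson binomial), via the usual
# dynamic-programming recurrence: add one event at a time.

def poisson_binomial(probs):
    """Return a list d where d[k] = P(exactly k successes)."""
    dist = [1.0]
    for p in probs:
        new = [0.0] * (len(dist) + 1)
        for k, q in enumerate(dist):
            new[k] += q * (1 - p)      # this event fails
            new[k + 1] += q * p        # this event succeeds
        dist = new
    return dist

events = [0.83, 0.25, 0.41, 0.68, 0.11, 0.47]
dist = poisson_binomial(events)
# dist[4] is the chance of getting exactly 4 of the 6 right
assert abs(sum(dist) - 1.0) < 1e-12

# sanity check against the p = 0.5 case worked out above: 15/64 = 23.44%
uniform = poisson_binomial([0.5] * 6)
assert abs(uniform[4] - 15 / 64) < 1e-12
```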
|
Yes. Let $T$ denote the transition kernel. Suppose $\pi$ and $\nu$ are distinct stationary distributions. Let $\tau'$ be defined by $\tau'(A) = \min(\pi(A),\nu(A))$: then $\tau'$ is a finite measure. Let $p = \tau'(\mathbb{R}^+)$; since $\pi \neq \nu$, we have $p < 1$, and since the chain is irreducible, $p > 0$. Let $\tau = \tau'/p$, $\tilde{\pi} = (\pi-\tau')/(1-p)$ and $\tilde{\nu} = (\nu - \tau')/(1-p)$, so that
$\pi = p\tau + (1-p)\tilde{\pi}$
and
$\nu = p\tau + (1-p)\tilde{\nu}$.
Note that $\tilde{\pi} = \max(0, \pi-\nu)/(1-p)$ and $\tilde{\nu} = \max(0,\nu-\pi)/(1-p)$, so that $\min(\tilde{\pi},\tilde{\nu}) = 0$; that is, $\tilde{\pi}$ and $\tilde{\nu}$ are supported on disjoint sets.
Now we show that $\tau$ is also a stationary distribution of the chain. Observe that $\pi= T\pi = pT\tau + (1-p)T\tilde{\pi}$ so $\pi \geq pT\tau$. Similarly, $\nu \geq pT\tau$. Therefore, $pT\tau \leq \min(\pi,\nu) = p\tau$. Combined with the fact that $\tau(\mathbb{R}^+) = T\tau(\mathbb{R}^+)$, we have $T\tau = \tau$.
This, in turn, implies that $\tilde{\pi}$ and $\tilde{\nu}$ are also stationary distributions. However, the fact that $\min(\tilde{\pi},\tilde{\nu}) = 0$ then implies that the chain is reducible--a contradiction.
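A toy finite-state illustration of the dichotomy the proof exploits (my own example, not part of the argument): a reducible chain supports several stationary distributions, while an irreducible one pins down a unique stationary law.

```python
# Reducible chain: two absorbing states, so both point masses are
# stationary.  Mixing the states even slightly makes the chain
# irreducible and the stationary distribution unique.

def stationary(P, iters=10000):
    """Power-iterate a 2x2 row-stochastic matrix from the point mass at 0."""
    pi = [1.0, 0.0]
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]
    return pi

reducible = [[1.0, 0.0], [0.0, 1.0]]
assert stationary(reducible) == [1.0, 0.0]   # the start point mass is stationary

eps = 0.1
irreducible = [[1 - eps, eps], [eps, 1 - eps]]
pi = stationary(irreducible)
assert abs(pi[0] - 0.5) < 1e-9 and abs(pi[1] - 0.5) < 1e-9
```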
|
Bessel's integral form: is it e to the power of a cosine or sine?
Hello all,
This is knowledge needed to solve my take-home final exam but I just want to ask about the definition of Bessel's integrals. This is not a problem on the exam. Wikipedia says the integral is defined as:
$$J_n(x) = \frac {1} {2\pi} \int_{-\pi}^{\pi} e^{i(x\sin(\theta) - n\theta)} \, d\theta$$
My professor wrote it as:
$$J_m(Z) = \frac {1} {2\pi} \int_{-\pi}^{\pi} e^{iZ\cos(\theta)} e^{-im\theta} \, d\theta$$
Ignoring notation differences and I understand that cosine and sine form an orthogonal basis and are essentially the same as they can be easily expressed in terms of each other, but how do I justify that these two expressions are EXACTLY the same without any modifications with negative signs and such?
Thanks!
Jesse
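Not part of the original thread, but one can probe the question numerically with plain Riemann sums (which converge extremely fast for smooth periodic integrands). The check below suggests the two forms agree only up to a phase factor $i^m$, consistent with the substitution $\theta \to \theta + \pi/2$ that turns the cosine into a sine; worth re-deriving by hand before relying on it.

```python
import cmath
import math

# Evaluate both integral forms with a Riemann sum over one full period
# and compare them.  For periodic analytic integrands the plain sum is
# essentially exact at this resolution.

def wiki_form(m, x, n=4096):
    h = 2 * math.pi / n
    s = sum(cmath.exp(1j * (x * math.sin(-math.pi + k * h) - m * (-math.pi + k * h)))
            for k in range(n))
    return s * h / (2 * math.pi)

def prof_form(m, z, n=4096):
    h = 2 * math.pi / n
    s = sum(cmath.exp(1j * z * math.cos(-math.pi + k * h)) *
            cmath.exp(-1j * m * (-math.pi + k * h))
            for k in range(n))
    return s * h / (2 * math.pi)

for m in range(4):
    a, b = wiki_form(m, 2.0), prof_form(m, 2.0)
    assert abs(b - (1j ** m) * a) < 1e-9   # the two forms differ by i^m
```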
|
Scattering of radial data in the focusing NLS and generalized Hartree equations
Department of Mathematics & Statistics, Florida International University, Miami, FL 33199, USA
We consider the focusing nonlinear Schrödinger equation $ i u_t + \Delta u + |u|^{p-1}u = 0 $, $ p>1, $ and the generalized Hartree equation $ iv_t + \Delta v + (|x|^{-(N-\gamma)}\ast |v|^p)|v|^{p-2}v = 0 $, $ p\geq2 $, $ \gamma<N $, in the mass-supercritical and energy-subcritical setting. With the initial data $ u_0\in H^1( \mathbb R^N) $ the characterization of solutions' behavior under the mass-energy threshold is known for the NLS case from the works of Holmer and Roudenko in the radial [
In this work we give an alternative proof of scattering for both NLS and gHartree equations in the radial setting in the inter-critical regime, following the approach of Dodson and Murphy [
Keywords: Scattering, nonlinear Schrödinger equation, generalized Hartree equation, virial, Morawetz identity.
Mathematics Subject Classification: Primary: 35Q55, 35Q40; Secondary: 37K40, 37K05.
Citation: Anudeep Kumar Arora. Scattering of radial data in the focusing NLS and generalized Hartree equations. Discrete & Continuous Dynamical Systems - A, 2019, 39 (11) : 6643-6668. doi: 10.3934/dcds.2019289
References:
[1] [2] [3] [4] [5] [6]
T. Cazenave,
[7]
J. Colliander and S. Roudenko,
Mass concentration window size and Strichartz norm divergence rate for the
[8] [9] [10]
T. Duyckaerts, J. Holmer and S. Roudenko,
Scattering for the non-radial 3D cubic nonlinear Schrödinger equation,
[11] [12] [13] [14]
J. Holmer and S. Roudenko, On blow-up solutions to the 3D cubic nonlinear Schrödinger equation,
[15]
J. Holmer and S. Roudenko,
A sharp condition for scattering of the radial 3D cubic nonlinear Schrödinger equation,
[16]
C. E. Kenig and F. Merle,
Global well-posedness, scattering and blow-up for the energy-critical, focusing, non-linear Schrödinger equation in the radial case,
[17] [18] [19] [20] [21] [22]
P.-L. Lions, Compactness and topological methods for some nonlinear variational problems of mathematical physics,
[23]
V. Moroz and J. Van Schaftingen,
Groundstates of nonlinear Choquard equations: Existence, qualitative properties and decay asymptotics,
[24] [25] [26] [27]
T. Tao,
[28]
C. L. Xiang, Uniqueness and nondegeneracy of ground states for Choquard equations in three dimensions,
|
@HarryGindi So the $n$-simplices of $N(D^{op})$ are $Hom_{sCat}(\mathfrak{C}[n],D^{op})$. Are you using the fact that the whole simplicial set is the mapping simplicial object between cosimplicial simplicial categories, and taking the constant cosimplicial simplicial category in the right coordinate?
I guess I'm just very confused about how you're saying anything about the entire simplicial set if you're not producing it, in one go, as the mapping space between two cosimplicial objects. But whatever, I dunno. I'm having a very bad day with this junk lol.
It just seems like this argument is all about the sets of n-simplices. Which is the trivial part.
lol no i mean, i'm following it by context actually
so for the record i really do think that the simplicial set you're getting can be written as coming from the simplicial enrichment on cosimplicial objects, where you take a constant cosimplicial simplicial category on one side
@user1732 haha thanks! we had no idea if that'd actually find its way to the internet...
@JonathanBeardsley any quillen equivalence determines an adjoint equivalence of quasicategories. (and any equivalence can be upgraded to an adjoint (equivalence)). i'm not sure what you mean by "Quillen equivalences induce equivalences after (co)fibrant replacement" though, i feel like that statement is mixing category-levels
@JonathanBeardsley if nothing else, this follows from the fact that \frakC is a left quillen equivalence so creates weak equivalences among cofibrant objects (and all objects are cofibrant, in particular quasicategories are). i guess also you need to know the fact (proved in HTT) that the three definitions of "hom-sset" introduced in chapter 1 are all weakly equivalent to the one you get via \frakC
@IlaRossi i would imagine that this is in goerss--jardine? ultimately, this is just coming from the fact that homotopy groups are defined to be maps in (from spheres), and you only are "supposed" to map into things that are fibrant -- which in this case means kan complexes
@JonathanBeardsley earlier than this, i'm pretty sure it was proved by dwyer--kan in one of their papers around '80 and '81
@HarryGindi i don't know if i would say that "most" relative categories are fibrant. it was proved by lennart meier that model categories are Barwick--Kan fibrant (iirc without any further adjectives necessary)
@JonathanBeardsley what?! i really liked that picture! i wonder why they removed it
@HarryGindi i don't know about general PDEs, but certainly D-modules are relevant in the homotopical world
@HarryGindi oh interesting, thomason-fibrancy of W is a necessary condition for BK-fibrancy of (R,W)?
i also find the thomason model structure mysterious. i set up a less mysterious (and pretty straightforward) analog for $\infty$-categories in the appendix here: arxiv.org/pdf/1510.03525.pdf
as for the grothendieck construction computing hocolims, i think the more fundamental thing is that the grothendieck construction itself is a lax colimit. combining this with the fact that ($\infty$-)groupoid completion is a left adjoint, you immediately get that $|Gr(F)|$ is the colimit of $B \xrightarrow{F} Cat \xrightarrow{|-|} Spaces$
@JonathanBeardsley If you want to go that route, I guess you still have to prove that ^op_s and ^op_Delta both lie in the unique nonidentity component of Aut(N(Qcat)) and Aut(N(sCat)) whatever nerve you mean in this particular case (the B-K relative nerve has the advantage here bc sCat is not a simplicial model cat)
I think the direct proof has a lot of advantages here, since it gives a point-set on-the-nose isomorphism
Yeah, definitely, but I'd like to stay and work with Cisinski on the Ph.D if possible, but I'm trying to keep options open
not put all my eggs in one basket, as it were
I mean, I'm open to coming back to the US too, but I don't have any ideas for advisors here who are interested in higher straightening/higher Yoneda, which I am convinced is the big open problem for infinity, n-cats
Gaitsgory and Rozenblyum, I guess, but I think they're more interested in applications of those ideas vs actually getting a hold of them in full generality
@JonathanBeardsley Don't sweat it. As it was mentioned I have now mod superpowers, so s/he can do very little to upset me. Since you're the room owner, let me know if I can be of any assistance here with the moderation (moderators on SE have network-wide chat moderating powers, but this is not my turf, so to speak).
There are two "opposite" functors:$$ op_\Delta\colon sSet\to sSet$$and$$op_s\colon sCat\to sCat.$$The first takes a simplicial set to its opposite simplicial set by precomposing with the opposite of a functor $\Delta\to \Delta$ which is the identity on objects and takes a morphism $\langle k...
@JonathanBeardsley Yeah, I worked out a little proof sketch of the lemma on a notepad
It's enough to show everything works for generating cofaces and codegeneracies
the codegeneracies are free, the 0 and nth cofaces are free
all of those can be done treating frak{C} as a black box
the only slightly complicated thing is keeping track of the inner generated cofaces, but if you use my description of frak{C} or the one Joyal uses in the quasicategories vs simplicial categories paper, the combinatorics are completely explicit for codimension 1 face inclusions
the maps on vertices are obvious, and the maps on homs are just appropriate inclusions of cubes on the {0} face of the cube wrt the axis corresponding to the omitted inner vertex
In general, each Δ[1] factor in Hom(i,j) corresponds exactly to a vertex k with i<k<j, so omitting k gives inclusion onto the 'bottom' face wrt that axis, i.e. Δ[1]^{k-i-1} x {0} x Δ[j-k-1] (I'd call this the top, but I seem to draw my cubical diagrams in the reversed orientation).
> Thus, using appropriate tags one can increase ones chances that users competent to answer the question, or just interested in it, will notice the question in the first place. Conversely, using only very specialized tags (which likely almost nobody specifically favorited, subscribed to, etc) or worse just newly created tags, one might miss a chance to give visibility to ones question.
I am not sure to which extent this effect is noticeable on smaller sites (such as MathOverflow) but probably it's good to follow the recommendations given in the FAQ. (And MO is likely to grow a bit more in the future, so then it can become more important.) And also some smaller tags have enough followers.
You are asking posts far away from areas I am familiar with, so I am not really sure which top-level tags would be a good fit for your questions - otherwise I would edit/retag the posts myself. (Other than possibility to ping you somewhere in chat, the reason why I posted this in this room is that users of this room are likely more familiar with the topics you're interested in and probably they would be able to suggest suitable tags.)
I just wanted to mention this, in case it helps you when asking question here. (Although it seems that you're doing fine.)
@MartinSleziak even I was not sure what other tags are appropriate to add.. I will see other questions similar to this, see what tags they have added and will add if I get to see any relevant tags.. thanks for your suggestion.. it is very reasonable,.
You don't need to put only one tag, you can put up to five. In general it is recommended to put a very general tag (usually an "arxiv" tag) to indicate broadly which sector of math your question is in, and then more specific tags
I would say that the topics of the US Talbot, as with the European Talbot, are heavily influenced by the organizers. If you look at who the organizers were/are for the US Talbot I think you will find many homotopy theorists among them.
|
$$\large \frac{\pi}{A} - i \ln {\left(B+\sqrt{C}\right)}$$
There is no real value for $\sin^{-1}(2)$ but it has a complex one. And it is of the form as described above where $A$, $B$ and $C$ are positive integers. Evaluate $A+B+C$.
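(A spoiler-flavored aside of mine, not part of the problem page: Python's cmath can sanity-check a candidate closed form for $\sin^{-1}(2)$.)

```python
import cmath
import math

# Check that pi/2 - i*ln(2 + sqrt(3)) is an inverse sine of 2,
# i.e. a value z with sin(z) = 2.
z = math.pi / 2 - 1j * math.log(2 + math.sqrt(3))
assert abs(cmath.sin(z) - 2) < 1e-12

# Python's principal branch lands on the conjugate value, with the
# same real part pi/2 and imaginary part of the same magnitude.
w = cmath.asin(2)
assert abs(w.real - math.pi / 2) < 1e-12
assert abs(abs(w.imag) - math.log(2 + math.sqrt(3))) < 1e-12
```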
|
Dear Uncle Colin,
I'm trying to sew a traditional football in the form of a truncated icosahedron. If I want a radius of 15cm, how big do the polygons need to be?
-- Plugging In Euler Characteristic's Excessive
Hello, PIECE, and thank you for your message!
Getting an exact answer to that is a little tricky, but we can come up with a pretty good approximation: if we assume the ball's surface area is the same as that of a sphere, we can work it out.
Now, a truncated icosahedron is made of 12 regular pentagons and 20 regular hexagons, all of the same side length, which I'll call $E$.
Finding the area of a regular polygon boils down to trigonometry: you can split any regular polygon with $n$ sides into $2n$ right-angled triangles with an apex angle of $\frac{\pi}{n}$ opposite a base of $\frac 12 E$. The area of each triangle is $\frac1 8 E^2 \cot\left(\frac{\pi}{n}\right)$, so the polygon area is $\frac{n}{4}E^2 \cot\left(\frac{\pi}{n}\right)$.
In particular, a regular pentagon has an area of $\frac{5}{4}E^2 \cot\left(\frac{\pi}{5}\right)$, and the hexagon is $\frac{6}{4}E^2 \cot\left(\frac{\pi}{6}\right)$, which is $\frac{3}{2}\sqrt{3}E^2$.
Multiplying the pentagon area by 12 and the hexagon area by 20 gives an unholy mess, with a total surface area of about $72.607E^2$. Don't tell the Mathematical Ninja.
The surface area of a sphere of radius 15cm is $4\pi r^2 = 900\pi\,\mathrm{cm}^2$.
So, we have $900 \pi \approx 72.607E^2$, and we can rearrange to find $E^2 \approx 38.94 \mathrm{cm}^2$ and $E\approx 6.24$cm((The accurate answer is around 6.05cm, but it's even more of a pain to work out.)).
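For anyone who wants to reproduce the arithmetic, here's a short script (using the same sphere-area approximation as above):

```python
import math

# Reproduce the arithmetic: 12 pentagons + 20 hexagons of edge E,
# matched to the surface area of a sphere of radius 15 cm.

def polygon_area(n, E):
    """Area of a regular n-gon with edge length E: (n/4) E^2 cot(pi/n)."""
    return (n / 4) * E**2 / math.tan(math.pi / n)

coeff = 12 * polygon_area(5, 1) + 20 * polygon_area(6, 1)
assert abs(coeff - 72.607) < 1e-3          # total area = coeff * E^2

sphere_area = 4 * math.pi * 15**2          # 900*pi cm^2
E = math.sqrt(sphere_area / coeff)
assert abs(E - 6.24) < 0.01                # about 6.24 cm
```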
Hope that helps!
-- Uncle Colin
|
I further show that this sum distribution is impossible for any pair of dice, no matter how many faces they have, what numbers are on those faces, and what weights each face has, barring the trivial case where one die always rolls the same result.
Translate the dice weightings into generating functions
$$A(x) = \sum a_i x^i$$
and likewise for $B(x)$. Since summing dice corresponds to convolving their distributions, which corresponds to multiplying their generating functions, we're required to have
$$A(x) B(x) = \frac{x^2+x^3+\dots + x^{11} + x^{12}}{11}$$
Express the geometric series on the RHS as
$$A(x) B(x) = \frac{x^2}{11} \cdot \frac{1-x^{11}}{1-x}$$
The roots of $1-x^{11}$ are the eleventh roots of unity $1, \omega, \omega^2, \dots, \omega^{10}$, where $\omega = e^{2\pi i/11}$. The $1-x$ in the denominator removes the root $x=1$. So, we can factor into linear terms
$$A(x) B(x) = \frac{x^2}{11} \cdot \prod_{j=1}^{10} (x-\omega^j)$$
Now, because of unique factorization, these linear terms must be split across $A$ and $B$. Moreover, since the coefficients of $A$ and $B$ are probabilities, the resulting polynomials must have non-negative real coefficients.
At this point, we're down to a finite number $2^{10}$ of possibilities, so we could try all of them by computer and check that none work. But, we can simplify the search by hand first.
First, a real polynomial must equal its conjugate, so its complex roots must come in conjugate pairs. Therefore, the conjugate roots must "stick" together when being split into $A$ and $B$. So, we can group them into quadratic units
$$(x-\omega^j)(x-\overline{\omega^j}) = x^2 - 2\mathrm{Re}(\omega^j)x + 1$$
for $j=1,2,3,4,5$.
Now, note that for a product of quadratic units, the coefficient of $x$ is the sum of all $-2\mathrm{Re}(\omega^j)$ of the $j$'s included. Since this coefficient must be positive, the sum of the real parts of the included roots of unity must be negative.
These real parts are roughly $0.84, 0.42, -0.14, -0.65, -0.96$. Already we see that $0.84$ must be grouped with $-0.96$ to make a negative sum, since the other negatives aren't enough. Then, $0.42$ must be grouped with $-0.65$, since the $-0.14$ and the $-0.12$ from the previous pairing aren't enough to make it negative.
So, we have three groups $\{\omega^1,\omega^5\},\{\omega^2,\omega^4\},\{\omega^3\}$ split into two sides. Putting them all onto one side would result in a trivial monomial on the other side, giving a die that always rolls the same. So, we must have one group alone and the other two groups together. Checking all three possibilities directly, each one gives a polynomial that contains negative coefficients. So, the desired split is not possible, and no solution exists.
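The three-case check at the end is easy to script (my own verification, with floating-point coefficients and the conjugate-pair quadratics from above):

```python
import math

# Each conjugate pair {w^j, w^(11-j)} contributes the real quadratic
# x^2 - 2*Re(w^j)*x + 1.  The three groups are {w^1,w^5}, {w^2,w^4},
# {w^3}; every split of them across two dice leaves a negative
# coefficient on one side.

def quad(j):
    return [1.0, -2 * math.cos(2 * math.pi * j / 11), 1.0]

def mul(p, q):
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for k, b in enumerate(q):
            out[i + k] += a * b
    return out

GROUPS = {0: [1, 5], 1: [2, 4], 2: [3]}

def product(group_ids):
    poly = [1.0]
    for g in group_ids:
        for j in GROUPS[g]:
            poly = mul(poly, quad(j))
    return poly

def has_negative(poly):
    return any(c < -1e-9 for c in poly)

# one group alone on one die, the other two on the other die
for alone in range(3):
    together = [g for g in range(3) if g != alone]
    assert has_negative(product([alone])) or has_negative(product(together))
```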
|
The reason is that there are two ways of thinking about "points".
Let $A$ be a ring. Then, define:
A scheme-theoretic/topological point of Spec $A$ is a prime ideal of $A$.
A geometric/functorial point of Spec $A$ is an equivalence class of morphisms Spec $K\rightarrow$ Spec $A$, where $K$ is a field, and two morphisms:$$p_1 : \text{Spec }K_1\rightarrow \text{Spec }A,\qquad p_2 : \text{Spec }K_2\rightarrow\text{Spec }A$$are equivalent if there exists a field $K$ containing both $K_1,K_2$ as subfields, and a morphism Spec $K\rightarrow$ Spec $A$ making the obvious diagrams commute.
Note that if $A$ is a $k$-algebra, then geometric points of Spec $A$ are just morphisms Spec $\overline{k}\rightarrow\text{Spec }A$. Further, if $A$ is a finite type $k$-algebra (for example $k[x,y]/(f)$) then a geometric point of $\text{Spec }k[x,y]/(f)$ is nothing but a pair $(a,b)\in\overline{k}^2$ satisfying the equation $f$, which is precisely how Silverman defines a "point".
Here's an example that will explain everything:
Let $k = \mathbb{Q}$, and let $A = \mathbb{Q}[x]$, $B = \mathbb{Q}[y]$, and suppose $X = \text{Spec }A$ and $Y = \text{Spec }B$. Consider the map $f : Y\rightarrow X$ given by $x\mapsto y^2$. This map is degree 2, and sends $y = a$ in $Y$ to $x = a^2$ in $X$.
First, let's consider the scheme-theoretic/topological point $q\in X$ corresponding to $x = -1$. Thus, $q$ is the prime ideal $(x+1)\subset\mathbb{Q}[x]$. It's not hard to check that the only prime of $B$ lying over $q$ is $p := (y^2+1)$, whose residue field is $\mathbb{Q}(i)$, so $p$ has "inertia degree" 2 over $q$.
Now, let's consider the geometric point $Q : \text{Spec }\overline{\mathbb{Q}}\rightarrow \text{Spec }A$ given by sending $x\mapsto -1$. It's not hard to see that sending $y\mapsto i$ and sending $y\mapsto -i$ define two distinct morphisms $P_1,P_2 : \text{Spec }\overline{\mathbb{Q}}\rightarrow \text{Spec }B$ (ie, two distinct geometric points) which lie above $Q$.
Thus, while there is only one topological point above $x = -1$, there are two geometric points, each of which makes an appearance as a summand of the equation in Prop 2.6. This makes up for the fact that the inertia degree doesn't appear. In fact, one may
define the inertia degree as the number of geometric points lying over $x = -1$ who have the same image $P\in Y$.
I.e., in the situation of our morphism $f : Y\rightarrow X$, the two equivalent ways of writing the formula would be:
$$\deg(f) = 2\cdot e_f(p) = 2\cdot 1 \quad\text{(2 is the inertia degree)},\qquad \text{or}\qquad \deg(f) = e_f(P_1) + e_f(P_2) = 1 + 1.$$
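As a quick numeric illustration (my own sketch, not a proof) of the example above: the degree-2 map $f : y\mapsto y^2$ sends the two distinct geometric points $y = i$ and $y = -i$ to the single point $x = -1$, and counting them recovers the degree.

```python
# Numeric illustration: the degree-2 map f(y) = y^2 sends two distinct
# geometric points, y = i and y = -i, to the single point x = -1.
points_above = [1j, -1j]              # the two geometric points over x = -1
images = [y**2 for y in points_above]
assert all(abs(x - (-1)) < 1e-12 for x in images)

# Each geometric point here is unramified (e_f = 1), so
# deg(f) = e_f(P1) + e_f(P2) = 1 + 1 = 2,
# matching the number of geometric points lying above x = -1.
deg_f = len(points_above)
print(deg_f)  # 2
```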
|
In electromagnetism, charge density is a measure of electric charge per unit volume of space, in one, two or three dimensions. More specifically: the linear, surface, or volume charge density is the amount of electric charge per unit length, surface area, or volume, respectively. The respective SI units are C·m⁻¹, C·m⁻² and C·m⁻³. [1]
Like any density, charge density can depend on position, but charge and thus charge density can be negative. It should not be confused with the charge carrier density, the number of charge carriers (e.g. electrons, ions) in a material per unit volume, not including the actual charge on the carriers.
In chemistry, it can refer to the charge distribution over the volume of a particle; such as a molecule, atom or ion. Therefore, a lithium cation will carry a higher charge density than a sodium cation due to the lithium cation's having a smaller ionic radius, even though sodium has more protons (11) than lithium (3).
Definitions
Continuous charges
[Figure: Continuous charge distribution. The volume charge density ρ is the amount of charge per unit volume, the surface charge density σ is the amount per unit surface area with outward unit normal n̂; d is the dipole moment between two point charges, and the volume density of such dipole moments is the polarization density P. The position vector r marks the point at which the electric field is calculated; r′ is a point in the charged object.]
Following are the definitions for continuous charge distributions.
[2] [3]
The linear charge density is the ratio of an infinitesimal electric charge dQ (SI unit: C) to an infinitesimal line element,
$$\lambda_q = \frac{dQ}{d\ell}\,,$$
similarly the surface charge density uses a surface area element dS,
$$\sigma_q = \frac{dQ}{dS}\,,$$
and the volume charge density uses a volume element dV,
$$\rho_q = \frac{dQ}{dV}\,.$$
Integrating the definitions gives the total charge Q of a region: the line integral of the linear charge density $\lambda_q(\mathbf{r})$ over a line or 1d curve C,
$$Q=\int\limits_C \lambda_q(\mathbf{r}) \,d\ell$$
similarly a surface integral of the surface charge density $\sigma_q(\mathbf{r})$ over a surface S,
$$Q=\int\limits_S \sigma_q(\mathbf{r}) \,dS$$
and a volume integral of the volume charge density $\rho_q(\mathbf{r})$ over a volume V,
$$Q=\int\limits_V \rho_q(\mathbf{r}) \,dV$$
where the subscript q clarifies that the density is for electric charge, not other densities like mass density, number density, or probability density, and prevents conflict with the many other uses of λ, σ, ρ in electromagnetism, such as wavelength, electrical resistivity and conductivity.
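As a quick numeric illustration of these integrals (my own sketch; the density λ(x) = λ₀x and the numbers below are assumptions, not from the article), the total charge of a rod with a position-dependent linear charge density can be computed with a simple midpoint Riemann sum; analytically Q = λ₀L²/2.

```python
# Total charge of a 1d rod with linear charge density lambda(x) = lam0 * x,
# integrated numerically with a midpoint Riemann sum.
# Analytic result: Q = lam0 * L**2 / 2.

def total_charge(lam, length, n=100_000):
    """Approximate Q = integral of lam(x) dx over [0, length], midpoint rule."""
    dx = length / n
    return sum(lam((i + 0.5) * dx) for i in range(n)) * dx

lam0 = 2.0e-6   # C/m^2 (hypothetical density gradient)
L = 0.5         # rod length in m

Q = total_charge(lambda x: lam0 * x, L)
print(Q)  # ≈ lam0 * L**2 / 2 = 2.5e-7 C
```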
Within the context of electromagnetism, the subscripts are usually dropped for simplicity: λ, σ, ρ. Other notations may include: $\rho_\ell$, $\rho_s$, $\rho_v$, $\rho_L$, $\rho_S$, $\rho_V$, etc.
Average charge densities
The total charge divided by the length, surface area, or volume will be the average charge densities:
$$\langle\lambda_q \rangle = \frac{Q}{\ell}\,,\quad \langle\sigma_q\rangle = \frac{Q}{S}\,,\quad\langle\rho_q\rangle = \frac{Q}{V}\,.$$
Free, bound and total charge
In dielectric materials, the total charge of an object can separate into "free" and "bound" charges.
Bound charges set up electric dipoles in response to an applied electric field E, and polarize other nearby dipoles tending to line them up, the net accumulation of charge from the orientation of the dipoles is the bound charge. They are called bound because they cannot be removed: in the dielectric material the charges are the electrons bound to the nuclei. [3]
Free charges are the excess charges which can move into electrostatic equilibrium, i.e. when the charges are not moving and the resultant electric field is independent of time, or constitute electric currents. [2]
Total charge densities
In terms of volume charge densities, the total charge density is:
$$\rho = \rho_f + \rho_b\,,$$
as for surface charge densities:
$$\sigma = \sigma_f + \sigma_b\,,$$
where subscripts "f" and "b" denote "free" and "bound" respectively.
Bound charge
The bound surface charge is the charge piled-up at the surface of the dielectric, given by the dipole moment perpendicular to the surface:
[3]
$$q_b = \frac{\mathbf{d} \cdot\mathbf{\hat{n}}}{|\mathbf{s}|}$$
where s is the separation between the point charges constituting the dipole. Taking infinitesimals:
$$dq_b = \frac{d\mathbf{d}}{|\mathbf{s}|}\cdot\mathbf{\hat{n}}$$
and dividing by the differential surface element dS gives the bound surface charge density:
$$\sigma_b = \frac{dq_b}{dS} = \frac{d\mathbf{d}}{|\mathbf{s}|\,dS} \cdot\mathbf{\hat{n}} = \frac{d\mathbf{d}}{dV} \cdot\mathbf{\hat{n}} = \mathbf{P} \cdot\mathbf{\hat{n}}\,,$$
where
P is the polarization density, i.e. density of electric dipole moments within the material, and dV is the differential volume element.
Using the divergence theorem, the bound volume charge density within the material is obtained from
$$q_b = \iiint_V \rho_b \,dV = -\oint_S \mathbf{P} \cdot \mathbf{\hat{n}} \,dS = -\iiint_V \nabla\cdot\mathbf{P} \,dV$$
hence:
$$\rho_b = - \nabla\cdot\mathbf{P}\,.$$
The negative sign arises due to the opposite signs on the charges in the dipoles, one end is within the volume of the object, the other at the surface.
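As a small numerical sanity check of $\rho_b = -\nabla\cdot\mathbf{P}$ (my own sketch, with a hypothetical polarization field, not from the original article): for P = (ax, by, cz) the divergence is a + b + c, which a central finite difference reproduces.

```python
# Check rho_b = -div(P) numerically for the sample field
# P(x, y, z) = (a*x, b*y, c*z), which has div(P) = a + b + c,
# so the bound charge density should be rho_b = -(a + b + c) everywhere.

a, b, c = 1.5, -0.5, 2.0

def P(x, y, z):
    return (a * x, b * y, c * z)

def divergence(F, x, y, z, h=1e-5):
    """Central finite-difference divergence of a vector field F."""
    dFx = (F(x + h, y, z)[0] - F(x - h, y, z)[0]) / (2 * h)
    dFy = (F(x, y + h, z)[1] - F(x, y - h, z)[1]) / (2 * h)
    dFz = (F(x, y, z + h)[2] - F(x, y, z - h)[2]) / (2 * h)
    return dFx + dFy + dFz

rho_b = -divergence(P, 0.3, -1.2, 0.7)
print(rho_b)  # ≈ -(a + b + c) = -3.0
```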
A more rigorous derivation is given below.
[3]
Free charge density
The free charge density serves as a useful simplification in Gauss's law for electricity; the volume integral of it is the free charge enclosed in a charged object, equal to the net flux of the electric displacement field D emerging from the object:
$$\Phi_D = \oint_S \mathbf{D} \cdot\mathbf{\hat{n}}\,dS = \iiint_V \rho_f \,dV$$
See Maxwell's equations and constitutive relation for more details.
Homogeneous charge density
For the special case of a homogeneous charge density $\rho_0$, independent of position, i.e. constant throughout the region of the material, the equation simplifies to:
$$Q=V\cdot \rho_0.$$
The proof of this is immediate. Start with the definition of the charge of any volume:
$$Q=\int\limits_V \rho_q(\mathbf{r}) \,dV.$$
Then, by definition of homogeneity, $\rho_q(\mathbf{r})$ is a constant, denoted $\rho_{q,0}$ (to distinguish the constant from the non-constant densities); by the properties of an integral it can be pulled outside of the integral, resulting in:
$$Q=\rho_{q,0} \int\limits_V \,dV = \rho_{q,0} V$$
so,
$$Q=V \cdot \rho_{q,0}.$$
The equivalent proofs for linear charge density and surface charge density follow the same arguments as above.
Discrete charges
For a single point charge q at position $\mathbf{r}_0$ inside a region of 3d space R, like an electron, the volume charge density can be expressed by the Dirac delta function:
$$\rho_q(\mathbf{r}) = q \delta(\mathbf{r} - \mathbf{r}_0)$$
where $\mathbf{r}$ is the position at which the charge density is evaluated.
As always, the integral of the charge density over a region of space is the charge contained in that region. The delta function has the sifting property for any function f:
$$\int_R d^3 \mathbf{r}\, f(\mathbf{r})\delta(\mathbf{r} - \mathbf{r}_0) = f(\mathbf{r}_0)$$
so the delta function ensures that when the charge density is integrated over R, the total charge in R is q:
$$Q =\int_R d^3 \mathbf{r} \, \rho_q =\int_R d^3 \mathbf{r} \, q \delta(\mathbf{r} - \mathbf{r}_0) = q \int_R d^3 \mathbf{r} \, \delta(\mathbf{r} - \mathbf{r}_0) = q$$
This can be extended to N discrete point-like charge carriers. The charge density of the system at a point $\mathbf{r}$ is a sum of the charge densities for each charge $q_i$ at position $\mathbf{r}_i$, where $i = 1, 2, \ldots, N$:
$$\rho_q(\mathbf{r})=\sum_{i=1}^N q_i\delta(\mathbf{r} - \mathbf{r}_i)$$
The delta function for each charge $q_i$ in the sum, $\delta(\mathbf{r} - \mathbf{r}_i)$, ensures the integral of the charge density over R returns the total charge in R:
$$Q=\int_R d^3 \mathbf{r} \sum_{i=1}^N q_i\delta(\mathbf{r} - \mathbf{r}_i) = \sum_{i=1}^N q_i \int_R d^3 \mathbf{r}\, \delta(\mathbf{r} - \mathbf{r}_i) = \sum_{i=1}^N q_i$$
If all charge carriers have the same charge q (for electrons, q = −e, the electron charge) the charge density can be expressed through the number of charge carriers per unit volume, $n(\mathbf{r})$, by
$$\rho_q(\mathbf{r})= q\, n(\mathbf{r})\,.$$
Similar equations are used for the linear and surface charge densities.
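As a concrete number for $\rho_q = q\,n$ (my own sketch; the copper carrier density is a typical literature value and an assumption here, not from the article):

```python
# rho_q = q * n for identical carriers: the free-electron charge density
# of a metal, using a typical literature value for copper's carrier density.

e = 1.602e-19        # elementary charge, C
n_copper = 8.5e28    # free-electron number density of copper, m^-3 (approximate)

rho_q = (-e) * n_copper   # electrons carry q = -e
print(rho_q)  # ≈ -1.36e10 C/m^3
```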
Charge density in special relativity
In special relativity, the length of a segment of wire depends on velocity of observer because of length contraction, so charge density will also depend on velocity. Anthony French
[4] has described how the magnetic force on a current-bearing wire arises from this relative charge density. He used (p. 260) a Minkowski diagram to show "how a neutral current-bearing wire appears to carry a net charge density as observed in a moving frame." The charge density measured in the frame co-moving with the charges is called the proper charge density. [5] [6] [7]
It turns out the charge density ρ and current density J transform together as a four-current vector under Lorentz transformations.
Charge density in quantum mechanics
In quantum mechanics, the charge density $\rho_q$ is related to the wavefunction $\psi(\mathbf{r})$ by the equation
$$\rho_q(\mathbf{r}) = q |\psi(\mathbf r)|^2$$
where q is the charge of the particle and $|\psi(\mathbf{r})|^2 = \psi^*(\mathbf{r})\,\psi(\mathbf{r})$ is the probability density function, i.e. the probability per unit volume of the particle being located at $\mathbf{r}$.
When the wavefunction is normalized, the average charge in the region $\mathbf{r} \in R$ is
$$Q= \int_R q |\psi(\mathbf r)|^2 \, d^3 \mathbf{r}$$
where $d^3\mathbf{r}$ is the integration measure over 3d position space.
Application
The charge density appears in the continuity equation for electric current, and also in Maxwell's equations. It is the principal source term of the electromagnetic field; when the charge distribution moves, this corresponds to a current density.
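As a one-dimensional numerical illustration of the quantum-mechanical expression $Q=\int_R q|\psi|^2\,d^3\mathbf{r}$ from the previous section (my own sketch with an assumed, normalized Gaussian wavefunction, not from the original article):

```python
import math

# 1d analogue of Q = ∫ q |psi|^2 d^3r: for a normalized wavefunction,
# integrating the charge density recovers the total charge q.
# Here |psi(x)|^2 = exp(-x^2/s^2) / (s*sqrt(pi)) is a normalized Gaussian.

q = -1.602e-19   # electron charge, C
s = 1.0          # width parameter (arbitrary units)

def prob_density(x):
    return math.exp(-x**2 / s**2) / (s * math.sqrt(math.pi))

# midpoint Riemann sum over a range wide enough to capture the Gaussian
n, lo, hi = 200_000, -10.0, 10.0
dx = (hi - lo) / n
Q = sum(q * prob_density(lo + (i + 0.5) * dx) for i in range(n)) * dx
print(Q)  # ≈ q, since psi is normalized
```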
See also
References
[4] A. French (1968) Special Relativity, chapter 8: Relativity and electricity, pp. 229–265, W. W. Norton.
[5] Richard A. Mould (2001) Basic Relativity, §62 Lorentz force, Springer Science & Business Media, ISBN 0387952101.
[6] Derek F. Lawden (2012) An Introduction to Tensor Calculus: Relativity and Cosmology, p. 74, Courier Corporation, ISBN 0486132145.
[7] Jack Vanderlinde (2006) Classical Electromagnetic Theory, §11.1 The Four-potential and Coulomb's Law, p. 314, Springer Science & Business Media, ISBN 1402027001.
External links
[1] - Spatial charge distributions
|
also, if you are in the US, the next time anything important publishing-related comes up, you can let your representatives know that you care about this and that you think the existing situation is appalling
@heather well, there's a spectrum
so, there's things like New Journal of Physics and Physical Review X
which are the open-access branch of existing academic-society publishers
As far as the intensity of a single-photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density$$u=\frac{\hbar\omega}{V}$$is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) di...
Minor terminology question. A physical state corresponds to an element of a projective Hilbert space: an equivalence class of vectors in a Hilbert space that differ by a constant multiple - in other words, a one-dimensional subspace of the Hilbert space. Wouldn't it be more natural to refer to these as "lines" in Hilbert space rather than "rays"? After all, gauging the global $U(1)$ symmetry results in the complex line bundle (not "ray bundle") of QED, and a projective space is often loosely referred to as "the set of lines [not rays] through the origin." — tparker3 mins ago
> A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers “serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service”
for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @EmilioPisanty
@0celo7 but the bosses are forced because they must continue purchasing journals to keep up the copyright, and they want their employees to publish in journals they own, and journals that are considered high-impact factor, which is a term basically created by the journals.
@BalarkaSen I think one can cheat a little. I'm trying to solve $\Delta u=f$. In coordinates that's $$\frac{1}{\sqrt g}\partial_i(g^{ij}\partial_j u)=f.$$ Buuuuut if I write that as $$\partial_i(g^{ij}\partial_j u)=\sqrt g f,$$ I think it can work...
@BalarkaSen Plan: 1. Use functional analytic techniques on global Sobolev spaces to get a weak solution. 2. Make sure the weak solution satisfies weak boundary conditions. 3. Cut up the function into local pieces that lie in local Sobolev spaces. 4. Make sure this cutting gives nice boundary conditions. 5. Show that the local Sobolev spaces can be taken to be Euclidean ones. 6. Apply Euclidean regularity theory. 7. Patch together solutions while maintaining the boundary conditions.
Alternative Plan: 1. Read Vol 1 of Hormander. 2. Read Vol 2 of Hormander. 3. Read Vol 3 of Hormander. 4. Read the classic papers by Atiyah, Grubb, and Seeley.
I am mostly joking. I don't actually believe in revolution as a plan of making the power dynamic between the various classes and economies better; I think of it as a want of a historical change. Personally I'm mostly opposed to the idea.
@EmilioPisanty I have absolutely no idea where the name comes from, and "Killing" doesn't mean anything in modern German, so really, no idea. Googling its etymology is impossible, all I get are "killing in the name", "Kill Bill" and similar English results...
Wilhelm Karl Joseph Killing (10 May 1847 – 11 February 1923) was a German mathematician who made important contributions to the theories of Lie algebras, Lie groups, and non-Euclidean geometry. Killing studied at the University of Münster and later wrote his dissertation under Karl Weierstrass and Ernst Kummer at Berlin in 1872. He taught in gymnasia (secondary schools) from 1868 to 1872. He became a professor at the seminary college Collegium Hosianum in Braunsberg (now Braniewo). He took holy orders in order to take his teaching position. He became rector of the college and chair of the town...
@EmilioPisanty Apparently, it's an evolution of ~ "Focko-ing(en)", where Focko was the name of the guy who founded the city, and -ing(en) is a common suffix for places. Which...explains nothing, I admit.
|
Preface: I am very new to Latex, and may have missed some things here.
I have a custom macro to create a horizontal line - not by my making, simply copied and pasted from the
exam package.
\newlength\linefillheight
\newlength\linefillthickness
\setlength\linefillheight{.25in}
\setlength\linefillthickness{0.1pt}
\newcommand\linefill{%
  \leavevmode
  \leaders\hrule height \linefillthickness \hfill\kern\z@}
Unfortunately I have discovered that it doesn't work inside a tabbing environment:
\begin{document}
\linefill
\begin{tabbing}
\linefill
The first \linefill generates a line, but the second does not. How can I change my LaTeX so that this works within the tabbing environment?
Here is an example .tex file that compiles and doesn't produce any horizontal lines, even though I want it to: (also note that \usepackage{examlines} refers to a custom .sty file I made that houses the code from the exam package for \linefill)
\documentclass[a4paper, 12pt]{article}
\usepackage{mathptmx}
\usepackage{examlines}
\newcommand{\tab}{\hspace*{1em}}
\begin{document}
\begin{tabbing}
\textbf{Question 1} \\
\textbf{a.} \tab \= Let $y = \left(- 3 x^{2} - 3 x\right)^{3}$. Find $\frac{dy}{dx}$. \\ \\
\> \linefill \\
\> \linefill \\
\> \linefill \\
\> \linefill \\
\> \linefill \\
\textbf{b.} \tab Let $f(x) = e^{2 x^{2} + 9 x + 5}$. Evaluate $f'(-1)$. \\ \\
\> \linefill \\
\> \linefill \\
\> \linefill \\
\> \linefill \\
\> \linefill \\
Let $f(x) = \left(- 3 x^{2} - 3 x\right)^{3} = u^{3}, u = - 3 x^{2} - 3 x$ \\
$f'(x) = 3 u^{2} \times u'$ \\
$f'(x) = - 81 x^{2} \left(x + 1\right)^{2} \left(2 x + 1\right)$ \\ \\
$f'(x) = \left(4 x + 9\right) e^{2 x^{2} + 9 x + 5}$ \\
$f'(-1) = \frac{5}{e^{2}}$ \\
\end{tabbing}
\end{document}
Here is the code in
examlines.sty: (I don't use all of it)
%--------------------------------------------------------------------
% \fillwithlines
% \fillwithlines takes one argument, which is either a length or \fill
% or \stretch{number}, and it fills that much vertical space with
% horizontal lines that run the length of the current line. That is,
% they extend from the current left margin (which depends on whether
% we're in a question, part, subpart, or subsubpart) to the right
% margin.
%
% The distance between the lines is \linefillheight, whose default value
% is set with the command
%
% \setlength\linefillheight{.25in}
%
% This value can be changed by giving a new \setlength command.
%
% The thickness of the lines is \linefillthickness, whose default value
% is set with the command
%
% \setlength\linefillthickness{.1pt}
%
% This value can be changed by giving a new \setlength command.
\newlength\linefillheight
\newlength\linefillthickness
\setlength\linefillheight{.25in}
\setlength\linefillthickness{0.1pt}
\newcommand\linefill{%
  \leavevmode
  \leaders\hrule height \linefillthickness \hfill\kern\z@}
\def\fillwithlines#1{%
  \begingroup
  \ifhmode \par \fi
  \hrule height \z@
  \nobreak
  \setbox0=\hbox to \hsize{\hskip \@totalleftmargin
    \vrule height \linefillheight depth \z@ width \z@
    \linefill}%
  % We use \cleaders (rather than \leaders) so that a given
  % vertical space will always produce the same number of lines
  % no matter where on the page it happens to start:
  \cleaders \copy0 \vskip #1
  \hbox{}%
  \endgroup}
%--------------------------------------------------------------------
\newcommand{\e}{\mathrm{e}}
|
Ajay Kumar
Articles written in Proceedings – Mathematical Sciences
Volume 124 Issue 1 February 2014 pp 1-15
An Alexander dual of a multipermutohedron ideal has many combinatorial properties. The standard monomials of an Artinian quotient of such a dual correspond bijectively to some 𝜆-parking functions, and many interesting properties of these Artinian quotients are obtained by Postnikov and Shapiro (
Volume 126 Issue 4 October 2016 pp 479-500 Research Article
Multipermutohedron ideals have rich combinatorial properties. An explicit combinatorial formula for the multigraded Betti numbers of a multipermutohedron ideal and their Alexander duals are known. Also, the dimension of the Artinian quotient of an Alexander dual of a multipermutohedron ideal is the number of generalized parking functions. In this paper, monomial ideals which are certain variants of multipermutohedron ideals are studied. Multigraded Betti numbers of these variant monomial ideals and their Alexander duals are obtained. Further, many interesting combinatorial properties of multipermutohedron ideals are extended to these variant monomial ideals.
Volume 129 Issue 1 February 2019 Article ID 0010 Research Article
Let $S$ (or $T$) be the set of permutations of $[n] = \{1, . . . , n\}$ avoiding 123 and 132 patterns (or avoiding 123, 132 and 213 patterns). The monomial ideals $I_{S} = \langle\rm{x}^\sigma = \prod^{n}_{i=1}x^{\sigma(i)}_{i} : \sigma \in S\rangle$ and $I_{T} = \langle\rm{x}^{\sigma} : \sigma \in T \rangle$ in the polynomial ring $R = k[x_{1}, . . . , x_{n}]$ over a field $k$ have many interesting properties. The Alexander dual $I^{[n]}_{S}$ of $I_{S}$ with respect to $\bf{n} = (n, . . . , n)$ has the minimal cellular resolution supported on the order complex $\Delta(\Sigma_{n})$ of a poset $\Sigma_{n}$. The Alexander dual $I^{[n]}_{T}$ also has the minimal cellular resolution supported on the order complex $\Delta(\tilde{\Sigma}_{n})$ of a poset $\tilde{\Sigma}_{n}$. The number of standard monomials of the Artinian quotient $\frac{R}{I^{[n]}_{S}}$ is given by the number of
|
This article provides answers to the following questions, among others:
What is the difference between elastic and plastic deformation?
What is the atomic process of deformation?
In which cases must springback be considered during the deformation process?
What is meant by a slip system?
What is the relationship between slip plane, slip direction and slip system?
What is the difference between a normal stress and a shear stress?
Why are only shear stresses responsible for the deformation process at the atomic level?
What is meant by critical resolved shear stress?
Introduction
The relatively good deformability of metals (also referred to as
ductility) compared to other materials is a significant feature. The reason for this lies in the special metallic bond. The good formability is the basis for many manufacturing processes such as bending, deep drawing, forging, etc.
Not every metal can be deformed equally well. The different degrees of ductility can be attributed mainly to the different lattice structures. In order to understand this, basic knowledge about the atomic processes during deformation is necessary.
In principle, a distinction can be made between elastic deformation and plastic deformation.
Elastic deformation
One speaks of an elastic deformation when only a relatively low force is acting on the atoms in the respective material, so that the atoms are only displaced slightly. After removing the force, the atoms regain their initial positions. The deformed workpiece recovers completely back to its original shape after an elastic deformation.
Elastic deformation is a non-permanent deformation. The deformed material returns to its original shape after the force has been removed!
Mechanically loaded components in machines (e.g. cylinder head bolts in engines) should only be subjected to elastic deformations in order not to be permanently deformed.
Plastic deformation
In contrast to the elastic deformation, the applied force during a plastic deformation is relatively large. This leads to a sliding of individual atomic planes. The resulting atomic shifts are retained after removal of the force. The individual atomic planes no longer return to their original positions but have moved on by one or more atomic distances. The workpiece remains permanently deformed after removing the force.
A plastic deformation is a permanent deformation. The deformed material does not return to its original shape after the force has been removed!
In some manufacturing processes (e.g. forging, bending or deep drawing) such a plastic deformation is desired by means of which the corresponding components permanently obtain their desired shape.
Note that with every plastic deformation, the material is always elastically deformed to a certain extent (see animation above). Thus, the material springs back a little after removing the force, even at a plastic deformation. This is also called
springback.
Springback refers to the elastic portion which a deformed material recovers when the force is removed!
Such springback must be taken into account, for example, during bending. This makes it necessary to bend the component beyond the desired bending angle in order to compensate for the springback.
Slip system
The atomic planes at which the atomic blocks shear during the plastic deformation are also called
slip planes. After the atomic blocks have emerged from the material by one or more atomic distances, they are visible under a microscope as slip steps.
Since the reflection behavior changes with the formation of the slip steps, this manifests in a matting of the surface. This is the reason why the bending point of polished pipes often appears dull.
Note that ultimately any plastic deformation process, regardless of the type of stress (whether tension, compression, bending, torsion or shear), can be attributed to the gliding of atomic blocks. Due to the strong electrostatic forces between the individual atoms and the associated stability, however, the shape of the unit cell does not change (permanently) during the deformation processes!
The cause of plastic deformation is the shearing off of atomic blocks on slip planes!
A metal is readily deformable if there are many slip planes with as many different slip directions as possible. This means that a deformation process can take place in many directions at the same time, without rupturing the atomic structure irreparably. The combination of slip plane and slip direction is also referred to as a slip system. For high ductility, a lattice structure should therefore have as many slip systems as possible.
A slip system is a combination of a slip plane and a slip direction. The more slip systems a lattice structure has, the more deformable is the respective metal.
The different types of lattice structures, such as face-centered cubic, body-centered cubic and hexagonal closest packed, each have different numbers of slip systems. This is primarily the cause of the different deformability of the lattice structures or the corresponding metals.
Tension
As discussed in the previous section, metal deformation processes are based on slipping of atomic planes. This is only possible if a force acts in a proper way. A mere “squeezing” of the atomic structure would only cause the atomic blocks to compress (
normal strain).
Slipping will only happen if the force acts in such a way that a lateral “shift” of the atomic structure occurs (
shear strain). It is therefore useful to divide forces according to their direction of action on surfaces. Forces acting perpendicular to surfaces or a cross section are referred to as normal forces (normal forces can in principle be further divided into tensile forces and compressive forces). Forces acting parallel to a surface or a cross section are called shear forces.
Only shear forces which are directed parallel to atomic planes (shear stresses) lead to a slipping of lattice planes and thus initiate a deformation process!
Whether a force is capable of causing an atomic layer to slip does not depend solely on the force alone. In addition, of course, it is still crucial how big the atomic plane is that should be sheared off. Because the larger the surface of the atomic layer, the more “bonding points” between two atomic levels arise and must be broken up to slide off. The force per bond or the
force per area is of importance!
Such area-related forces are then also referred to as
stresses. For normal forces, these stresses are therefore called normal stresses. In case of shear forces the stresses are called shear stresses. The distinction between these stresses also becomes clear in the symbolism. Normal stresses are denoted by the Greek letter σ (sigma), shear stresses are given the Greek letter τ (tau):
\begin{equation}
\label{spannung}
\text{normal stress: }\boxed{\sigma=\frac{F_{\perp}}{A}} \;\;\;\;\;\; \text{shear stress: }\boxed{\tau=\frac{F_{\parallel}}{A}}
\end{equation}
Normal stress acts on a cross section and shear stress in a cross section!
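The two definitions above can be sketched numerically (my own illustrative numbers, not from the article): a force applied at an angle to a surface normal is decomposed into perpendicular and parallel components, which give the normal and shear stress respectively.

```python
import math

# Normal and shear stress on a plane, from a force F applied at an angle
# theta to the plane's normal:
#   sigma = F_perp / A,  tau = F_par / A

F = 1000.0                 # applied force, N (illustrative)
A = 50.0                   # cross-sectional area, mm^2 (illustrative)
theta = math.radians(30)   # angle between force and surface normal

F_perp = F * math.cos(theta)   # component perpendicular to the plane
F_par = F * math.sin(theta)    # component parallel to the plane

sigma = F_perp / A   # normal stress, N/mm^2
tau = F_par / A      # shear stress, N/mm^2
print(round(sigma, 2), round(tau, 2))  # 17.32 10.0
```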
However, the fact that only shear stresses lead to a slipping of atomic planes does not mean that normal stresses acting on a material would not lead to deformation! The animation below shows that the externally applied normal stress (compressive stress) causes shear stress inside the material and atomic blocks to shear off.
By a resolution of forces, that can quickly be comprehended. The force, which has been applied from the outside, is broken down into a vertical and a parallel component regarding the slip plane. Although only normal stresses are applied from the outside, shear stresses are induced in the slip plane.
Normal stresses that are applied from the outside of a material induce shear stresses inside the material!
One must therefore always distinguish: While on a macroscopic level shear stresses as well as normal stresses can lead to deformations, the deformation process on a microscopic level can always be attributed to shear stresses.
To initiate a deformation process, a critical resolved shear stress (CRSS) must be exceeded in a slip plane (and in particular in the slip direction) in order to shear off the lattice plane. From the binding forces between the atoms, one can make theoretical predictions of the critical shear stress necessary. For metals, the theoretical CRSS values are in the range of 1000 to 3000 N/mm² (1 to 3 GPa). Theoretically, therefore, a force of 1000 to 3000 newtons per square millimetre must act in a slip plane in order to shear it off.
However, in reality only a fraction of this theoretical shear stress is needed to actually deform a material! The experimental values lie between just 1 and 30 N/mm²! In practice, deformation already starts at a much lower shear stress than calculated theoretically. The article on the deformation process in real crystal structures deals with this phenomenon more closely.
Note: The word "resolved" in the term "critical resolved shear stress" means that the force which acts in the slip plane must be resolved in the slip direction! The critical resolved shear stress is then calculated from this force! Only this resolved stress is decisive for the deformation process, since the CRSS acts not only in the slip plane but also in the slip direction! For more details see Schmid's law.
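Since the text points to Schmid's law, here is a small sketch of resolving an applied tensile stress into the shear stress acting in a slip system, τ_R = σ·cos φ·cos λ. The numbers, including the assumed experimental CRSS, are illustrative and not from the article.

```python
import math

# Schmid's law: the resolved shear stress in a slip system is
#   tau_R = sigma * cos(phi) * cos(lambda)
# where phi is the angle between the tensile axis and the slip-plane normal,
# and lambda is the angle between the tensile axis and the slip direction.

def resolved_shear_stress(sigma, phi_deg, lambda_deg):
    return sigma * math.cos(math.radians(phi_deg)) * math.cos(math.radians(lambda_deg))

sigma = 40.0            # applied tensile stress, N/mm^2 (illustrative)
phi, lam = 45.0, 45.0   # most favourable orientation: Schmid factor 0.5

tau_r = resolved_shear_stress(sigma, phi, lam)
print(round(tau_r, 2))  # 20.0

crss = 15.0  # assumed experimental critical resolved shear stress, N/mm^2
print(tau_r > crss)  # True -> slip is initiated in this slip system
```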
|
I recently came across this in a textbook (NCERT class 12 , chapter: wave optics , pg:367 , example 10.4(d)) of mine while studying the Young's double slit experiment. It says a condition for the formation of interference pattern is$$\frac{s}{S} < \frac{\lambda}{d}$$Where $s$ is the size of ...
The accepted answer is clearly wrong. The OP's textbook refers to 's' as "size of source" and then gives a relation involving it. But the accepted answer conveniently assumes 's' to be "fringe-width" and proves the relation. One of the unaccepted answers is the correct one. I have flagged the answer for mod attention. This answer wastes time, because I naturally looked at it first (it being an accepted answer) only to realise it proved something entirely different and trivial.
This question was considered a duplicate because of a previous question titled "Height of Water 'Splashing'". However, the previous question only considers the height of the splash, whereas answers to the later question may consider a lot of different effects on the body of water, such as height ...
I was trying to figure out the cross section $\frac{d\sigma}{d\Omega}$ for spinless $e^{-}\gamma\rightarrow e^{-}$ scattering. First I wrote the terms associated with each component.Vertex:$$ie(P_A+P_B)^{\mu}$$External Boson: $1$Photon: $\epsilon_{\mu}$Multiplying these will give the inv...
As I am now studying the history of the discovery of electricity, I am searching for each scientist on Google, but I am not getting good answers for some of them. So I want to ask you to suggest a good app for studying the history of these scientists.
I am working on correlation in quantum systems. Consider an arbitrary finite-dimensional bipartite system $A$ with elements $A_{1}$ and $A_{2}$ and a bipartite system $B$ with elements $B_{1}$ and $B_{2}$, under an assumption which fulfils continuity. My question is whether it would be possib...
@EmilioPisanty Sup. I finished Part I of Q is for Quantum. I'm a little confused why a black ball turns into a misty of white and minus black, and not into white and black? Is it like a little trick so the second PETE box can cancel out the contrary states? Also I really like that the book avoids words like quantum, superposition, etc.
Is this correct? "The closer you get hovering (as opposed to falling) to a black hole, the further away you see the black hole from you. You would need an impossible rope of an infinite length to reach the event horizon from a hovering ship". From physics.stackexchange.com/questions/480767/…
You can't make a system go to a lower state than its zero point, so you can't do work with ZPE. Similarly, to run a hydroelectric generator you not only need water, you need a height difference so you can make the water run downhill. — PM 2Ring3 hours ago
So in Q is for Quantum there's a box called PETE that has 50% chance of changing the color of a black or white ball. When two PETE boxes are connected, an input white ball will always come out white and the same with a black ball.
@ACuriousMind There is also a NOT box that changes the color of the ball. In the book it's described that each ball has a misty (possible outcomes I suppose). For example a white ball coming into a PETE box will have output misty of WB (it can come out as white or black). But the misty of a black ball is W-B or -WB. (the black ball comes out with a minus). I understand that with the minus the math works out, but what is that minus and why?
@AbhasKumarSinha intriguing/ impressive! would like to hear more! :) am very interested in using physics simulation systems for fluid dynamics vs particle dynamics experiments, alas very few in the world are thinking along the same lines right now, even as the technology improves substantially...
@vzn for physics/simulation, you may use Blender, that is very accurate. If you want to experiment with lenses and optics, then you may use Mitsubishi Renderer; those are made for accurate scientific purposes.
@RyanUnger physics.stackexchange.com/q/27700/50583 is about QFT for mathematicians, which overlaps in the sense that you can't really do string theory without first doing QFT. I think the canonical recommendation is indeed Deligne et al's *Quantum Fields and Strings: A Course For Mathematicians *, but I haven't read it myself
@AbhasKumarSinha when you say you were there, did you work at some kind of Godot facilities/ headquarters? where? dont see something relevant on google yet on "mitsubishi renderer" do you have a link for that?
@ACuriousMind thats exactly how DZA presents it. understand the idea of "not tying it to any particular physical implementation" but that kind of gets stretched thin because the point is that there are "devices from our reality" that match the description and theyre all part of the mystery/ complexity/ inscrutability of QM. actually its QM experts that dont fully grasp the idea because (on deep research) it seems possible classical components exist that fulfill the descriptions...
When I say "the basics of string theory haven't changed", I basically mean the story of string theory up to (but excluding) compactifications, branes and what not. It is the latter that has rapidly evolved, not the former.
@RyanUnger Yes, it's where the actual model building happens. But there's a lot of things to work out independently of that
And that is what I mean by "the basics".
Yes, with mirror symmetry and all that jazz, there's been a lot of things happening in string theory, but I think that's still comparatively "fresh" research where the best you'll find are some survey papers
@RyanUnger trying to think of an adjective for it... nihilistic? :P ps have you seen this? think youll like it, thought of you when found it... Kurzgesagt optimistic nihilismyoutube.com/watch?v=MBRqu0YOH14
The knuckle mnemonic is a mnemonic device for remembering the number of days in the months of the Julian and Gregorian calendars. Method (one-handed): one form of the mnemonic is done by counting on the knuckles of one's hand to remember the numbers of days of the months. Count knuckles as 31 days, depressions between knuckles as 30 (or 28/29) days. Start with the little finger knuckle as January, and count one finger or depression at a time towards the index finger knuckle (July), saying the months while doing so. Then return to the little finger knuckle (now August) and continue for...
@vzn I dont want to go to uni nor college. I prefer to dive into the depths of life early. I'm 16 (2 more years and I graduate). I'm interested in business, physics, neuroscience, philosophy, biology, engineering and other stuff and technologies. I just have constant hunger to widen my view on the world.
@Slereah It's like the brain has a limited capacity on math skills it can store.
@NovaliumCompany btw think either way is acceptable, relate to the feeling of low enthusiasm to submitting to "the higher establishment," but for many, universities are indeed "diving into the depths of life"
I think you should go if you want to learn, but I'd also argue that waiting a couple years could be a sensible option. I know a number of people who went to college because they were told that it was what they should do and ended up wasting a bunch of time/money
It does give you more of a sense of who actually knows what they're talking about and who doesn't though. While there's a lot of information available these days, it isn't all good information and it can be a very difficult thing to judge without some background knowledge
Hello people, does anyone have a suggestion for some good lecture notes on what surface codes are and how are they used for quantum error correction? I just want to have an overview as I might have the possibility of doing a master thesis on the subject. I looked around a bit and it sounds cool but "it sounds cool" doesn't sound like a good enough motivation for devoting 6 months of my life to it
|
But if you don't want to have a Google account: Chrome is really good. Much faster than FF (I can't run FF on either of the laptops here) and more reliable (it restores your previous session if it crashes with 100% certainty).
And Chrome has a Personal Blocklist extension which does what you want.
: )
Of course you already have a Google account but Chrome is cool : )
Guys, I feel a little defeated in trying to understand infinitesimals. I'm sure you all think this is hilarious. But if I can't understand this, then I'm yet again stalled. How did you guys come to terms with them, later in your studies?
do you know the history? Calculus was invented based on the notion of infinitesimals. There were serious logical difficulties found in it, and a new theory developed based on limits. In modern times using some quite deep ideas from logic a new rigorous theory of infinitesimals was created.
@QED No. This is my question as best as I can put it: I understand that lim_{x->a} f(x) = f(a), but then to say that the gradient of the tangent curve is some value, is like saying that when x=a, then f(x) = f(a). The whole point of the limit, I thought, was to say, instead, that we don't know what f(a) is, but we can say that it approaches some value.
I have a problem with showing that the limit of the following expression$$\frac{\sqrt{\frac{3 \pi}{2n}} -\int_0^{\sqrt 6}(1-\frac{x^2}{6}+\frac{x^4}{120})^ndx}{\frac{3}{20}\frac 1n \sqrt{\frac{3 \pi}{2n}}}$$is equal to $1$ as $n \to \infty$.
@QED When I said, "So if I'm working with function f, and f is continuous, my derivative dy/dx is by definition not continuous, since it is undefined at dx=0." I guess what I'm saying is that (f(x+h)-f(x))/h is not continuous since it's not defined at h=0.
@KorganRivera There are lots of things wrong with that: dx=0 is wrong. dy/dx - what's y? "dy/dx is by definition not continuous" - it's not a function, how can you ask whether or not it's continuous, ... etc.
In general this stuff with 'dy/dx' is supposed to help as some kind of memory aid, but since there's no rigorous mathematics behind it - all it's going to do is confuse people
in fact there was a big controversy about it since using it in obvious ways suggested by the notation leads to wrong results
@QED I'll work on trying to understand that the gradient of the tangent is the limit, rather than the gradient of the tangent approaches the limit. I'll read your proof. Thanks for your help. I think I just need some sleep. O_O
@NikhilBellarykar Either way, don't highlight everyone and ask them to check out some link. If you have a specific user which you think can say something in particular feel free to highlight them; you may also address "to all", but don't highlight several people like that.
@NikhilBellarykar No. I know what the link is. I have no idea why I am looking at it, what should I do about it, and frankly I have enough as it is. I use this chat to vent, not to exercise my better judgment.
@QED So now it makes sense to me that the derivative is the limit. What I think I was doing in my head was saying to myself that g(x) isn't continuous at x=h so how can I evaluate g(h)? But that's not what's happening. The derivative is the limit, not g(h).
@KorganRivera, in that case you'll need to be proving $\forall \varepsilon > 0,\,\,\,\, \exists \delta,\,\,\,\, \forall x,\,\,\,\, 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$ by picking some correct L (somehow)
Hey guys, I have a short question a friend of mine asked me which I cannot answer because I have not learnt about measure theory (or whatever is needed to answer the question) yet. He asks what is wrong with \int_0^{2 \pi} \frac{d}{dn} e^{inx} dx when he applies Lebesgue's dominated convergence theorem, because apparently, if he first integrates and then differentiates, the result is 0, but if he first differentiates and then integrates it's not 0. Does anyone know?
|
If you need to include simple diagrams or figures in your document, the picture environment may be helpful. This article describes circles, lines, and other graphic elements created with LaTeX.
Contents
Images can be "programmed" directly in your LaTeX file
\setlength{\unitlength}{1cm} \thicklines \begin{picture}(10,6) \put(2,2.2){\line(1,0){6}} \put(2,2.2){\circle{2}} \put(6,2.2){\oval(4,2)[r]} \end{picture}
The syntax of the picture environment is \begin{picture}(width,height)(x-offset,y-offset); the parameters are passed inside parentheses. width and height, as you may expect, determine the width and the height of the picture; the units for these parameters are set by \setlength{\unitlength}{1cm}. The second pair of parameters is optional and establishes the coordinates of the lower-left corner. Below is a description of the other commands:

\put(6,2.2){\oval(4,2)[r]}

This draws an oval centred at the point (6,2.2) whose width and height are 4,2. The parameter [r] is optional and draws only the right half of the oval; you can use [l], [t] and [b] for the left, top and bottom parts instead.

\put(2,2.2){\circle{2}}

This draws a circle centred at the point (2,2.2) whose diameter is 2.
In the next section the rest of the commands are described.
Different basic elements can be combined for more complex pictures
\setlength{\unitlength}{0.8cm} \begin{picture}(12,4) \thicklines \put(8,3.3){{\footnotesize $3$-simplex}} \put(9,3){\circle*{0.1}} \put(8.3,2.9){$a_2$} \put(8,1){\circle*{0.1}} \put(7.7,0.5){$a_0$} \put(10,1){\circle*{0.1}} \put(9.7,0.5){$a_1$} \put(11,1.66){\circle*{0.1}} \put(11.1,1.5){$a_3$} \put(9,3){\line(3,-2){2}} \put(10,1){\line(3,2){1}} \put(8,1){\line(1,0){2}} \put(8,1){\line(1,2){1}} \put(10,1){\line(-1,2){1}} \end{picture}
In this example several lines and circles are combined to create a picture, then some text is added to label the points. Below each command is explained:
\thicklines

This command makes the lines that follow it thicker; the counterpart is \thinlines, which has the opposite effect.

\put(8,3.3){{\footnotesize $3$-simplex}}

Text can be placed in the picture with a \put command; here the label "$3$-simplex" is printed in \footnotesize at the point (8,3.3).

\put(9,3){\circle*{0.1}}

The starred version \circle* draws a filled circle, in this case with diameter 0.1, centred at the point (9,3).

\put(10,1){\line(3,2){1}}

This draws a line starting at the point (10,1) with direction vector (3,2); the final parameter, 1, is the horizontal extent of the line.
Arrows can also be used inside a picture environment, let's see a second example
\setlength{\unitlength}{0.20mm} \begin{picture}(400,250) \put(75,10){\line(1,0){130}} \put(75,50){\line(1,0){130}} \put(75,200){\line(1,0){130}} \put(120,200){\vector(0,-1){150}} \put(190,200){\vector(0,-1){190}} \put(97,120){$\alpha$} \put(170,120){$\beta$} \put(220,195){upper state} \put(220,45){lower state 1} \put(220,5){lower state 2} \end{picture}
The syntax for vectors is the same as the one used for lines:

\put(120,200){\vector(0,-1){150}}

This draws an arrow starting at the point (120,200), pointing in the direction (0,-1), with a vertical extent of 150.
Bézier curves are special curves that are drawn using three parameters, one start point, one end point and a control point that determines "how curved" it is.
\setlength{\unitlength}{0.8cm} \begin{picture}(10,5) \thicklines \qbezier(1,1)(5,5)(9,0.5) \put(2,1){{Bézier curve}} \end{picture}
Notice that the command \qbezier (quadratic Bézier curve) is not inside a \put command. The parameters that must be passed are three coordinate pairs: the start point, the control point and the end point; in the example above these are (1,1), (5,5) and (9,0.5) respectively.
Picture is the standard tool to create figures in LaTeX. As you can see, this tool is sometimes too restrictive and cumbersome to work with, but it is supported by most compilers and no extra packages are needed. If you need to create complex figures, see the TikZ package and Pgfplots package articles for more suitable and powerful tools.
For more information see
|
Dear Uncle Colin,
I'm pretty good with quadratic inequalities and pretty good with absolute values, but when I get the two together, I get confused. For example, I struggled with the set of values satisfying $x^2 -\left| 5x-3\right| < 2 + x$. Can you help?
- Nasty Absolute Value Inequalities Ending Rongly
Hi, NAVIER, and thanks for your message!
These are nasty, but can be made simpler by rearranging and sketching.
I would begin by getting the absolute value on one side and everything else on the other: $ x^2 - x - 2 < \left| 5x-3\right|$.
The left-hand side is a quadratic that cuts the $x$-axis at $(-1,0)$ and $(2,0)$; the right-hand side is a steep V-shape that bounces off the $x$-axis at $x= \frac{3}{5}$. The quadratic curve is below the V between the two points where they cross.
I would now treat this as two inequalities. When $x < \frac{3}{5}$, the absolute value part is $3-5x$, and the two cross when $x^2 - x - 2 = 3 - 5x$. Rearranging, this gives $x^2 + 4x - 5=0$, which has solutions at $x=-5$ and $x=1$. Only the first of those is in the domain we're looking at, so one of the crossing points is where $x=-5$.
The other comes from solving $x^2 - x - 2 = 5x - 3$ for $x > \frac{3}{5}$, which simplifies to $x^2 - 6x + 1 = 0$. This has solutions at $x = 3 \pm 2\sqrt{2}$, and only the larger of those is in the domain (since $3 - 2\sqrt{2} \approx 0.2 < \frac{3}{5}$).
As we know the set of values that works is all of the $x$s between the crossing points, our final answer is $-5 \lt x \lt 3 + 2\sqrt{2}$.
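If you like to double-check this sort of thing numerically, a throwaway sketch (my addition, not part of the original working) confirms the interval:

```python
import math

def holds(x):
    # the original inequality: x^2 - |5x - 3| < 2 + x
    return x**2 - abs(5*x - 3) < 2 + x

lo, hi = -5.0, 3 + 2 * math.sqrt(2)

# strictly inside the claimed interval the inequality holds...
assert holds(lo + 1e-6) and holds(0.6) and holds(hi - 1e-6)
# ...and strictly outside it fails
assert not holds(lo - 1e-6) and not holds(hi + 1e-6)
```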
Hope that helps!
- Uncle Colin
|
Here, the substitution isn't appropriate, because, as you see, there's no factor of $x$ in the original integrand. It would have been a better choice to substitute $y = (x^2 + 1)$ if the integral
had been $$\int x(x^2 + 1)^3\,dx$$ because then we'd have $\;dy = 2x\, dx\implies x\,dx = \frac 12 dy,\;$ giving us a very nice integral to work with: $$\frac12 \int y^3\, dy$$
But, alas! We don't have
that integral to work with. And there's not a really handy substitution to use that will simplify our work.
Instead, for this integral, try expanding the binomial (easy to do in this case), and use the power rule to integrate each term:
$$\int (x^2+1)^3 dx \quad = \quad \int (x^6 + 3x^4 + 3x^2 + 1) \,dx \quad = \quad\dfrac{x^7}{7} + \frac{3x^5}{5} + x^3 + x + C$$
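As a sanity check (my addition, not part of the answer), the antiderivative can be verified against the integrand with a central finite difference:

```python
# F is the antiderivative found above by expanding and using the power rule;
# f is the original integrand. d/dx F(x) should equal f(x) everywhere.
def F(x):
    return x**7 / 7 + 3 * x**5 / 5 + x**3 + x

def f(x):
    return (x**2 + 1)**3

h = 1e-6
for x in (-1.5, 0.0, 0.7, 2.0):
    numeric = (F(x + h) - F(x - h)) / (2 * h)  # central difference
    assert abs(numeric - f(x)) < 1e-4
```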
|
The American Petroleum Institute, in its standard 521, outlines limits for exposure of personnel to heat radiation from flares. As hydrocarbons and hydrogen are commonly flared, and also commonly used as rocket fuel, the data is relevant. This publication is used throughout the oil industry worldwide (and therefore is in far wider use than anything produced by any space agency.)
Here are the limits from the 1997 edition (a bit easier to interpret for the purpose of this question than the latest edition.) The odd numbers are a result of conversion from round numbers of $\frac{BTU}{hft^2}$. For comparison, solar radiation is about 1 $\frac{kW}{m^2}$.
9.45 $\frac{kW}{m^2}$ - Exposure must be limited to a few (approx. six) seconds, sufficient for escape only. A tower or structure may be considered to provide some degree of shielding.
6.31 $\frac{kW}{m^2}$ - Emergency actions lasting up to 1 minute without shielding but with appropriate clothing.
4.73 $\frac{kW}{m^2}$ - Emergency actions lasting up to several minutes without shielding but with appropriate clothing.
1.58 $\frac{kW}{m^2}$ - Personnel with appropriate clothing can be continuously exposed.
The latest edition reduces the times for 4.73 and 6.31 to 2-3 minutes and 30 seconds respectively, and, rather unhelpfully from the point of view of this question, does not specify any time for 9.45 $\frac{kW}{m^2}$.
Let's take an example with a popular engine. According to Wikipedia, a SpaceX Merlin 1-C engine has a thrust of 420000 N and a nozzle velocity of $2600\ m/s$ at sea level, which means a propellant consumption of $420000/2600 \approx 161\ kg/s$, about two thirds (by mass) of which is oxygen. The rest (say $50\ kg/s$) is kerosene. The Lower Heating Value (i.e. not considering heat recoverable by condensation of water produced in combustion) of kerosene is about $43\ MJ/kg$, so the power of a Merlin 1-C is about $43 \times 50 = 2150\ MW$, or 2150000 kW.
Let's assume we want to be at the $6.31\ \frac{kW}{m^2}$ distance and assume (as the API 521 standard does) that the radiation of a combustion source is identical in all directions. To keep the calculation simple, we will assume (for now) that the emissivity of the combustion source is 1: that is, perfect radiation.
We now need to calculate the radius of a sphere such that $6.31\ \frac{kW}{m^2}$ radiation will be experienced from a point source of 2150000kW. Such a sphere will have an area of $2150000 / 6.31 = 340729\ m^2$. As the area of a sphere is $4*\pi*r^2$, this works out as a distance of 165 m.
Two more things to consider: First, a Falcon 9 launch vehicle has 9 engines, not one. To factor this in, we need to multiply by $\sqrt{9}=3$, so we need to be at $165 \times 3=495\ m$ distance (say 500 m).
Secondly, the emissivity may be quite a bit less than 1 (values for combustion with oxygen are difficult to come by) but because of the square law it won't make much difference. Opaque smoke can make quite a difference to emissivity, but most rockets burn clean once they are clear of the launch pad. A low value for a smokeless flare burning heavy hydrocarbon would be 0.25 (1/4) so if this is was applicable to a rocket the distance would be halved to $250m.$
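Putting the numbers above into a few lines makes the estimate easy to replay with different emissivities or flux limits. This is only a sketch of the text's own arithmetic, not a safety calculation:

```python
import math

# Inputs taken from the text above.
thrust = 420_000.0        # N, Merlin 1-C thrust at sea level
v_e = 2600.0              # m/s, nozzle (exhaust) velocity
kerosene_flow = 50.0      # kg/s, roughly one third of the propellant flow
lhv = 43e6                # J/kg, lower heating value of kerosene
flux_limit = 6310.0       # W/m^2, API 521 one-minute exposure limit
n_engines = 9             # Falcon 9 first stage

mdot = thrust / v_e                  # ~161.5 kg/s total propellant flow
power = kerosene_flow * lhv          # ~2.15e9 W released per engine

# Radius of the sphere over which `power` spreads to give `flux_limit`,
# assuming isotropic radiation and emissivity 1.
r_one = math.sqrt(power / (4 * math.pi * flux_limit))
r_nine = r_one * math.sqrt(n_engines)  # n identical engines: multiply by sqrt(n)

print(round(r_one), round(r_nine))  # roughly 165 m and just under 500 m
```

Scaling `flux_limit` by an assumed emissivity (e.g. 0.25) reproduces the halved distance quoted below.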
I reckon you would survive witnessing a Falcon 9 launch at a maximum radiation of 6.31kW/m2, though quite possibly with significant burns. It's a fairly short time before the rocket is well clear of the earth, but it would be hot and uncomfortable (painful) with 6.31 times the solar radiation in your face. I wouldn't be surprised if you turned and ran.
Most propellants are not that toxic. Perhaps the worst exhaust fumes would be from the Space Shuttle solid rocket boosters, which produced aluminium oxide in a fine white powder form which would be very bad for your lungs. I'm pretty sure the heat radiation would still be the limiting factor though.
EDIT 1: The Soyuz launcher has five (quadruple nozzled) engines, of 813 kilonewton thrust and 2.4km/s velocity, giving a total propellant consumption of 1694kg/s. That is marginally more than the 9x160=1440kg/s used by the Falcon 9. Therefore I find the claim in the comments that the launch can be watched from 400m surprising, though it does not conflict with an emissivity of 0.25. The emissivity is something of an unknown, and the cloud of debris and steam at the launch pad would shield the observer from the heat radiation until the rocket gained some height. It's still closer than I would like to be to a launch.
EDIT 2 I am receiving comments that my thermal calculations are an overestimate. I've checked the overall energy release and that at least is correct. So let's see what may be wrong:
1. The spherical radiation model is an oversimplification. In fact, most of the radiation will be downwards, so this would actually increase the thermal energy felt by an observer on the ground.
2. I took no separate account of the energy converted to thrust. Wikipedia indicates around 60% efficiency, leaving 40% of the energy available for emission. I checked this with my own expansion calculation:
Chamber pressure 6.77MPa (Merlin) 5.85MPa(Soyuz): consider 60Atm (approx 6MPa) for convenience
Specific heat ratio: Both CO2 and H2O are around 1.3.
Fraction of heat not converted to thrust: $T_2/T_1 = 60^{(1-1.3)/1.3} \approx 0.389$
This is surprisingly close to the Wikipedia efficiency value.
Given the general uncertainty of emissivity values, I do not consider a factor of 40% to be particularly significant.
3. After some thought, it occurred to me that perhaps the most important difference between a flare (which, as a combustion engineer in the oil industry, I am very familiar with) and a rocket engine (which I am admittedly less familiar with) is the much greater turbulence with ambient air. This may lead to much greater mixing and a consequently lower emissivity.
I'm reluctant to make another guess at emissivity, but if it was as low as 1/25 (that's just 4% of the heat released being converted to thermal radiation!) my estimate for the minimum non fatal distance from a Falcon 9 would be $500/ \sqrt{25}=100m$ (at which distance your hearing would be severely damaged.)
It's notable that this is not much different from the radius of the cloud of dust and steam that forms at the launch pad. That debris cloud must be pretty hot (all that heat that doesn't get radiated has to go somewhere) so I think the risk of being killed by flying debris is irrelevant, as the heat would get you anyway.
|
In order to study the behavior of an RC circuit, I connected a resistor and a capacitor to an Arduino's I/O as shown:
The Arduino digital Output feeds the circuit with a square pulse of 2 sec duration.
(one second HIGH, one second LOW)
for a charge time of 1 sec:$$V_c = E(1-e^{-\dfrac{t}{\tau}}) = E(1-e^{-\dfrac{1}{0.83}})=0.7E$$
where E is the power supply voltage.
Converting the
E value to a 10-bit range, $$V_c = 0.7 \times 1024 = 717$$
Now, this is the graph I take from the analog input:
whose minimum value is 237
(0.23E), and maximum value = 784 (0.76E).
Assuming that the capacitor's value may differ a little, I may accept that 0.70E ≈ 0.76E. But in that case,
shouldn't Vc start from zero?
Assuming that the capacitor is semi charged, shouldn't in any case max-min=0.7E?
(Before initiating, I discharged the capacitor connecting it with a resistor for several seconds.)
Any thoughts would be appreciated.
EDIT: Using several values of charge time, every time the graph seems to be positioned in the middle, meaning (Vc(min)+Vc(max))/2 = E/2.
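For what it's worth, a quick steady-state sketch (my own back-of-envelope check using the question's τ = 0.83 s, not an authoritative answer) shows why the trace does not start from zero once the square wave has been running: the capacitor never fully discharges during the 1 s LOW half-cycle, so the waveform settles symmetrically around E/2, between about 0.23E and 0.77E, close to the observed 237 and 784 counts.

```python
import math

E = 1.0                    # normalised supply voltage
tau = 0.83                 # RC time constant from the question, in seconds
a = math.exp(-1.0 / tau)   # decay factor over one 1-second half-cycle

# Iterate charge (towards E) and discharge (towards 0) half-cycles
# until the waveform settles into its steady state.
v = 0.0                    # start fully discharged
for _ in range(20):
    v = E + (v - E) * a    # end of the 1 s HIGH half-cycle
    v_max = v
    v = v * a              # end of the 1 s LOW half-cycle
    v_min = v

print(round(v_max, 3), round(v_min, 3))  # 0.769 0.231
```

Note that v_max + v_min = E in steady state, which matches the "positioned in the middle" observation in the EDIT.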
|
Generally, one does not have a metric space embedded in some larger space where the "limits" of sequences that do not converge in $X$ may converge. So it really makes no sense to talk about "points to which the series converges but are not in $X$". As I noted in comments, if you really want to put (scare) quotes, they belong around the word "point", in so far as there is no such "point" to which these Cauchy sequences converge.
The solution to this is essentially the same as the one used to construct the reals from the rationals by considering all "possible limits" of Cauchy sequences: one constructs the
completion of the space $X$ and embeds $X$ into that space. This completion, $Y$, is a complete metric space that comes equipped with an embedding $X\hookrightarrow Y$ such that (i) the embedding is uniformly continuous; (ii) [the image of] $X$ is dense in $Y$; and (iii) given any uniformly continuous function $f\colon X\to N$ into a complete metric space $N$, there is a unique uniformly continuous extension of $f$ to $\mathfrak{f}\colon Y\to N$. Viewing $X$ as a subspace of that $Y$, then you can talk about the limits of these Cauchy sequences in $X$ much like we can talk about the real limits of Cauchy sequences of rationals.
The universal property given above implies that if any such $Y$ exists, then it is unique up to a uniformly continuous homeomorphism, by the usual abstract nonsense arguments. So it suffices to construct any one such space. The standard construction mimics, as I mentioned above, the construction of $\mathbb{R}$ from $\mathbb{Q}$, specifically the construction of $\mathbb{R}$ as equivalence classes of rational Cauchy sequences. Namely, we let $C$ be the set of all Cauchy sequences of elements of $X$, and we define an equivalence relation on $C$ by letting $(x_n)\sim (y_n)$ if and only if $\lim\limits_{n\to\infty}d(x_n,y_n) = 0$. Then we let $Y$ be the quotient $C/\sim$, and define the metric by $$D\left(\overline{(x_n)},\overline{(y_n)}\right) = \lim_{n\to\infty}d(x_n,y_n).$$ One embeds $X$ into $Y$ by mapping $x$ to the class of the constant sequence $(x)$, and proves it has the appropriate properties.
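To make the construction concrete, here is a small numerical sketch (my own illustration, not part of the answer): rational Cauchy sequences standing in for a point of the completion of $\mathbb{Q}$, with the equivalence $d(x_n,y_n)\to 0$ identifying two sequences that represent the same new point.

```python
from fractions import Fraction

# Babylonian iteration produces a rational Cauchy sequence whose "limit"
# is sqrt(2): a point of the completion of Q that is not in Q itself.
def babylonian(n):
    x = Fraction(1)
    for _ in range(n):
        x = (x + 2 / x) / 2   # exact rational arithmetic throughout
    return x

seq = [babylonian(n) for n in range(6)]
shifted = [babylonian(n + 1) for n in range(6)]  # a shifted copy of the sequence

# d(x_n, y_n) -> 0, so the two sequences lie in the same equivalence
# class, i.e. they name the same point of the completion.
gaps = [abs(a - b) for a, b in zip(seq, shifted)]
assert all(gaps[i + 1] < gaps[i] for i in range(len(gaps) - 1))
```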
Once you have this completion $Y$, you can talk about those $X$-Cauchy sequences converging to points in $Y$ that are not in $X$; they converge in $Y$ because $Y$ is complete.
|
Probably my questions are known or evident to the experts but I'm a bit puzzled. First of all there seem to be two kinds of zeta functions that go under the name of Shintani zeta functions.
First, there are zeta functions $\zeta^{SS}$ associated with so called prehomogenous vector spaces going back to important work by Sato and Shintani (see the original article or this book by Yukie) and then, second, zeta functions $\zeta^S$ that appeared in Shintani's work on special values of Dedekind zeta functions of totally real number fields at negative integers (see Shintani's article or Neukirch's book for example).
1) I'm mainly interested in the question if it is known (or expected) if the latter zeta functions $\zeta^S$ satisfy functional equations. (From what I understand the $\zeta^{SS}$ satisfy functional equations or are expected to satisfy in case it is not proven).
Let me just note that one can write Shintani zeta functions in the following form $$\Gamma(s)^n \zeta^S(s,z,x) = \int_0^\infty \cdots\int_0^\infty \sum_{z_1,\dots , z_n=0}^\infty e^{-\sum_{i=1}^n t_i L_i(z+x)}(t_1\cdots t_n)^{s-1} dt_1\cdots dt_n$$ where the $L_i(x)$ are linear forms, i.e. essentially we could say that we're looking at multivariable theta-like functions and Mellin transforms thereof. So the question can be rephrased in asking whether these theta-like functions occurring in the above integral satisfy a functional equation/theta inversion formula. (Note that these theta-like functions do in general not come from symplectic structures, i.e. they are not related to abelian varieties (at least as far as I see)).
2) But next to this question I'm also extremely interested in the relationship of these two kinds of zeta functions. In which cases do the two constructions agree? Is there anything known?
Thank you very much in advance!
EDIT: OK, so I could speak a bit with one of the absolute authorities in this field and I learned, that
1) one shouldn't expect functional equations for single functions $\zeta^S$ but rather for certain finite linear combinations and
2) one shouldn't expect relations between the two notions of "zeta" functions.
This doesn't destroy the applications I had in mind with my question but I have to rethink the question and will try to give a better and less naive version of it soon. Thank you so far very much for your helpful comments!
|
Difference between revisions of "Help:Editing"
*[http://en.wikipedia.org/wiki/Wikipedia:Writing_better_articles Wikipedia:Writing better articles].
[[Category:Help:Editing]]
Revision as of 04:26, 22 July 2014
The MediaWiki software is extremely easy to use. Viewing pages is self-explanatory, and adding new pages or editing existing content is easy and intuitive as well. No damage can be done that can't be easily fixed. Although there are some differences, editing SDIY wiki is much the same as editing on Wikipedia.
Editing the wiki
By default the enhanced editing toolbar is disabled. To enable it go to Preferences:Editing and tick
Enable enhanced editing toolbar.
At the top of any wiki page, you will see some tabs titled Page, Discussion, Edit, History, Move and Watch. Clicking the Edit tab opens the editor, a large text entry box in the middle of the page. This is where to enter plain text. Very little formatting code (known as "wiki markup") is required, unlike regular websites using HTML and CSS. At the top of this text entry box is a row of buttons with small icons on them. Holding the mouse cursor over an icon displays a tool-tip telling you its function. These buttons make it very simple to use the formatting features of the wiki software. You can achieve the same effect by typing the correct wiki code, but using the buttons makes it very simple and also eases the process of learning the correct code syntax. Please do your best to always fill in the edit summary field. An enhanced editor can be enabled in user preferences.
Use a sandbox page
Use the sandbox page to play around and experiment with editing. It isn't for formal wiki info, just a place to play and explore. Any content here won't be preserved. You can create your own sandbox area by appending "/sandbox" to the URL of your user page, or click the Sandbox link in the personal toolbar area, if enabled in your preferences. Your own sandbox is where to rough out articles until they're ready for posting. Don't do articles in rough in the main wiki. Sandboxes will be indexed by search engines like any other page, unless the first line is
__NOINDEX__ or uses the template
{{User sandbox}}.
The third and fourth buttons are create "Internal link" and "External link". The third button creates an internal link (aka a wikilink) which, in the editor, has the format
[[Eurocard]] ie. surrounded by double square brackets. Use a vertical bar "|" (the "pipe" symbol) to create a link with a different name to original article eg.
[[Printed circuit board|PCB]], only PCB appears on the page.
Only the first occurrence of a link on the page needs to be a link; any further uses of the word/phrase can be in plain text. If the page doesn't already exist, the link will be shown in red text. Following a redlink opens the editor window for creating that page within the wiki structure. Linking articles in a structured way is the preferred method of adding new pages to the wiki. Except for names, use ordinary sentence case for article titles.
Using the fourth button will make an external link to a page elsewhere on the Internet. This has the form
[http://www.google.com Google], ie. the URL, followed by a space, followed by linking text in single square brackets.
Every article is part of a network of connected topics. Establishing such connections via internal links is a good way to establish context. Each article should be linked from more general subjects, and contain links where readers might want to use them to help clarify a detail. Only create relevant links. When you write a new article, make sure that one or more other pages link to it. There should always be an unbroken chain of links leading from the Main Page to every article in the wiki.
Always preview your edits before saving them and also check any links you have made to confirm that they do link to where you expect.
See also Wikipedia:Manual of Style/Linking
Headings
Headings help clarify articles and create a structure shown in the table of contents.
Headings are hierarchical. The article's title uses a level 1 heading, so you should start with a level 2 heading (
==Some heading==) and follow it with a level 3 (
===A sub-heading===, and just use
'''Text made bold''' after that). Whether extensive subtopics should be kept on one page or moved to individual pages is a matter of personal judgment.
Headings should not be links. This is because headings in themselves introduce information and let the reader know what subtopics will be presented; links should be incorporated in the text of the section.
Except for names, use ordinary sentence case for headings; don't capitalize every word.
Lists
In an article, significant items should normally be mentioned naturally within the text rather than merely listed. Where a bulleted list is required, each item/line of the list is preceded by an asterisk (
*), or for an indented sublist, two asterisks (
**). For numbered lists use a hash sign (
#) and further hash signs for subsections. Lists of links are usually bulleted, giving each link on a new line.
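For example, the markup for a short bulleted and numbered list, following the syntax described above, looks like this:

```
* First bulleted item
* Second bulleted item
** An indented sub-item
# First numbered item
# Second numbered item
## A numbered sub-item
```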
Definition lists
Are useful for more than just terms and definitions. Use semi-colons and colons:
Some term - this line starts with ;
And then a definition - this line starts with a :
Inserting files
The sixth button enables you to insert an image (or other media type) into the text. Relevant images add interest to the article. Currently there are limitations on the allowable size of uploads.
<mp3>Synth_filter_sweep.mp3</mp3> An example of a classic analog synthesizer
sound - a sawtooth bass filter sweep with
gradually increasing resonance.
You can also insert MP3 clips by using the tag <mp3>, but it needs to be put in an inline styled table to format it neatly. Use <br> tags to format any caption.
File names should be clear and descriptive, without being excessively long. Descriptive names are helpful to editors. Very generic filenames should not be used when uploading, as sooner or later someone else will use the same name and overwrite the first file.
For a large selection of freely usable media see Wikimedia Commons.
Hotlinking from Wikimedia Commons is allowed. You can first upload your file there, but be sure to use a long descriptive or unique file name to avoid name clashes. When files have the same name, some other file might be displayed locally instead of the one expected.
Hotlinking is not recommended because anyone could change, vandalise, rename or delete a hotlinked image. There is no control over what is served locally. If you do hotlink, then it is still necessary to follow any licensing conditions.
Generally hotlinking is wrong because it exploits another server's bandwidth to supply the files. For files on sites other than Wikimedia, don't link directly to those files without permission. Either download a copy from the other site and then upload it to the wiki, or link to the other site's page on which the file can be found.
Schematics
For quickly illustrating articles with simple schematics and sketches, there are some suggestions listed at Wikipedia:Project Electronics/Programs and at StackExchange EE: Good Tools for Drawing Schematics.
Tables
Use wiki markup, not HTML or images. The easiest way to work with tables is to use Microsoft Excel: paste and edit or create a table in Excel, then copy the table into the tab-delimited-string-to-wiki-markup converter. Other methods are described at Commons:Convert tables and charts to wiki code.
Enabling the enhanced editor in user preferences, gives an
Insert a table button. Clicking this produces the following.
{| class="wikitable"
|-
! header1 !! header 2 !! header 3
|-
| row 1, cell 1 || row 1, cell 2 || row 1, cell 3
|-
| row 2, cell 1 || row 2, cell 2 || row 2, cell 3
|}
Which displays as
header1 | header 2 | header 3
row 1, cell 1 | row 1, cell 2 | row 1, cell 3
row 2, cell 1 | row 2, cell 2 | row 2, cell 3
For more in depth information on table markup see Wikipedia:Help:Table.
Formatting
Be sure to keep your content meaningful. Relying on styling to indicate meaning is a bad practice (e.g. for machine readability such as by search engines, screen readers using text-to-speech, and text browsers).
Inline styling
Some HTML tags and inline styling are allowed, for example
<code>,
<div>,
<span> and
<font>. These apply anywhere you insert them - depending upon which fonts are installed. Here is an example using <span style="font-family:Courier;font-size:100%;color:#0000ff;background-color:#dddddd"></span>. For further information see Mediawiki:Help:Formatting.
Indenting text
Use a colon
: to indent text.
Subscript and superscript
Foo<sub>Bar</sub> gives Foo
Bar and
Bar<sup>Baz</sup> gives Bar
Baz.
Inserting symbols
Symbols and other special characters can be inserted through HTML entities. For example &Omega; will show Ω and &gt; will show >. These are case-sensitive. For a list of HTML entities see Wikipedia:List of HTML entities.
Text boxes
For preformatted text (in a dashed box) simply indent it by one space. Inline styling allows more options e.g.
<div style="background: #eeffff; border: 1px solid #999999; padding: 1em; width:88%;">
LaTeX formulae
SDIY wiki supports embedding mathematical formulas using TeX syntax. Use
<m> tags.
For example (use edit to see the source): <m>\operatorname{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-t^2}\,dt = \frac{e^{-x^2}}{x\sqrt{\pi}}\sum_{n=0}^\infty (-1)^n \frac{(2n)!}{n!(2x)^{2n}}</m>
Categories
Add one or more categories to pages or uploaded files by simply adding e.g.
[[Category:Whatever]]. Categories themselves need to be categorised to create a hierarchy for navigating through the wiki.
Standard appendices
Information that can follow after the body of the article should follow in this order:
* A list of works created by the subject of the article
* See also, a list of internal links to related articles
* Notes and references
* Further reading, a list of recommended relevant books, articles, or other publications that have not been used as sources
* External links, a list of recommended relevant websites that have not been used as sources
Templates
A template is a page that gets included in another page (this is called transclusion). This is useful for text that is often repeated. For example, create a page called "Template:Main article" with the text
''The main article for this is
[[{{{1}}}]].''
and then to use the template insert "{{Main article|Whatever}}" where you want that text to appear.
Talk pages
Don't leave visible notes and comments in the article. At the top of every article, the second tab, entitled Discussion, opens the article's "Talk page". This is where to discuss the article or leave notes for other editors. Remember to sign your posts on talk pages (second from last button). In articles, to leave notes or explanations use HTML commenting; these will be hidden from everyone except other editors. An HTML comment, which has the form:
<!--This is a comment.-->, will work fine in Mediawiki.
See also
Further reading
References
Convert from Microsoft Word to Media Wiki markup, stackoverflow
|
The fast signal diffusion limit in nonlinear chemotaxis systems
Institut für Mathematik, Universität Paderborn, 33098 Paderborn, Germany
$ n\geq2 $
$ \mathit{\Omega }\subset {\mathbb{R}}^n $
$ 0\not \equiv u_0\in W^{1, \infty}(\mathit{\Omega }) $
$ v_0\in W^{1, \infty}(\mathit{\Omega }) $
$ \varepsilon\in(0, 1) $
$ \begin{equation*} \begin{cases} u_t = \nabla\cdot((u+1)^{m-1}\nabla u)-\nabla \cdot(u\nabla v) \;\;\; & \text{in} \ \mathit{\Omega }\times\left(0, \infty \right), \\ \varepsilon v_t = \mathit{\Delta } v -v+u & \text{in} \ \mathit{\Omega }\times\left(0, \infty \right), \\ \frac{\partial u}{\partial \nu} = \frac{\partial v}{\partial \nu} = 0 & \text{on} \ \partial\mathit{\Omega }\times\left(0, \infty \right), \\ u(\cdot, 0) = u_0, \ v(\cdot, 0) = v_0 & \text{in} \ \mathit{\Omega } \end{cases} \end{equation*} $
$ \varepsilon = 0 $
$ v $
$ m>1+\frac{n-2}{n} $
$ \left\|{{u_0}}\right\|_{L^p(\mathit{\Omega })} $
$ p\in[1, \infty] $
Keywords: Chemotaxis, convergence, nonlinear parabolic equations, Keller-Segel system, global existence, boundedness.
Mathematics Subject Classification: 92C17, 35K55, 35B40.
Citation: Marcel Freitag. The fast signal diffusion limit in nonlinear chemotaxis systems. Discrete & Continuous Dynamical Systems - B. doi: 10.3934/dcdsb.2019211
|
I want to calculate the hamming weight of a S-Box using this formula: $\text{hw}(f) = \sum_{x=0}^{2^n-1} f(x)$. Where $f: \{0, 1\}^n \rightarrow \{0, 1\}$
My problem is that I don't know how to get the $f$-function.
I found this helpful paper: THE DESIGN OF S-BOXES. On page 9 (18), Boolean functions are described, but the author only prints the truth table (Table 2.1) without describing her $f$-function. She only says $f$ is a linear function, so I think $f$ depends on my S-box?
For example, if I have this $2 \times 2$ S-Box:
x:    0 1 2 3
S(x): 1 3 0 2
$00 \rightarrow 01\\ 01 \rightarrow 11\\ 10 \rightarrow 00\\ 11 \rightarrow 10$
What are the $f$-functions and what is the hamming weight?
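For what it's worth, here is how the functions can be read off mechanically, assuming the usual convention that each output bit of the S-box defines one Boolean component function $f:\{0,1\}^2\rightarrow\{0,1\}$ (a sketch of the idea, not anything taken from the paper):

```python
# Extract the two component Boolean functions of the 2x2 S-box in the
# question (00->01, 01->11, 10->00, 11->10) and compute the Hamming
# weight hw(f) = sum_x f(x) of each one.
sbox = [1, 3, 0, 2]  # S(0)=1, S(1)=3, S(2)=0, S(3)=2

def component(bit):
    # f_bit(x) = the chosen output bit of S(x); each bit position gives
    # one Boolean function {0,1}^2 -> {0,1}
    return [(sbox[x] >> bit) & 1 for x in range(len(sbox))]

for bit in (1, 0):
    f = component(bit)
    print(f"output bit {bit}: truth table {f}, hw = {sum(f)}")
```

For this particular S-box, both component functions come out with Hamming weight 2.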
|
I see it's a very old question, but let me add my two cents. It's only an approximate solution and at times it involves some guesswork, but it turns out to be quite good. (Also, it doesn't need a computer and the math is pretty elementary.)
Let $a_n$ denote the probability of rolling total of $n$ in any number of rolls (now without the "stop when > 100" condition). After some thinking, we get a recurrence relation for these:$$ a_n = (a_{n-1} + a_{n-2} + \ldots + a_{n-6})/6,\quad a_{-5} = a_{-4} = \ldots = a_{-1} = 0, a_0 = 1. $$
This is a linear recurrence, which can be solved "easily" by forming the characteristic equation $6\lambda^6 - \lambda^5 - \lambda^4 - \lambda^3 - \lambda^2 - \lambda - 1 = 0$. If we denote its roots by $l_1$ to $l_6$, the explicit formula for the recurrence has the form of$$ a_n = \sum_{0 < i < 7} C_i l_i^n. $$
(The $C$'s are obtained from the boundary conditions.) Since the $a_n$'s represent probabilities, they should lie in the interval $[0,1]$. From this it seems reasonable that $|l_i| \leq 1$. If any root failed this condition, the $a_n$'s would be unbounded (since the powers, and even differences of two of them with different bases, would be unbounded).
Now we see it has a root of $\lambda = 1$. So, $a_n$'s eventually converge to $C_1$ (which we don't know, but we don't care), since all other $C_i l_i^n$ converge to 0 (because of the $|l_i|<1$ condition).
So probability of getting 101 is $a_{101}$. Getting 102 has a probability of $a_{102} - a_{101}/6$, since we can't obtain it by rolling 101+1. Similarly, rolling 103 has a probability of $a_{103} - (a_{102} + a_{101})/6$ (no 101+2 nor 102+1), etc. Now we guess that the $a_n$'s converge so well that $a_{101}$ through $a_{106}$ are essentially equal. That gives the 6:5:4:3:2:1 ratio, and furthermore, the (correct) limit of $a_n$'s, 2/7.
I know this is a somewhat heuristic (some may say "physicist's") approach, but even so, I hope it has some value.
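As a quick numerical sanity check (my own addition, assuming only the fair six-sided die used throughout), the recurrence is easy to iterate directly and compare against the claimed limit of 2/7:

```python
# Iterate a_n = (a_{n-1} + ... + a_{n-6}) / 6 with a_{-5..-1} = 0,
# a_0 = 1, and check that a_101 .. a_106 have essentially converged
# to the limit 2/7.
def a_values(n_max):
    a = [0.0] * 6 + [1.0]  # stores a_n at index n + 6
    for _ in range(n_max):
        a.append(sum(a[-6:]) / 6.0)
    return a

a = a_values(106)
limit = 2.0 / 7.0
print([round(a[n + 6], 6) for n in range(101, 107)], round(limit, 6))
```

By n = 101 the transient terms $C_i l_i^n$ with $|l_i| < 1$ are far below floating-point noise, which is exactly the "guess" made above.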
|
What is the best way to prove that every compact subset of $\mathbb{R}$ is closed? I know the approach using nested intervals, but is there another way to approach the problem?
Let $X \subseteq \mathbb{R}$ be compact, and let $y \in \mathbb{R} \setminus X$. It suffices to prove $\mathbb{R} \setminus X$ is open, i.e. that $\mathbb{R} \setminus X$ contains an open ball about $y$. Since $\mathbb{R}$ is Hausdorff, for each $x\in X$, there exists a neighborhood $U_{x}$ of $x$ and $U_{y}^{x}$ of $y$ such that $U_{x}$ and $U_{y}^{x}$ are disjoint. The set of all $U_{x}$'s for each $x \in X$ form an open cover for $X$, so by the compactness of $X$, there exist finitely many $U_{x_{1}}, \ldots, U_{x_{n}}$ which cover $X$. Can you wrap things up from here?
Yes. Let $A$ be a compact set and let $p\in\mathbb R\setminus A.$ For each $x\in A,$ Let $B_x$ be the open interval $\left(x-\dfrac{|p-x|}{2},x+\dfrac{|p-x|}{2}\right)$ and let $V_x$ be the open interval $\left(p-\dfrac{|p-x|}{2},p+\dfrac{|p-x|}{2}\right).$ Then by the compactness of $A,$ there are $x_1,x_2,\ldots,x_n\in A$ such that $A\subseteq B_{x_1}\cup\cdots\cup B_{x_n}$ and it follows that the set $V_{x_1}\cap\cdots\cap V_{x_n}$ does not intersect $A$ and hence $p$ is an interior point of $\mathbb R\setminus A,$ which implies that $A$ is closed.
Note that a similar argument can be used to prove that every compact subset of a metric space is closed.
If $A \subseteq \mathbb R$ is not closed, take $a \in \overline{A} \setminus A$, and for each $n > 0$ consider $$U_n = ( - \infty , a - \tfrac{1}{n} ) \cup ( a + \tfrac{1}{n} , + \infty ).$$ Then $\bigcup_n U_n = \mathbb{R} \setminus \{ a \} \supseteq A$, however no finite subfamily of $\{ U_n : n > 0 \}$ can cover $A$ since $a$ is a limit point of $A$ (for each $n > 0$ there is an $x \in A$ such that $| x - a | < \frac{1}{n}$).
Therefore $A$ cannot be compact.
(To alter this for a general metric space $(X,d)$, take $U_n = \{ x \in X : d(x,a) > \frac{1}{n} \}$.)
|
The fastest way to solve your problem instance is as outlined in the above comments.
First choose yourself a random message $m$ with $1<m<n-1$. Now compute $c\equiv m^d \pmod n$.
Check whether any of the following equations holds; if one does, you've found the public exponent $e$:
$m \equiv c^3 \pmod n$
$m \equiv c^{17} \pmod n$
$m \equiv c^{65537} \pmod n$
If none of the above equations held you have two choices, based on the effort you're willing to spend and the probability that $e$ is rather small.
If you suspect $e<\frac{1}{3}N^{\frac{1}{4}}$, then you should use Wiener's attack on small decryption exponent RSA with the lost public exponent taking the role of the decryption exponent to find. Wikipedia explains the basics and Wiener's original attack. As Maarten points out in the comments below this attack is very fast and consumes moderate amounts of memory.
If you think / know that $e<2^{40}$ and/or you're not willing to implement Wiener's attack you can use the following approach, as you can always come back to Wiener's attack in case you think that you've tried long enough.
The brute-force approach would work as follows (starting at $i=3$, optimized using fgrieu's comment):

1. Set $c_m \gets (c \cdot c) \bmod n$.
2. Check whether $c \equiv m \pmod n$ or $c_m \equiv m \pmod n$; if the first holds, output $1$, if the second holds, output $2$.
3. Set $i \gets 3$ and $c_i \gets (c \cdot c_m) \bmod n$ (so $c_3 = c^3$).
4. Check whether $c_i \equiv m \pmod n$; if yes, output $i$.
5. Set $c_{i+2} \gets (c_i \cdot c_m) \bmod n$, increment $i$ by $2$, and go to step 4.
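A minimal sketch of that loop; the toy RSA parameters below (p = 61, q = 53, e = 17) are purely illustrative assumptions of mine, not anything from the question:

```python
# Recover a small, odd public exponent e given n and d: encrypt a test
# message m with d, then walk candidate exponents until c^e = m (mod n),
# multiplying by c^2 each step instead of re-exponentiating.
def find_e(n, d, m=2, limit=2**20):
    c = pow(m, d, n)            # "ciphertext" produced with the known d
    c_sq = (c * c) % n
    if c % n == m:              # e = 1
        return 1
    if c_sq == m:               # e = 2
        return 2
    ci, i = (c * c_sq) % n, 3   # c^3, then c^5, c^7, ...
    while i < limit:
        if ci == m:
            return i
        ci, i = (ci * c_sq) % n, i + 2
    return None

# toy key: p=61, q=53, n=3233, phi=3120, e=17, d = 17^{-1} mod 3120 = 2753
print(find_e(3233, 2753))  # expect 17
```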
If you can not apply Wiener's attack and you consider brute-force "way too inefficient" there are still two methods left:
Use your favorite factorization algorithm to factor $n$ and deduce $e$ from $(d,p,q)$ Use your favorite discrete logarithm algorithm to solve $c^e \equiv m \pmod n$ for $e$.
|
Bunuel wrote:
The height of an equilateral triangle is the side of a smaller equilateral triangle, as shown above. If the side of the large equilateral triangle is 1, what is AB?
A. 1 - √3/2
B. 0.25
C. 2 - √3
D. 1/3
E. 1 - √3/4
(The formula below we suggest you memorize. It will be used here twice.)
\({h_{eq}} = {{L\sqrt 3 } \over 2}\,\,\,\,\,\left( * \right)\,\,\,\,\,\,\,\,\left( {{\rm{height}}\,\,{\rm{of}}\,\,{\rm{an}}\,\,{\rm{equilateral}}\,\,{\rm{triangle}}\,\,{\rm{with}}\,\,{\rm{side}}\,\,L} \right)\)
\(? = AB = {L_{\,{\rm{large}}}} - {h_{{\rm{eq}}\,{\rm{small}}}}\,\, = \,\,\,1 - \,\,{?_{{\rm{temporary}}}}\)
\(\Delta \,{\rm{small}}\,\,:\,\,\,\,\,{L_{\,{\rm{small}}}} = {h_{\,{\rm{eq}}\,{\rm{large}}}}\,\,\mathop = \limits^{\left( * \right)} \,\,\,\,{{1 \cdot \sqrt 3 } \over 2}\,\,\,\,\,\,\,\,\,\, \Rightarrow \,\,\,\,\,\,\,{?_{{\rm{temporary}}}} = {h_{{\rm{eq}}\,{\rm{small}}}}\,\,\,\mathop = \limits^{\left( * \right)} \,\,\,{{\sqrt 3 } \over 2} \cdot {{\sqrt 3 } \over 2} = {3 \over 4}\)
\(? = 1 - {3 \over 4} = {1 \over 4}\)
This solution follows the notations and rationale taught in the GMATH method.
Regards,
Fabio.
_________________
Fabio Skilnik :: GMATH
method creator (Math for the GMAT)
Our high-level "quant" preparation starts here: https://gmath.net
|
I often see people having trouble with the rule of product in combinatorics. I also often see that people who claim to have no trouble with it, when trying to explain it to the aforementioned clueless people, just end up saying "it's just a formula, don't worry about where it came from and why we use it, just memorize it!" That is not a good tactic at all.
I will say I myself don't have a deep understanding of it and will at the end of this answer tell you it is just a formula, but I hope I can help you understand the formula a bit more.
Recall from (likely) grade school where you made tree diagrams to express combinatorics problems visually. Take the following simplified question:
How many 'words' can you arrange out of the letters in the set $\{x,y,z\}$?
Our tree diagram will look like this:
Now if we count each step in our tree, we see in the first step of our tree (far left), we select one of three letters. Each is a root for its own subtree. That's $3$ trees started. So we have: $$(3\ trees).$$Now in the second step (middle), we select one of the letters we have left that isn't the original we started with, leaving us with $2$ new choices per tree. So that is: $$(3\ trees) \times (2\ choices) = 6 \ total.$$We now consider the final step where we choose the remaining letter. However, this step doesn't further split our tree's branches per se, it just extends them. So we get: $$(3\ trees) \times (2\ choices) \times (1\ more\ choice) = 6 \ total,$$ which is identical to our exact formula we had previously memorized, that is: $$3! = 3 \times 2 \times 1 = 6.$$
Now suppose we change around the question a little. Suppose this new condition:
After choosing $z$, choose $z$ to be an element from the set $\{a,b,c\}$.
(imagine this choice is like choosing a different configuration of AUE, but in this case instead of $3!$ choices for the configuration, we have only $3$).
Our tree diagram looks like this:
which is quite similar to before (notice we just appended the new choice onto the end since even if we put it directly after every $z$, it would have the same number of branches (try this yourself if you want!)) but with an additional step.
As with the other steps, we just multiplied the number of choices we already had by the number of new choices like so: $$(3\ trees) \times (2\ choices) \times (1\ more\ choice) \times (3\ choices\ for\ z)= 18\ total.$$
So perhaps now you have a better understanding of why we use the rule of product in the case of your question. As you can tell, it still is sort of a formula you have to know. When you have $\alpha$ ways of doing one task and $\beta$ ways of doing another, you then have $\alpha \times \beta$ ways of doing both.
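Both counts from the tree diagrams above can be checked by brute enumeration, for instance:

```python
# Enumerate the trees directly: 3! = 6 orderings of {x, y, z}, and
# pairing each ordering with one of the 3 extra choices for z gives
# 6 * 3 = 18, matching the rule of product.
from itertools import permutations, product

words = list(permutations("xyz"))
print(len(words))  # 6

extended = list(product(words, "abc"))  # the extra choice for z
print(len(extended))  # 18
```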
For the case of addition, note well that it is associated with the word "or" rather than "and". So if you want to count the number of ways to do $\alpha$ or $\beta$, you will have to think about using addition, though even then it isn't always straightforward (google the inclusion-exclusion principle, among other things, for more information).
|